WorldWideScience

Sample records for levels hardware requirements

  1. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to the hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface where the connector is mounted by a few millimeters using a pedestal, and then to wrap conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach is to establish an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch: implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for the new project. With acceptance of heritage hardware, the design is already established (the circuit board layout and components are pre-determined), and hence any radiated emissions mitigation techniques are only applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage it, or modify its chassis design, so that it has a better chance of meeting the new project's radiated emissions requirements.

  2. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  3. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  4. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  5. Performance Evaluation at the Hardware Architecture Level and the Operating System Kernel Design Level.

    Science.gov (United States)

    1977-12-01

    program utilizing kernel semaphores for synchronization. The Hydra kernel instructions were sampled at random using the hardware monitor. The changes in...that each operating system kernel has its own set of primitive functions, and comparisons across different operating systems are not possible...kernel design level is complicated by the fact that each operating system kernel has its own set of primitive functions and comparisons across

  6. Structural Design Requirements and Factors of Safety for Spaceflight Hardware: For Human Spaceflight. Revision A

    Science.gov (United States)

    Bernstein, Karen S.; Kujala, Rod; Fogt, Vince; Romine, Paul

    2011-01-01

    This document establishes the structural requirements for human-rated spaceflight hardware including launch vehicles, spacecraft and payloads. These requirements are applicable to Government Furnished Equipment activities as well as all related contractor, subcontractor and commercial efforts. These requirements are not imposed on systems other than human-rated spacecraft, such as ground test articles, but may be tailored for use in specific cases where it is prudent to do so such as for personnel safety or when assets are at risk. The requirements in this document are focused on design rather than verification. Implementation of the requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The SVP may also document unique verifications that meet or exceed these requirements with NASA Technical Authority approval.

  7. A hardware acceleration based on high-level synthesis approach for glucose-insulin analysis

    Science.gov (United States)

    Daud, Nur Atikah Mohd; Mahmud, Farhanahani; Jabbar, Muhamad Hairol

    2017-01-01

    In this paper, the research focuses on Type 1 Diabetes Mellitus (T1DM). Since this disease requires full attention to the blood glucose concentration, managed with the help of insulin injections, it is important to have a tool that is able to predict that level when a certain amount of carbohydrate is consumed at meal time. Therefore, to make this realizable, the Hovorka model, which targets T1DM, is chosen in this research. A high-level language, C++, is chosen to construct the mathematical model of the Hovorka model. Later, this constructed code is converted into an intellectual property (IP) block, also known as a hardware accelerator, by using a high-level synthesis (HLS) approach, which is able to improve the design and performance of the glucose-insulin analysis tool, as will be explained further in this paper. This is the first step in this research before implementing the design into a system-on-chip (SoC) to achieve a high-performance system for the glucose-insulin analysis tool.
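
    As a rough illustration of what such an HLS-ready kernel can look like, the sketch below advances a simplified two-compartment glucose model by one fixed-step Euler update in plain C++, the kind of function an HLS tool could turn into a hardware accelerator. The compartments, rate constants and inputs are invented for illustration; they are not the Hovorka model's actual equations or coefficients.

```cpp
// Hypothetical sketch: one Euler step of a simplified two-compartment glucose model,
// written as a plain C++ function so that an HLS flow could synthesize it into an
// accelerator. All state variables and parameters are illustrative, NOT the Hovorka
// model coefficients.
#include <cstdio>

struct GlucoseState {
    double q1;  // glucose mass in the accessible (plasma) compartment
    double q2;  // glucose mass in the non-accessible compartment
};

// dt in minutes; u_insulin is an illustrative insulin action input, d_meal an
// illustrative meal absorption input.
GlucoseState euler_step(GlucoseState s, double dt, double u_insulin, double d_meal) {
    const double k12 = 0.066;   // transfer rate q1 -> q2 (illustrative)
    const double k21 = 0.043;   // transfer rate q2 -> q1 (illustrative)
    const double k_ins = 0.05;  // insulin-dependent removal gain (illustrative)

    double dq1 = -k12 * s.q1 + k21 * s.q2 - k_ins * u_insulin * s.q1 + d_meal;
    double dq2 =  k12 * s.q1 - k21 * s.q2;
    return { s.q1 + dt * dq1, s.q2 + dt * dq2 };
}

int main() {
    GlucoseState s{90.0, 50.0};
    // Simulate 4 hours in 1-minute steps: constant basal insulin, 30-minute meal input.
    for (int t = 0; t < 240; ++t)
        s = euler_step(s, 1.0, 0.8, (t < 30) ? 1.5 : 0.0);
    std::printf("q1 after 4 h: %.2f\n", s.q1);
    return 0;
}
```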

  8. Analysis of near-term spent fuel transportation hardware requirements and transportation costs

    International Nuclear Information System (INIS)

    Daling, P.M.; Engel, R.L.

    1983-01-01

    A computer model was developed to quantify the transportation hardware requirements and transportation costs associated with shipping spent fuel in the commercial nuclear fuel cycle in the near future. Results from this study indicate that alternative spent fuel shipping systems (consolidated or disassembled fuel elements and new casks designed for older fuel) will significantly reduce the transportation hardware requirements and costs for shipping spent fuel in the commercial nuclear fuel cycle, if there is no significant change in their operating/handling characteristics. It was also found that a more modest cost reduction results from increasing the fraction of spent fuel shipped by truck from 25% to 50%. Larger transportation cost reductions could be realized with further increases in the truck shipping fraction. Using the given set of assumptions, it was found that the existing spent fuel cask fleet size is generally adequate to perform the needed transportation services until a fuel reprocessing plant (FRP) begins to receive fuel (assumed in 1987). Once the FRP opens, up to 7 additional truck systems and 16 additional rail systems are required at the reference truck shipping fraction of 25%. For the 50% truck shipping fraction, 17 additional truck systems and 9 additional rail systems are required. If consolidated fuel only is shipped (25% by truck), 5 additional rail casks are required and the current truck cask fleet is more than adequate until at least 1995. Changes in assumptions could affect the results. Transportation costs for a federal interim storage program could total about $25M if the FRP begins receiving fuel in 1987, or about $95M if the FRP is delayed until 1989. This is due to an increased utilization of the federal interim storage facility, from 350 MTU for the reference scenario to about 750 MTU if reprocessing is delayed by two years.

  9. System-level protection and hardware Trojan detection using weighted voting.

    Science.gov (United States)

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious, especially with the widespread use of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research has been performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially because there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.
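
    The following toy sketch (an assumption-laden illustration, not the paper's actual algorithm or weight-update rule) shows the basic weighted-voting idea: outputs from several untrusted implementations of the same IP core are tallied by trust weight, and implementations that disagree with the winning value lose trust over time.

```cpp
// Illustrative sketch (not the paper's exact algorithm): weighted majority voting
// across N untrusted implementations of the same IP core. Implementations whose
// output disagrees with the weighted winner lose trust; agreeing ones gain trust.
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct VotingGuard {
    std::vector<double> weight;  // per-implementation trust weight

    explicit VotingGuard(std::size_t n) : weight(n, 1.0) {}

    // Pick the output value with the largest summed weight, then adapt weights.
    uint32_t vote(const std::vector<uint32_t>& outputs) {
        std::map<uint32_t, double> tally;
        for (std::size_t i = 0; i < outputs.size(); ++i)
            tally[outputs[i]] += weight[i];

        uint32_t winner = outputs[0];
        double best = -1.0;
        for (const auto& kv : tally)
            if (kv.second > best) { best = kv.second; winner = kv.first; }

        // Simple multiplicative trust update (illustrative constants).
        for (std::size_t i = 0; i < outputs.size(); ++i)
            weight[i] *= (outputs[i] == winner) ? 1.05 : 0.5;
        return winner;
    }
};

int main() {
    VotingGuard guard(3);
    // Implementation 2 behaves like a Trojan on the second transaction.
    std::vector<std::vector<uint32_t>> transactions = {{7, 7, 7}, {9, 9, 1}, {4, 4, 4}};
    for (const auto& outs : transactions)
        std::cout << "selected output: " << guard.vote(outs) << '\n';
    std::cout << "final trust of impl 2: " << guard.weight[2] << '\n';
    return 0;
}
```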

  10. System-level protection and hardware Trojan detection using weighted voting

    Directory of Open Access Journals (Sweden)

    Hany A.M. Amin

    2014-07-01

    Full Text Available The problem of hardware Trojans is becoming more serious, especially with the widespread use of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research has been performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially because there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead when compared with a simple voting technique achieving the same degree of security.

  11. A leading-edge hardware family for diagnostics applications and low-level RF in CERN's ELENA ring

    CERN Document Server

    Angoletta, M E; Jaussi, M; Leiononen, P; Levens, T E; Molendijk, J C; Sanchez-Quesada, J; Simonin, J

    2013-01-01

    The CERN Extra Low ENergy Antiproton (ELENA) Ring is a new synchrotron that will be commissioned in 2016 to further decelerate the antiprotons transferred from CERN’s Antiproton Decelerator (AD). The requirements for the acquisition and treatment of signals for longitudinal diagnostics are very demanding, owing to the revolution frequency swing as well as to the digital signal processing required. The requirements for the Low-Level Radio-Frequency (LLRF) system are very demanding as well, especially in terms of the revolution frequency swing, the dynamic range and low noise required by the cavity voltage control, and the digital signal processing to be performed. Both sets of requirements will be satisfied by using a leading-edge hardware family, developed to cover the LLRF needs of all synchrotrons on the Meyrin site; it will first be deployed in 2014 in CERN’s PSB and in the medical machine MedAustron. This paper gives an overview of the main building blocks of the hardware family and of th...

  12. Interplay between requirements, software architecture, and hardware constraints in the development of a home control user interface

    DEFF Research Database (Denmark)

    Loft, M.S.; Nielsen, S.S.; Nørskov, Kim

    2012-01-01

    We have developed a new graphical user interface for a home control device for a large industrial customer. In this industrial case study, we first present our approaches to requirements engineering and to software architecture; we also describe the given hardware platform. Then we make two contributions. Our first contribution is to provide a specific example of a real-world project in which a Twin Peaks-compliant approach to software development has been used, and to describe and discuss three examples of interplay between requirements and software architecture decisions. Our second contribution is to propose the hardware platform as a third Twin Peaks element that must be given attention in projects such as the one described in this paper. Specifically, we discuss how the presence of severe hardware constraints exacerbates making trade-offs between requirements and architecture.

  13. Spent fuel disassembly hardware and other non-fuel bearing components: characterization, disposal cost estimates, and proposed repository acceptance requirements

    Energy Technology Data Exchange (ETDEWEB)

    Luksic, A.T.; McKee, R.W.; Daling, P.M.; Konzek, G.J.; Ludwick, J.D.; Purcell, W.L.

    1986-10-01

    There are two categories of waste considered in this report. The first is the spent fuel disassembly (SFD) hardware. This consists of the hardware remaining after the fuel pins have been removed from the fuel assembly. This includes end fittings, spacer grids, water rods (BWR) or guide tubes (PWR) as appropriate, and assorted springs, fasteners, etc. The second category is other non-fuel-bearing (NFB) components the DOE has agreed to accept for disposal, such as control rods, fuel channels, etc., under Appendix E of the standard utility contract (10 CFR 961). It is estimated that there will be approximately 150 kg of SFD and NFB waste per metric ton of uranium (MTU) of spent fuel. PWR fuel accounts for approximately two-thirds of the average spent-fuel mass but only 50 kg of the SFD and NFB waste, with most of that being spent fuel disassembly hardware. BWR fuel accounts for one-third of the average spent-fuel mass and the remaining 100 kg of the waste. The relatively large contribution of waste hardware for BWR fuel consists mostly of non-fuel-bearing components, primarily the fuel channels. Chapters are devoted to a description of spent fuel disassembly hardware and non-fuel assembly components, characterization of activated components, disposal considerations (regulatory requirements, economic analysis, and projected annual waste quantities), and proposed acceptance requirements for spent fuel disassembly hardware and other non-fuel assembly components at a geologic repository. The economic analysis indicates that there is a large incentive for volume reduction.

  14. Hardware-based Tracking at Trigger Level for ATLAS: The Fast TracKer (FTK) Project

    CERN Document Server

    Gramling, Johanna; The ATLAS collaboration

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer (FTK) is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level 1 trigger (at a maximum rate of 100 kHz) the FTK receives data from the 80 million channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory (AM) that is based on the use of a custom chip, designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. Narrow roads permit a fast track fitting but need many patterns stored in the AM to ensure...
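
    The associative-memory idea can be illustrated with a toy software model (the pattern bank, superstrip IDs and 7-of-8 threshold below are invented for the example and do not reflect the real FTK pattern bank or firmware): each stored pattern lists one coarse superstrip per silicon layer, and a pattern that collects hits in enough layers becomes a road candidate for the subsequent full-resolution fit.

```cpp
// Toy model of associative-memory pattern matching: each stored pattern lists one
// coarse "superstrip" ID per silicon layer, and a pattern becomes a road candidate
// when enough layers contain a hit in the expected superstrip. The pattern bank and
// the 7-of-8 threshold are invented for this example, not taken from the FTK.
#include <array>
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

constexpr int kLayers = 8;
using Pattern = std::array<uint32_t, kLayers>;   // expected superstrip per layer

std::vector<int> find_roads(const std::vector<Pattern>& bank,
                            const std::array<std::unordered_set<uint32_t>, kLayers>& hits,
                            int min_matched_layers) {
    std::vector<int> roads;
    for (std::size_t p = 0; p < bank.size(); ++p) {
        int matched = 0;
        for (int layer = 0; layer < kLayers; ++layer)
            if (hits[layer].count(bank[p][layer])) ++matched;
        if (matched >= min_matched_layers) roads.push_back(static_cast<int>(p));
    }
    return roads;
}

int main() {
    std::vector<Pattern> bank = {
        Pattern{10, 22, 31, 45, 52, 60, 71, 84},
        Pattern{11, 23, 33, 46, 54, 61, 73, 85},
    };
    // Hits recorded in each layer for one event; the last layer misses pattern 0.
    std::array<std::unordered_set<uint32_t>, kLayers> hits = {{
        {10, 11}, {22}, {31}, {45}, {52}, {60}, {71}, {99}
    }};
    for (int road : find_roads(bank, hits, 7))
        std::cout << "road candidate: pattern " << road << '\n';
    return 0;
}
```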

  15. Hardware-based tracking at trigger level for ATLAS: The Fast Tracker (FTK) Project

    CERN Document Server

    Gramling, Johanna; The ATLAS collaboration

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer (FTK) is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level 1 trigger (at a maximum rate of 100 kHz) the FTK receives data from the 80 million channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory (AM) that is based on the use of a custom chip, designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. Narrow roads permit a fast track fitting but need many patterns stored in the AM to ensure ...

  16. Hardware-based Tracking at Trigger Level for ATLAS the Fast TracKer (FTK) Project

    CERN Document Server

    INSPIRE-00245767

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing under nominal conditions. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level-1 trigger (at a maximum rate of 100 kHz) the FTK receives data from all the channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory that is based on the use of a custom chip, designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. An overview of the FTK system with focus on the pattern matching procedure will be p...

  17. System-Level Testing of the Advanced Stirling Radioisotope Generator Engineering Hardware

    Science.gov (United States)

    Chan, Jack; Wiser, Jack; Brown, Greg; Florin, Dominic; Oriti, Salvatore M.

    2014-01-01

    To support future NASA deep space missions, a radioisotope power system utilizing Stirling power conversion technology was under development. This development effort was performed under the joint sponsorship of the Department of Energy and NASA, until its termination at the end of 2013 due to budget constraints. The higher conversion efficiency of the Stirling cycle compared with that of the Radioisotope Thermoelectric Generators (RTGs) used in previous missions (Viking, Pioneer, Voyager, Galileo, Ulysses, Cassini, Pluto New Horizons and Mars Science Laboratory) offers the advantage of a four-fold reduction in Pu-238 fuel, thereby extending its limited domestic supply. As part of closeout activities, system-level testing of flight-like Advanced Stirling Convertors (ASCs) with a flight-like ASC Controller Unit (ACU) was performed in February 2014. This hardware is the most representative of the flight design tested to date. The test fully demonstrates the following ACU and system functionality: system startup; ASC control and operation at nominal and worst-case operating conditions; power rectification; DC output power management throughout nominal and out-of-range host voltage levels; ACU fault management, and system command / telemetry via MIL-STD 1553 bus. This testing shows the viability of such a system for future deep space missions and bolsters confidence in the maturity of the flight design.

  18. ATLAS level-1 calorimeter trigger hardware: initial timing and energy calibration

    CERN Document Server

    Childers, JT; The ATLAS collaboration

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of up to 2.4 microseconds using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustments to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further op...
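
    As a simple illustration of the bunch-crossing identification idea (not the actual Preprocessor firmware, whose BCID logic and digital filtering are considerably more sophisticated), the sketch below assigns a sampled calorimeter pulse to the bunch crossing at which the ADC samples reach a local maximum above threshold.

```cpp
// Illustrative peak-finder BCID: assign a pulse to the bunch crossing whose ADC sample
// is a local maximum above threshold. Sample values and the threshold are invented;
// the real Preprocessor applies calibrated filtering and timing adjustments.
#include <iostream>
#include <vector>

std::vector<int> bcid_peaks(const std::vector<int>& adc, int threshold) {
    std::vector<int> peaks;
    for (std::size_t i = 1; i + 1 < adc.size(); ++i)
        if (adc[i] > threshold && adc[i] >= adc[i - 1] && adc[i] > adc[i + 1])
            peaks.push_back(static_cast<int>(i));
    return peaks;
}

int main() {
    // One pulse sampled every 25 ns, peaking at sample index 3.
    std::vector<int> adc = {32, 35, 60, 120, 90, 45, 33};
    for (int bc : bcid_peaks(adc, 40))
        std::cout << "pulse assigned to bunch crossing " << bc << '\n';
    return 0;
}
```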

  19. ATLAS level-1 calorimeter trigger hardware: initial timing and energy calibration

    International Nuclear Information System (INIS)

    Childers, J T

    2011-01-01

    The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of up to 2.5μs using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustments to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further optimization. The results from these calibration measurements are presented.

  20. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance, under the control of genetic algorithms. EHW is a key technology to explore new application areas requiring real-time performance and on-line adaptation. 1. Development of an EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and the processing method for a programmable EHW-LSI; (2) the method of data compression for loss-less data, using EHW. 4. The first international conference on evolvable hardware, the Intl. Conf. on Evolvable Systems (ICES96), was held by the authors. It was determined at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established by the authors. (NEDO)

  1. Characteristics of spent fuel, high-level waste, and other radioactive wastes which may require long-term isolation: Appendix 2E, Physical descriptions of LWR nonfuel assembly hardware, Appendix 2F, User's guide to the LWR nonfuel assembly data base

    International Nuclear Information System (INIS)

    1987-12-01

    This appendix includes a two to three page Physical Description report for each Non-fuel Assembly (NFA) Hardware item identified from the current data. Information was obtained via subcontracts with these NFA hardware vendors: Babcock and Wilcox, Combustion Engineering, and Westinghouse. Data for some NFA hardware are not available. For such hardware, the information shown in this report was obtained from the open literature. Efforts to obtain additional information are continuing. NFA hardware can be grouped into six categories: BWR Channels, Control Elements, Guide Tube Plugs/Orifice Rods, Instrumentation, Neutron Poisons, and Neutron Sources. This appendix lists Physical Description reports alphabetically by vendor within each category. Individual Physical Description reports can be generated interactively through the menu-driven LWR Non-Fuel Assembly Hardware Data Base system. These reports can be viewed on the screen, directed to a printer, or saved in a text file for later use. Special reports and compilations of specific data items can be produced on request.

  2. Detailed requirements document for Stowage List and Hardware Tracking System (SLAHTS). [computer based information management system in support of space shuttle orbiter stowage configuration

    Science.gov (United States)

    Keltner, D. J.

    1975-01-01

    The stowage list and hardware tracking system, a computer based information management system, used in support of the space shuttle orbiter stowage configuration and the Johnson Space Center hardware tracking is described. The input, processing, and output requirements that serve as a baseline for system development are defined.

  3. Level 3 trigger algorithm and hardware platform for the HADES experiment

    International Nuclear Information System (INIS)

    Kirschner, Daniel Georg

    2007-01-01

    One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction without further strong interaction. Thus, no final state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (∼ 10⁻⁵). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a commercial standard network technology based system in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and comparing these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, which starts at the target and extends in a direction given by the ring angles. The line extends through the inner MDC chambers and the traversed cells are those that should have been hit. To compensate different sources for inaccuracies not
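
    The "which cells should have fired" computation can be pictured with a deliberately simplified geometry sketch: project a straight line from the target, defined by the RICH ring's polar and azimuthal angles, onto a flat detector plane and map the crossing point to a cell index. The plane distance, cell pitch and cell count below are invented, and real MDC chambers are tilted with several wire orientations, so this is only an illustration of the idea.

```cpp
// Simplified geometry sketch: which cell on a flat plane at distance z_plane would a
// straight line from the target hit, given the ring's polar/azimuthal angles? All
// dimensions (z_plane, cell_pitch, n_cells) are invented for the example.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Returns the index of the crossed cell (cells of width cell_pitch along x, centred
// on the beam axis), or -1 if the crossing point lies outside the plane.
int expected_cell(double theta_rad, double phi_rad,
                  double z_plane, double cell_pitch, int n_cells) {
    double r = z_plane * std::tan(theta_rad);   // radial distance from the beam axis
    double x = r * std::cos(phi_rad);           // crossing point along the cell axis
    int cell = static_cast<int>(std::floor(x / cell_pitch)) + n_cells / 2;
    return (cell >= 0 && cell < n_cells) ? cell : -1;
}

int main() {
    double theta = 35.0 * kPi / 180.0;   // polar angle taken from the RICH ring
    double phi   = 10.0 * kPi / 180.0;   // azimuthal angle taken from the RICH ring
    int cell = expected_cell(theta, phi, /*z_plane=*/600.0, /*cell_pitch=*/5.0,
                             /*n_cells=*/200);
    std::printf("expected cell index: %d\n", cell);
    return 0;
}
```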

  4. Level 3 trigger algorithm and hardware platform for the HADES experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kirschner, Daniel Georg

    2007-10-26

    One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction without further strong interaction. Thus, no final state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (∼ 10⁻⁵). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a commercial standard network technology based system in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and comparing these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, which starts at the target and extends in a direction given by the ring angles. The line extends through the inner MDC chambers and the traversed cells are those that should have been hit. To compensate different sources for

  5. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    Science.gov (United States)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven production control or manufacturing resource planning management technology and software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  6. A Prediction Packetizing Scheme for Reducing Channel Traffic in Transaction-Level Hardware/Software Co-Emulation

    OpenAIRE

    Lee , Jae-Gon; Chung , Moo-Kyoung; Ahn , Ki-Yong; Lee , Sang-Heon; Kyung , Chong-Min

    2005-01-01

    Submitted on behalf of EDAA (http://www.edaa.com/); International audience; This paper presents a scheme for efficient channel usage between simulator and accelerator where the accelerator models some RTL sub-blocks in the accelerator-based hardware/software co-simulation while the simulator runs transaction-level model of the remaining part of the whole chip being verified. With conventional simulation accelerator, evaluations of simulator and accelerator alternate at every valid simulation ...

  7. Hardware requirements: A new generation partial reflection radar for studies of the equatorial mesosphere

    Science.gov (United States)

    Vincent, R. A.

    1986-01-01

    A new partial reflection (PR) radar is being developed for operation at the proposed Equatorial Observatory. The system is being designed to make maximum use of recent advances in solid-state technology in order to minimize the power requirements. In particular, it is planned to use a solid-state transmitter in place of the tube transmitters previously used in PR systems. Solid-state transmitters have the advantages that they do not need high voltage supplies, they do not require cathode heaters (with a corresponding saving in power consumption), and parts are readily available and inexpensive. It should be possible to achieve 15 kW peak powers with recently announced fast switching transistors. Since high mean powers are desirable for obtaining good signal-to-noise ratios, it is also planned to phase code the transmitted pulses and decode after coherent integration. All decoding and signal processing will be carried out in dedicated microprocessors before the signals are passed to a microcomputer for on-line analysis. Recent tests have shown that an Olivetti M24 micro (an IBM compatible) running an 8-MHz clock with an 8087 coprocessor can analyze data at least as fast as the minicomputers presently being used with the Adelaide PR radar, and at a significantly lower cost. The processed winds data will be stored in nonvolatile CMOS RAM modules; about 0.5 to 1 Mbyte is required to store one week's information.
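
    The described processing order (phase-coded pulses, coherent integration, then decoding) can be sketched in a few lines; the 8-chip code, gate count and sample values below are invented purely to show the data flow, not the radar's actual code or parameters.

```cpp
// Data-flow sketch: coherently integrate (average) the received samples over repeated
// pulses for each range gate, then decode the phase code by sliding correlation with
// the transmitted code. The 8-chip code and sample values are invented for the example.
#include <iostream>
#include <vector>

// samples[pulse][gate]: average over pulses, gate by gate.
std::vector<double> coherent_integrate(const std::vector<std::vector<double>>& samples) {
    std::vector<double> acc(samples.front().size(), 0.0);
    for (const auto& pulse : samples)
        for (std::size_t g = 0; g < pulse.size(); ++g) acc[g] += pulse[g];
    for (double& v : acc) v /= static_cast<double>(samples.size());
    return acc;
}

// Correlate the integrated gates with the binary (+1/-1) phase code.
std::vector<double> decode(const std::vector<double>& gates, const std::vector<int>& code) {
    std::vector<double> out(gates.size() - code.size() + 1, 0.0);
    for (std::size_t lag = 0; lag < out.size(); ++lag)
        for (std::size_t c = 0; c < code.size(); ++c)
            out[lag] += code[c] * gates[lag + c];
    return out;
}

int main() {
    std::vector<int> code = {1, 1, 1, -1, -1, 1, -1, 1};   // illustrative 8-chip code
    // Two pulses of 16 range gates: the coded echo starts at gate 4, on a small offset.
    std::vector<std::vector<double>> samples(2, std::vector<double>(16, 0.1));
    for (auto& pulse : samples)
        for (std::size_t c = 0; c < code.size(); ++c) pulse[4 + c] += code[c];
    auto decoded = decode(coherent_integrate(samples), code);
    for (std::size_t lag = 0; lag < decoded.size(); ++lag)
        std::cout << "lag " << lag << ": " << decoded[lag] << '\n';   // peak at lag 4
    return 0;
}
```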

  8. The CMS Trigger Supervisor: Control and Hardware Monitoring System of the CMS Level-1 Trigger at CERN

    CERN Document Server

    Ildefons Magrans de Abril

    2008-01-01

    The experiments CMS (Compact Muon Solenoid) and ATLAS (A Toroidal LHC ApparatuS) at the Large Hadron Collider (LHC) are the greatest exponents of the rising complexity in High Energy Physics (HEP) data handling instrumentation. Tens of millions of readout channels, tens of thousands of hardware boards and the same order of connections are figures of merit. However, the hardware volume is not the only complexity dimension; the unprecedented large number of research institutes and scientists that form the international collaborations, and the long design, development, commissioning and operational phases are additional factors that must be taken into account. The Level-1 (L1) trigger decision loop is an excellent example of these difficulties. This system is based on a pipelined logic destined to analyze without deadtime the data from each LHC bunch crossing occurring every 25 ns, using special coarsely segmented trigger data from the detectors. The L1 trigger is responsible for reducing the rate of accepted crossings to...

  9. Report on data requirements and hardware selection for in-situ ball viscometer

    International Nuclear Information System (INIS)

    Shepard, C.L.

    1994-12-01

    The in-situ ball rheometer is designed to provide data concerning the rheological properties of the waste contained in tank 101-SY. It is imperative that the data collected and the results obtained are useful to the community presently concerned with the mitigation of the waste contained within this tank. To ensure that this objective is met, discussions were held with representatives of different groups in order to determine their data needs. This report is a synopsis of these discussions. Four separate groups were identified as potential users of the data. Persons contacted included Don Trent (Pacific Northwest Laboratory (PNL)), who is involved with Tempest modeling of the tank; Randy Marlow and John Strehlow (Westinghouse Hanford Company (WHC)), involved with structural analysis of the tank; Kemal Pasamehmetoglu and Cetin Unal (Los Alamos National Laboratory (LANL)), who are concerned with the safety analysis of activities performed within the tank; and Judith Bamberger, Paul Scott, and Gita Golcar (PNL) who are involved with the eventual retrieval of waste from the tank. Very specific questions were asked of these groups, including: From where in the tank are data needed? When should data be collected? In what manner are the data useful? What is the required accuracy of the data? Responses from each group are given

  10. Spacelab Level 4 Programmatic Implementation Assessment Study. Volume 2: Ground Processing requirements

    Science.gov (United States)

    1978-01-01

    Alternate ground processing options are summarized, including installation and test requirements for payloads, space processing, combined astronomy, and life sciences. The level 4 integration resource requirements are also reviewed for: personnel, temporary relocation, transportation, ground support equipment, and Spacelab flight hardware.

  11. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    International Nuclear Information System (INIS)

    Cieri, D.

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the “MP7”, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)
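
    A minimal software sketch of the r-phi Hough transform idea is shown below: every tracker stub (r, phi) votes, for each trial curvature bin, into a (curvature, phi0) accumulator via the approximately linear relation phi = phi0 + k·r, and accumulator cells with enough votes become track candidates. Bin counts, ranges and the vote threshold are invented for the example; the systolic-array and pipelined FPGA implementations mentioned in the abstract are far more elaborate.

```cpp
// Minimal r-phi Hough transform sketch: each stub (r, phi) votes into a
// (curvature, phi0) accumulator using phi = phi0 + k * r; cells with enough votes
// become track candidates. Binning, ranges and the threshold are illustrative only.
#include <iostream>
#include <vector>

struct Stub { double r; double phi; };   // radius and azimuth of a tracker stub

int main() {
    const int kCurvBins = 16, kPhiBins = 40;
    const double kCurvMax = 0.008, kPhiMax = 0.2;   // accumulator ranges (illustrative)

    // Five stubs consistent with phi = 0.0525 + 0.004 * r, plus one unrelated stub.
    std::vector<Stub> stubs = {{20, 0.1325}, {40, 0.2125}, {60, 0.2925},
                               {80, 0.3725}, {100, 0.4525}, {55, 0.020}};

    std::vector<int> acc(kCurvBins * kPhiBins, 0);
    for (const Stub& s : stubs) {
        for (int ic = 0; ic < kCurvBins; ++ic) {
            double k = kCurvMax * ic / kCurvBins;
            double phi0 = s.phi - k * s.r;
            if (phi0 < 0.0) continue;
            int ip = static_cast<int>(phi0 / kPhiMax * kPhiBins);
            if (ip < kPhiBins) ++acc[ic * kPhiBins + ip];
        }
    }

    const int kMinStubs = 4;   // minimum votes to accept a candidate
    for (int ic = 0; ic < kCurvBins; ++ic)
        for (int ip = 0; ip < kPhiBins; ++ip)
            if (acc[ic * kPhiBins + ip] >= kMinStubs)
                std::cout << "track candidate: curvature bin " << ic
                          << ", phi0 bin " << ip << '\n';
    return 0;
}
```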

  12. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    AUTHOR|(CDS)2090481

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the “MP7”, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough tran...

  13. Real-time Terrain Rendering using Smooth Hardware Optimized Level of Detail

    DEFF Research Database (Denmark)

    Larsen, Bent Dalgaard; Christensen, Niels Jørgen

    2003-01-01

    We present a method for real-time level of detail reduction that is able to display high-complexity polygonal surface data. A compact and efficient regular grid representation is used. The method is optimized for modern, low-end consumer 3D graphics cards. We avoid sudden changes of the geometry...

  14. Cascade Boosting-Based Object Detection from High-Level Description to Hardware Implementation

    Directory of Open Access Journals (Sweden)

    K. Khattab

    2009-01-01

    Full Text Available Object detection forms the first step of a larger setup for a wide variety of computer vision applications. The focus of this paper is the implementation of a real-time embedded object detection system while relying on a high-level description language such as SystemC. Boosting-based object detection algorithms are considered the fastest accurate object detection algorithms today. However, the implementation of a real-time solution for such algorithms is still a challenge. A new parallel implementation, which exploits the parallelism and the pipelining in these algorithms, is proposed. We show that using a SystemC description model paired with a mainstream automatic synthesis tool can lead to an efficient embedded implementation. We also discuss some of the trade-offs and considerations for this implementation to be effective. This implementation proves capable of achieving 42 fps for 320×240 images as well as bringing regularity to the processing time.
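
    The control flow that such boosting-based detectors parallelise and pipeline (stages of weak classifiers with early rejection) can be sketched as follows; the "features", weights and thresholds are invented placeholders rather than trained Haar-like features, so this only illustrates the cascade structure.

```cpp
// Toy cascade: each stage sums threshold-based weak classifiers and rejects the
// candidate window early if the boosted score misses the stage threshold. Features
// here are plain array values and all constants are invented; only the control flow
// of a boosting cascade is illustrated.
#include <iostream>
#include <vector>

struct Weak  { int feature_index; double threshold; double weight; };
struct Stage { std::vector<Weak> weak; double stage_threshold; };

bool passes_cascade(const std::vector<double>& features, const std::vector<Stage>& cascade) {
    for (const Stage& stage : cascade) {
        double score = 0.0;
        for (const Weak& w : stage.weak)
            if (features[w.feature_index] > w.threshold) score += w.weight;
        if (score < stage.stage_threshold) return false;   // early rejection
    }
    return true;                                           // survived every stage
}

int main() {
    std::vector<Stage> cascade = {
        {{{0, 0.5, 1.0}, {1, 0.2, 0.5}}, 1.0},
        {{{2, 0.7, 1.0}, {3, 0.4, 1.0}}, 1.5},
    };
    std::vector<std::vector<double>> windows = {
        {0.9, 0.3, 0.8, 0.6},   // passes both stages
        {0.9, 0.3, 0.1, 0.2},   // rejected at stage 2
        {0.1, 0.1, 0.9, 0.9},   // rejected at stage 1
    };
    for (std::size_t i = 0; i < windows.size(); ++i)
        std::cout << "window " << i << ": "
                  << (passes_cascade(windows[i], cascade) ? "object" : "rejected") << '\n';
    return 0;
}
```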

  15. FY1995 evolvable hardware chip; 1995 nendo shinkasuru hardware chip

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This project aims at the development of 'Evolvable Hardware' (EHW), which can adapt its hardware structure to the environment to attain better hardware performance, under the control of genetic algorithms. EHW is a key technology to explore new application areas requiring real-time performance and on-line adaptation. 1. Development of an EHW-LSI for function-level hardware evolution, which includes 15 DSPs in one chip. 2. Application of the EHW to practical industrial applications such as data compression, ATM control, and digital mobile communication. 3. Two patents: (1) the architecture and the processing method for a programmable EHW-LSI; (2) the method of data compression for loss-less data, using EHW. 4. The first international conference on evolvable hardware, the Intl. Conf. on Evolvable Systems (ICES96), was held by the authors. It was determined at ICES96 that ICES will be held every two years, alternating between Japan and Europe, and a new society has accordingly been established by the authors. (NEDO)

  16. Comparison of energy performance requirements levels

    DEFF Research Database (Denmark)

    Spiekman, Marleen; Thomsen, Kirsten Engelund; Rose, Jørgen

    This summary report provides a synthesis of the work within the EU SAVE project ASIEPI on developing a method to compare the energy performance (EP) requirement levels among the countries of Europe. Comparing EP requirement levels constitutes a major challenge. From the comparison of, for instance, the present Dutch requirement level (EPC) of 0.8 with the present Flemish level of E80, it can easily be seen that direct comparison is not possible. The conclusions and recommendations of the study are presented in part A. These constitute the most important result of the project. Part B gives an overview of all other project material related to that topic, which makes it easy to identify the most pertinent information. Part C lists the project partners and sponsors.

  17. Technical safety requirements control level verification

    International Nuclear Information System (INIS)

    STEWART, J.L.

    1999-01-01

    A Technical Safety Requirement (TSR) control level verification process was developed for the Tank Waste Remediation System (TWRS) TSRs at the Hanford Site in Richland, WA, at the direction of the U.S. Department of Energy, Richland Operations Office (RL). The objective of the effort was to develop a process to ensure that the TWRS TSR controls are designated and managed at the appropriate levels as Safety Limits (SLs), Limiting Control Settings (LCSs), Limiting Conditions for Operation (LCOs), Administrative Controls (ACs), or Design Features. The TSR control level verification process was developed and implemented by a team of contractor personnel with the participation of Fluor Daniel Hanford, Inc. (FDH), the Project Hanford Management Contract (PHMC) integrating contractor, and RL representatives. The team was composed of individuals with the following experience base: nuclear safety analysis; licensing; nuclear industry and DOE-complex TSR preparation/review experience; tank farm operations; FDH policy and compliance; and RL-TWRS oversight. Each TSR control level designation was completed utilizing TSR control logic diagrams and TSR criteria checklists based on DOE Orders, Standards, Contractor TSR policy, and other guidance. The control logic diagrams and criteria checklists were reviewed and modified by team members during team meetings. The TSR control level verification process was used to systematically evaluate 12 LCOs, 22 AC programs, and approximately 100 program key elements identified in the TWRS TSR document. The verification of each TSR control required a team consensus. Based on the results of the process, refinements were identified and the TWRS TSRs were modified as appropriate. A final report documenting key assumptions and the control level designation for each TSR control was prepared and is maintained on file for future reference. The results of the process were used as a reference in the RL review of the final TWRS TSRs and control suite. RL

  18. Technical safety requirements control level verification; TOPICAL

    International Nuclear Information System (INIS)

    STEWART, J.L.

    1999-01-01

    A Technical Safety Requirement (TSR) control level verification process was developed for the Tank Waste Remediation System (TWRS) TSRs at the Hanford Site in Richland, WA, at the direction of the U.S. Department of Energy, Richland Operations Office (RL). The objective of the effort was to develop a process to ensure that the TWRS TSR controls are designated and managed at the appropriate levels as Safety Limits (SLs), Limiting Control Settings (LCSs), Limiting Conditions for Operation (LCOs), Administrative Controls (ACs), or Design Features. The TSR control level verification process was developed and implemented by a team of contractor personnel with the participation of Fluor Daniel Hanford, Inc. (FDH), the Project Hanford Management Contract (PHMC) integrating contractor, and RL representatives. The team was composed of individuals with the following experience base: nuclear safety analysis; licensing; nuclear industry and DOE-complex TSR preparation/review experience; tank farm operations; FDH policy and compliance; and RL-TWRS oversight. Each TSR control level designation was completed utilizing TSR control logic diagrams and TSR criteria checklists based on DOE Orders, Standards, Contractor TSR policy, and other guidance. The control logic diagrams and criteria checklists were reviewed and modified by team members during team meetings. The TSR control level verification process was used to systematically evaluate 12 LCOs, 22 AC programs, and approximately 100 program key elements identified in the TWRS TSR document. The verification of each TSR control required a team consensus. Based on the results of the process, refinements were identified and the TWRS TSRs were modified as appropriate. A final report documenting key assumptions and the control level designation for each TSR control was prepared and is maintained on file for future reference. The results of the process were used as a reference in the RL review of the final TWRS TSRs and control suite. RL

  19. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  20. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans”, which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  1. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  2. Hardware Objects for Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Thalinger, Christian; Korsholm, Stephan

    2008-01-01

    Java, as a safe and platform independent language, avoids access to low-level I/O devices or direct memory access. In standard Java, low-level I/O is not a concern; it is handled by the operating system. However, in the embedded domain resources are scarce and a Java virtual machine (JVM) without an underlying middleware is an attractive architecture. When running the JVM on bare metal, we need access to I/O devices from Java; therefore we investigate a safe and efficient mechanism to represent I/O devices as first class Java objects, where device registers are represented by object fields. Access to those registers is safe as Java’s type system regulates it. The access is also fast as it is directly performed by the bytecodes getfield and putfield. Hardware objects thus provide an object-oriented abstraction of low-level hardware devices. As a proof of concept, we have implemented hardware objects...
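
    The paper's mechanism is Java-specific (hardware objects backed by getfield/putfield), but the underlying idea, device registers exposed as the fields of an object placed at the device's memory-mapped address, can be sketched analogously in C++. The register layout below is invented, and the "device" is a plain in-memory struct so the example runs on a host; on real hardware the object would sit at the peripheral's base address instead.

```cpp
// C++ analogue of the hardware-object idea: a device's registers appear as the
// fields of an object. The UART register layout is invented for illustration, and a
// plain struct instance stands in for the memory-mapped peripheral so this runs on a
// host machine.
#include <cstdint>
#include <cstdio>

struct UartRegs {
    volatile uint32_t status;   // bit 0: transmit-ready flag (invented layout)
    volatile uint32_t data;     // write: byte to transmit
};

void put_char(UartRegs* uart, char c) {
    while ((uart->status & 0x1u) == 0) { /* spin until transmit-ready */ }
    uart->data = static_cast<uint8_t>(c);
}

int main() {
    UartRegs fake_device{};                 // stands in for a memory-mapped device
    fake_device.status = 0x1;               // pretend the device is always ready
    for (char c : {'h', 'i', '\n'}) put_char(&fake_device, c);
    std::printf("last value written to the data register: %u\n",
                static_cast<unsigned>(fake_device.data));
    return 0;
}
```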

  3. PL-DA-PS: A hardware architecture and software toolbox for neurophysiology requiring complex visual stimuli and online behavioral control

    Directory of Open Access Journals (Sweden)

    Kyler M. Eastman

    2012-01-01

    Full Text Available Neurophysiological studies in awake, behaving primates (both human and nonhuman primates) have focused with increasing scrutiny on the temporal relationship between neural signals and behaviors. Consequently, laboratories are often faced with the problem of developing experimental equipment that can support data recording with high temporal precision and also be flexible enough to accommodate a wide variety of experimental paradigms. To this end, we have developed an architecture that integrates several modern pieces of equipment, but still grants experimenters a high degree of flexibility. Our hardware architecture and software tools take advantage of three popular and powerful technologies: the PLexon apparatus for neurophysiological recordings (Plexon, Inc., Dallas, TX), a DAtapixx box (Vpixx Technologies, Saint-Bruno, QC, Canada) for analog, digital, and video signal input-output control, and the PSychtoolbox MATLAB toolbox for stimulus generation (Brainard, 1997). The PL-DA-PS (Platypus) system is designed to support the study of the visual systems of awake, behaving primates during multi-electrode neurophysiological recordings, but can be easily applied to other related domains. Despite its wide range of capabilities and support for cutting-edge video displays and neural recording systems, the PLDAPS system is simple enough for someone with basic MATLAB programming skills to design their own experiments.

  4. The VMTG Hardware Description

    CERN Document Server

    Puccio, B

    1998-01-01

    The document describes the hardware features of the CERN Master Timing Generator. This board is the common platform for the transmission of General Timing Machine required by the CERN accelerators. In addition, the paper shows the various jumper options to customise the card, which is compliant with the VMEbus standard.

  5. Greater-than-Class C low-level waste characterization. Appendix G: Evaluation of potential for greater-than-Class C classification of irradiated hardware generated by utility-operated reactors

    International Nuclear Information System (INIS)

    Cline, J.E.

    1991-08-01

    This study compiles and evaluates data from many sources to expand a base of data from which to estimate the activity concentrations and volumes of greater-than-Class C low-level waste that the Department of Energy will receive from the commercial power industry. Sources of these data include measurements of irradiated hardware made by or for the utilities that was classified for disposal in commercial burial sites, measurements of neutron flux in the appropriate regions of the reactor pressure vessel, analyses of elemental constituents of the particular structural material used for the components, and the activation analysis calculations done for hardware. Evaluations include results and assumptions in the activation analyses. Sections of this report and the appendices present interpretation of data and the classification definitions and requirements

  6. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally demanding, because it requires a number of multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level, based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal 4×4 and 8×8 matrices, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain, without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of
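
    To make the "additions and shifts only" point concrete, the sketch below applies a 4-point Walsh-Hadamard butterfly to the rows and columns of a 4×4 residual block and sums the squared coefficients, undoing the transform gain with a shift. It only illustrates the kind of low-complexity transform involved: the block values are arbitrary, and the paper's actual 4×4/8×8 matrices and its quantization-aware rate and distortion estimates are not reproduced here.

```cpp
// Illustrative 4-point Walsh-Hadamard butterfly (additions/subtractions only) applied
// to rows and columns of a 4x4 residual block; the squared-coefficient sum, divided by
// the transform gain with a shift, gives a transform-domain distortion figure. In a
// real encoder the coefficients would be quantized before measuring the error.
#include <array>
#include <cstdint>
#include <iostream>

using Block4 = std::array<std::array<int32_t, 4>, 4>;

// In-place 4-point WHT butterfly on one row or column (elements passed by reference).
static void wht4(int32_t& a, int32_t& b, int32_t& c, int32_t& d) {
    int32_t s0 = a + c, s1 = b + d, d0 = a - c, d1 = b - d;   // stage 1
    a = s0 + s1; b = s0 - s1; c = d0 + d1; d = d0 - d1;       // stage 2
}

int64_t wht_energy(Block4 r) {
    for (int i = 0; i < 4; ++i) wht4(r[i][0], r[i][1], r[i][2], r[i][3]);  // rows
    for (int j = 0; j < 4; ++j) wht4(r[0][j], r[1][j], r[2][j], r[3][j]);  // columns
    int64_t sum = 0;
    for (const auto& row : r)
        for (int32_t v : row) sum += static_cast<int64_t>(v) * v;
    return sum >> 4;   // undo the 2-D transform gain of 16
}

int main() {
    Block4 residual = {{{ 3, -1,  0,  2},
                        {-2,  4,  1,  0},
                        { 0,  1, -3,  2},
                        { 1,  0,  2, -1}}};
    std::cout << "residual energy in the transform domain: " << wht_energy(residual) << '\n';
    return 0;
}
```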

  7. PLDAPS: A Hardware Architecture and Software Toolbox for Neurophysiology Requiring Complex Visual Stimuli and Online Behavioral Control.

    Science.gov (United States)

    Eastman, Kyler M; Huk, Alexander C

    2012-01-01

    Neurophysiological studies in awake, behaving primates (both human and non-human) have focused with increasing scrutiny on the temporal relationship between neural signals and behaviors. Consequently, laboratories are often faced with the problem of developing experimental equipment that can support data recording with high temporal precision and also be flexible enough to accommodate a wide variety of experimental paradigms. To this end, we have developed a MATLAB toolbox that integrates several modern pieces of equipment, but still grants experimenters the flexibility of a high-level programming language. Our toolbox takes advantage of three popular and powerful technologies: the Plexon apparatus for neurophysiological recordings (Plexon, Inc., Dallas, TX, USA), a Datapixx peripheral (Vpixx Technologies, Saint-Bruno, QC, Canada) for control of analog, digital, and video input-output signals, and the Psychtoolbox MATLAB toolbox for stimulus generation (Brainard, 1997; Pelli, 1997; Kleiner et al., 2007). The PLDAPS ("Platypus") system is designed to support the study of the visual systems of awake, behaving primates during multi-electrode neurophysiological recordings, but can be easily applied to other related domains. Despite its wide range of capabilities and support for cutting-edge video displays and neural recording systems, the PLDAPS system is simple enough for someone with basic MATLAB programming skills to design their own experiments.

  8. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  9. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  10. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  11. Combining on-hardware prototyping and high-level simulation for DSE of multi-ASIP systems

    NARCIS (Netherlands)

    Meloni, P.; Pomata, S.; Raffo, L.; Piscitelli, R.; Pimentel, A.D.; McAllister, J.; Bhattacharyya, S.

    2012-01-01

    Modern heterogeneous multi-processor embedded systems very often expose a large number of degrees of freedom to the designer, related to the application partitioning/mapping and to the component- and system-level architecture composition. The number is even larger when the designer targets systems

  12. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
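
    To make the preprocessor/JIT idea in the closing sentences concrete, here is a small, hypothetical Python sketch: device characteristics (mocked here; a real host program would query the OpenCL runtime) are mapped to preprocessor defines that would be passed as JIT build options, so a single kernel source can guard optional, hardware-specific optimizations with #ifdef blocks. The device names, properties, and macro names are illustrative, not taken from the paper.

```python
# Hypothetical device descriptions; a real host program would obtain these
# values from the OpenCL runtime (e.g. via clGetDeviceInfo).
DEVICES = {
    "discrete_gpu": {"local_mem_kb": 48, "preferred_vector_width": 1, "fast_fma": True},
    "embedded_gpu": {"local_mem_kb": 16, "preferred_vector_width": 4, "fast_fma": False},
}

def build_options(dev):
    """Map device characteristics to preprocessor defines.  The resulting
    string would be passed to the OpenCL JIT compiler (the `options` argument
    of clBuildProgram, or program.build(options=...) in pyopencl), so one
    kernel source can guard optional optimizations with #ifdef blocks."""
    opts = []
    if dev["local_mem_kb"] >= 32:
        opts.append("-DUSE_LOCAL_TILES")      # exploit explicit memory locality
    if dev["preferred_vector_width"] > 1:
        opts.append(f"-DVEC_WIDTH={dev['preferred_vector_width']}")  # explicit vector ops
    if dev["fast_fma"]:
        opts.append("-DUSE_FMA")
    opts.append("-cl-fast-relaxed-math")      # static optimization hint to the compiler
    return " ".join(opts)

for name, dev in DEVICES.items():
    print(f"{name}: {build_options(dev)}")
```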

  13. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Full Text Available Hardware security has become a hot topic recently, with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined this area better understand the challenges and tasks within the hardware security domain, and to help both academia and industry investigate countermeasures and solutions to hardware security problems, we introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics are also clearly depicted, and future trends elaborated, making this survey paper a good reference for continuing research efforts in this area.

  14. A Compositional Knowledge Level Process Model of Requirements Engineering

    NARCIS (Netherlands)

    Herlea, D.E.; Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2002-01-01

    In current literature few detailed process models for Requirements Engineering are presented: usually high-level activities are distinguished, without a more precise specification of each activity. In this paper the process of Requirements Engineering has been analyzed using knowledge-level

  15. Cost-optimal levels for energy performance requirements

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    2011-01-01

    The CA conducted a study on experiences and challenges for setting cost optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost optimal levels of minimum energy performance requirements. In addition to the summary report released in August 2011, the full detailed report on this study is now also made available, just as the EC is about to publish its proposed Regulation for MS to apply in their process to update national building requirements.

  16. Open Hardware Business Models

    Directory of Open Access Journals (Sweden)

    Edy Ferreira

    2008-04-01

    Full Text Available In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  17. Open Hardware Business Models

    OpenAIRE

    Edy Ferreira

    2008-01-01

    In the September issue of the Open Source Business Resource, Patrick McNamara, president of the Open Hardware Foundation, gave a comprehensive introduction to the concept of open hardware, including some insights about the potential benefits for both companies and users. In this article, we present the topic from a different perspective, providing a classification of market offers from companies that are making money with open hardware.

  18. Operating safety requirements for the intermediate level liquid waste system

    International Nuclear Information System (INIS)

    1980-07-01

    The operation of the Intermediate Level Liquid Waste (ILW) System, which is described in the Final Safety Analysis, consists of two types of operations, namely: (1) the operation of a tank farm which involves the storage and transportation through pipelines of various radioactive liquids; and (2) concentration of the radioactive liquids by evaporation including rejection of the decontaminated condensate to the Waste Treatment Plant and retention of the concentrate. The following safety requirements in regard to these operations are presented: safety limits and limiting control settings; limiting conditions for operation; and surveillance requirements. Staffing requirements, reporting requirements, and steps to be taken in the event of an abnormal occurrence are also described

  19. Constructing Hardware in a Scale Embedded Language

    Energy Technology Data Exchange (ETDEWEB)

    2014-08-21

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  20. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  1. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    International audience; Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propo...

  2. Optimal Multi-Level Lot Sizing for Requirements Planning Systems

    OpenAIRE

    Earle Steinberg; H. Albert Napier

    1980-01-01

    The widespread use of advanced information systems such as Material Requirements Planning (MRP) has significantly altered the practice of dependent demand inventory management. Recent research has focused on the development of multi-level lot sizing heuristics for such systems. In this paper, we develop an optimal procedure for the multi-period, multi-product, multi-level lot sizing problem by modeling the system as a constrained generalized network with fixed charge arcs and side constraints. T...

  3. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the 1 MHz limitation. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy data or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated in the experiment control system. The architecture, ...

  4. Hardware standardization for embedded systems

    International Nuclear Information System (INIS)

    Sharma, M.K.; Kalra, Mohit; Patil, M.B.; Mohanty, Ashutos; Ganesh, G.; Biswas, B.B.

    2010-01-01

    Reactor Control Division (RCnD) has been one of the main designers of safety and safety related systems for power reactors. These systems have been built using in-house developed hardware. Since the present set of hardware was designed long ago, a need was felt to design a new family of hardware boards. A Working Group on Electronics Hardware Standardization (WG-EHS) was formed with an objective to develop a family of boards, which is general purpose enough to meet the requirements of the system designers/end users. RCnD undertook the responsibility of design, fabrication and testing of boards for embedded systems. VME and a proprietary I/O bus were selected as the two system buses. The boards have been designed based on present day technology and components. The intelligence of these boards has been implemented on FPGA/CPLD using VHDL. This paper outlines the various boards that have been developed with a brief description. (author)

  5. Hardware description languages

    Science.gov (United States)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASIC's). However, VHDL is rapidly gaining in popularity.

  6. ZEUS hardware control system

    Science.gov (United States)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-12-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.

  7. ZEUS hardware control system

    International Nuclear Information System (INIS)

    Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.

    1989-01-01

    The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users. (orig.)

  8. Regulatory requirements on PSA level 2: Review, aspects and applications

    International Nuclear Information System (INIS)

    Husarcek, J.

    2003-01-01

    The general requirements concerning utility obligations, probabilistic safety criteria (CDF should not exceed 1.0E-4 per reactor-year and LERF should not exceed 1.0E-5 per reactor-year), documentation and results, living PSA requirements and the major steps in level 2 PSA are presented. PSA developments in Slovakia, the collection and assembly of information, plant damage states, containment performance and failure modes, severe accident progression analyses, and containment failure modes and source terms as part of the performed level 2 PSA are discussed. The PSA applications in design and operation evaluation and in support of plant upgrades and modifications are also described. Finally, the following conclusion is made: more extensive PSA application requires fostering the exchange of experience and communication between PSA specialists, non-PSA engineers, designers, and the regulatory body staff responsible for safety assessment, inspection and enforcement.

  9. Blood leptin levels and erythropoietin requirement in Iranian hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Rahimi A

    2008-12-01

    Full Text Available Background: Anemia is a common complication accompanied by high morbidity and mortality in hemodialysis patients. Considering the fact that reduced erythropoietin (EPO) synthesis is the main cause of uremic anemia, receiving recombinant human erythropoietin (rHuEPO) can improve the condition in these patients. Some hemodialysis patients, however, have acceptable hemoglobin levels without any need for EPO. Higher BMI, higher albumin and leptin plasma levels and longer durations of hemodialysis are possible factors contributing to the reduced need for rHuEPO in these patients. The present study was designed to assess the relationship between the plasma levels of leptin and the reduced EPO need. Methods: Fifty eligible hemodialysis patients with hemoglobin levels higher than 11 mg/dl were enrolled in this cross-sectional study. Information on age, sex, hemodialysis duration and the cause of renal dysfunction was extracted from the patient files. The baseline plasma levels of leptin and albumin were measured. The patients' BMI and the weekly need for rHuEPO were also calculated. Results: There was no correlation between the weekly need for rHuEPO and sex, BMI, the cause of renal dysfunction or the plasma levels of albumin and leptin; it was, however, related to age and the duration of dialysis. While age negatively influences the weekly need, the duration of dialysis has a positive effect on it. Conclusion: The plasma levels of leptin are not directly correlated with the required amounts of rHuEPO, indicating that leptin is not an effective factor in erythropoiesis. Conversely, older age and shorter hemodialysis durations are accompanied by a reduced need for rHuEPO.

  10. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  11. Transmission delays in hardware clock synchronization

    Science.gov (United States)

    Shin, Kang G.; Ramanathan, P.

    1988-01-01

    Various methods, both with software and hardware, have been proposed to synchronize a set of physical clocks in a system. Software methods are very flexible and economical but suffer an excessive time overhead, whereas hardware methods require no time overhead but are unable to handle transmission delays in clock signals. The effects of nonzero transmission delays in synchronization have been studied extensively in the communication area in the absence of malicious or Byzantine faults. The authors show that it is easy to incorporate the ideas from the communication area into the existing hardware clock synchronization algorithms to take into account the presence of both malicious faults and nonzero transmission delays.
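
    As a generic illustration of this idea (not the authors' specific algorithm; the per-link delay estimates and the fault bound f are assumptions), the Python sketch below first compensates each received clock reading by an estimated transmission delay and then combines the readings with a fault-tolerant midpoint that discards the f most extreme values, so that up to f malicious clocks cannot pull the reference arbitrarily.

```python
def corrected_readings(raw_readings, link_delays):
    """Add the estimated transmission delay of each link to the clock value
    received on it, so a reading reflects the sender's clock 'now'."""
    return [r + d for r, d in zip(raw_readings, link_delays)]

def fault_tolerant_reference(readings, f):
    """Fault-tolerant midpoint: discard the f smallest and f largest corrected
    readings (possibly supplied by malicious/Byzantine clocks) and take the
    midpoint of the surviving extremes.  Requires len(readings) > 2 * f."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f readings to tolerate f faults")
    s = sorted(readings)
    trimmed = s[f:len(s) - f] if f > 0 else s
    return (trimmed[0] + trimmed[-1]) / 2.0

# Example: five clocks, one of them faulty, with per-link delay estimates.
raw = [100.2, 99.8, 100.1, 250.0, 100.0]   # 250.0 comes from a faulty clock
delays = [0.3, 0.1, 0.2, 0.1, 0.4]         # estimated transmission delays
reference = fault_tolerant_reference(corrected_readings(raw, delays), f=1)
print("reference time:", reference)        # each clock would slew toward this value
```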

  12. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  13. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  14. RRFC hardware operation manual

    International Nuclear Information System (INIS)

    Abhold, M.E.; Hsue, S.T.; Menlove, H.O.; Walton, G.

    1996-05-01

    The Research Reactor Fuel Counter (RRFC) system was developed to assay the 235U content in spent Material Test Reactor (MTR) type fuel elements underwater in a spent fuel pool. RRFC assays the 235U content using active neutron coincidence counting and also incorporates an ion chamber for gross gamma-ray measurements. This manual describes the RRFC hardware, including detectors, electronics, and performance characteristics.

  15. Requirements for Participatory Framework on Governmental Policy Level

    Directory of Open Access Journals (Sweden)

    Birutė PITRĖNAITĖ

    2012-06-01

    Full Text Available The article seeks to specify the requirements of the framework for public participation in policy making at the governmental level, aiming to elaborate a substantial content of the participatory policy. The research methodology engages both qualitative and quantitative approaches based on document analysis and interviews. We analysed a range of documents, issued by the Ministry of Education and Science of the Republic of Lithuania, where participatory groups are nominated for the annual terms of 2007 and 2010. The results of the research testify that, notwithstanding the considerable number of participatory facts, public administrators hold more than half of the places in the participatory groups. Stakeholders other than public administrators are considered to be consultants rather than partners in policy development. We suggest that for a substantial, effective and efficient participation framework, several requirements should be met, including a correct arena for the expression of stakes; completeness of stake representation; balanced stake representation; sensitivity to research-based evidence; and monitoring and evaluation of participation quality.

  16. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated the hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems, focusing on the microprocessor and the communication interface, and repeated this for analog systems, focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS.

  17. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  18. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists
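
    A simplified CPU analogue of the absorption-only regime described above can be written in a few lines of Python/NumPy: parallel rays are integrated through a voxel grid of attenuation coefficients and the transmitted intensity follows the Beer-Lambert law. This is only a conceptual sketch of what the hardware-accelerated projection algorithms compute; the axis-aligned rays and uniform voxels are simplifying assumptions.

```python
import numpy as np

def simulated_radiograph(mu, dz, i0=1.0):
    """Absorption-only radiograph of a voxel grid.

    mu : 3-D array of linear attenuation coefficients indexed [x, y, z];
         parallel rays travel along the z axis.
    dz : path length through one voxel along z.
    Returns the transmitted intensity per (x, y) detector pixel using the
    Beer-Lambert law: I = I0 * exp(-sum(mu) * dz).
    """
    optical_depth = mu.sum(axis=2) * dz     # line integral along each ray
    return i0 * np.exp(-optical_depth)

# Toy phantom: a dense cube embedded in a weakly attenuating volume.
mu = np.full((64, 64, 64), 0.01)
mu[24:40, 24:40, 24:40] = 0.2
image = simulated_radiograph(mu, dz=0.1)
print(image.shape, float(image.min()), float(image.max()))
```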

  19. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  20. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book cover the system board and memory, performance, the system timer and system clock and their specifications, the coprocessor (programming interface and hardware interface), the power supply (inputs and outputs, protection of the DC outputs, and the Power Good signal), the 84-key and 101/102-key keyboards, the BIOS system, the 80286 instruction set and the 80287 coprocessor, characters, keystrokes and colors, communication and compatibility of the IBM personal computer with respect to applications, multitasking, and code for distinguishing the system.

  1. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    In recent years it has become obvious that the performance of general-purpose processors is having trouble meeting the requirements of today's high-performance computing applications. This is partly due to the relatively high power consumption, compared to the performance, of general-purpose processors, which has made hardware accelerators an essential part of several datacentres and the world's fastest supercomputers. In this work, two different hardware accelerators were implemented on a Xilinx Zynq SoC platform mounted on the ZedBoard. The two accelerators are based on two different ... and were evaluated against the ARM Cortex-A9 processor featured on the Zynq SoC with regard to execution time, power dissipation and energy consumption. The implementation of the hardware accelerators was successful. Use of the Monte Carlo processor resulted in a significant increase in performance. The Telco hardware accelerator ...

  2. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239) and "ICER-3D Hyperspectral Image Compression Software" (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  3. Site safety requirements for high level waste disposal

    International Nuclear Information System (INIS)

    Chen Weiming; Wang Ju

    2006-01-01

    This paper outlines the content, status and trends of the site safety requirements of the International Atomic Energy Agency, the United States, France, Sweden, Finland and Japan. Site safety requirements are usually expressed as advantageous versus disadvantageous conditions, and potentially advantageous versus disadvantageous conditions, in aspects of geohydrology, geochemistry, lithology, climate, human intrusion, etc. The framework and steps of a study of site safety requirements for China are discussed from the viewpoint of systems science. (authors)

  4. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'. IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system. For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge: Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.); Training of personnel designated by Division Leade...

  5. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    Science.gov (United States)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink® library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
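
    The two hardware effects singled out above, data processing (quantization) in the transducer and transport over the controller network, can be illustrated with a minimal Python sketch such as the following class; it is not part of the C-MAPSS40k or Simulink library, and the class name, word length, and one-update transport delay are assumptions made for illustration.

```python
class SmartSensorModel:
    """Toy smart-transducer model: the measurement is quantized to a fixed
    word length over a known range and delivered one network update late,
    mimicking transducer data processing and network transport.  The class
    name and parameters are illustrative only."""

    def __init__(self, lo, hi, bits=12):
        self.lo, self.hi = lo, hi
        self.levels = 2 ** bits - 1
        self._in_flight = None          # value waiting to cross the network

    def _quantize(self, x):
        x = min(max(x, self.lo), self.hi)
        code = round((x - self.lo) / (self.hi - self.lo) * self.levels)
        return self.lo + code * (self.hi - self.lo) / self.levels

    def sample(self, true_value):
        """Called once per controller update; returns the previous update's
        quantized reading (one-sample transport delay)."""
        delivered = self._in_flight
        self._in_flight = self._quantize(true_value)
        return delivered if delivered is not None else self._in_flight

sensor = SmartSensorModel(lo=0.0, hi=2000.0, bits=12)   # e.g. a temperature channel
for step, truth in enumerate([500.0, 502.3, 505.1]):
    print(step, sensor.sample(truth))
```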

  6. 42 CFR 409.31 - Level of care requirement.

    Science.gov (United States)

    2010-10-01

    ... services means services that: (1) Are ordered by a physician; (2) Require the skills of technical or professional personnel such as registered nurses, licensed practical (vocational) nurses, physical therapists...

  7. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  8. Open hardware for open science

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  9. Full Wafer Redistribution and Wafer Embedding as Key Technologies for a Multi-Scale Neuromorphic Hardware Cluster

    OpenAIRE

    Zoschke, Kai; Güttler, Maurice; Böttcher, Lars; Grübl, Andreas; Husmann, Dan; Schemmel, Johannes; Meier, Karlheinz; Ehrmann, Oswin

    2018-01-01

    Together with the Kirchhoff-Institute for Physics (KIP), the Fraunhofer IZM has developed a full wafer redistribution and embedding technology as a base for a large-scale neuromorphic hardware system. The paper will give an overview of the neuromorphic computing platform at the KIP and the associated hardware requirements which drove the described technological developments. In the first phase of the project standard redistribution technologies from wafer level packaging were adapted to enable a ...

  10. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides a controlled purge to the SLS rocket and the Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), a fixed-length identifier used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  11. Organisational and information requirements at the municipal level

    International Nuclear Information System (INIS)

    Mandos, J.L.M.

    1993-01-01

    There are three types of government in The Netherlands: Kingdom, province and municipality. The nuclear power station in Borsele (Pressurised Water Reactor) has a capacity of 488 MW. The basis of disaster relief in The Netherlands is in the hands of the municipality. In 1991 a new disaster relief plan for the nuclear power station in Borsele was prepared. A large scale national exercise took place in November of that year. This disaster relief plan has three levels: local, regional, national. At the regional level seven mayors co-operate in a management team. At the national level there is a management team of ministers. At the regional level the co-ordination of the advice and operational services is executed under the direction of an operational leader (leader of the regional fire-brigade). The advice to the management team of mayors is in the hands of the operational leader. The public generally has a distorted idea about nuclear power and the effects of any possible disaster. This can be a great problem when giving information to the public during a disaster. Reliable information, given in time, is indispensable. Sufficient statistical information must be available. Models, advance calculations and forecasts can be of great help in decision making from the beginning of a serious event. (author)

  12. Estimating Bandwidth Requirements using Flow-level Measurements

    NARCIS (Netherlands)

    Bruyère, P.; de Oliveira Schmidt, R.; Sperotto, Anna; Sadre, R.; Pras, Aiko

    Bandwidth provisioning is an important task of network management, and it is done with the aim of meeting desired levels of quality of service. Current practices of provisioning are mostly based on rules of thumb and use coarse traffic measurements that may lead to problems of under- and over-dimensioning of

  13. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  14. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables...... worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java....

  15. HARDWARE TROJAN IDENTIFICATION AND DETECTION

    OpenAIRE

    Samer Moein; Fayez Gebali; T. Aaron Gulliver; Abdulrahman Alkandari

    2017-01-01

    The majority of techniques developed to detect hardware trojans are based on specific attributes. Further, the ad hoc approaches employed to design methods for trojan detection are largely ineffective. Hardware trojans have a number of attributes which can be used to systematically develop detection techniques. Based on this concept, a detailed examination of current trojan detection techniques and the characteristics of existing hardware trojans is presented. This is used to dev...

  16. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experimental results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss our future approaches for reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  17. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  18. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    Science.gov (United States)

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.
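
    The following toy Python sketch illustrates the synaptic time-multiplexing idea behind such a programmable front-end: a dense connectivity matrix is split across time slots so that each neuron needs only a limited number of physical input switches, and the per-slot switch settings are emitted as a table to be stored in digital memory. It is an illustrative sketch under these assumptions, not the compiler described in the paper.

```python
import numpy as np

def compile_stm_switch_states(weights, fanin_per_slot):
    """Toy 'compiler' for synaptic time-multiplexing (STM): each neuron has
    only `fanin_per_slot` physical input switches, so its incoming synapses
    are spread over time slots.  Returns, per slot, the list of
    (pre, post, weight) switch states to be stored in digital memory and
    replayed by the hardware slot by slot."""
    n_pre, n_post = weights.shape
    per_post = {p: list(np.nonzero(weights[:, p])[0]) for p in range(n_post)}
    slot_counts = [(len(v) + fanin_per_slot - 1) // fanin_per_slot
                   for v in per_post.values()]
    n_slots = max(slot_counts) if slot_counts else 0
    memory = []
    for s in range(n_slots):
        slot = []
        for post, pres in per_post.items():
            for pre in pres[s * fanin_per_slot:(s + 1) * fanin_per_slot]:
                slot.append((int(pre), post, float(weights[pre, post])))
        memory.append(slot)
    return memory

# Example: 4 presynaptic neurons fully connected to 2 postsynaptic neurons,
# with only 2 physical input switches per neuron -> 2 time slots.
w = 0.5 * np.ones((4, 2))
for i, slot in enumerate(compile_stm_switch_states(w, fanin_per_slot=2)):
    print("slot", i, slot)
```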

  19. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware component of a mobile device are described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  20. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  1. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft-error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  2. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource saving, reduction of operational expenditures, safety enhancement, and providing comfortable working and leisure conditions. Automation of the corresponding engineering systems for illumination, climate control and security, as well as of communication systems and networks, via contemporary technologies (e.g., IoT, the Internet of Things) poses a significant challenge related to the storage and processing of an overwhelmingly massive volume of data whose utilization extent is extremely low nowadays. Since a building's lifespan is large and exceeds the lifespan of the codes and standards that take into account the requirements of safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of the rational use of large data at the stage of information modeling. Research objectives: increase the efficiency of managing smart building hardware subsystems on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to the class of man-machine systems, the cybernetic approach is considered the basic method for the design and research of the information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and the architectural principles of organization of information management systems. Results: a flexible architecture of an information system for the management of smart house hardware subsystems has been synthesized. This architecture encompasses several levels (client level, application level and data level) as well as three layers (presentation layer, actuating device layer and analytics layer). The problem of the growing volume of information processed by the real-time message controller is addressed by the employment of sensors and actuating mechanisms with configurable

  3. A Scalable Approach for Hardware Semiformal Verification

    OpenAIRE

    Grimm, Tomas; Lettnin, Djones; Hübner, Michael

    2018-01-01

    The current verification flow of complex systems uses different engines synergistically: virtual prototyping, formal verification, simulation, emulation and FPGA prototyping. However, none is able to verify a complete architecture. Furthermore, hybrid approaches aiming at complete verification use techniques that lower the overall complexity by increasing the abstraction level. This work focuses on the verification of complex systems at the RT level to handle the hardware peculiarities. Our r...

  4. Expert System analysis of non-fuel assembly hardware and spent fuel disassembly hardware: Its generation and recommended disposal

    International Nuclear Information System (INIS)

    Williamson, D.A.

    1991-01-01

    Almost all of the effort being expended on radioactive waste disposal in the United States is focused on the disposal of spent nuclear fuel, with little consideration for other materials that will have to be disposed of in the same facilities. One area of radioactive waste that has not been addressed adequately, because it is considered a secondary part of the waste issue, is the disposal of the various non-fuel-bearing components of the reactor core. These hardware components fall somewhat arbitrarily into two categories: Non-Fuel Assembly (NFA) hardware and Spent Fuel Disassembly (SFD) hardware. This work provides a detailed examination of the generation and disposal of NFA hardware and SFD hardware by the nuclear utilities of the United States as it relates to the Civilian Radioactive Waste Management Program. All available sources of data on NFA and SFD hardware are analyzed, with particular emphasis given to the Characteristics Data Base developed by Oak Ridge National Laboratory and the characterization work performed by Pacific Northwest Laboratories and Rochester Gas & Electric. An Expert System developed as a portion of this work is used to assist in the prediction of the quantities of NFA hardware and SFD hardware that will be generated by the United States' utilities. Finally, the hardware waste management practices of the United Kingdom, France, Germany, Sweden, and Japan are studied for possible application to the disposal of domestic hardware wastes. As a result of this work, a general classification scheme for NFA and SFD hardware was developed. Only NFA and SFD hardware constructed of zircaloy and experiencing a burnup of less than 70,000 MWD/MTIHM, and PWR control rods constructed of stainless steel, are considered Low-Level Waste. All other hardware is classified as Greater-Than-Class-C waste.
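
    The classification rule summarized above lends itself to a simple decision function. The sketch below is a hypothetical illustration of that rule only; the field names and the handling of the burnup threshold are assumptions and do not reproduce the Expert System described in the record.

```python
# Hypothetical sketch of the hardware waste classification rule stated above.
# Field names and the control-rod flag are illustrative assumptions.

def classify_hardware(material: str, burnup_mwd_per_mtihm: float,
                      is_pwr_control_rod: bool = False) -> str:
    """Return 'LLW' (Low-Level Waste) or 'GTCC' (Greater-Than-Class-C)."""
    zircaloy_llw = material.lower() == "zircaloy" and burnup_mwd_per_mtihm < 70_000
    control_rod_llw = is_pwr_control_rod and material.lower() == "stainless steel"
    return "LLW" if (zircaloy_llw or control_rod_llw) else "GTCC"

# Example: a zircaloy component at 55,000 MWD/MTIHM vs. an Inconel component.
print(classify_hardware("zircaloy", 55_000))   # -> LLW
print(classify_hardware("inconel", 40_000))    # -> GTCC
```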

  5. COLD-WORKED HARDWARE

    Directory of Open Access Journals (Sweden)

    N. M. Strizhak

    2007-01-01

    Different types of cold-worked hardware are examined in the article. The necessity of developing this type of product in the Republic of Belarus, driven by market requirements, is shown. Particular emphasis is placed on methods for increasing the plasticity of cold-worked hardware from the usual mills of RUP and CIS countries.

  6. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project is aimed to replace all DAS software for NASA s Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware from each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used in E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, then the driver for those three stands should be the same and updated collectively.
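
    The driver-package approach described above is essentially a plugin lookup keyed by test stand. The sketch below is only an illustration of that idea; the stand names are taken from the record, but the registry structure, class, and function names are assumptions, not NDAS code.

```python
# Hypothetical sketch of selecting a per-stand driver package, as described above.
# The stand names come from the record; everything else is illustrative.

class PrestonSignalConditionerDriver:
    """Shared driver for the Preston 8300AU used at A1, A2, and B2 (per the record)."""
    def read_channel(self, channel: int) -> float:
        raise NotImplementedError("hardware access not shown in this sketch")

# One shared driver instance can back several stands' packages.
_preston = PrestonSignalConditionerDriver()

DRIVER_PACKAGES = {
    "A1": {"signal_conditioner": _preston},
    "A2": {"signal_conditioner": _preston},
    "B2": {"signal_conditioner": _preston},
    "E3": {"signal_conditioner": None},  # E3-specific drivers would be registered here
}

def load_driver_package(stand: str) -> dict:
    """Point the DAS software at the driver package for its test stand."""
    try:
        return DRIVER_PACKAGES[stand]
    except KeyError:
        raise ValueError(f"no driver package defined for stand {stand!r}")
```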

  7. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  8. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  9. Multi-User Hardware Solutions to Combustion Science ISS Research

    Science.gov (United States)

    Otero, Angel M.

    2001-01-01

    In response to the budget environment, and to expand on the International Space Station (ISS) Fluids and Combustion Facility (FCF) Combustion Integrated Rack (CIR) common hardware approach, the NASA Combustion Science Program shifted focus in 1999 from Principal Investigator (PI)-specific hardware to multi-user 'mini-facilities'. These mini-facilities take the CIR common hardware philosophy to the next level. The approach that was developed rearranged all the investigations in the program into sub-fields of research. Common requirements within these sub-fields were then used to develop a common system complemented by a few PI-specific components. The sub-fields of research selected were droplet combustion, solids and fire safety, and gaseous fuels. From these research areas three mini-facilities have sprung: the Multi-user Droplet Combustion Apparatus (MDCA) for droplet research, the Flow Enclosure for Novel Investigations in Combustion of Solids (FEANICS) for solids and fire safety, and the Multi-user Gaseous Fuels Apparatus (MGFA) for gaseous fuels. These mini-facilities will develop common Chamber Insert Assemblies (CIA) and diagnostics for the respective investigators, complementing the capability provided by CIR. Presently there are four investigators for MDCA, six for FEANICS, and four for MGFA. The goal of these multi-user facilities is to drive the cost per PI down after the initial development investment is made. Each of these mini-facilities will become a fixture of future Combustion Science NASA Research Announcements (NRAs), enabling investigators to propose against an existing capability. Additionally, an investigation is given the opportunity to enhance the existing capability to bridge the gap between that capability and its specific science requirements. This multi-user development approach will enable the Combustion Science Program to drive cost per investigation down while drastically reducing the time

  10. Testing Microgravity Flight Hardware Concepts on the NASA KC-135

    Science.gov (United States)

    Motil, Susan M.; Harrivel, Angela R.; Zimmerli, Gregory A.

    2001-01-01

    This paper provides an overview of utilizing the NASA KC-135 Reduced Gravity Aircraft for the Foam Optics and Mechanics (FOAM) microgravity flight project. The FOAM science requirements are summarized, and the KC-135 test rig used to test hardware concepts designed to meet those requirements is described. Preliminary results regarding foam dispensing, foam/surface slip tests, and dynamic light scattering data are discussed in support of the flight hardware development for the FOAM experiment.

  11. Event-driven processing for hardware-efficient neural spike sorting

    Science.gov (United States)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods, whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
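
    Level-crossing (event-driven) sampling as described above emits an event only when the signal moves across one of a set of fixed amplitude levels, rather than at a fixed clock rate. The sketch below is a minimal illustration of that encoding principle; it is not the authors' FPGA implementation, and the level spacing and event format are assumptions.

```python
import numpy as np

def level_crossing_encode(signal: np.ndarray, delta: float):
    """Emit (sample_index, direction) events whenever the signal moves by at
    least `delta` from the last event level (a simple level-crossing scheme)."""
    events = []
    last_level = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        while x - last_level >= delta:       # upward crossing(s)
            last_level += delta
            events.append((i, +1))
        while last_level - x >= delta:       # downward crossing(s)
            last_level -= delta
            events.append((i, -1))
    return events

# A spike-like transient in noise produces events mainly around the transient,
# which is what makes the representation activity dependent.
t = np.linspace(0.0, 1.0, 1000)
waveform = np.exp(-((t - 0.5) ** 2) / 1e-4) + 0.01 * np.random.randn(t.size)
print(len(level_crossing_encode(waveform, delta=0.1)), "events for 1000 samples")
```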

  12. Hardware for mammography

    International Nuclear Information System (INIS)

    Rozhkova, N.I.; Chikirdin, Eh.G.; Ryudiger, Yu.G.; Kochetova, G.P.; Lisachenko, I.V.; Yakobs, O.Eh.

    2000-01-01

    Comparative studies of various visualization means, in particular intensifying screens and films, were carried out with the application of quantitative methods for determining small details on photographs, including measurements of the corresponding exposures and absorbed doses and verification of the conclusions through the analysis of clinical observations. It is shown that the technical equipment of a modern mammography room should include an X-ray mammography unit providing high image quality at low dose loads, special film holders meeting mammography requirements, the corresponding X-ray film, and automated photolaboratory processing, all provided by the same company. Under such conditions the quality of the photographs is guaranteed, and defects and errors in image interpretation are excluded. Modern computerized information technologies for working with medical images, based on new generations of diagnostic instrumentation with digital video channels and computerized workstations, help resolve many medical, technological, organizational and financial problems [ru]

  13. High level waste storage tanks 242-A evaporator standards/requirement identification document

    International Nuclear Information System (INIS)

    Biebesheimer, E.

    1996-01-01

    This document, the Standards/Requirements Identification Document (S/RIDS) for the subject facility, represents the necessary and sufficient requirements to provide an adequate level of protection of the worker, public health and safety, and the environment. It lists those source documents from which requirements were extracted, and those requirements documents that were considered but from which no requirements were taken. Documents considered as source documents included State and Federal regulations, DOE Orders, and DOE Standards.

  14. Primer on hardware prefetching

    CERN Document Server

    Falsafi, Babak

    2014-01-01

    Since the 1970s, microprocessor-based digital platforms have been riding Moore's law, allowing for doubling of density for the same area roughly every two years. However, whereas microprocessor fabrication has focused on increasing instruction execution rate, memory fabrication technologies have focused primarily on an increase in capacity with negligible increase in speed. This divergent trend in performance between the processors and memory has led to a phenomenon referred to as the "Memory Wall." To overcome the memory wall, designers have resorted to a hierarchy of cache memory levels, whi

  15. Requirements for High Level Models Supporting Design Space Exploration in Model-based Systems Engineering

    OpenAIRE

    Haveman, Steven P.; Bonnema, G. Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during detailed design. In this paper, we define requirements for a high-level model that is firstly driven by key systems engineering challenges present in industry and secondly connects to several formal and d...

  16. High exposure rate hardware ALARA plan

    International Nuclear Information System (INIS)

    Nellesen, A.L.

    1996-10-01

    This as-low-as-reasonably-achievable (ALARA) review provides a description of the engineering and administrative controls used to manage personnel exposure and to control contamination levels and airborne radioactivity concentrations. High exposure rate hardware (HERH) waste is hardware found in the N Fuel Storage Basin that has a contact dose rate greater than 1 R/hr, together with used filters. This waste will be collected in fuel baskets at various locations in the basins.

  17. Fuel cell hardware-in-loop

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.M.; Randolf, G.; Virji, M. [University of Hawaii, Hawaii Natural Energy Institute (United States); Hauer, K.H. [Xcellvision (Germany)

    2006-11-08

    Hardware-in-loop (HiL) methodology is well established in the automotive industry. One typical application is the development and validation of control algorithms for drive systems by simulating the vehicle plus the vehicle environment in combination with specific control hardware as the HiL component. This paper introduces the use of a fuel cell HiL methodology for fuel cell and fuel cell system design and evaluation, where the fuel cell (or stack) is the unique HiL component that requires evaluation and development within the context of a fuel cell system designed for a specific application (e.g., a fuel cell vehicle) in a typical use pattern (e.g., a standard drive cycle). Initial experimental results are presented for the example of a fuel cell within a fuel cell vehicle simulation under a dynamic drive cycle. (author)

  18. Requirements-level semantics and model checking of object-oriented statecharts

    NARCIS (Netherlands)

    Eshuis, H.; Jansen, D.N.; Wieringa, Roelf J.

    2002-01-01

    In this paper we define a requirements-level execution semantics for object-oriented statecharts and show how properties of a system specified by these statecharts can be model checked using tool support for model checkers. Our execution semantics is requirements-level because it uses the perfect

  19. Test Hardware Design for Flightlike Operation of Advanced Stirling Convertors (ASC-E3)

    Science.gov (United States)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  20. Test Hardware Design for Flight-Like Operation of Advanced Stirling Convertors

    Science.gov (United States)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  1. Establishing managerial requirements for low- and intermediate-level waste repository

    International Nuclear Information System (INIS)

    Chung, C. W.; Lee, Y. K.; Kim, H. T.; Park, W. J.; Suk, T. W.; Park, S. H.

    2004-01-01

    This paper reviews basic considerations for establishing managerial requirements on the domestic low- and intermediate-level radioactive waste repository and presents the corresponding draft requirements. The draft emphasizes their close linking with the related regulations, standards and safety assessment for the repository. It also proposes a desirable direction towards harmonizing them with the existing waste acceptance requirements for the repository.

  2. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, the information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  3. Programming languages and compiler design for realistic quantum hardware.

    Science.gov (United States)

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, the information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  4. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
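
    The decomposition described in this record (non-overlapping test levels spanning adjacent tiers, and test cells rooted within each level) can be illustrated on a small complete binary tree. The sketch below shows only the partitioning step; the uplink and downlink tests themselves are left as placeholders, and the tier width of two and the node numbering are assumptions for illustration.

```python
# Hypothetical illustration of grouping tree tiers into non-overlapping test
# levels and enumerating test-cell roots, loosely following the description above.

def test_levels(num_tiers: int, tiers_per_level: int = 2):
    """Group adjacent tiers into non-overlapping test levels."""
    return [list(range(t, min(t + tiers_per_level, num_tiers)))
            for t in range(0, num_tiers, tiers_per_level)]

def cell_roots(level_tiers):
    """In a complete binary tree numbered from 1, every node in the top tier of
    a level roots one test cell (the subtree restricted to that level)."""
    top = level_tiers[0]
    return list(range(2 ** top, 2 ** (top + 1)))

for level in test_levels(num_tiers=4):          # tiers 0..3 of a small tree
    for root in cell_roots(level):
        # Placeholders: an uplink test and then a downlink test would exercise
        # the data links between `root` and its descendants within this level.
        pass

print(test_levels(num_tiers=4))                 # [[0, 1], [2, 3]]
```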

  5. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  6. Learning Machines Implemented on Non-Deterministic Hardware

    OpenAIRE

    Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash

    2014-01-01

    This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...

  7. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate the radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure for the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single-instruction, multiple-data (SIMD) manner.

  8. 33 CFR 149.697 - What are the requirements for a noise level survey?

    Science.gov (United States)

    2010-07-01

    ... and Equipment Noise Limits § 149.697 What are the requirements for a noise level survey? (a) A survey... measured over 12 hours to derive a time-weighted average (TWA) using a sound level meter and an A-weighted filter or equivalent device. (c) If the noise level throughout a space is determined to exceed 85 dB(A...
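
    As a worked illustration of deriving a time-weighted average from sound-level readings, the sketch below uses simple equal-energy (logarithmic) averaging over the survey period. This averaging rule is an assumption for illustration only; the exchange rate, instrument corrections, and exact procedure required by the regulation are not reproduced here, and the readings are invented.

```python
import math

def equal_energy_average(levels_dba, interval_hours=1.0):
    """Equal-energy average of A-weighted levels in dB(A), each reading
    representing `interval_hours` of the survey period."""
    total_hours = interval_hours * len(levels_dba)
    energy = sum(interval_hours * 10 ** (level / 10.0) for level in levels_dba)
    return 10.0 * math.log10(energy / total_hours)

# Twelve hourly readings, matching the 12-hour survey period mentioned above.
readings = [82, 83, 84, 86, 88, 85, 84, 83, 82, 81, 80, 79]
print(f"12-hour average: {equal_energy_average(readings):.1f} dB(A)")
```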

  9. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during

  10. MODIS information, data and control system (MIDACS) level 2 functional requirements

    Science.gov (United States)

    Han, D.; Salomonson, V.; Ormsby, J.; Sharts, B.; Folta, D.; Ardanuy, P.; Mckay, A.; Hoyt, D.; Jaffin, S.; Vallette, B.

    1988-01-01

    The MODIS Information, Data and Control System (MIDACS) Level 2 Functional Requirements Document establishes the functional requirements for MIDACS and provides a basis for the mutual understanding between the users and the designers of the EosDIS, including the requirements, operating environment, external interfaces, and development plan. In defining the requirements and scope of the system, this document describes how MIDACS will operate as an element of the EOS within the EosDIS environment. This version of the Level 2 Requirements Document follows an earlier release of a preliminary draft version. The sections on functional and performance requirements do not yet fully represent the requirements of the data system needed to achieve the scientific objectives of the MODIS instruments and science teams. Indeed, the team members have not yet been selected and the team has not yet been formed; however, it has been possible to identify many relevant requirements based on the present concept of EosDIS and through interviews and meetings with key members of the scientific community. These requirements have been grouped by functional component of the data system, and by function within each component. These requirements have been merged with the complete set of Level 1 and Level 2 context diagrams, data flow diagrams, and data dictionary.

  11. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. The trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is summarized.

  12. Review of Maxillofacial Hardware Complications and Indications for Salvage

    OpenAIRE

    Hernandez Rosa, Jonatan; Villanueva, Nathaniel L.; Sanati-Mehrizy, Paymon; Factor, Stephanie H.; Taub, Peter J.

    2015-01-01

    From 2002 to 2006, more than 117,000 facial fractures were recorded in the U.S. National Trauma Database. These fractures are commonly treated with open reduction and internal fixation. While in place, the hardware facilitates successful bony union. However, when postoperative complications occur, the plates may require removal before bony union. Indications for salvage versus removal of the maxillofacial hardware are not well defined. A literature review was performed to identify instances w...

  13. The relationship between chiropractor required and current level of business knowledge.

    Science.gov (United States)

    Ciolfi, Michael Anthony; Kasen, Patsy Anne

    2017-01-01

    Chiropractors frequently practice within health care systems requiring the business acumen of an entrepreneur. However, some chiropractors do not know the relationship between the level of business knowledge required for practice success and their current level of business knowledge. The purpose of this quantitative study was to examine the relationship between chiropractors' perceived level of business knowledge required and their perceived level of current business knowledge. Two hundred and seventy-four participants completed an online survey (Health Care Training and Education Needs Survey) which included eight key business items. Participants rated the level of perceived business knowledge required (Part I) and their current perceived level of knowledge (Part II) for the same eight items. Data were collected from November 27, 2013 to December 18, 2013. Data were analyzed using Spearman's ranked correlation to determine the statistically significant relationships between the perceived level of knowledge required and the perceived current level of knowledge for each of the paired eight items from Parts I and II of the survey. Wilcoxon signed-rank tests were performed to determine the statistical difference between the paired items. The results of Spearman's correlation testing indicated a statistically significant relationship for the majority of the paired business items (6 of 8); however, a statistically significant difference was demonstrated in only three of the paired business items. The implications of this study for social change include the potential to improve chiropractors' business knowledge and skills, enable practice success, enhance health services delivery, and positively influence the profession as a viable career.

  14. Understanding to requirements for educational level in qualification of reactor operators

    International Nuclear Information System (INIS)

    Zhang Chi; Yang Di; Zhou Limin

    2007-01-01

    The requirements for the qualification of reactor operators in nuclear safety regulations are discussed in this paper. A new issue in the confirmation of the educational level of reactor operators is described. An understanding of the requirements for educational level in the qualification of reactor operators is provided according to the Higher Education Law of the People's Republic of China. It is proposed that the confirmation of the qualification of reactor operators be improved as soon as possible. (authors)

  15. Requirements for a top level hierarchy for a next generation nuclear data format

    International Nuclear Information System (INIS)

    Brown, D.A.; Koning, A.; Roubtsov, Y.D.; Mills, R.; Mattoon, C.M.; Beck, B.; Vogt, R.

    2014-01-01

    This document attempts to compile the requirements for the top levels of a hierarchical arrangement of nuclear data such as is found in the ENDF format. This set of requirements will be used to guide the development of a new set of formats to replace the legacy ENDF format. (authors)

  16. Educational Requirements for Entry-Level Practice in the Profession of Nutrition and Dietetics

    Science.gov (United States)

    Abad-Jorge, Ana

    2012-01-01

    The profession of nutrition and dietetics has experienced significant changes over the past 100 years due to advances in nutrition science and healthcare delivery. Although these advances have prompted changes in educational requirements in other healthcare professions, the requirements for entry-level registered dietitians have not changed since…

  17. LISA Pathfinder: hardware tests and their input to the mission

    Science.gov (United States)

    Audley, Heather

    The Laser Interferometer Space Antenna (LISA) is a joint ESA-NASA mission for the first space-borne gravitational wave detector. LISA aims to detect sources in the 0.1 mHz to 1 Hz range, which include supermassive black holes and galactic binary stars. Core technologies required for the LISA mission, including drag-free test mass control, picometre interferometry and micro-Newton thrusters, cannot be tested on ground. Therefore, a precursor satellite, LISA Pathfinder, has been developed as a technology demonstration mission. The preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests at system level. The results and test procedures of these campaigns will be utilised directly in the ground-based flight hardware tests, and subsequently within in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This contribution presents an overview of the test campaigns' calibration, control and performance results, focusing on the implications for the Experimental Master Plan, which provides the basis for the in-flight operations and procedures.

  18. The hardware track finder processor in CMS at CERN

    International Nuclear Information System (INIS)

    Kluge, A.

    1997-07-01

    This work covers the design of the Track Finder Processor in the high-energy experiment CMS at CERN, Geneva. The task of this processor is to identify muons and to measure their transverse momentum. The Track Finder makes it possible to determine the physical relevance of each high-energy collision and to forward only interesting data to the data analysis units. Data from more than two hundred thousand detector cells are used to determine the location of muons and to measure their transverse momentum. Every 25 ns a new data set is generated. Measurement of the location and transverse momentum of the muons can be completed within 350 ns by using an ASIC. The classical method in high-energy physics experiments is to employ a pattern comparison method, in which predefined patterns are compared to the found patterns. The high number of data channels and the complex requirements on spatial detector resolution do not permit the use of a pattern comparison method. A so-called track-following algorithm was therefore designed, which is able to assemble complete tracks through the whole detector starting from single track segments. Instead of storing a high number of track patterns, the problem is brought back to the algorithm level. Comprehensive simulations, employing the hardware simulation language VHDL, were conducted in order to optimize the algorithm and its hardware implementation. An FPGA (field-programmable gate array) prototype was designed. A feasibility study on implementing the track finder processor with ASICs was conducted. (author)

  19. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  20. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  1. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester
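
    In the absorption-only regime mentioned above, each detector pixel value follows the Beer-Lambert attenuation of the X-ray intensity along its ray, I = I0 * exp(-∫ mu dl). The sketch below integrates that line integral on a regular grid as a CPU-side illustration only; the paper's GPU hexahedron and tetrahedron projection algorithms are not reproduced, and the grid, step size, and material values are assumptions.

```python
import numpy as np

def simulated_radiograph(mu, i0=1.0, dl=1.0):
    """Absorption-only radiograph of a 3D attenuation grid `mu` (units 1/voxel),
    with parallel rays cast along the last axis: I = I0 * exp(-sum(mu) * dl)."""
    optical_depth = mu.sum(axis=-1) * dl     # line integral of mu along each ray
    return i0 * np.exp(-optical_depth)       # detected intensity per detector pixel

# A uniform attenuating cube embedded in an otherwise empty volume.
mu = np.zeros((64, 64, 64))
mu[16:48, 16:48, 16:48] = 0.05
image = simulated_radiograph(mu)
print(image.min(), image.max())   # darkest pixels behind the cube, roughly exp(-1.6)
```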

  2. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  3. Training Requirements of Entry Level Accountants: CA (India) vs. CPA (US)

    Science.gov (United States)

    Arora, Alka

    2012-01-01

    In the accounting arena, tax returns are increasingly being outsourced to India. Tax returns that are outsourced to India are usually prepared by entry level accountants. Questions are often raised about the quality of education and training of entry level accountants in India. This article compares the training requirements and costs to become an…

  4. Development of High-Level Safety Requirements for a Pyroprocessing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Seok Jun; Jo, Woo Jin; You, Gil Sung; Choung, Won Myung; Lee, Ho Hee; Kim, Hyun Min; Jeon, Hong Rae; Ku, Jeong Hoe; Lee, Hyo Jik [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Korea Atomic Energy Research Institute (KAERI) has been developing pyroprocessing technology to reduce waste volume and recycle some elements. Pyroprocessing includes several treatment processes that involve not only radiological and physical but also chemical and electrochemical properties. Thus, it is important to establish safety design requirements that consider all aspects of those properties for a reliable pyroprocessing facility. In this study, high-level requirements are presented in terms of not only radiation protection, nuclear criticality, fire protection, and seismic safety but also confinement and chemical safety, reflecting the unique characteristics of a pyroprocessing facility. Several high-level safety design requirements, covering radiation protection, nuclear criticality, fire protection, seismic safety, confinement, and chemical processing, were presented for a pyroprocessing facility. The requirements must fulfill domestic and international safety technology standards for a nuclear facility. Furthermore, additional requirements should be considered for the unique electrochemical treatments in a pyroprocessing facility.

  5. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  6. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts, the environment of warehouse-sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better, a project codenamed “hardware hound”, focusing on hardware failure trending and hardware inventory, has been started at CERN. By creating and using a hardware-oriented data set - the inventory - with detailed information on servers and their parts, as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  7. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists who oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, its sub-contractors, and the Boeing Prime contract out of Johnson Space Center provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn the project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, mission processing tools, techniques, and warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010, and the validation of several logistics metrics used by the contractor to measure logistics support effectiveness.

  8. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers over a gigabit Ethernet network. It covers block access technologies (iSCSI, HyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  9. Is a 4-bit synaptic weight resolution enough? - constraints on enabling spike-timing dependent plasticity in neuromorphic hardware.

    Science.gov (United States)

    Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2012-01-01

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may rise synergy effects between hardware developers and neuroscientists.
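
    To make the resource trade-off concrete, the sketch below discretizes floating-point synaptic weights onto a 4-bit grid (16 levels), which is the resolution reported for the FACETS wafer-scale system. The uniform quantization scheme and the weight range are assumptions for illustration; they are not the hardware's actual weight mapping.

```python
import numpy as np

def discretize_weights(weights, bits=4, w_max=1.0):
    """Map continuous weights in [0, w_max] onto 2**bits uniform levels and
    return both the integer codes and the reconstructed (quantized) weights."""
    levels = 2 ** bits - 1                      # 15 steps for 4 bits
    codes = np.rint(np.clip(weights, 0.0, w_max) / w_max * levels)
    return codes.astype(np.uint8), codes / levels * w_max

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=1000)
codes, w_quantized = discretize_weights(w, bits=4)
print("max quantization error:", np.abs(w - w_quantized).max())   # <= w_max / 30
```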

  10. Is a 4-bit synaptic weight resolution enough? - Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    Thomas ePfeil

    2012-07-01

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing-dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may rise synergy effects between hardware developers and neuroscientists.

  11. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  12. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
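
    As a worked example of the capacity relationship described in the two records above, the sketch below simply multiplies the stated factors together. The specific geometry numbers are made up for illustration and do not correspond to any particular drive.

```python
def hard_drive_capacity(sides, tracks_per_side, sectors_per_track, bytes_per_sector):
    """Capacity in bytes = sides x tracks/side x sectors/track x bytes/sector."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Illustrative (fictitious) geometry:
capacity = hard_drive_capacity(sides=4, tracks_per_side=16_383,
                               sectors_per_track=63, bytes_per_sector=512)
print(f"{capacity / 1e9:.1f} GB")   # ~2.1 GB for this made-up geometry
```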

  13. Protein level affects the relative lysine requirement of growing rainbow trout (Oncorhynchus mykiss) fry.

    Science.gov (United States)

    Bodin, Noelie; Govaerts, Bernadette; Abboudi, Tarik; Detavernier, Christel; De Saeger, Sarah; Larondelle, Yvan; Rollin, Xavier

    2009-07-01

    The effect of two digestible protein levels (310 and 469 g/kg DM) on the relative lysine (Lys; g Lys/kg DM or g Lys/100 g protein) and the absolute Lys (g Lys intake/kg^0.75 per d) requirements was studied in rainbow trout fry using a dose-response trial. At each protein level, sixteen isoenergetic (22-23 MJ digestible energy/kg DM) diets were tested, involving a full range (2-70 g/kg DM) of sixteen Lys levels. Each diet was given to one group of sixty rainbow trout fry (mean initial body weight 0.78 g) reared at 15 degrees C for 31 feeding d. The Lys requirements were estimated based on the relationships between weight, protein, and Lys gains (g/kg^0.75 per d) and Lys concentration (g/kg DM or g/100 g protein) or Lys intake (g/kg^0.75 per d), using the broken-line model (BLM) and the non-linear four-parameter saturation kinetics model (SKM-4). Both the model and the response criterion chosen markedly impacted the relative Lys requirement. The relative Lys requirement for Lys gain of rainbow trout estimated with the BLM (and SKM-4 at 90% of the maximum response) increased from 16.8 (19.6) g/kg DM at a low protein level to 23.4 (24.5) g/kg DM at a high protein level. However, the dietary protein content affected neither the absolute Lys requirement, nor the relative Lys requirement expressed as g Lys/100 g protein, nor the Lys requirement for maintenance (21 mg Lys/kg^0.75 per d).
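
    The broken-line model (BLM) referred to above fits a response that rises linearly with nutrient intake and then plateaus; the requirement estimate is the breakpoint. The sketch below evaluates and fits such a model with a generic least-squares routine as an illustration only; it is not the authors' fitting procedure, and the simulated dose-response data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, breakpoint, plateau, slope):
    """Response rises with `slope` up to `breakpoint`, then stays at `plateau`."""
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

# Invented dose-response data: Lys gain vs. dietary Lys (arbitrary units).
lys = np.linspace(2, 70, 16)
gain = broken_line(lys, 20.0, 1.0, 0.05)
gain = gain + np.random.default_rng(1).normal(0.0, 0.02, lys.size)

(breakpoint, plateau, slope), _ = curve_fit(broken_line, lys, gain, p0=[25, 1, 0.04])
print(f"estimated requirement (breakpoint): {breakpoint:.1f} g Lys/kg DM")
```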

  14. Qualification of software and hardware

    International Nuclear Information System (INIS)

    Gossner, S.; Schueller, H.; Gloee, G.

    1987-01-01

    The qualification of on-line process control equipment is subdivided into three areas: 1) materials and structural elements; 2) on-line process-control components and devices; 3) electrical systems (reactor protection and confinement system). Microprocessor-aided process-control equipment is difficult to verify for failure-free function owing to the complexity of the functional structures of the hardware and to the variety of the software feasible for microprocessors. Hence, qualification will make great demands on the inspecting expert. (DG) [de

  15. Meeting the International Health Regulations (2005) surveillance core capacity requirements at the subnational level in Europe

    DEFF Research Database (Denmark)

    Ziemann, Alexandra; Rosenkötter, Nicole; Riesgo, Luis Garcia-Castrillo

    2015-01-01

    BACKGROUND: The revised World Health Organization's International Health Regulations (2005) request a timely and all-hazard approach towards surveillance, especially at the subnational level. We discuss three questions of syndromic surveillance application in the European context for assessing public health emergencies of international concern: (i) can syndromic surveillance support countries, especially at the subnational level, to meet the International Health Regulations (2005) core surveillance capacity requirements, (ii) are European syndromic surveillance systems comparable to enable cross… effect of different types of public health emergencies in a timely manner as required by the International Health Regulations (2005).

  16. Door Hardware and Installations; Carpentry: 901894.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…

  17. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulation of space-charge effects can be sped up by a factor of at least 72 using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance by a factor of at least 4 without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeated-computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
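    The point-to-point space-charge routine referred to above is essentially an O(N²) pairwise force sum, which is why it parallelizes well on GPUs and why device memory becomes the bottleneck. A schematic NumPy version of such a pairwise sum (not Travel's actual code; names and units are illustrative) is shown below; the (N, N, 3) intermediate array is exactly the kind of memory pressure that motivates either tiling or repeated computation on the GPU.

```python
# Schematic O(N^2) point-to-point space-charge sum; not Travel's implementation.
import numpy as np

def pairwise_forces(pos, charge, eps=1e-9):
    """Sum Coulomb-like forces on each particle from every other particle."""
    diff = pos[:, None, :] - pos[None, :, :]            # (N, N, 3) separations
    dist = np.linalg.norm(diff, axis=-1) + eps
    np.fill_diagonal(dist, np.inf)                      # exclude self-interaction
    coef = (charge[:, None] * charge[None, :]) / dist**3
    return (coef[..., None] * diff).sum(axis=1)         # (N, 3) force on each particle

rng = np.random.default_rng(1)
positions = rng.normal(size=(1024, 3))
charges = np.ones(1024)
forces = pairwise_forces(positions, charges)
print(forces.shape)  # (1024, 3)
```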

  18. CASIS Fact Sheet: Hardware and Facilities

    Science.gov (United States)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).

  19. Assessment of the requirements for DOE's annual report to congress on low-level radioactive waste

    International Nuclear Information System (INIS)

    1987-10-01

    The Low-level Radioactive Waste Policy Amendments Act of 1985 (PL99-240; LLRWPAA) requires the Department of Energy (DOE) to ''submit to Congress on an annual basis a report which: (1) summarizes the progress of low-level waste disposal siting and licensing activities within each compact region, (2) reviews the available volume reduction technologies, their applications, effectiveness, and costs on a per unit volume basis, (3) reviews interim storage facility requirements, costs, and usage, (4) summarizes transportation requirements for such wastes on an inter- and intra-regional basis, (5) summarizes the data on the total amount of low-level waste shipped for disposal on a yearly basis, the proportion of such wastes subjected to volume reduction, the average volume reduction attained, and the proportion of wastes stored on an interim basis, and (6) projects the interim storage and final disposal volume requirements anticipated for the following year, on a regional basis (Sec. 7(b)).'' This report reviews and assesses what is required for development of the annual report specified in the LLRWPAA. This report addresses each of the subject areas set out in the LLRWPAA

  20. B4G local area: high level requirements and system design

    DEFF Research Database (Denmark)

    Mogensen, Preben; Pajukoski, Kari; Raaf, Bernhard

    2012-01-01

    A next generation Beyond 4G (B4G) radio access technology is expected to become available around 2020 in order to cope with the exponential increase of mobile data traffic. In this paper, research motivations and high level requirements for a B4G local area concept are discussed. Our suggestions ...

  1. Handling and storage of high-level radioactive liquid wastes requiring cooling

    International Nuclear Information System (INIS)

    1979-01-01

    The technology of high-level liquid wastes storage and experience in this field gained over the past 25 years are reviewed in this report. It considers the design requirements for storage facilities, describes the systems currently in use, together with essential accessories such as the transfer and off-gas cleaning systems, and examines the safety and environmental factors

  2. Analysis of Skills Requirement for Entry-Level Programmer/Analysts in Fortune 500 Corporations

    Science.gov (United States)

    Lee, Choong Kwon; Han, Hyo-Joo

    2008-01-01

    This paper presents the most up-to-date skill requirements for programmer/analyst, one of the most demanded entry-level job titles in the Information Systems (IS) field. In the past, several researchers studied job skills for IS professionals, but few have focused especially on "programmer/analyst." The authors conducted an extensive empirical…

  3. Medical physics personnel for medical imaging: requirements, conditions of involvement and staffing levels-French recommendations

    International Nuclear Information System (INIS)

    Isambert, Aurelie; Valero, Marc; Rousse, Carole; Blanchard, Vincent; Le Du, Dominique; Guilhem, Marie-Therese; Dieudonne, Arnaud; Pierrat, Noelle; Salvat, Cecile

    2015-01-01

    The French regulations concerning the involvement of medical physicists in medical imaging procedures are relatively vague. In May 2013, the ASN and the SFPM issued recommendations regarding Medical Physics Personnel for Medical Imaging: Requirements, Conditions of Involvement and Staffing Levels. In these recommendations, the various areas of activity of medical physicists in radiology and nuclear medicine have been identified and described, and the time required to perform each task has been evaluated. Criteria for defining medical physics staffing levels are thus proposed. These criteria are defined according to the technical platform, the procedures and techniques practised on it, the number of patients treated and the number of persons in the medical and paramedical teams requiring periodic training. The result of this work is an aid available to each medical establishment to determine their own needs in terms of medical physics. (authors)

  4. Medical physics personnel for medical imaging: requirements, conditions of involvement and staffing levels-French recommendations.

    Science.gov (United States)

    Isambert, Aurélie; Le Du, Dominique; Valéro, Marc; Guilhem, Marie-Thérèse; Rousse, Carole; Dieudonné, Arnaud; Blanchard, Vincent; Pierrat, Noëlle; Salvat, Cécile

    2015-04-01

    The French regulations concerning the involvement of medical physicists in medical imaging procedures are relatively vague. In May 2013, the ASN and the SFPM issued recommendations regarding Medical Physics Personnel for Medical Imaging: Requirements, Conditions of Involvement and Staffing Levels. In these recommendations, the various areas of activity of medical physicists in radiology and nuclear medicine have been identified and described, and the time required to perform each task has been evaluated. Criteria for defining medical physics staffing levels are thus proposed. These criteria are defined according to the technical platform, the procedures and techniques practised on it, the number of patients treated and the number of persons in the medical and paramedical teams requiring periodic training. The result of this work is an aid available to each medical establishment to determine their own needs in terms of medical physics.

  5. Proposal for basic safety requirements regarding the disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    1980-04-01

    A working group commissioned to prepare proposals for basic safety requirements for the storage and transport of radioactive waste prepared its report to the Danish Agency of Environmental Protection. The proposals include: radiation protection requirements, requirements concerning the properties of high-level waste units, the geological conditions of the waste disposal location, and the supervision of waste disposal areas. The proposed primary requirements for safety evaluation of the disposal of high-level waste in deep geological formations are of a general nature, not being tied to specific assumptions regarding the waste itself, the geological and other conditions at the place of disposal, and the technical methods of disposal. It was impossible to test the proposed requirements on a working repository, as no country has, to the knowledge of the working group, actually disposed of high-level radioactive waste or approved plans for such disposal. Methods for evaluating the suitability of geological formations for waste disposal, and background material concerning the preparation of these proposals for basic safety requirements relating to radiation, waste handling and geological conditions are reviewed. Appended to the report is a description of the phases of the fuel cycle that are related to the storage of spent fuel and the disposal of high-level reprocessing waste in a salt formation. It should be noted that the proposals of the working group are not limited to the disposal of reprocessed fuel, but also include the direct disposal of spent fuel as well as disposal in geological formations other than salt. (EG)

  6. An environmental testing facility for Space Station Freedom power management and distribution hardware

    Science.gov (United States)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems, which are presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include Random Vibration in three axes, Thermal Vacuum, Thermal Cycling and Thermal Burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.

  7. Hardware Support for Dynamic Languages

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Karlsson, Sven; Probst, Christian W.

    2011-01-01

    In recent years, dynamic programming languages have enjoyed increasing popularity. For example, JavaScript has become one of the most popular programming languages on the web. As the complexity of web applications is growing, compute-intensive workloads are increasingly handed off to the client side. While a lot of effort is put in increasing the performance of web browsers, we aim for multicore systems with dedicated cores to effectively support dynamic languages. We have designed Tinuso, a highly flexible core for experimentation that is optimized for high performance when implemented on FPGA. We composed a scalable multicore configuration where we study how hardware support for software speculation can be used to increase the performance of dynamic languages.

  8. Quality assurance requirements and methods for high level waste package acceptability

    International Nuclear Information System (INIS)

    1992-12-01

    This document should serve as guidance for assigning the necessary items to control the conditioning process in such a way that waste packages are produced in compliance with the waste acceptance requirements. It is also provided to promote the exchange of information on quality assurance requirements and on the application of quality assurance methods associated with the production of high level waste packages, to ensure that these waste packages comply with the requirements for transportation, interim storage and waste disposal in deep geological formations. The document is intended to assist both the operators of conditioning facilities and repositories as well as national authorities and regulatory bodies, involved in the licensing of the conditioning of high level radioactive wastes or in the development of deep underground disposal systems. The document recommends the quality assurance requirements and methods which are necessary to generate data for these parameters identified in IAEA-TECDOC-560 on qualitative acceptance criteria, and indicates where and when the control methods can be applied, e.g. in the operation or commissioning of a process or in the development of a waste package design. Emphasis is on the control of the process and little reliance is placed on non-destructive or destructive testing. Qualitative criteria, relevant to disposal of high level waste, are repository dependent and are not addressed here. 37 refs, 3 figs, 2 tabs

  9. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 cm^-2 s^-1. The triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the hig...

  10. Serum CCL-18 level is a risk factor for COPD exacerbations requiring hospitalization

    Science.gov (United States)

    Dilektasli, Asli Gorek; Demirdogen Cetinoglu, Ezgi; Uzaslan, Esra; Budak, Ferah; Coskun, Funda; Ursavas, Ahmet; Ercan, Ilker; Ege, Ercument

    2017-01-01

    Introduction Chemokine (C-C motif) ligand 18 (CCL-18) has been shown to be elevated in chronic obstructive pulmonary disease (COPD) patients. This study primarily aimed to evaluate whether the serum CCL-18 level differentiates the frequent exacerbator COPD phenotype from infrequent exacerbators. The secondary aim was to investigate whether serum CCL-18 level is a risk factor for exacerbations requiring hospitalization. Materials and methods Clinically stable COPD patients and participants with smoking history but normal spirometry (NSp) were recruited for the study. Modified Medical Research Council Dyspnea Scale, COPD Assessment Test, spirometry, and 6-min walking test were performed. Serum CCL-18 levels were measured with a commercial ELISA Kit. Results Sixty COPD patients and 20 NSp patients were recruited. Serum CCL-18 levels were higher in COPD patients than those in NSp patients (169 vs 94 ng/mL). The difference between the frequent and infrequent exacerbator COPD subgroups (168 vs 196 ng/mL) did not achieve statistical significance (P=0.09). Serum CCL-18 levels were significantly higher in COPD patients who had experienced at least one exacerbation during the previous 12 months. Overall, ROC analysis revealed that a serum CCL-18 level of 181.71 ng/mL could differentiate COPD patients with hospitalized exacerbations from those who were not hospitalized, with 88% sensitivity and 88.2% specificity (area under curve: 0.92). Serum CCL-18 level had a strong correlation with the frequency of exacerbations requiring hospitalization (r=0.68). Thus, serum CCL-18 level is associated with the frequency of exacerbations, particularly with severe COPD exacerbations requiring hospitalization, as well as with functional parameters and symptom scores. PMID:28115842

  11. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is to give an overview of existing concepts in state-of-the-art systems and to enable the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS' complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding measurement of the needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  12. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge of the θ13 mixing angle using two similar detectors placed at approximately 280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an outlook on the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown, as well as first results from a prototype system.

  13. Greater-than-Class C low-level radioactive waste transportation regulations and requirements study

    International Nuclear Information System (INIS)

    Tyacke, M.; Schmitt, R.

    1993-07-01

    The purpose of this report is to identify the regulations and requirements for transporting greater-than-Class C (GTCC) low-level radioactive waste (LLW) and to identify planning activities that need to be accomplished in preparation for transporting GTCC LLW. The regulations and requirements for transporting hazardous materials, a category that includes GTCC LLW, are complex and involve several Federal agencies, state and local governments, and Indian tribes. This report is divided into five sections and three appendices. Section 1 introduces the report. Section 2 identifies and discusses the transportation regulations and requirements. The regulations and requirements are divided into Federal, state, local government, and Indian tribe subsections. This report does not identify the regulations or requirements of specific states, local governments, and Indian tribes, since the storage, treatment, and disposal facility locations and transportation routes have not been specifically identified. Section 3 identifies the planning needed to ensure that all transportation activities are in compliance with the regulations and requirements. It is divided into (a) transportation packaging; (b) transportation operations; (c) system safety and risk analysis; (d) route selection; (e) emergency preparedness and response; and (f) safeguards and security. This section does not provide actual planning since the details of the Department of Energy (DOE) GTCC LLW Program have not been finalized, e.g., waste characterization and quantity; storage, treatment and disposal facility locations; and acceptance criteria. Sections 4 and 5 provide conclusions and referenced documents, respectively

  14. Hardware and software techniques for boiler operation and management

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Hiroshi (Hirakawa Iron Works, Ltd., Osaka (Japan))

    1989-04-01

    A study was conducted on the requirements for easily operable boilers from the viewpoints of hardware and software technologies. The relation among efficiency, energy saving, and economics, and the control of total emissions in low-NOx operation, are explained, with suggestions on the direction of development of the necessary hardware and software. 8 figs.

  15. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.
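    As a rough software analogy of the log-based approach described above (pre-transactional values kept in a log so speculative data can safely move through the memory hierarchy), consider the following simplified undo-log sketch. Real log-based HTM does this in hardware; the class and method names here are invented purely for illustration.

```python
# Simplified undo-log sketch of log-based transactional writes; illustrative only.
class UndoLogTransaction:
    def __init__(self, memory):
        self.memory = memory      # shared dict standing in for memory
        self.log = []             # (address, pre-transactional value) entries

    def write(self, addr, value):
        # Record the old value once, then update memory speculatively in place.
        if addr not in {a for a, _ in self.log}:
            self.log.append((addr, self.memory.get(addr)))  # assumes None never stored
        self.memory[addr] = value

    def commit(self):
        self.log.clear()          # speculative values become permanent

    def abort(self):
        # Roll back by replaying the log in reverse order.
        for addr, old in reversed(self.log):
            if old is None:
                self.memory.pop(addr, None)
            else:
                self.memory[addr] = old
        self.log.clear()

memory = {"x": 1}
tx = UndoLogTransaction(memory)
tx.write("x", 42)
tx.abort()
print(memory)   # {'x': 1} -- pre-transactional value restored from the log
```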

  16. Hardware Transactional Memory Optimization Guidelines, Applied to Ordered Maps

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal; Probst, Christian W.; Karlsson, Sven

    2015-01-01

    efficiently requires reasoning about those differences. In this paper we present 5 guidelines for applying hardware transactional memory efficiently, and apply the guidelines to BT-trees, a concurrent ordered map. Evaluating BT-trees on standard benchmarks shows that they are up to 5.3 times faster than...

  17. [Anthropometrical profile of Brazilian junior volleyball players for different sports requirement levels].

    Science.gov (United States)

    Fonseca-Toledo, Cláudio; Roquetti, Paula; Fernandes-Filho, José

    2010-12-01

    This study was aimed at investigating the anthropometric characteristics of male Brazilian junior volleyball players, organised into 3 sports requirement groups: high qualification (HQ) formed by the national team, middle qualification (MQ) formed by athletes playing in the Brazilian national championships and low qualification (LQ) formed by players at school level. 101 athletes were observed, HQ (n=16), MQ (n=68) and LQ (n=17), aged 16.7 ± 0.5; 16.6 ± 0.5 and 16.2 ± 0.7 years, respectively. The following were evaluated: body mass, height, standing reach height, % body fat and Heath & Carter somatotype. The statistical analysis was descriptive and inferential, the Kruskal-Wallis test being used for detecting differences between groups (p<0.05). Differences were found between anthropometric characteristics and requirement levels, supporting the use of these measures as a sports selection instrument for young talented volleyball players.

  18. Study Results on Knowledge Requirements for Entry-Level Airport Operations and Management Personnel

    Science.gov (United States)

    Quilty, Stephen M.

    2005-01-01

    This paper identifies important topical knowledge areas required of individuals employed in airport operations and management positions. A total of 116 airport managers and airfield operations personnel responded to a survey that sought to identify the importance of various subject matter for entry level airport operations personnel. The results from this study add to the body of research on aviation management curriculum development and can be used to better develop university curriculum and supplemental training focused on airport management and operations. Recommendations are made for specialized airport courses within aviation management programs. Further, this study identifies for job seekers or individuals employed in entry level positions those knowledge requirements deemed important by airport managers and operations personnel at different sized airports.

  19. Level of Understanding and Requirement of Education of Patients on Radiotherapy

    International Nuclear Information System (INIS)

    Kang, Soo Man; Lee, Choul Soo

    2006-01-01

    The purpose of this study is to assess the level of understanding of preliminary education and the degree of educational requirement among cancer patients on radiotherapy, and to present preliminary data for the development of effective and practical patient treatment programs. Based on the results of this study, the relationship between degree of knowledge and demand for education among patients undergoing radiotherapy varied with factors such as educational background, age, region of treatment, and experience of symptoms. In general, patients do not have enough information but have a very high demand for education. Customized education for individual patients would not be possible in practice; however, if standards for patients were provided and systematic sessions established during treatment based on this study, greater patient satisfaction and better treatment results could be achieved.

  20. Quality assurance program preparation - review of requirements and plant systems - selection of program levels

    International Nuclear Information System (INIS)

    Asmuss, G.

    1980-01-01

    The establishment and implementation of a practicable quality assurance program for a nuclear power plant demands a detailed background in the fields of engineering, manufacturing, organization and quality assurance. Examples are used to demonstrate how to define and control the achievement of quality-related activities during the phases of design, procurement, manufacturing, commissioning and operation. In general the quality assurance program applies to all items, processes and services important to the safety of a nuclear power plant. The classification of safety-related and non-safety-related items and services determines the levels of quality assurance requirements. The lecture gives an introduction to QA program preparation under the following topics: - Basic criteria and international requirements - Interaction of QA activities - Modular and product-oriented QA programs - Structuring of organization for the QA program - Identification of the main quality assurance functions and required actions - Quality Assurance Program documentation - Documentation of planning of activities - Control of program documents - Definitions. (orig./RW)

  1. Regulatory requirements for demonstration of the achieved safety level at the Mochovce NPP before commissioning

    International Nuclear Information System (INIS)

    Lipar, M.

    1997-01-01

    A review of regulatory requirements for demonstration of the achieved safety level at the Mochovce NPP before commissioning is given. It covers the licensing steps in Slovakia during commissioning; the status and methodology of the Mochovce safety analysis report; the Mochovce NPP safety enhancement program; regulatory body policy towards Mochovce NPP safety enhancement; and recent developments in the review and assessment of the Mochovce pre-operational safety enhancement program.

  2. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation.

    Science.gov (United States)

    Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge

    2014-09-01

    An outstanding issue in graph-based analysis of resting-state functional MRI is the choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9-10^11 links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination by replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero-instruction-set processors. Calculation times are substantially reduced compared with Pearson correlation in very high-density resting-state functional connectivity analyses.
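    The substitution the authors describe can be sketched in a few lines of NumPy: after z-scoring each time series, Pearson correlation reduces to a mean of element-wise products, while the l1 distance between the normalized series acts as a cheaper, inversely related similarity measure that needs no multiplications. This is an illustrative reconstruction, not the paper's code or its exact normalization.

```python
# Illustrative sketch: l1-norm between normalized time series as a cheap
# stand-in for Pearson correlation; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
T, V = 200, 1000                       # time points, "voxels"
data = rng.normal(size=(V, T))

# Normalize each time series to zero mean and unit variance (z-scoring).
z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

i, j = 3, 7
pearson = (z[i] * z[j]).mean()         # Pearson r of the two normalized series
l1 = np.abs(z[i] - z[j]).sum()         # l1 distance (smaller = more similar)
print(f"r = {pearson:.3f}, l1 = {l1:.1f}")
```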

  3. Management competencies in higher education: Perceived job importance in relation to level of training required

    Directory of Open Access Journals (Sweden)

    Ingrid L. Potgieter

    2010-11-01

    Research purpose: The aim of this article is to determine the relationship between a specific set of HOD managerial competencies identified as being important for the job and the level of training required in terms of these competencies. Motivation for the study: Research has provided evidence that HODs are often ill-prepared for their managerial role, which requires the development of specific management competencies to enable them to fulfil their roles effectively. Research design, approach and method: A non-experimental quantitative survey design approach was followed and correlational data analyses were performed. A cross-sectional sample of 41 HODs of 22 departments from various faculties of a higher education institution in Gauteng participated in this study. The Management Competency Inventory (MCI) of Visser (2009) was applied as a measure. Main findings: The Pearson product-moment analysis indicated that there is a significant relationship between the competencies indicated as being important for the job and the level of training required. Practical/Managerial implications: Training needs of HODs should be formally assessed and the depth of training required in terms of the identified management competencies should be considered in the design of training programmes. Contributions/Value-add: The information obtained in this study may potentially serve as a foundation for the development of an HOD training programme in the South African higher education environment.

  4. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  5. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...
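    A toy version of the kind of throughput estimator discussed above, parameterized by bus width, burst length, protocol overhead and packing efficiency, might look as follows. The formula and parameter names are assumptions for illustration, not the estimation model from the paper.

```python
# Toy communication-throughput estimator; parameters and formula are assumptions,
# not the estimation model presented in the paper.
def estimate_throughput(bus_width_bits, clock_hz, burst_len, overhead_cycles, packing_eff):
    """Approximate payload throughput in bytes per second.

    bus_width_bits : data bus width
    clock_hz       : bus clock frequency
    burst_len      : data beats transferred per burst
    overhead_cycles: protocol cycles (arbitration, address, turnaround) per burst
    packing_eff    : fraction of each beat carrying useful payload (data packing)
    """
    cycles_per_burst = burst_len + overhead_cycles
    payload_bytes = burst_len * (bus_width_bits / 8) * packing_eff
    return payload_bytes * clock_hz / cycles_per_burst

# Burst vs non-burst comparison on a hypothetical 32-bit, 50 MHz bus (MB/s).
print(estimate_throughput(32, 50e6, burst_len=8, overhead_cycles=4, packing_eff=0.9) / 1e6)
print(estimate_throughput(32, 50e6, burst_len=1, overhead_cycles=4, packing_eff=0.9) / 1e6)
```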

  6. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  7. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  8. Global climate targets and future consumption level: an evaluation of the required GHG intensity

    International Nuclear Information System (INIS)

    Girod, Bastien; Van Vuuren, Detlef Peter; Hertwich, Edgar G

    2013-01-01

    Discussion and analysis on international climate policy often focuses on the rather abstract level of total national and regional greenhouse gas (GHG) emissions. At some point, however, emission reductions need to be translated to the consumption level. In this article, we evaluate the implications of the strictest IPCC representative concentration pathway for key consumption categories (food, travel, shelter, goods, services). We use IPAT-style identities to account for possible growth in global consumption levels and indicate the required change in GHG emission intensity for each category (i.e. GHG emission per calorie, person kilometer, square meter, kilogram, US dollar). The proposed concept provides guidance for product developers, consumers and policymakers. To reach the 2 °C climate target (2.1 tCO2-eq. per capita in 2050), the GHG emission intensity of consumption has to be reduced by a factor of 5 by 2050. Climate targets expressed at the consumption level allow discussion of the feasibility of this climate target at the product and consumption level. In most consumption categories, products in line with this climate target are available. For animal food and air travel, reaching the GHG intensity targets with product modifications alone will be challenging and therefore structural changes in consumption patterns might be needed. The concept opens up possibilities for further research on potential solutions at the consumption and product level for global climate mitigation. (letter)
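    The "factor of 5" conclusion follows from a simple IPAT-style identity: the admissible GHG intensity is the per-capita emission budget divided by the per-capita consumption volume. The sketch below shows that arithmetic; only the 2.1 tCO2-eq. budget is taken from the abstract, while the consumption and growth figures are invented solely to illustrate the mechanics.

```python
# IPAT-style intensity sketch; consumption and growth figures are invented.
def required_intensity(emission_budget_per_capita, consumption_per_capita):
    """GHG intensity that keeps consumption within the per-capita emission budget."""
    return emission_budget_per_capita / consumption_per_capita

budget_2050 = 2.1e3                  # kg CO2-eq. per capita per year (from the abstract)
current_intensity = 1.0              # hypothetical: kg CO2-eq. per unit of consumption today
current_volume = 7.0e3               # hypothetical: consumption units per capita today
volume_2050 = current_volume * 1.5   # hypothetical 50% growth in consumption volume

target = required_intensity(budget_2050, volume_2050)
print(f"intensity must fall by a factor of {current_intensity / target:.1f}")
```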

  9. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design
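    As background to the availability figures discussed in these analyses, steady-state hardware availability is commonly approximated as MTBF/(MTBF + MTTR) per component and multiplied across subsystems connected in series. The sketch below shows that arithmetic with invented reliability data; it is not based on IFMIF or LIPAc values.

```python
# Steady-state availability arithmetic with invented MTBF/MTTR values; not IFMIF data.
def availability(mtbf_h, mttr_h):
    """Fraction of time a component is operational."""
    return mtbf_h / (mtbf_h + mttr_h)

# Hypothetical accelerator subsystems in series: all must work for beam on target.
subsystems = {
    "injector":  availability(mtbf_h=500.0, mttr_h=4.0),
    "RFQ":       availability(mtbf_h=800.0, mttr_h=8.0),
    "SRF linac": availability(mtbf_h=300.0, mttr_h=12.0),
}

total = 1.0
for name, a in subsystems.items():
    total *= a
    print(f"{name:10s} {a:.4f}")
print(f"series availability: {total:.4f}")
```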

  10. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  11. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  12. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  13. Tank waste remediation system high-level waste vitrification system development and testing requirements

    International Nuclear Information System (INIS)

    Calmus, R.B.

    1995-01-01

    This document provides the fiscal year (FY) 1995 recommended high-level waste melter system development and testing (D and T) requirements. The first phase of melter system testing (FY 1995) will focus on the feasibility of high-temperature operation of recommended high-level waste melter systems. These test requirements will be used to establish the basis for defining detailed testing work scope, cost, and schedules. This document includes a brief summary of the recommended technologies and technical issues associated with each technology. In addition, this document presents the key D and T activities and engineering evaluations to be performed for a particular technology or general melter system support feature. The strategy for testing in Phase 1 (FY 1995) is to pursue testing of the recommended high-temperature technologies, namely the high-temperature, ceramic-lined, joule-heated melter, referred to as the HTCM, and the high-frequency, cold-wall, induction-heated melter, referred to as the cold-crucible melter (CCM). This document provides a detailed description of the FY 1995 D and T needs and requirements relative to each of the high-temperature technologies

  14. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  15. A methodology to quantify the stochastic distribution of friction coefficient required for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon; Lesch, Mary F

    2008-11-01

    The required friction coefficient is defined as the minimum friction needed at the shoe and floor interface to support human locomotion. The available friction is the maximum friction coefficient that can be supported without a slip at the shoe and floor interface. A statistical model was recently introduced to estimate the probability of slip and fall incidents by comparing the available friction with the required friction, assuming that both the available and required friction coefficients have stochastic distributions. This paper presents a methodology to investigate the stochastic distributions of the required friction coefficient for level walking. In this experiment, a walkway with a layout of three force plates was specially designed in order to capture a large number of successful strikes without causing fatigue in participants. The required coefficient of friction data of one participant, who repeatedly walked on this walkway under four different walking conditions, is presented as an example of the readiness of the methodology examined in this paper. The results of the Kolmogorov-Smirnov goodness-of-fit test indicated that the required friction coefficient generated from each foot and walking condition by this participant appears to fit the normal, log-normal or Weibull distributions with few exceptions. Among these three distributions, the normal distribution appears to fit all the data generated with this participant. The average of successful strikes for each walk achieved with three force plates in this experiment was 2.49, ranging from 2.14 to 2.95 for each walking condition. The methodology and layout of the experimental apparatus presented in this paper are suitable for being applied to a full-scale study.
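    The distribution-fitting step described here, fitting normal, log-normal and Weibull distributions to required-friction data and checking the fits with a Kolmogorov-Smirnov test, can be reproduced in outline with SciPy. The data below are synthetic, not the participant's measurements.

```python
# Fit candidate distributions to synthetic required-friction data and run K-S tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rcof = rng.normal(loc=0.20, scale=0.03, size=200)   # synthetic required friction values

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(rcof)                          # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(rcof, dist.cdf, args=params)
    print(f"{name:9s} KS={ks_stat:.3f} p={p_value:.3f}")
```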

  16. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    Science.gov (United States)

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
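    The bit-precision question studied here can be explored in simulation by quantizing trained weights to a given number of bits before inference. The generic uniform-quantization sketch below is an illustrative assumption, not the authors' training procedure or a neuromorphic toolchain.

```python
# Generic uniform weight quantization to n bits; not the paper's training pipeline.
import numpy as np

def quantize_weights(w, n_bits):
    """Map weights onto 2**n_bits uniformly spaced levels spanning their range."""
    levels = 2 ** n_bits
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((w - w_min) / step) * step + w_min

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(784, 500))     # hypothetical DBN layer weights
for bits in (8, 4, 2):
    wq = quantize_weights(weights, bits)
    err = np.abs(wq - weights).mean()
    print(f"{bits} bits: mean abs quantization error {err:.4f}")
```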

  17. Hardware controls for the STAR experiment at RHIC

    International Nuclear Information System (INIS)

    Reichhold, D.; Bieser, F.; Bordua, M.; Cherney, M.; Chrin, J.; Dunlop, J.C.; Ferguson, M.I.; Ghazikhanian, V.; Gross, J.; Harper, G.; Howe, M.; Jacobson, S.; Klein, S.R.; Kravtsov, P.; Lewis, S.; Lin, J.; Lionberger, C.; LoCurto, G.; McParland, C.; McShane, T.; Meier, J.; Sakrejda, I.; Sandler, Z.; Schambach, J.; Shi, Y.; Willson, R.; Yamamoto, E.; Zhang, W.

    2003-01-01

    The STAR detector sits in a high radiation area when operating normally; therefore it was necessary to develop a robust system to remotely control all hardware. The STAR hardware controls system monitors and controls approximately 14,000 parameters in the STAR detector. Voltages, currents, temperatures, and other parameters are monitored. Effort has been minimized by the adoption of experiment-wide standards and the use of pre-packaged software tools. The system is based on the Experimental Physics and Industrial Control System (EPICS). VME processors communicate with subsystem-based sensors over a variety of field busses, with High-level Data Link Control (HDLC) being the most prevalent. Other features of the system include interfaces to accelerator and magnet control systems, a web-based archiver, and C++-based communication between STAR online, run control and hardware controls and their associated databases. The system has been designed for easy expansion as new detector elements are installed in STAR

  18. BCI meeting 2005--workshop on technology: hardware and software.

    Science.gov (United States)

    Cincotti, Febo; Bianchi, Luigi; Birch, Gary; Guger, Christoph; Mellinger, Jürgen; Scherer, Reinhold; Schmidt, Robert N; Yáñez Suárez, Oscar; Schalk, Gerwin

    2006-06-01

    This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop to review and evaluate the current state of BCI-related hardware and software. Technical requirements and current technologies, standardization procedures and future trends are covered. The main conclusion was recognition of the need to focus technical requirements on the users' needs and the need for consistent standards in BCI research.

  19. Requirements for an ES and H assurance program at the working levels of organization

    International Nuclear Information System (INIS)

    Tierney, M.S.; Ellingson, A.C.

    1979-07-01

    Means by which the disciplines of quality assurance (QA), reliability (R), and human factors (HF) might be used to the advantage of Environment, Safety, and Health (ES and H) programs are being investigated. A generalized model assurance program, based on QA, R, and HF principles but specifically tailored to ES and H program needs, has been developed. Current studies address implementation of the model assurance program at the working levels of organization. It appears that the only way practicability at the working level can be determined is by the case study method. The present study represents a first step in the application of such a procedure. An attempt was made to approach the question of practicability by first constructing a generic ES and H assurance plan for working-level organizations that is based upon the more widely-applied model plan and studies mentioned earlier. Then the elements of this generic working-level plan were compared with the practices of an existing R and D organization at Sandia Laboratories, Albuquerque. Some of the necessary steps were taken to convert these practices to those required by the generic plan in order to gain a measure of the feasibility, cost, and some of the possible benefits of such a conversion. Partial results of one case study are presented, and some generalizations that emerge regarding the structure of an idealized working-level ES and H plan are made

  20. Hardware architecture design of a fast global motion estimation method

    Science.gov (United States)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement at the minimum. Although the patch size varies at different pyramid levels, all patches are processed in terms of 3-by-3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory and, for a sequence with a resolution of 352x288 and a frame rate of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimum memory bandwidth requirement is appreciated.
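    The key data-access trick described above, warping only the centre coordinate of each 3-by-3 template patch and then fetching a 5-by-5 neighbourhood around the warped position from the reference frame, can be sketched in software as follows. The affine parameters, corner positions and frame contents are invented, and the SDRAM burst-read behaviour is of course not modelled.

```python
# Software sketch of patch-centre warping with 5x5 neighbourhood extraction;
# motion parameters and frame are invented, SDRAM burst access is not modelled.
import numpy as np

def warp_point(x, y, params):
    """Apply an affine global-motion model to a single pixel coordinate."""
    a, b, tx, c, d, ty = params
    return a * x + b * y + tx, c * x + d * y + ty

def fetch_5x5(frame, cx, cy):
    """Extract the 5x5 area around the (rounded) warped patch centre."""
    h, w = frame.shape
    x0, y0 = int(round(cx)) - 2, int(round(cy)) - 2
    if 0 <= x0 and 0 <= y0 and x0 + 5 <= w and y0 + 5 <= h:
        return frame[y0:y0 + 5, x0:x0 + 5]
    return None                                   # patch falls outside the frame

reference = np.random.default_rng(0).integers(0, 256, size=(288, 352), dtype=np.uint8)
affine = (1.01, 0.00, 1.5, 0.00, 1.01, -0.8)      # hypothetical global motion
corners = [(50, 60), (120, 200), (250, 140)]      # hypothetical corner patch centres

for x, y in corners:
    wx, wy = warp_point(x, y, affine)
    area = fetch_5x5(reference, wx, wy)
    print((x, y), "->", (round(wx, 1), round(wy, 1)), None if area is None else area.shape)
```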

  1. High-Level software requirements specification for the TWRS controlled baseline database system

    International Nuclear Information System (INIS)

    Spencer, S.G.

    1998-01-01

    This Software Requirements Specification (SRS) is an as-built document that presents the Tank Waste Remediation System (TWRS) Controlled Baseline Database (TCBD) in its current state. It was originally known as the Performance Measurement Control System (PMCS). Conversion to the new system name has not occurred within the current production system. Therefore, for simplicity, all references to TCBD are equivalent to PMCS references. This SRS will reference the PMCS designator from this point forward to capture the as-built SRS. This SRS is written at a high level and is intended to provide the design basis for the PMCS. The PMCS was first released as the electronic data repository for cost, schedule, and technical administrative baseline information for the TWRS Program. During its initial development, the PMCS was accepted by the customer, TWRS Business Management, with no formal documentation to capture the initial requirements

  2. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety, and performance should be considered during the design of such systems. A component-based design that enables the migration of its components between hardware and software can help achieve these metrics. To enable that, we define hybrid hardware and software components as development artifacts that can be deployed through different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to construct a repository of components that can migrate between the hardware and software domains to meet the system design requirements.

  3. Speed challenge: a case for hardware implementation in soft-computing

    Science.gov (United States)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been creation of a niche that imparts orders of magnitude speed advantage by implementation in parallel processing hardware with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real time response capabilities.

  4. Development of Service Level Requirements, Service Level Agreements and Operational Level Agreements for the SAP help desk service based on the ITIL 2011 framework (Case Study: Pupuk Indonesia Holding Company)

    Directory of Open Access Journals (Sweden)

    Nur Shabrina Prameswari

    2017-01-01

    Full Text Available PT. Pupuk Indonesia Holding Company implemented SAP in 2014. As part of this rollout, the company needed an SAP help desk to act as the central point for handling issues for the company and its seven subsidiaries, one that can also serve as a knowledge base for problems that recur later. To design a good help desk service, the service targets must be defined in a contractual agreement between the service users and the service provider. Such a service agreement is also needed as a quality guarantee for the help desk that can be accepted by both the service provider and the service users, namely the SAP users at PT. Pupuk Indonesia and its subsidiaries. The aim is to align the business with service quality and to capture customer needs and expectations in an agreement between the service provider and the service users. To address this, Service Level Requirement, Service Level Agreement and Operational Level Agreement documents were produced for the SAP help desk; based on document reviews and interviews with the service users and the service provider, these Service Level Management documents were then prepared in accordance with ITIL 2011.

  5. Computer hardware description languages - A tutorial

    Science.gov (United States)

    Shiva, S. G.

    1979-01-01

    The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.

  6. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 cm^-2 s^-1. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for the trigger to maintain a high efficiency for events of interest while effectively suppressing the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the highest instantane...

  7. OER Approach for Specific Student Groups in Hardware-Based Courses

    Science.gov (United States)

    Ackovska, Nevena; Ristov, Sasko

    2014-01-01

    Hardware-based courses in computer science studies require much effort from both students and teachers. The most important part of students' learning is attending in person and actively working on laboratory exercises on hardware equipment. This paper deals with a specific group of students, those who are marginalized by not being able to…

  8. Motion compensation in digital subtraction angiography using graphics hardware.

    Science.gov (United States)

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
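
    To make the block-matching step concrete, the sketch below searches a small displacement window for the shift that best aligns a mask block with the live image, scoring candidates on the histogram of the difference image. The entropy score is an illustrative stand-in for the histogram-based measure used in the paper, and the search range and block handling are assumptions.

```python
# Illustrative CPU sketch of block matching for DSA motion compensation; the
# entropy of the difference-image histogram stands in for the paper's
# histogram-based measure (an assumption, not necessarily the same criterion).
import numpy as np

def diff_histogram_entropy(block_a, block_b, bins=32):
    diff = block_a.astype(np.int16) - block_b.astype(np.int16)
    hist, _ = np.histogram(diff, bins=bins, range=(-255, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))        # lower entropy => better alignment

def best_displacement(mask_block, live, top_left, search=8):
    """Exhaustively search +/-search pixels for the displacement that best
    aligns a block of the mask image with the live (contrast) image."""
    y0, x0 = top_left
    h, w = mask_block.shape
    best_score, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yb, xb = y0 + dy, x0 + dx
            if yb < 0 or xb < 0 or yb + h > live.shape[0] or xb + w > live.shape[1]:
                continue
            score = diff_histogram_entropy(mask_block, live[yb:yb + h, xb:xb + w])
            if score < best_score:
                best_score, best_d = score, (dy, dx)
    return best_d      # displacement to apply to the block before subtraction
```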

  9. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
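
    The edge-directed restriction can be sketched in a few lines: an SAD disparity is computed only at edge pixels of the left image, leaving the rest of the map untouched and shrinking the search space accordingly. The Sobel edge test, window size and disparity range below are illustrative choices, not the parameters of the two architectures compared in the paper.

```python
# Sketch of the edge-directed idea: SAD-based disparities only at edge pixels.
# Window size, disparity range and the Sobel threshold are illustrative.
import numpy as np
from scipy.ndimage import sobel

def edge_mask(img, thresh=60.0):
    g = img.astype(float)
    return np.hypot(sobel(g, 0), sobel(g, 1)) > thresh

def sad(a, b):
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def sparse_disparity(left, right, max_disp=32, half=3):
    disp = np.full(left.shape, -1, dtype=np.int16)     # -1 = not evaluated
    for y, x in zip(*np.nonzero(edge_mask(left))):
        if (y < half or x < half + max_disp or
                y + half >= left.shape[0] or x + half >= left.shape[1]):
            continue
        ref = left[y - half:y + half + 1, x - half:x + half + 1]
        costs = [sad(ref, right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1])
                 for d in range(max_disp)]
        disp[y, x] = int(np.argmin(costs))             # winner-takes-all
    return disp
```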

  10. An evaluation of Skylab habitability hardware

    Science.gov (United States)

    Stokes, J.

    1974-01-01

    For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware was that equipment composing the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function were the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items could be considered as adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.

  11. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.

  12. Optimized hardware design for the divertor remote handling control system

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, Hannu [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland)], E-mail: hannu.saarinen@tut.fi; Tiitinen, Juha; Aha, Liisa; Muhammad, Ali; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti [Tampere University of Technology, Korkeakoulunkatu 6, 33720 Tampere (Finland); Jaervenpaeae, Jorma [VTT Systems Engineering, Tekniikankatu 1, 33720 Tampere (Finland); Irving, Mike; Damiani, Carlo; Semeraro, Luigi [Fusion for Energy, Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain)

    2009-06-15

    A key ITER maintenance activity is the exchange of the divertor cassettes. One of the major focuses of the EU Remote Handling (RH) programme has been the study and development of the remote handling equipment necessary for divertor exchange. The current major step in this programme involves the construction of a full-scale physical test facility, namely DTP2 (Divertor Test Platform 2), in which to demonstrate and refine the RH equipment designs for ITER using prototypes. The major objective of the DTP2 project is the proof-of-concept study of various RH devices, but it is also important to define principles for standardizing control hardware and methods around the ITER maintenance equipment. This paper focuses on describing the control system hardware design optimization that is taking place at DTP2. Here there will be two RH movers, namely the Cassette Multifunctional Mover (CMM) and the Cassette Toroidal Mover (CTM), with assisting water hydraulic force feedback manipulators (WHMAN) located aboard each mover. The idea is to use common Real Time Operating Systems (RTOS), measurement and control IO cards, etc. for all maintenance devices and to standardize sensors and control components as much as possible. In this paper, the new optimized DTP2 control system hardware design and some initial experimentation with the new DTP2 RH control system platform are presented. The proposed new approach is able to fulfil the functional requirements for both Mover and Manipulator control systems. Since the new control system hardware design has a reduced architecture, there are a number of benefits compared to the old approach. The simplified hardware solution enables the use of a single software development environment and a single communication protocol. This will result in easier maintainability of the software and hardware, less dependence on trained personnel, easier training of operators and hence reduced development costs for ITER RH.

  13. Analyzing the Required Professional Qualification for Agricultural Extension Experts in Operational Level in the Mazandaran Province

    Directory of Open Access Journals (Sweden)

    Amir Ahmadpour

    2015-08-01

    Full Text Available Extension experts who play an active role at the operational level are required to have certain indispensable competencies that enable them to provide the rural community with high-quality, applicable and relevant educational programs. Accordingly, this study sought to analyze the components of professional qualification for agricultural extension experts at the operational level. The study is descriptive survey research. The statistical population (agricultural extension experts at the operational level) comprised 290 persons; proportional stratified sampling using the Krejcie-Morgan table was applied, and 165 subjects were selected. The data collection tool was a researcher-made questionnaire whose content validity was approved by agricultural extension experts and whose reliability was assessed using the KMO coefficient and Bartlett's test (KMO = 0.737). The data analysis results showed that seven extracted factors (research, technical-professional, teaching, managerial, personality, communication and virtual technology factors) explain 63.691% of the total variance of the professional competencies of agricultural extension experts at the operational level in the province. The findings indicate that needs assessment, planning and evaluation based on scientific research methods, together with the implementation of in-service training workshops for experts, appear to be necessary. Particular attention should be paid by the Agriculture Organization to improving agents' skills in the cultivation of a variety of crops and in working with software and agricultural applications.

  14. HwPMI: An Extensible Performance Monitoring Infrastructure for Improving Hardware Design and Productivity on FPGAs

    Directory of Open Access Journals (Sweden)

    Andrew G. Schmidt

    2012-01-01

    Full Text Available Designing hardware cores for FPGAs can quickly become a complicated task, difficult even for experienced engineers. With the addition of more sophisticated development tools and maturing high-level language-to-gates techniques, designs can be rapidly assembled; however, when the design is evaluated on the FPGA, the performance may not be what was expected. Therefore, an engineer may need to augment the design to include performance monitors to better understand the bottlenecks in the system or to aid in the debugging of the design. Unfortunately, identifying what to monitor and adding the infrastructure to retrieve the monitored data can be a challenging and time-consuming task. Our work alleviates this burden. We present the Hardware Performance Monitoring Infrastructure (HwPMI), which includes a collection of software tools and hardware cores that can be used to profile the current design, recommend and insert performance monitors directly into the HDL or netlist, and retrieve the monitored data with minimal invasiveness to the design. Three applications are used to demonstrate and evaluate HwPMI's capabilities. The results are highly encouraging as the infrastructure adds numerous capabilities while requiring minimal effort by the designer and low resource overhead to the existing design.

  15. The Texas Solution to the Nation's Disposal Needs for Irradiated Hardware - 13337

    International Nuclear Information System (INIS)

    Britten, Jay M.

    2013-01-01

    The closure of the disposal facility in Barnwell, South Carolina, to out-of-compact states in 2008 left commercial nuclear power plants without a disposal option for Class B and C irradiated hardware. In 2012, Waste Control Specialists LLC (WCS) opened a highly engineered facility specifically designed and built for the disposal of Class B and C waste. The WCS facility is the first Interstate Compact low-level radioactive waste disposal facility to be licensed and operated under the Low-level Waste Policy Act of 1980, as amended in 1985. Due to design requirements of a modern Low Level Radioactive Waste (LLRW) facility, traditional methods for disposal were not achievable at the WCS site. Earlier methods primarily utilized the As Low as Reasonably Achievable (ALARA) concept of distance to accomplish worker safety. The WCS method required the use of all three ALARA concepts of time, distance, and shielding to ensure the safe disposal of this highly hazardous waste stream. (authors)

  16. Energy requirements and physical activity level of active elderly people in rural areas of cuba

    International Nuclear Information System (INIS)

    Hernandez-Triana, M.; Porrata Maury, C.; Jimenez Acosta, S.; Gonzalez Perez, T.; Diaz, M.E.; Martin, I.; Sanchez, V.; Monterrey, P.

    1999-01-01

    Obesity and non-insulin dependent diabetes mellitus (NIDDM) are common in the Third Age and increasing in Cuba. Among the life-style changes associated with the increased prevalence of obesity and its related disorders, diet and activity patterns are prime candidates. The transition to this life-style model may induce a decrease in energy needs. There is an urgent need for tools that have been validated for measuring diet and physical activity in nutritional studies in the developing world, but also a more urgent need for reference values for the total energy requirements of healthy elderly people. Regular physical activity reduces the likelihood of developing the diseases that characterise the metabolic cardiovascular syndrome. Previous studies done in Havana showed values of physical activity level (PAL) which are lower than those reported for elderly subjects. Elderly people living in rural areas tend to have physical activity levels which differ from those observed in urban areas. With the purpose of estimating the energy requirements, a group of 40 apparently healthy people older than 60 years of age living in a rural mountain community will be submitted to a medical, epidemiological, dietary, anthropometric and insulin resistance study. Physical activity will be determined by questionnaire and by calculation of the PAL from the basal metabolic rate (BMR) and total energy expenditure (TEE) measured with the doubly-labelled water (DLW) method. Associations with the prevalence of insulin resistance and obesity will be assessed. (author)
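
    The PAL used in this and the following records is simply the ratio of total energy expenditure to basal metabolic rate; the snippet below shows that calculation with made-up example values rather than data from the study.

```python
# PAL = TEE / BMR, with made-up example values (MJ/day), not study data.
def physical_activity_level(tee_mj_per_day, bmr_mj_per_day):
    """Ratio of total energy expenditure to basal metabolic rate."""
    return tee_mj_per_day / bmr_mj_per_day

print(physical_activity_level(10.5, 5.8))   # ~1.81
```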

  17. Policy Requirements and Factors of High-Level Radioactive Waste Management

    International Nuclear Information System (INIS)

    Lee, Kang Myoung; Jeong, J. Y.; Ha, K. M.

    2007-06-01

    Recently, the need for a high-level radioactive waste policy, including spent fuel management, has become pressing due to the rapid increase in oil prices, the nationalism of natural resources, and environmental commitments such as the Kyoto Protocol. The policy should also be established urgently to prepare for the saturation of on-site spent fuel storage capacity, the revision of the 'Agreement for Cooperation Concerning Civil Uses of Atomic Energy' between Korea and the US, concerns about nuclear weapon proliferation, and R and D to reduce the amount of waste to be disposed of. In this study, we performed case studies of the US, Japan, Canada and Finland, which have special laws and plans/roadmaps for high-level waste management, to draw out the policy requirements to be considered in HLW management. We also reviewed the social conflict issues experienced in our society and summarized the factors affecting the political and social environment. The policy requirements and factors summarized in this study should be considered seriously in the process of building public consensus and in policy making regarding HLW management. Finally, the following four action items were identified for managing HLW successfully: continuous and systematic R and D activities to obtain reliable management technology; promoting companies with specialty in HLW management; nurturing experts and workforce; and driving the public consensus process.

  18. Energy requirements and physical activity level of active elderly people in rural areas of Cuba

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Triana, M; Aleman Mateo, H; Valencia Julleirat, M [Institute of Nutrition and Food Hygiene, Havana (Cuba); and others

    2002-07-01

    Obesity and NIDDM are common in the Third Age and increasing in Cuba. Among the life-style changes associated with the increased prevalence of obesity and its related disorders, diet and activity patterns are prime candidates. The transition to this life-style model may induce a decrease in energy needs. There is an urgent need for tools that have been validated for measuring diet and physical activity in nutritional studies in the developing world, but also a more urgent need for reference values for the total energy requirements of healthy elderly people. Regular physical activity reduces the likelihood of developing the diseases that characterise the metabolic cardiovascular syndrome. With the purpose of estimating the energy requirements, a group of 48 elderly people aged 61-74 years living in a rural mountain community was submitted to a medical, epidemiological, dietary and biochemical study of the nutritional status. Glucose intolerance was diagnosed in 40% and arterial hypertension was present in 23% of them. Ten subjects without signs or symptoms of the metabolic cardiovascular syndrome were submitted to a measurement of the total energy expenditure by the doubly labelled water method. PAL values of 2.13 and 1.77 were measured for men and women, values which were significantly higher than the recommended value of 1.51 for elderly subjects. The estimation of energy requirements by the energy intake or by the factorial method using the physical activity questionnaires generated values which were 11% and 30% lower than the values obtained by the DLW method. The value of 1.51 x BMR for the estimation of the energy requirements of elderly subjects living in rural areas and submitted to higher levels of physical activity seems to be underestimated. (author)

  19. Energy requirements and physical activity level of active elderly people in rural areas of Cuba

    International Nuclear Information System (INIS)

    Hernandez-Triana, M.; Aleman Mateo, H.; Valencia Julleirat, M.

    2002-01-01

    Obesity and NIDDM are common in the Third Age and increasing in Cuba. Among the life-style changes associated with the increased prevalence of obesity and its related disorders, diet and activity patterns are prime candidates. The transition to this life-style model may induce a decrease in energy needs. There is an urgent need for tools that have been validated for measuring diet and physical activity in nutritional studies in the developing world, but also a more urgent need for reference values for the total energy requirements of healthy elderly people. Regular physical activity reduces the likelihood of developing the diseases that characterise the metabolic cardiovascular syndrome. With the purpose of estimating the energy requirements, a group of 48 elderly people aged 61-74 years living in a rural mountain community was submitted to a medical, epidemiological, dietary and biochemical study of the nutritional status. Glucose intolerance was diagnosed in 40% and arterial hypertension was present in 23% of them. Ten subjects without signs or symptoms of the metabolic cardiovascular syndrome were submitted to a measurement of the total energy expenditure by the doubly labelled water method. PAL values of 2.13 and 1.77 were measured for men and women, values which were significantly higher than the recommended value of 1.51 for elderly subjects. The estimation of energy requirements by the energy intake or by the factorial method using the physical activity questionnaires generated values which were 11% and 30% lower than the values obtained by the DLW method. The value of 1.51 x BMR for the estimation of the energy requirements of elderly subjects living in rural areas and submitted to higher levels of physical activity seems to be underestimated. (author)

  20. Energy requirements and physical activity level of active elderly people in rural areas of Cuba

    International Nuclear Information System (INIS)

    Hernandez-Triana, M.H.; Sanchez, V.; Basabe-Tuero, B.; Gonzalez-Calderin, S.; Diaz, M.E.; Aleman-Mateo, H.; Valencia-Julleirat, M.; Salazar, G.

    2002-01-01

    Obesity and NIDDM are common in the Third Age and increasing in Cuba. Among the life-style changes associated with the increased prevalence of obesity and its related disorders, diet and activity patterns are prime candidates. The transition to this life-style model may induce a decrease in energy needs. There is an urgent need for tools that have been validated for measuring diet and physical activity in nutritional studies in the developing world, but also a more urgent need for reference values for the total energy requirements of healthy elderly people. Regular physical activity reduces the likelihood of developing the diseases that characterise the metabolic cardiovascular syndrome. With the purpose of estimating the energy requirements, a group of 48 elderly people aged 61-74 years living in a rural mountain community was submitted to a medical, epidemiological, dietary and biochemical study of the nutritional status. Glucose intolerance was diagnosed in 40% and arterial hypertension was present in 23% of them. Ten subjects without signs or symptoms of the metabolic cardiovascular syndrome were submitted to a measurement of the total energy expenditure by the doubly labelled water method. PAL values of 2.13 and 1.77 were measured for men and women, values which were significantly higher than the recommended value of 1.51 for elderly subjects. The estimation of energy requirements by the energy intake or by the factorial method using the physical activity questionnaires generated values which were 11% and 30% lower than the values obtained by the DLW method. The value of 1.51 x BMR for the estimation of the energy requirements of elderly subjects living in rural areas and submitted to higher levels of physical activity seems to be underestimated.

  1. Infected hardware after surgical stabilization of rib fractures: Outcomes and management experience.

    Science.gov (United States)

    Thiels, Cornelius A; Aho, Johnathon M; Naik, Nimesh D; Zielinski, Martin D; Schiller, Henry J; Morris, David S; Kim, Brian D

    2016-05-01

    Surgical stabilization of rib fractures (SSRF) is increasingly used for the treatment of rib fractures. There are few data on the incidence, risk factors, outcomes, and optimal management strategy for hardware infection in these patients. We aimed to develop and propose a management algorithm to help others treat this potentially morbid complication. We retrospectively searched a prospectively collected rib fracture database for the records of all patients who underwent SSRF from August 2009 through March 2014 at our institution. We then analyzed the subsequent development of hardware infection among these patients. Standard descriptive analyses were performed. Among 122 patients who underwent SSRF, most (73%) were men; the mean (SD) age was 59.5 (16.4) years, and the median (interquartile range [IQR]) Injury Severity Score was 17 (13-22). The median number of rib fractures was 7 (5-9), and 48% of the patients had flail chest. Mortality at 30 days was 0.8%. Five patients (4.1%) had a hardware infection, at a mean (SD) of 12.0 (6.6) days postoperatively. The median Injury Severity Score (17 [13-42]) and hospital length of stay (9 days [6-37 days]) in these patients were similar to the values for those without infection (17 [13-22] and 9 days [6-12 days], respectively). Patients with infection underwent a median (IQR) of 2 (2-3) additional operations, which included wound debridement (n = 5), negative-pressure wound therapy (n = 3), and antibiotic beads (n = 4). Hardware was removed in 3 patients at 140, 190, and 192 days after the index operation. Cultures grew only gram-positive organisms. No patients required reintervention after hardware removal, and all achieved bony union and were taking no narcotics or antibiotics at the latest follow-up. Although uncommon, hardware infection after SSRF carries considerable morbidity. With the use of an aggressive multimodal management strategy, however, bony union and favorable long-term outcomes can be achieved.

  2. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    International Nuclear Information System (INIS)

    Rybovic, Michala; Halkett, Georgia K.; Banati, Richard B.; Cox, Jennifer

    2008-01-01

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour

  3. A Measurement Framework for Team Level Assessment of Innovation Capability in Early Requirements Engineering

    Science.gov (United States)

    Regnell, Björn; Höst, Martin; Nilsson, Fredrik; Bengtsson, Henrik

    When developing software-intensive products for a market-place it is important for a development organisation to create innovative features for coming releases in order to achieve advantage over competitors. This paper focuses on assessment of innovation capability at team level in relation to the requirements engineering that is taking place before the actual product development projects are decided, when new business models, technology opportunities and intellectual property rights are created and investigated through e.g. prototyping and concept development. The result is a measurement framework focusing on four areas: innovation elicitation, selection, impact and ways-of-working. For each area, candidate measurements were derived from interviews to be used as inspiration in the development of a tailored measurement program. The framework is based on interviews with participants of a software team with specific innovation responsibilities and validated through cross-case analysis and feedback from practitioners.

  4. Information requirements for the probabilistic risk assessment of underground disposal of low level wastes

    International Nuclear Information System (INIS)

    Sumerling, T.J.; Thompson, B.G.J.

    1987-01-01

    The UK Department of the Environment (DoE) will perform independent post-closure safety assessments of proposals for the disposal of low-level radioactive wastes in engineered facilities in shallow ground. In this paper, the DoE assessment criteria, methodology and modelling capabilities are outlined; the general characteristics of the information required are discussed; and the specific information needs, as presently perceived, are identified. It is concluded that most of the data that can be obtained by direct means is provided by site investigations and research now in hand, although there may be uncertainty due to sparse regional data; that some parameters can only be obtained by, or with the assistance of, subjective judgment; and that the process of data reduction is not straightforward, so adequate time must be allowed for it if a comprehensive and defensible assessment is to be constructed.

  5. Crosstalk in concurrent repeated games impedes direct reciprocity and requires stronger levels of forgiveness.

    Science.gov (United States)

    Reiter, Johannes G; Hilbe, Christian; Rand, David G; Chatterjee, Krishnendu; Nowak, Martin A

    2018-02-07

    Direct reciprocity is a mechanism for cooperation among humans. Many of our daily interactions are repeated. We interact repeatedly with our family, friends, colleagues, members of the local and even global community. In the theory of repeated games, it is a tacit assumption that the various games that a person plays simultaneously have no effect on each other. Here we introduce a general framework that allows us to analyze "crosstalk" between a player's concurrent games. In the presence of crosstalk, the action a person experiences in one game can alter the person's decision in another. We find that crosstalk impedes the maintenance of cooperation and requires stronger levels of forgiveness. The magnitude of the effect depends on the population structure. In more densely connected social groups, crosstalk has a stronger effect. A harsh retaliator, such as Tit-for-Tat, is unable to counteract crosstalk. The crosstalk framework provides a unified interpretation of direct and upstream reciprocity in the context of repeated games.

  6. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
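
    A much-simplified software rendering of the TreeBASIS idea is sketched below: a feature region is binary quantized and walked down a binary vocabulary tree by comparing Hamming distances to the basis image held at each node, with the branch decisions kept as the descriptor. The toy tree, branch rule and node contents are illustrative assumptions and do not reproduce the trained BASIS dictionary or the exact traversal rule of the paper.

```python
# Simplified TreeBASIS-style traversal: binary-quantize a region, descend a
# binary vocabulary tree by Hamming distance to each node's basis image, and
# keep the branch decisions as the descriptor. Tree contents and the branch
# rule are toy stand-ins for the trained BASIS dictionary of the paper.
import numpy as np

class Node:
    def __init__(self, basis_bits, left=None, right=None):
        self.basis_bits = basis_bits          # binary-quantized basis image
        self.left, self.right = left, right

def binarize(region):
    return (region.ravel() > region.mean()).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def describe(region, root, max_depth=16):
    """Return the branch path (list of bits) taken through the tree."""
    bits, node, path = binarize(region), root, []
    while node is not None and (node.left or node.right) and len(path) < max_depth:
        go_left = hamming(bits, node.basis_bits) < bits.size // 2
        path.append(0 if go_left else 1)
        node = node.left if go_left else node.right
    return path                # two features match when their paths agree

rng = np.random.default_rng(1)                # toy 8x8 regions -> 64-bit codes
root = Node(rng.integers(0, 2, 64, dtype=np.uint8),
            Node(rng.integers(0, 2, 64, dtype=np.uint8)),
            Node(rng.integers(0, 2, 64, dtype=np.uint8)))
print(describe(rng.random((8, 8)), root))
```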

  7. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining a high classification correct rate and high-speed computation.

  8. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining a high classification correct rate and high-speed computation. PMID:24189331
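
    The generalized Hebbian algorithm at the heart of the feature-extraction stage has a compact software form, shown below. The learning rate, number of components and randomly generated spike data are assumptions for illustration only, not the parameters or datasets of the hardware design described above.

```python
# Software rendering of the generalized Hebbian algorithm (Sanger's rule) used
# for feature extraction ahead of FCM clustering. Learning rate, component
# count and the random 'spike' waveforms are illustrative assumptions.
import numpy as np

def gha_update(W, x, lr=1e-3):
    """One GHA step; W has shape (n_components, n_inputs)."""
    y = W @ x                                  # component outputs
    L = np.tril(np.outer(y, y))                # lower-triangular part of y y^T
    return W + lr * (np.outer(y, x) - L @ W)   # Sanger's rule

rng = np.random.default_rng(0)
spikes = rng.normal(size=(1000, 64))           # stand-in aligned spike windows
W = rng.normal(scale=0.01, size=(3, 64))       # 3 principal components
for x in spikes:
    W = gha_update(W, x)
features = spikes @ W.T                        # would feed the FCM clusterer
```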

  9. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  10. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around a 600-times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
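
    The key observation above is that backprojection can be expressed as "warp a filtered projection and accumulate", which is exactly what texture-mapping hardware does well. The CPU sketch below makes that structure explicit; the ramp filter, the assumption that the detector count equals the image size, and the use of scipy's rotate in place of a GPU texture warp are all simplifications.

```python
# CPU sketch of filtered backprojection as "warp and accumulate". Assumes the
# detector count equals the output image size; scipy's rotate stands in for
# the textured-polygon warp performed by the graphics hardware.
import numpy as np
from scipy.ndimage import rotate

def ramp_filter(projection):
    freqs = np.fft.fftfreq(projection.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def fbp(sinogram, angles_deg, size):
    """sinogram: (n_angles, size) array of parallel-beam projections."""
    recon = np.zeros((size, size))
    for proj, angle in zip(sinogram, angles_deg):
        smear = np.tile(ramp_filter(proj), (size, 1))   # smear across image
        # warp (rotate) the smeared projection to its angle and accumulate
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))
```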

  11. Identification of high-level functional/system requirements for future civil transports

    Science.gov (United States)

    Swink, Jay R.; Goins, Richard T.

    1992-01-01

    In order to accommodate the rapid growth in commercial aviation throughout the remainder of this century, the Federal Aviation Administration (FAA) is faced with a formidable challenge to upgrade and/or modernize the National Airspace System (NAS) without compromising safety or efficiency. A recurring theme in both the Aviation System Capital Investment Plan (CIP), which has replaced the NAS Plan, and the new FAA Plan for Research, Engineering, and Development (RE&D) is a reliance on the application of new technologies and a greater use of automation. Identifying the high-level functional and system impacts of such modernization efforts on future civil transport operational requirements, particularly in terms of cockpit functionality and information transfer, was the primary objective of this project. The FAA planning documents for the NAS of the 2005 era and beyond were surveyed, and the major aircraft functional capabilities and system components required for such an operating environment were identified. A hierarchical structured analysis of the information processing and flows emanating from such functional/system components was conducted, and the results were documented in graphical form depicting the relationships between functions and systems.

  12. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
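
    Sub-pixel block matching needs an interpolated sample whenever a candidate vector lands between pixels; the snippet below shows the bilinear interpolation involved and a naive sub-pixel SAD cost for one block. The block size, motion-vector convention and absence of bounds checking are illustrative simplifications, not part of the paper's GPU implementation.

```python
# Bilinear interpolation for sub-pixel block matching, plus a naive SAD cost
# for one block at a fractional motion vector mv = (dx, dy). Bounds checking
# and block size are simplified for illustration.
import numpy as np

def bilinear(img, x, y):
    """Sample img at the non-integer location (x, y); img[row, col] = img[y, x]."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def sad_subpixel(cur, ref, top_left, mv, block=8):
    y0, x0 = top_left
    cost = 0.0
    for j in range(block):
        for i in range(block):
            cost += abs(float(cur[y0 + j, x0 + i]) -
                        bilinear(ref, x0 + i + mv[0], y0 + j + mv[1]))
    return cost
```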

  13. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data taking run the LHC is expected to run starting in 2015 with much higher instantaneous luminosities and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful, Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  14. A Qualitative Descriptive Case Study of the Requirements of the IT Industry for Entry-Level IT Positions

    Science.gov (United States)

    Feuerherm, Todd Michael

    2009-01-01

    This qualitative descriptive case study explored the requirements of the IT industry for education, IT certification, and work experience for entry-level IT professionals. Research has shown a growing problem where IT graduates were not able to meet the requirements for entry-level IT jobs. IT enrollment has decreased considerably over the past…

  15. Functions and Requirements for Automated Liquid Level Gauge Instruments in Single-Shell and Double-Shell Tank Farms

    International Nuclear Information System (INIS)

    CARPENTER, K.E.

    1999-01-01

    This functions and requirements document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of automated liquid level gauge instruments in the Tank Farms. This document is intended to become the technical baseline for current and future installation, operation and maintenance of automated liquid level gauges in single-shell and double-shell tank farms

  16. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  17. S-1 project. Volume II. Hardware. 1979 annual report

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    This volume includes highlights of the design of the Mark IIA uniprocessor (SMI-2), and the SCALD II user's manual. SCALD (structured computer-aided logic design system) cuts the cost and time required to design logic by letting the logic designer express ideas as naturally as possible, and by eliminating as many errors as possible - through consistency checking, simulation, and timing verification - before the hardware is built. (GHT)

  18. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  19. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  20. Hardware device binding and mutual authentication

    Science.gov (United States)

    Hamlet, Jason R; Pierson, Lyndon G

    2014-03-04

    Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
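
    A conceptual sketch of the binding-and-mutual-authentication idea is given below. A stored random secret stands in for the physically unclonable function's binding value and an HMAC models the challenge/response computation; this is an illustration of the general scheme, not the exact protocol of the patented design.

```python
# Conceptual sketch only: a stored secret stands in for the binding PUF value
# and HMAC models the challenge/response function; not the patented protocol.
import hmac, hashlib, secrets

class Device:
    def __init__(self, binding_puf_value: bytes):
        self._puf = binding_puf_value          # stand-in for the PUF circuit

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._puf, challenge, hashlib.sha256).digest()

def mutually_authenticate(dev_a: Device, dev_b: Device) -> bool:
    """Each side verifies the other's response to a fresh random challenge."""
    chal_a, chal_b = secrets.token_bytes(16), secrets.token_bytes(16)
    ok_b = hmac.compare_digest(dev_b.respond(chal_a), dev_a.respond(chal_a))
    ok_a = hmac.compare_digest(dev_a.respond(chal_b), dev_b.respond(chal_b))
    return ok_a and ok_b

shared = secrets.token_bytes(32)               # enrollment-time binding value
print(mutually_authenticate(Device(shared), Device(shared)))   # True
```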

  1. Comparison of selected DOE and non-DOE requirements, standards, and practices for Low-Level Radioactive Waste Disposal

    International Nuclear Information System (INIS)

    Cole, L.; Kudera, D.; Newberry, W.

    1995-12-01

    This document results from the Secretary of Energy's response to Defense Nuclear Facilities Safety Board Recommendation 94-2. The Secretary stated that the US Department of Energy (DOE) would "address such issues as...the need for additional requirements, standards, and guidance on low-level radioactive waste management." The authors gathered information and compared DOE requirements and standards for the safety aspects of low-level disposal with similar requirements and standards of non-DOE entities.

  2. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  3. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process by which image data are received at LLNL, then processed and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  4. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  5. Software for Managing Inventory of Flight Hardware

    Science.gov (United States)

    Salisbury, John; Savage, Scott; Thomas, Shirman

    2003-01-01

    The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.

  6. Conduction cooling: multicrate fastbus hardware

    International Nuclear Information System (INIS)

    Makowiecki, D.; Sims, W.; Larsen, R.

    1980-11-01

    Described is a novel approach for cooling nuclear instrumentation modules via heat conduction. The simplicity of liquid-cooled crates and the ease of thermal management with conduction-cooled modules are described. While this system was developed primarily for the higher power levels expected with Fastbus electronics, it has many general applications

  7. Investigations on development of software and hardware for nuclear power plant training simulators

    International Nuclear Information System (INIS)

    He Sian.

    1987-01-01

    The requirements of a training simulator are discussed. The algorithms of the lumped and distributed parameter systems and of the real-time system are analysed in principle in the software design. Proposed schemes for a hardware system are also presented

  8. Sox11 is required to maintain proper levels of Hedgehog signaling during vertebrate ocular morphogenesis.

    Directory of Open Access Journals (Sweden)

    Lakshmi Pillai-Kastoori

    2014-07-01

    Full Text Available Ocular coloboma is a sight-threatening malformation caused by failure of the choroid fissure to close during morphogenesis of the eye, and is frequently associated with additional anomalies, including microphthalmia and cataracts. Although Hedgehog signaling is known to play a critical role in choroid fissure closure, genetic regulation of this pathway remains poorly understood. Here, we show that the transcription factor Sox11 is required to maintain specific levels of Hedgehog signaling during ocular development. Sox11-deficient zebrafish embryos displayed delayed and abnormal lens formation, coloboma, and a specific reduction in rod photoreceptors, all of which could be rescued by treatment with the Hedgehog pathway inhibitor cyclopamine. We further demonstrate that the elevated Hedgehog signaling in Sox11-deficient zebrafish was caused by a large increase in shha transcription; indeed, suppressing Shha expression rescued the ocular phenotypes of sox11 morphants. Conversely, over-expression of sox11 induced cyclopia, a phenotype consistent with reduced levels of Sonic hedgehog. We screened DNA samples from 79 patients with microphthalmia, anophthalmia, or coloboma (MAC) and identified two novel heterozygous SOX11 variants in individuals with coloboma. In contrast to wild type human SOX11 mRNA, mRNA containing either variant failed to rescue the lens and coloboma phenotypes of Sox11-deficient zebrafish, and both exhibited significantly reduced transactivation ability in a luciferase reporter assay. Moreover, decreased gene dosage from a segmental deletion encompassing the SOX11 locus resulted in microphthalmia and related ocular phenotypes. Therefore, our study reveals a novel role for Sox11 in controlling Hedgehog signaling, and suggests that SOX11 variants contribute to pediatric eye disorders.

  9. Levelized cost of electricity (LCOE) of renewable energies and required subsidies in China

    International Nuclear Information System (INIS)

    Ouyang, Xiaoling; Lin, Boqiang

    2014-01-01

    The development and utilization of renewable energy (RE), a strategic choice for energy structural adjustment, is an important measure for carbon emissions reduction in China. High cost is a main restriction on the large-scale development of RE, and accurate cost estimation of renewable power generation is urgently necessary. This is the first systemic study on the levelized cost of electricity (LCOE) of RE in China. Results indicate that the feed-in tariff (FIT) of RE should be improved and dynamically adjusted based on the LCOE to better support the development of RE. The current FIT in China can only cover the LCOE of wind (onshore) and solar photovoltaic energy (PV) at a discount rate of 5%. Subsidies to renewables-based electricity generation, except biomass energy, still need to be increased at higher discount rates. Main conclusions are drawn as follows: (1) Government policy should focus on solving the financing problem of RE projects because fixed capital investment exerts considerable influence over the LCOE; and (2) the problem of high cost could be solved by providing subsidies in the short term and, more importantly, by reforming electricity prices in the mid- and long-term to make RE competitive. - Highlights: • Levelized cost of electricity (LCOE) of renewable energies is systemically studied. • Renewable power generation costs are estimated based on data of 17 power plants. • Required subsidies for renewable power generation are calculated. • Electricity price reform is the long-term strategy for solving the problem of high cost.
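
    The LCOE figures discussed above follow the standard discounted-cost formulation: lifetime costs and lifetime generation are both discounted to present value before taking their ratio. The Python sketch below illustrates that calculation; the wind-plant figures are hypothetical placeholders, not values taken from the study.

        def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
            """Levelized cost of electricity: discounted lifetime costs / discounted lifetime energy."""
            costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                                for t in range(1, lifetime_years + 1))
            energy = sum(annual_energy_mwh / (1 + discount_rate) ** t
                         for t in range(1, lifetime_years + 1))
            return costs / energy  # cost per MWh

        # Hypothetical onshore wind plant, evaluated at the 5% discount rate mentioned in the record
        print(lcoe(capex=9_000_000, annual_opex=180_000,
                   annual_energy_mwh=26_000, lifetime_years=20, discount_rate=0.05))

    Comparing such an LCOE value with the prevailing feed-in tariff is what determines the subsidy gap the record refers to.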

  10. Hardware architecture design of image restoration based on time-frequency domain computation

    Science.gov (United States)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. Firstly, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Eventually, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability and high efficiency.
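
    Restoration in the time-frequency domain reduces, per pass, to forward FFTs, element-wise complex arithmetic, and an inverse FFT, which is exactly the structure such an architecture accelerates. The following Python/NumPy sketch shows a non-iterative frequency-domain (Wiener-style) restoration as a software reference; it is illustrative only and is not the specific algorithm implemented in the paper.

        import numpy as np

        def wiener_restore(blurred, psf, k=0.01):
            """One FFT of the degraded image, a per-pixel complex multiply, one inverse FFT."""
            H = np.fft.fft2(psf, s=blurred.shape)      # transfer function of the blur
            G = np.fft.fft2(blurred)                   # spectrum of the degraded image
            W = np.conj(H) / (np.abs(H) ** 2 + k)      # k approximates the noise-to-signal ratio
            return np.real(np.fft.ifft2(W * G))

        # Toy example: blur a random image with a 3x3 box kernel, then restore it
        img = np.random.rand(64, 64)
        psf = np.ones((3, 3)) / 9.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        restored = wiener_restore(blurred, psf)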

  11. Derived Requirements for Double Shell Tank (DST) High Level Waste (HLW) Auxiliary Solids Mobilization

    Energy Technology Data Exchange (ETDEWEB)

    TEDESCHI, A.R.

    2000-02-28

    The potential need for auxiliary double-shell tank waste mixing and solids mobilization requires an evaluation of optional technologies. This document formalizes those operating and design requirements needed for further engineering evaluations.

  12. Derived Requirements for Double-Shell Tank (DST) High Level Waste (HLW) Auxiliary Solids Mobilization

    International Nuclear Information System (INIS)

    TEDESCHI, A.R.

    2000-01-01

    The potential need for auxiliary double-shell tank waste mixing and solids mobilization requires an evaluation of optional technologies. This document formalizes those operating and design requirements needed for further engineering evaluations

  13. From Open Source Software to Open Source Hardware

    OpenAIRE

    Viseur , Robert

    2012-01-01

    Part 2: Lightning Talks; International audience; The open source software principles progressively give rise to new initiatives for culture (free culture), data (open data) and hardware (open hardware). Open hardware is experiencing significant growth, but its business models and legal aspects are not well known. This paper is dedicated to the economics of open hardware. We define the open hardware concept and determine the intellectual property tools that can be applied to open hardware, with a str...

  14. ANALYSIS OF MODERN REQUIREMENTS FOR THE LEVEL OF FOREIGN LANGUAGE PROFICIENCY OF ENGINEERING SPECIALISTS

    Directory of Open Access Journals (Sweden)

    K. M. Inozemtseva

    2017-01-01

    Full Text Available Introduction. At present, Russian higher professional education is shifting to a new educational paradigm based on Professional Standards (PS). According to Federal Law of 02.05.2015 № 122 «About amendments to the Labour Code of the Russian Federation and articles 11 and 73 of «The Law on Education in the Russian Federation»», the requirements of the Federal State Educational Standards of Higher Education for the expected learning outcomes of universities' main educational programs are formulated on the basis of the relevant Professional Standards. This makes it necessary to align Professional Standards, Federal State Educational Standards and universities' main educational programs. The aim of this article is to demonstrate the influence of the new educational paradigm on the choice of contents, technologies and activities used in foreign language teaching at Russian technical universities. Methodology and research methods. The research methodology is based on the concept of diversification of engineers' continuous professional foreign language training (T. Yu. Polyakova). Given the priority of PS in developing universities' main educational programs, updating this concept requires a thorough analysis both of the PS requirements for the foreign language proficiency of engineering specialists and of the scientific literature on the problem. Results. The research results in an interpretation and clarification of the generalized PS requirements with respect to the actual needs of industries and individuals in foreign language proficiency. The research also prompts Language for Specific Purposes (LSP) program developers and LSP teachers to reflect on their readiness to form the foreign language (FL) professional communicative competence of an engineer. It is concluded that a teacher needs to consider the axiological aspects of engineering activity in order to understand the nature of the work

  15. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  16. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to a further performance increase for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific to our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...
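
    As a software analogue of the channel-based communication described above, the sketch below models a blocking, point-to-point CSP-style channel with a bounded queue; the on-chip NoC channels in the paper play the same role in hardware, but nothing here reflects their actual implementation.

        import threading, queue

        channel = queue.Queue(maxsize=1)   # capacity 1 approximates a blocking CSP channel

        def producer():
            for i in range(5):
                channel.put(i)             # blocks until the consumer has taken the previous item
            channel.put(None)              # sentinel marking the end of the stream

        def consumer():
            while True:
                msg = channel.get()
                if msg is None:
                    break
                print("received", msg)

        t1 = threading.Thread(target=producer)
        t2 = threading.Thread(target=consumer)
        t1.start(); t2.start()
        t1.join(); t2.join()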

  17. Total knee arthroplasty using patient-specific blocks after prior femoral fracture without hardware removal

    Directory of Open Access Journals (Sweden)

    Raju Vaishya

    2018-01-01

    Full Text Available Background: The options for performing total knee arthroplasty (TKA) with retained hardware in the femur are mainly removal of the hardware, use of an extramedullary guide, or computer-assisted surgery. Patient-specific blocks (PSBs) have been introduced with many potential advantages, but their use with retained hardware has not been adequately explored. The purpose of the present study was to outline and assess the usefulness of PSBs in performing TKA in patients with retained femoral hardware. Materials and Methods: Nine patients with retained femoral hardware underwent TKA using PSBs. All the surgeries were performed by the same surgeon using the same implants. Nine cases (7 males and 2 females) out of a total of 120 primary TKAs had retained hardware. The average age of the patients was 60.55 years. The retained hardware comprised nails in 6 patients, plates in 2, and screws in 1. Out of the nine cases, only one patient needed removal of a screw which was hindering placement of a pin for the PSB. Results: All the patients had significant improvement in their Knee Society Score (KSS), which improved from 47.0 preoperatively to 86.77 postoperatively (P < 0.00). The mechanical axis was significantly improved (P < 0.03) after surgery. No patient required blood transfusion and the average tourniquet time was 41 min. Conclusion: TKA using PSBs is useful in patients with retained hardware, with good functional and radiological outcomes.

  18. Non-fuel bearing hardware melting technology

    International Nuclear Information System (INIS)

    Newman, D.F.

    1993-01-01

    Battelle has developed a portable hardware melter concept that would allow spent fuel rod consolidation operations at commercial nuclear power plants to provide significantly more storage space for other spent fuel assemblies in existing pool racks at lower cost. Using low-pressure compaction, the non-fuel bearing hardware (NFBH) left over from the removal of spent fuel rods from the stainless steel end fittings and the Zircaloy guide tubes and grid spacers still occupies 1/3 to 2/5 of the volume of the consolidated fuel rod assemblies. Melting the non-fuel bearing hardware reduces its volume by a factor of 4 from that achievable with low-pressure compaction. This paper describes: (1) the configuration and design features of Battelle's hardware melter system that permit its portability, (2) the system's throughput capacity, (3) the bases for capital and operating estimates, and (4) the status of the NFBH melter demonstration to reduce technical risks for implementation of the concept. Since all NFBH handling and processing operations would be conducted at the reactor site, costs for shipping radioactive hardware to and from a stationary processing facility for volume reduction are avoided. Initial licensing, testing, and installation in the field would follow the successful pattern achieved with rod consolidation technology

  19. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, and details on the aggregate and test methods employed, along with agency and co...

  20. Dispersal ability and habitat requirements determine landscape-level genetic patterns in desert aquatic insects.

    Science.gov (United States)

    Phillipsen, Ivan C; Kirk, Emily H; Bogan, Michael T; Mims, Meryl C; Olden, Julian D; Lytle, David A

    2015-01-01

    Species occupying the same geographic range can exhibit remarkably different population structures across the landscape, ranging from highly diversified to panmictic. Given limitations on collecting population-level data for large numbers of species, ecologists seek to identify proximate organismal traits-such as dispersal ability, habitat preference and life history-that are strong predictors of realized population structure. We examined how dispersal ability and habitat structure affect the regional balance of gene flow and genetic drift within three aquatic insects that represent the range of dispersal abilities and habitat requirements observed in desert stream insect communities. For each species, we tested for linear relationships between genetic distances and geographic distances using Euclidean and landscape-based metrics of resistance. We found that the moderate-disperser Mesocapnia arizonensis (Plecoptera: Capniidae) has a strong isolation-by-distance pattern, suggesting migration-drift equilibrium. By contrast, population structure in the flightless Abedus herberti (Hemiptera: Belostomatidae) is influenced by genetic drift, while gene flow is the dominant force in the strong-flying Boreonectes aequinoctialis (Coleoptera: Dytiscidae). The best-fitting landscape model for M. arizonensis was based on Euclidean distance. Analyses also identified a strong spatial scale-dependence, where landscape genetic methods only performed well for species that were intermediate in dispersal ability. Our results highlight the fact that when either gene flow or genetic drift dominates in shaping population structure, no detectable relationship between genetic and geographic distances is expected at certain spatial scales. This study provides insight into how gene flow and drift interact at the regional scale for these insects as well as the organisms that share similar habitats and dispersal abilities. © 2014 John Wiley & Sons Ltd.

  1. Mind Your Grip: Even Usual Dexterous Manipulation Requires High Level Cognition

    Directory of Open Access Journals (Sweden)

    Erwan Guillery

    2017-11-01

    Full Text Available Simultaneous execution of cognitive and sensorimotor tasks is critical in daily life. Here, we examined whether dexterous manipulation, a highly habitual and seemingly automatic behavior, involves high-order cognitive functions. Specifically, we explored the impact of reducing available cognitive resources on the performance of a precision grip-lift task in healthy participants of three age groups (18–30, 30–60 and 60–75 years). Participants performed a motor task in isolation (M), in combination with a low-load cognitive task (M + L), and in combination with a high-load cognitive task (M + H). The motor task consisted of grasping, lifting and holding an apparatus instrumented with force sensors to monitor motor task performance. In the cognitive task, a list of letters was shown briefly before the motor task. After completing the motor task, one letter of the list was shown, and participants reported the following letter of the list. In M + L, letters in the list followed alphabetical order. In M + H, letters were presented in random order. Performing the high-load task thus required maintaining information in working memory. Temporal and dynamic parameters of grip and lift forces were compared across conditions. During the cognitive tasks, there was a significant alteration of movement initiation and a significant increase of grip force (GF) throughout the grip-lift task. There was no interaction with "age". Our results demonstrate that planning and on-line control of dexterous manipulation are not automatic behaviors and, instead, that they interact with high-level cognitive processes such as those involved in working memory.

  2. Functions and requirements document for interim store solidified high-level and transuranic waste

    Energy Technology Data Exchange (ETDEWEB)

    Smith-Fewell, M.A., Westinghouse Hanford

    1996-05-17

    The functions, requirements, interfaces, and architectures contained within the Functions and Requirements (F&R) Document are based on the information currently contained within the TWRS Functions and Requirements database. The database also documents the set of technically defensible functions and requirements associated with the solidified waste interim storage mission. The F&R Document provides a snapshot in time of the technical baseline for the project. The F&R Document is the product of functional analysis, requirements allocation and architectural structure definition. The technical baseline described in this document is traceable to TWRS function 4.2.4.1, Interim Store Solidified Waste, and its related requirements, architecture, and interfaces.

  3. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  4. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  5. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  6. Operational intervention levels and related requirements on radiation monitoring during pre-release / release phase of an accident

    International Nuclear Information System (INIS)

    Carny, P.; Cabanekova, H

    2003-01-01

    In this paper the authors discuss the required outputs of emergency radiological monitoring in various phases of an accident and the rationale for these requirements. In different phases of an accident, different intervention levels are important, and consequently different radiological quantities should preferably be measured. The distinct tasks and aims of monitoring in the different phases of an accident strongly influence the monitoring methods, instrumentation and the capabilities of monitoring groups. The required tasks and outputs of monitoring are discussed

  7. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 5

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 5) outlines the standards and requirements for the Fire Protection and Packaging and Transportation sections

  8. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations.

  9. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety.

  10. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction.

  11. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 2

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Document (S/RID) is contained in multiple volumes. This document (Volume 2) presents the standards and requirements for the following sections: Quality Assurance, Training and Qualification, Emergency Planning and Preparedness, and Construction

  12. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID)

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 3) presents the standards and requirements for the following sections: Safeguards and Security, Engineering Design, and Maintenance

  13. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID)

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 3) presents the standards and requirements for the following sections: Safeguards and Security, Engineering Design, and Maintenance.

  14. High-level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 4

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 4) presents the standards and requirements for the following sections: Radiation Protection and Operations

  15. High level waste storage tank farms/242-A evaporator Standards/Requirements Identification Document (S/RID), Volume 6

    International Nuclear Information System (INIS)

    1994-04-01

    The High-Level Waste Storage Tank Farms/242-A Evaporator Standards/Requirements Identification Document (S/RID) is contained in multiple volumes. This document (Volume 6) outlines the standards and requirements for the sections on: Environmental Restoration and Waste Management, Research and Development and Experimental Activities, and Nuclear Safety

  16. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
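
    For reference, the computation being accelerated is classic global alignment with traceback. The Python sketch below implements the textbook Needleman-Wunsch recurrence and a full traceback; the scoring values are illustrative and unrelated to the FPGA design's parameters.

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            """Global alignment score matrix plus traceback of one optimal alignment."""
            n, m = len(a), len(b)
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            # Traceback from the bottom-right corner
            top, bottom, i, j = [], [], n, m
            while i > 0 or j > 0:
                if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
                    top.append(a[i - 1]); bottom.append(b[j - 1]); i -= 1; j -= 1
                elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                    top.append(a[i - 1]); bottom.append('-'); i -= 1
                else:
                    top.append('-'); bottom.append(b[j - 1]); j -= 1
            return ''.join(reversed(top)), ''.join(reversed(bottom)), score[n][m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))

    Storing the full matrix makes the traceback straightforward but costs O(nm) memory, which is precisely the bottleneck the space-efficient hardware architecture avoids.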

  17. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were achieved in parallel with the first successful demonstrations of fault-tolerant hardware for quantum information processing. To what extent deep learning can take advantage of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards the implementation of advanced quantum algorithms, including quantum deep learning.

  18. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  19. The LISA Pathfinder interferometry-hardware and system testing

    Energy Technology Data Exchange (ETDEWEB)

    Audley, H; Danzmann, K; MarIn, A Garcia; Heinzel, G; Monsky, A; Nofrarias, M; Steier, F; Bogenstahl, J [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik und Universitaet Hannover, 30167 Hannover (Germany); Gerardi, D; Gerndt, R; Hechenblaikner, G; Johann, U; Luetzow-Wentzky, P; Wand, V [EADS Astrium GmbH, Friedrichshafen (Germany); Antonucci, F [Dipartimento di Fisica, Universita di Trento and INFN, Gruppo Collegato di Trento, 38050 Povo, Trento (Italy); Armano, M [European Space Astronomy Centre, European Space Agency, Villanueva de la Canada, 28692 Madrid (Spain); Auger, G; Binetruy, P [APC UMR7164, Universite Paris Diderot, Paris (France); Benedetti, M [Dipartimento di Ingegneria dei Materiali e Tecnologie Industriali, Universita di Trento and INFN, Gruppo Collegato di Trento, Mesiano, Trento (Italy); Boatella, C, E-mail: antonio.garcia@aei.mpg.de [CNES, DCT/AQ/EC, 18 Avenue Edouard Belin, 31401 Toulouse, Cedex 9 (France)

    2011-05-07

    Preparations for the LISA Pathfinder mission have reached an exciting stage. Tests of the engineering model (EM) of the optical metrology system have recently been completed at the Albert Einstein Institute, Hannover, and flight model tests are now underway. Significantly, they represent the first complete integration and testing of the space-qualified hardware and are the first tests on an optical system level. The results and test procedures of these campaigns will be utilized directly in the ground-based flight hardware tests, and subsequently during in-flight operations. In addition, they allow valuable testing of the data analysis methods using the MATLAB-based LTP data analysis toolbox. This paper presents an overview of the results from the EM test campaign that was successfully completed in December 2009.

  20. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
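
    The global projective model referred to above is the 3x3 homography relating corresponding points in the two images, and RANSAC rejects false matches by repeatedly fitting it to minimal samples. A compact floating-point software reference is sketched below (NumPy); it mirrors the general RANSAC-plus-DLT procedure, not the paper's fixed-point submodel decomposition.

        import numpy as np

        def fit_homography(src, dst):
            """Direct linear transform: 3x3 projective model from >= 4 point pairs."""
            rows = []
            for (x, y), (u, v) in zip(src, dst):
                rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
                rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
            _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
            return vt[-1].reshape(3, 3)            # null-space vector, reshaped to 3x3

        def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
            """Fit on random 4-point samples; keep the model with the most inliers."""
            rng = np.random.default_rng(seed)
            best_h, best_inliers, n = None, 0, len(src)
            for _ in range(iters):
                idx = rng.choice(n, size=4, replace=False)
                h = fit_homography(src[idx], dst[idx])
                pts = np.hstack([src, np.ones((n, 1))]) @ h.T
                proj = pts[:, :2] / pts[:, 2:3]
                inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
                if inliers > best_inliers:
                    best_h, best_inliers = h, inliers
            return best_h, best_inliers

        # Synthetic check: a pure translation is a special case of the projective model
        src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [4, 5]], dtype=float)
        dst = src + np.array([2.0, 1.0])
        H, inliers = ransac_homography(src, dst)
        print(inliers)   # expect 6: all correspondences are inliers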

  1. Characteristics of spent fuel, high-level waste, and other radioactive wastes which may require long-term isolation

    Energy Technology Data Exchange (ETDEWEB)

    None

    1987-12-01

    The purpose of this report, and the information contained in the associated computerized data bases, is to establish the DOE/OCRWM reference characteristics of the radioactive waste materials that may be accepted by DOE for emplacement in the mined geologic disposal system. This report provides relevant technical data for use by DOE and its supporting contractors and is not intended to be a policy document. This document is backed up by five PC-compatible data bases, written in a user-oriented, menu-driven format, which were developed for this purpose. The data bases are the LWR Assemblies Data Base; the LWR Radiological Data Base; the LWR Quantities Data Base; the LWR NFA Hardware Data Base; and the High-Level Waste Data Base. The above data bases may be ordered using the included form. An introductory information diskette can be found inside the back cover of this report. It provides a brief introduction to each of these five PC data bases. 116 refs., 18 figs., 67 tabs.

  2. Characteristics of spent fuel, high-level waste, and other radioactive wastes which may require long-term isolation

    International Nuclear Information System (INIS)

    1987-12-01

    The purpose of this report, and the information contained in the associated computerized data bases, is to establish the DOE/OCRWM reference characteristics of the radioactive waste materials that may be accepted by DOE for emplacement in the mined geologic disposal system. This report provides relevant technical data for use by DOE and its supporting contractors and is not intended to be a policy document. This document is backed up by five PC-compatible data bases, written in a user-oriented, menu-driven format, which were developed for this purpose. The data bases are the LWR Assemblies Data Base; the LWR Radiological Data Base; the LWR Quantities Data Base; the LWR NFA Hardware Data Base; and the High-Level Waste Data Base. The above data bases may be ordered using the included form. An introductory information diskette can be found inside the back cover of this report. It provides a brief introduction to each of these five PC data bases. 116 refs., 18 figs., 67 tabs

  3. Innovative product design based on comprehensive customer requirements of different cognitive levels.

    Science.gov (United States)

    Li, Xiaolong; Zhao, Wu; Zheng, Yake; Wang, Rui; Wang, Chen

    2014-01-01

    To improve customer satisfaction in innovative product design, a topology structure of customer requirements is established and an innovative product design approach is proposed. The topology structure provides designers with reasonable guidance to capture customer requirements comprehensively. With the aid of the analytic hierarchy process (AHP), the importance of the customer requirements is evaluated. Quality function deployment (QFD) is used to translate customer requirements into product and process design demands and to pick out the technical requirements which need urgent improvement. In this way, the product is developed in a more targeted way to satisfy the customers. The theory of inventive problem solving (TRIZ) is used to help designers produce innovative solutions. Finally, a case study of an automobile steering system is used to illustrate the application of the proposed approach.
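
    The AHP step mentioned above weights customer requirements from a pairwise comparison matrix, typically via its principal eigenvector. A minimal Python sketch is given below; the three-requirement comparison matrix is a made-up example, not data from the case study.

        import numpy as np

        def ahp_weights(pairwise):
            """Relative weights from a pairwise comparison matrix (normalized principal eigenvector)."""
            vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
            principal = np.real(vecs[:, np.argmax(np.real(vals))])
            return principal / principal.sum()

        # Hypothetical comparison of three customer requirements on Saaty's 1-9 scale
        matrix = [[1,   3,   5],
                  [1/3, 1,   2],
                  [1/5, 1/2, 1]]
        print(ahp_weights(matrix))   # roughly [0.65, 0.23, 0.12]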

  4. Innovative Product Design Based on Comprehensive Customer Requirements of Different Cognitive Levels

    Directory of Open Access Journals (Sweden)

    Xiaolong Li

    2014-01-01

    Full Text Available To improve customer satisfaction in innovative product design, a topology structure of customer requirements is established and an innovative product design approach is proposed. The topology structure provides designers with reasonable guidance to capture customer requirements comprehensively. With the aid of the analytic hierarchy process (AHP), the importance of the customer requirements is evaluated. Quality function deployment (QFD) is used to translate customer requirements into product and process design demands and to pick out the technical requirements which need urgent improvement. In this way, the product is developed in a more targeted way to satisfy the customers. The theory of inventive problem solving (TRIZ) is used to help designers produce innovative solutions. Finally, a case study of an automobile steering system is used to illustrate the application of the proposed approach.

  5. Exploiting current-generation graphics hardware for synthetic-scene generation

    Science.gov (United States)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA Corporation GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.

  6. FPGA-Based Flexible Hardware Architecture for Image Interest Point Detection

    Directory of Open Access Journals (Sweden)

    Ana Hernandez-Lopez

    2015-07-01

    Full Text Available An important challenge in computer vision is the implementation of fast and accurate feature detectors, as they are the basis for high-level image processing analysis and understanding. However, image feature detectors cannot be easily applied in embedded scenarios, mainly due to the fact that they are time consuming and require a significant amount of processing power. Although some feature detectors have been implemented in hardware, most implementations target a single detector under very specific constraints. This paper proposes a flexible hardware implementation approach for computing interest point extraction from grey-level images based on two different detectors, Harris and SUSAN, suitable for robotic applications. The design is based on parallel and configurable processing elements for window operators and a buffering strategy to support a coarse-grain pipeline scheme for operator sequencing. When targeted to a Virtex-6 FPGA, a throughput of 49.45 Mpixel/s (processing rate of 161 frames per second of VGA image resolution) is achieved at a clock frequency of 50 MHz.
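
    Harris detection is a pure window operator over products of image gradients, which is what makes it amenable to an array of parallel processing elements. The NumPy/SciPy sketch below computes the Harris response in software as a functional reference; the window size, k, and threshold are illustrative and do not correspond to the FPGA design's parameters.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def harris_response(img, k=0.04, window=3):
            """R = det(M) - k * trace(M)^2, with M built from windowed gradient products."""
            img = img.astype(float)
            iy, ix = np.gradient(img)
            sxx = uniform_filter(ix * ix, size=window)
            syy = uniform_filter(iy * iy, size=window)
            sxy = uniform_filter(ix * iy, size=window)
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            return det - k * trace ** 2

        # Interest points: responses above an (illustrative) fraction of the maximum
        r = harris_response(np.random.rand(120, 160))
        points = np.argwhere(r > 0.1 * r.max())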

  7. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    Science.gov (United States)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  8. Optimized design of embedded DSP system hardware supporting complex algorithms

    Science.gov (United States)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because a high performance-price-ratio DSP (TMS320C6712) and a large FLASH are employed in the design, the system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially for the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a versatile platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.

  9. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  10. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication system, including orthogonal frequency-division multiplexing (OFDM), single-carrier cyclic-prefix (SCCP) systems, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after the block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.
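
    All of the listed CP-based systems share the receiver structure that makes block reuse possible: strip the cyclic prefix, take an FFT, and apply a one-tap equalizer per subcarrier. The NumPy sketch below walks through that common path for a plain OFDM symbol over a toy multipath channel; the parameters are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        N, cp = 64, 16                                  # subcarriers, cyclic-prefix length
        h = np.array([1.0, 0.5, 0.25])                  # toy multipath channel, shorter than the CP

        # Transmitter: QPSK symbols on N subcarriers, IFFT, prepend cyclic prefix
        bits = rng.integers(0, 2, size=(N, 2))
        sym = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
        tx = np.fft.ifft(sym)
        tx_cp = np.concatenate([tx[-cp:], tx])

        # Channel: linear convolution (the CP turns it into a circular one over the block)
        rx_cp = np.convolve(tx_cp, h)[: len(tx_cp)]

        # Receiver: strip CP, FFT, one-tap zero-forcing equalizer per subcarrier
        rx = np.fft.fft(rx_cp[cp:cp + N])
        H = np.fft.fft(h, N)                            # channel frequency response
        eq = rx / H
        print(np.allclose(eq, sym))                     # True: symbols recovered exactly (no noise)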

  11. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of exploiting hardware averaging for noise improvement in neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
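
    The 1/√N figure quoted above is the textbook behavior when the noise added by each of the N parallel amplifiers is independent; common noise from the source resistance is not reduced, which is why the improvement can be smaller. The short simulation below illustrates the independent-noise case with hypothetical amplitudes, not the paper's recordings.

        import numpy as np

        rng = np.random.default_rng(1)
        fs, n_samples, n_amps = 30_000, 30_000, 8         # sample rate, 1 s of data, parallel amplifiers
        t = np.arange(n_samples) / fs
        signal = 2e-6 * np.sin(2 * np.pi * 1_000 * t)     # hypothetical 2 uV neural component

        # Each amplifier sees the same input but adds its own independent noise
        amp_noise_rms = 1e-6
        channels = signal + rng.normal(0, amp_noise_rms, size=(n_amps, n_samples))

        averaged = channels.mean(axis=0)                  # the "hardware average"
        print(np.std(channels[0] - signal))               # ~1.0e-6 (single amplifier)
        print(np.std(averaged - signal))                  # ~1e-6 / sqrt(8), i.e. about 3.5e-7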

  12. Cost-optimal levels of minimum energy performance requirements in the Danish Building Regulations

    Energy Technology Data Exchange (ETDEWEB)

    Aggerholm, S.

    2013-09-15

    The purpose of the report is to analyse the cost optimality of the energy requirements in the Danish Building Regulations 2010 (BR10) for new buildings and for existing buildings undergoing major renovation. The energy requirements in the Danish Building Regulations have by tradition been based on the costs and benefits seen from the private economic, i.e. financial, perspective. Macro-economic calculations have in the past only been made as a supplement. The cost optimum used in this report is thus based on the financial perspective. Due to the high energy taxes in Denmark there is a significant difference between the consumer price and the macro-economic price of energy. Energy taxes are also paid by commercial consumers when the energy is used for building operation, e.g. heating, lighting, ventilation etc. For the new housing examples, the present minimum energy requirements in BR10 all show negative gaps, with a deviation of up to 16% from the point of cost optimality. With the planned tightening of the requirements for new houses in 2015 and in 2020, the energy requirements can be expected to be tighter than the cost-optimal point if the costs of the needed improvements do not decrease correspondingly. For the new office building there is a gap of 31% to the point of cost optimality in relation to the 2010 requirement. In relation to the 2015 and 2020 requirements there are negative gaps to the point of cost optimality based on today's prices. If the gaps for all the new buildings are weighted to an average based on the mix of building types and heat supply for new buildings in Denmark, there is a gap of 3% on average for the new buildings. The excessive tightness at today's prices is 34% in relation to the 2015 requirement and 49% in relation to the 2020 requirement. The component requirements for elements of the building envelope and for installations in existing buildings add up to significant energy efficiency

  13. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  14. Hardware and layout aspects affecting maintainability

    International Nuclear Information System (INIS)

    Jayaraman, V.N.; Surendar, Ch.

    1977-01-01

    It has been found from maintenance experience at the Rajasthan Atomic Power Station that proper hardware and instrumentation layout can reduce maintenance and down-time on the related equipment. The problems faced in this connection, and how they were solved, are narrated. (M.G.B.)

  15. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for large high energy physics spectrometers and control systems is reviewed, as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of the developed CAMAC systems are described. (author)

  16. Design of hardware accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of hardware accelerators. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for modern demanding applications, using the accelerator design for LDPC decoding as an example

  17. Building Correlators with Many-Core Hardware

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.

    2010-01-01

    Radio telescopes typically consist of multiple receivers whose signals are cross-correlated to filter out noise. A recent trend is to correlate in software instead of custom-built hardware, taking advantage of the flexibility that software solutions offer. Examples include e-VLBI and LOFAR. However,

  18. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  19. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration

  20. Digital Hardware Design Teaching: An Alternative Approach

    Science.gov (United States)

    Benkrid, Khaled; Clayton, Thomas

    2012-01-01

    This article presents the design and implementation of a complete review of undergraduate digital hardware design teaching in the School of Engineering at the University of Edinburgh. Four guiding principles have been used in this exercise: learning-outcome driven teaching, deep learning, affordability, and flexibility. This has identified…

  1. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system that will be used for on-line filter and second-stage trigger applications is described. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular the modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  2. Global Climate targets and future consumption level: An evaluation of the required GHG intensity

    NARCIS (Netherlands)

    Girod, B.; van Vuuren, D.P.; Hertwich, E.G.

    2013-01-01

    Discussion and analysis on international climate policy often focuses on the rather abstract level of total national and regional greenhouse gas (GHG) emissions. At some point, however, emission reductions need to be translated to consumption level. In this article, we evaluate the implications of

  3. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via an RS232/RS485 bus system. The high-level (Java-based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages: the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, both sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid deadlocks in the core module.
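
    The three-message command cycle described above (command, immediate acknowledge, final done) can be pictured with a small mock-up. The Python sketch below is illustrative only, not the STELLA Java code; the timeout values and message names are assumptions.

        # Illustrative sketch of a command/acknowledge/done cycle (not the STELLA code).
        # Distinguishes communication problems (no ACK) from hardware problems (no/late DONE).
        import queue
        import threading
        import time

        class DeviceLink:
            """Mock device that acknowledges immediately and reports DONE after working."""
            def __init__(self, work_time=0.2):
                self.replies = queue.Queue()
                self.work_time = work_time

            def send(self, command):
                self.replies.put(("ACK", command))
                threading.Thread(target=self._work, args=(command,), daemon=True).start()

            def _work(self, command):
                time.sleep(self.work_time)            # simulate the hardware action
                self.replies.put(("DONE", command))

        def execute(link, command, ack_timeout=1.0, done_timeout=5.0):
            link.send(command)
            try:
                kind, _ = link.replies.get(timeout=ack_timeout)
            except queue.Empty:
                raise RuntimeError("communication problem: no acknowledge")
            try:
                kind, _ = link.replies.get(timeout=done_timeout)
            except queue.Empty:
                raise RuntimeError("hardware problem: no or delayed done message")
            return kind == "DONE"

        if __name__ == "__main__":
            print(execute(DeviceLink(), "OPEN_DOME"))  # True if both replies arrive in time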

  4. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.
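
    The abstract does not give the internals of GBSA, but the general idea of group-based classification, partitioning the rule set into groups and searching only the relevant group for each packet, can be illustrated as follows. The grouping key (protocol) and the toy rule format below are assumptions made for illustration only, not the published algorithm.

        # Hedged illustration of group-based packet classification (not the published GBSA).
        # Rules are pre-grouped by protocol so each packet searches only one small group.
        from collections import defaultdict

        # (rule_id, protocol, src_prefix, dst_port_range, action) -- assumed toy format
        RULES = [
            (1, "tcp", "10.0.",    (80, 80),     "allow"),
            (2, "tcp", "10.0.",    (443, 443),   "allow"),
            (3, "udp", "192.168.", (53, 53),     "allow"),
            (4, "tcp", "",         (0, 65535),   "deny"),
        ]

        def build_groups(rules):
            groups = defaultdict(list)
            for rule in rules:
                groups[rule[1]].append(rule)      # group key: protocol
            return groups

        def classify(groups, protocol, src_ip, dst_port):
            for rule_id, _, prefix, (lo, hi), action in groups.get(protocol, []):
                if src_ip.startswith(prefix) and lo <= dst_port <= hi:
                    return rule_id, action
            return None, "default-deny"

        if __name__ == "__main__":
            groups = build_groups(RULES)
            print(classify(groups, "tcp", "10.0.3.7", 443))   # -> (2, 'allow')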

  5. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    Directory of Open Access Journals (Sweden)

    Evangelos eStromatias

    2015-07-01

    Full Text Available Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision, down to almost 2 bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
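
    A quick way to see why a network can survive very low weight precision is to quantize trained weights and re-check its behaviour. The NumPy sketch below shows uniform quantization of a weight matrix to a given number of bits; it is a simplified stand-in for the precision constraints studied in the article, and the layer size is an arbitrary assumption.

        # Sketch: uniform quantization of a weight matrix to a limited bit precision.
        # This mimics, in a simplified way, running a trained network on low-precision hardware.
        import numpy as np

        def quantize(weights, bits):
            """Map weights onto 2**bits evenly spaced levels spanning their range."""
            levels = 2 ** bits
            w_min, w_max = weights.min(), weights.max()
            step = (w_max - w_min) / (levels - 1)
            return np.round((weights - w_min) / step) * step + w_min

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            w = rng.normal(0.0, 0.5, size=(784, 500))   # e.g. one DBN layer's weights (assumed size)
            for bits in (8, 4, 2):
                err = np.abs(quantize(w, bits) - w).mean()
                print(f"{bits}-bit weights: mean absolute quantization error = {err:.4f}")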

  6. Requirement of trained first responders and national level preparedness for prevention and response to radiological terrorism

    International Nuclear Information System (INIS)

    Sharma, R.; Pradeepkumar, K.S.

    2010-01-01

    In this paper we have identified the educational needs for response to radiological emergencies in India, with a major thrust on training. The paper also enumerates the available educational and training infrastructure, the human resources, and the important stakeholders for the development of a sustainable education and training programme. Training of emergency response personnel will help in quick decision making, planning and effective response during such emergencies. Medical emergency management requires planning by hospitals, which includes upgrading of earmarked hospitals and development of mobile hospitals and mobile medical teams, supported by communication backups and adequate medical logistics for radiological emergencies. The Department of Atomic Energy (DAE) is the nodal agency for advising authorities in any nuclear/radiological emergency in the public domain. DAE, through its various ERCs, has already developed the technical expertise, systems, software and methodology for quick impact assessment that may be required for the implementation of countermeasures following any nuclear disaster/radiological emergency

  7. Software-Controlled Dynamically Swappable Hardware Design in Partially Reconfigurable Systems

    Directory of Open Access Journals (Sweden)

    Huang Chun-Hsian

    2008-01-01

    Full Text Available We propose two basic wrapper designs and an enhanced wrapper design for arbitrary digital hardware circuit designs such that they can be enhanced with the capability for dynamic swapping controlled by software. A hardware design with either of the proposed wrappers can thus be swapped out of the partially reconfigurable logic at runtime in some intermediate state of computation and then swapped in when required to continue from that state. The context data is saved to a buffer in the wrapper at interruptible states, and then the wrapper takes care of saving the hardware context to communication memory through a peripheral bus, and later restoring the hardware context after the design is swapped in. The overheads of the hardware standardization and the wrapper, in terms of additional reconfigurable logic resources and the time for context switching, are small and generally acceptable. With the capability for dynamic swapping, high-priority hardware tasks can interrupt low-priority tasks in real-time embedded systems so that the utilization of hardware space per unit time is increased.
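
    The save-and-restore behaviour of such a wrapper can be pictured with a small software model. The Python sketch below captures only the idea of checkpointing context at an interruptible state and resuming from it after a swap; all names and the step-counter "context" are illustrative assumptions, not the authors' RTL design.

        # Software model (illustrative only) of swapping a hardware task out and back in.
        # Context is saved at an interruptible state and restored when the task is swapped in.
        class SwappableTask:
            def __init__(self, name, total_steps):
                self.name = name
                self.total_steps = total_steps
                self.step = 0                      # intermediate state of the computation

            def run(self, steps):
                for _ in range(steps):
                    if self.step >= self.total_steps:
                        return "finished"
                    self.step += 1                 # one unit of work ending at an interruptible state
                return "interruptible"

            def save_context(self):
                return {"name": self.name, "step": self.step}   # would go to communication memory

            @classmethod
            def restore(cls, context, total_steps):
                task = cls(context["name"], total_steps)
                task.step = context["step"]
                return task

        if __name__ == "__main__":
            low = SwappableTask("low_priority", total_steps=10)
            low.run(4)
            ctx = low.save_context()               # a high-priority task now occupies the logic
            resumed = SwappableTask.restore(ctx, total_steps=10)
            print(resumed.run(6))                  # -> "finished", continuing from step 4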

  8. 48 CFR 301.607-71 - FAC-P/PM levels and requirements.

    Science.gov (United States)

    2010-10-01

    ... Certification—Program and Project Managers—Information Technology Technical Competencies, in the P/PM Handbook for additional information. (b)(1) Competencies. An applicant can satisfy the competency requirements... programs; (iii) Demonstration of knowledge, skills, and abilities; or (iv) Any combination of these three...

  9. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from particle collisions confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require transportation of large amounts of data (TB) at high transfer rates (Gb/s) and to ensure high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store it for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, and record decisions taken and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios

  10. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from particle collisions confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require transportation of large amounts of data (TB) at high transfer rates (Gb/s) and to ensure high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store it for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, and record decisions taken and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios
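
    The monitoring duties listed above, reading sensors, logging events and raising alarms, follow a common polling pattern. The Python sketch below is a generic illustration of such a loop with assumed sensor names and threshold values; it is not the AdvancedTCA shelf-management firmware.

        # Generic sketch of a hardware-monitoring loop: read sensors, log events, raise alarms.
        # Sensor names and thresholds are assumptions for illustration, not real ATCA values.
        import random
        import time

        THRESHOLDS = {"board_temp_C": 70.0, "supply_voltage_V": 11.0}   # alarm limits (assumed)

        def read_sensors():
            # Stand-in for shelf-manager/IPMI sensor reads.
            return {"board_temp_C": random.uniform(40, 80),
                    "supply_voltage_V": random.uniform(10.5, 12.5)}

        def check(readings, log):
            alarms = []
            for name, value in readings.items():
                log.append((time.time(), name, value))          # store collected information
                limit = THRESHOLDS[name]
                too_high = name == "board_temp_C" and value > limit
                too_low = name == "supply_voltage_V" and value < limit
                if too_high or too_low:
                    alarms.append(f"ALARM {name}={value:.2f} (limit {limit})")
            return alarms

        if __name__ == "__main__":
            event_log = []
            for _ in range(3):                                   # three polling cycles
                for alarm in check(read_sensors(), event_log):
                    print(alarm)                                 # notify the system operator
                time.sleep(0.1)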

  11. Hardware Architectures for the Correspondence Problem in Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Thomas Eide

    Method"has been developed in conjunction with the work on this thesis and has not previously been described. Also, during this project a combined image acquisition and compression board has been developed for a NASA sounding rocket. This circuit, a so-called Lightning Imager, is also described. Finally...... an optimized hardware architecture has been proposed in relation to the three matching methods mentioned above. Because of the cost required to physically implement and test the developed architecture, it has been decided todocument the performance of the architecture through theoretical proofs only....

  12. Summary report of a seminar on geosphere modelling requirements of deep disposal of low and intermediate level radioactive wastes

    International Nuclear Information System (INIS)

    Piper, D.; Paige, R.W.; Broyd, T.W.

    1989-02-01

    A seminar on the geosphere modelling requirements of deep disposal of low and intermediate level radioactive wastes was organised by WS Atkins Engineering Sciences as part of Her Majesty's Inspectorate of Pollution's Radioactive Waste Assessment Programme. The objectives of the seminar were to review geosphere modelling capabilities and prioritise, if possible, any requirements for model development. Summaries of the presentations and subsequent discussions are given in this report. (author)

  13. Cache Hardware Approaches to Multiple Independent Levels of Security (MILS)

    Science.gov (United States)

    2012-10-01

    The report fragment covers a point-to-point interconnect for multicore processors and a separate x86 mode of operation, called System Management Mode (SMM), which exists in addition to the more commonly known Real Mode and Protected Mode (Figure 4 of the report shows a state diagram of transitions to and from SMM). A possible security exploit was uncovered with SMM and is described in the section on Task 2. Task 3 took significantly extra time.

  14. Energy Harvesting-based Spectrum Access with Incremental Cooperation, Relay Selection and Hardware Noises

    Directory of Open Access Journals (Sweden)

    T. N. Nguyen

    2017-04-01

    Full Text Available In this paper, we propose an energy harvesting (EH)-based spectrum access model in a cognitive radio (CR) network. In the proposed scheme, one of the available secondary transmitters (STs) helps a primary transmitter (PT) forward primary signals to a primary receiver (PR). Via this cooperation, the selected ST finds opportunities to access licensed bands to transmit secondary signals to its intended secondary receiver (SR). Secondary users are assumed to be mobile; hence, optimization of their energy consumption is of interest. The EH STs have to harvest energy from the PT's radio-frequency (RF) signals to serve the PT-PR communication as well as to transmit their own signals. The proposed scheme employs an incremental relaying technique in which the PR only requires assistance from the STs when the transmission between PT and PR is not successful. Moreover, we also investigate the impact of hardware impairments on the performance of the primary and secondary networks. For performance evaluation, we derive exact and lower-bound expressions of the outage probability (OP) over Rayleigh fading channels. Monte-Carlo simulations are performed to verify the theoretical results. The results show that the outage performance of both networks can be enhanced by increasing the number of ST-SR pairs. In addition, it is also shown that the fraction of time used for EH, the positions of the secondary users and the hardware-impairment level significantly impact the system performance.
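
    The paper's Monte-Carlo verification of outage probability over Rayleigh fading can be reproduced in miniature. The snippet below simulates a single Rayleigh link with a simple aggregate hardware-impairment term and is only a schematic of the method; the impairment model and all parameter values are assumptions, not those of the paper.

        # Schematic Monte-Carlo estimate of outage probability on one Rayleigh-faded link
        # with a simple aggregate hardware-impairment term (all parameter values assumed).
        import numpy as np

        def outage_probability(snr_db, kappa, rate_bps_hz, trials=200_000, seed=1):
            rng = np.random.default_rng(seed)
            snr = 10 ** (snr_db / 10)
            gain = rng.exponential(1.0, trials)           # |h|^2 for Rayleigh fading
            # Signal-to-noise-and-distortion ratio with impairment level kappa:
            sndr = (snr * gain) / (snr * gain * kappa**2 + 1.0)
            threshold = 2 ** rate_bps_hz - 1              # outage if capacity < target rate
            return np.mean(sndr < threshold)

        if __name__ == "__main__":
            for kappa in (0.0, 0.1, 0.2):                 # 0.0 corresponds to ideal hardware
                print(kappa, outage_probability(snr_db=10, kappa=kappa, rate_bps_hz=1.0))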

  15. Autonomous target tracking of UAVs based on low-power neural network hardware

    Science.gov (United States)

    Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe

    2014-05-01

    Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages towards real-time, autonomous target tracking.

  16. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    Science.gov (United States)

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
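
    The algorithmic core mentioned above, wavelet-based compression of neural signals, can be sketched without any special hardware. The NumPy code below performs a single-level Haar transform and keeps only the largest-magnitude coefficients; it is a deliberately simplified stand-in for the DWT processing the authors implement in silicon, and the keep fraction is an arbitrary assumption.

        # Simplified stand-in for DWT-based neural signal compression (single-level Haar).
        # Keeps only the largest-magnitude coefficients, as a thresholding compressor would.
        import numpy as np

        def haar_step(signal):
            s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
            approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)     # low-pass (approximation) coefficients
            detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)     # high-pass (detail) coefficients
            return approx, detail

        def compress(signal, keep_fraction=0.25):
            approx, detail = haar_step(signal)
            coeffs = np.concatenate([approx, detail])
            k = max(1, int(len(coeffs) * keep_fraction))
            cutoff = np.sort(np.abs(coeffs))[-k]
            return np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

        if __name__ == "__main__":
            t = np.linspace(0, 1, 256)
            spike = np.exp(-((t - 0.5) ** 2) / 0.001)     # toy spike waveform
            kept = compress(spike)
            print("nonzero coefficients kept:", np.count_nonzero(kept), "of", kept.size)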

  17. System-level Analysis of Food Moisture Content Requirements for the Mars Dual Lander Transit Mission

    Science.gov (United States)

    Levri, Julie A.; Perchonok, Michele H.

    2004-01-01

    In order to ensure that adequate water resources are available during a mission, any net water loss from the habitat must be balanced with an equivalent amount of required makeup water. Makeup water may come from a variety of sources, including water in shipped tanks, water stored in prepackaged food, product water from fuel cells, and in-situ water resources. This paper specifically addresses the issue of storing required makeup water in prepackaged food versus storing the water in shipped tanks for the Mars Dual Lander Transit Mission, one of the Advanced Life Support Reference Missions. In this paper, water mass balances have been performed for the Dual Lander Transit Mission, to determine the necessary requirement of makeup water under nominal operation (i.e. no consideration of contingency needs), on a daily basis. Contingency issues are briefly discussed with respect to impacts on makeup water storage (shipped tanks versus storage in prepackaged food). The Dual Lander Transit Mission was selected for study because it has been considered by the Johnson Space Center Exploration Office in enough detail to define a reasonable set of scenario options for nominal system operation and contingencies. This study also illustrates the concept that there are multiple, reasonable life support system scenarios for any one particular mission. Thus, the need for a particular commodity can depend upon many variables in the system. In this study, we examine the need for makeup water as it depends upon the configuration of the rest of the life support system.
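
    To make the makeup-water trade concrete, a back-of-the-envelope daily water balance can be written down. Every number in the sketch below is an assumed placeholder, not a value from the study; the point is only the bookkeeping of needs versus recovery.

        # Back-of-the-envelope daily water balance for a transit habitat.
        # All quantities are assumed placeholders (kg/day per person for a 6-person crew), not study values.
        CREW = 6

        daily_per_person = {
            "drinking_and_food_prep": 2.5,            # water consumed
            "hygiene": 1.5,
        }
        recovered_per_person = {
            "urine_and_condensate_recovery": 3.2,     # water returned by the recovery system
        }

        def daily_makeup_water():
            needed = CREW * sum(daily_per_person.values())
            recovered = CREW * sum(recovered_per_person.values())
            return max(0.0, needed - recovered)       # shortfall must come from tanks or food

        if __name__ == "__main__":
            shortfall = daily_makeup_water()
            print(f"daily makeup water required: {shortfall:.1f} kg "
                  f"(could be stored in shipped tanks or in prepackaged food)")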

  18. European Utility Requirements: leveling the European electricity producers' playing ground for new NPPs

    International Nuclear Information System (INIS)

    Bernard Roche

    2006-01-01

    Full text of publication follows: Since 1992, the European Utility Requirement (EUR) document has been developed by the major European electricity producers. The main driver to this work has been the construction of a unified European market. The electricity producers have set out design requirements adapted to this new European environment, while keeping in mind experience feedback from operating NPPs worldwide. The EUR document is now fully operational and its set of generic requirements have been recently used as bid specification in Finland and in China. The EUR document keeps developing in two directions: 1- completing the assessment of the projects that could be proposed by the vendors for the European market. Five projects have been assessed between 1999 and 2002: BWR90, EPR, EP1000, ABWR and SWR1000. Two new projects are being assessed, the Westinghouse AP1000 and the Russian VVER AES92. It is currently planned to publish these two new assessments in the first half of 2006. Others may be undertaken meanwhile. 2- revision of the generic requirements. A revision C of the volume 4 dedicated to power generation plant is being completed. It includes responses to vendors comments and feedback from the TVO call for bid for Finland 5. A revision D of the volumes 1 and 2 dedicated to nuclear islands is foreseen. The main contributions to this revision are the harmonization actions going on in Europe about nuclear safety (WENRA study on reactor safety harmonization, EC works, evolution of the IAEA guides and requirements), the harmonization works on the conditions of connection to the European HV grid as well as harmonization works on other matters, like codes and standards. This has given a unified frame in which the future nuclear plants can be designed and built. In this frame development of standards designs usable throughout Europe without major design change is possible, thus helping to increase competition, and ultimately to save investment and operating costs

  19. Hardware and software status of QCDOC

    International Nuclear Information System (INIS)

    Boyle, P.A.; Chen, D.; Christ, N.H.; Clark, M.; Cohen, S.D.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Mawhinney, R.D.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2004-01-01

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% on machines with several 10,000 nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained in real hardware as well as in simulation

  20. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  1. Hardware Design of a Smart Meter

    OpenAIRE

    Ganiyu A. Ajenikoko; Anthony A. Olaomi

    2014-01-01

    Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. This paper presents the hardware design of a smart meter. Sensing and circuit protection circuits are included in the design of the smart meter, in which resistors are naturally a fundamental part of the electronic design. Smart meters provide a route for energy savings, real-time pricing, automated data collection and elimina...

  2. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  3. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  4. Radioactive waste management: review on clearance levels and acceptance criteria legislation, requirements and standards.

    Science.gov (United States)

    Maringer, F J; Suráň, J; Kovář, P; Chauvenet, B; Peyres, V; García-Toraño, E; Cozzella, M L; De Felice, P; Vodenik, B; Hult, M; Rosengård, U; Merimaa, M; Szücs, L; Jeffery, C; Dean, J C J; Tymiński, Z; Arnold, D; Hinca, R; Mirescu, G

    2013-11-01

    In 2011 the joint research project Metrology for Radioactive Waste Management (MetroRWM) of the European Metrology Research Programme (EMRP) started, with a total duration of three years. Within this project, new metrological resources for the assessment of radioactive waste, including their calibration with new reference materials traceable to national standards, will be developed. This paper gives a review of national, European and international strategies as a basis for science-based metrological requirements in the clearance and acceptance of radioactive waste.

  5. High-Level Functional and Operational Requirements for the Advanced Fuel Cycle Facility

    International Nuclear Information System (INIS)

    Charles Park

    2006-01-01

    This document describes the principal functional and operational requirements for the proposed Advanced Fuel Cycle Facility (AFCF). The AFCF is intended to be the world's foremost facility for nuclear fuel cycle research, technology development, and demonstration. The facility will also support the near-term mission to develop and demonstrate technology in support of fuel cycle needs identified by industry, and the long-term mission to retain U.S. leadership in fuel cycle operations. The AFCF is essential to demonstrate a more proliferation-resistant fuel cycle and make long-term improvements in fuel cycle effectiveness, performance and economy

  6. Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    Science.gov (United States)

    Schulte, Erin

    2017-01-01

    As augmented and virtual reality grows in popularity, and more researchers focus on its development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams and other similar hardware has shown potential in assisting with the development of a HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development and other similar areas.

  7. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  8. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. Fault locations cover all registers and memory cells, and faults are distributed over locations according to a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant, considering four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered in order to predict system dependability accurately in the operational phase, because the masking effect takes different values depending on the operational profile
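
    The single bit-flip and stuck-at injection described above is easy to picture in software. The Python sketch below flips or forces one randomly chosen bit of a register value, which is the essence of the injection step, although the authors' model operates at the VHDL level; the 32-bit width is an assumption.

        # Sketch of the injection step: single bit-flip and stuck-at-x faults on a register value.
        # The published model injects at the VHDL level; this is a plain software illustration.
        import random

        def bit_flip(value, bit, width=32):
            return (value ^ (1 << bit)) & ((1 << width) - 1)

        def stuck_at(value, bit, stuck_value, width=32):
            mask = (1 << width) - 1
            if stuck_value:
                return (value | (1 << bit)) & mask     # stuck-at-1
            return value & ~(1 << bit) & mask          # stuck-at-0

        if __name__ == "__main__":
            random.seed(3)
            register = 0x0000F0F0
            bit = random.randrange(32)                 # fault location: uniform over bit positions
            print(f"flip bit {bit}:      {bit_flip(register, bit):#010x}")
            print(f"stuck-at-1 bit {bit}: {stuck_at(register, bit, 1):#010x}")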

  9. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place. This lab nevertheless conveys a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components which correspond to a fully equipped laboratory workstation and are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  10. Instrument hardware and software upgrades at IPNS

    International Nuclear Information System (INIS)

    Worlton, Thomas; Hammonds, John; Mikkelson, D.; Mikkelson, Ruth; Porter, Rodney; Tao, Julian; Chatterjee, Alok

    2006-01-01

    IPNS is in the process of upgrading their time-of-flight neutron scattering instruments with improved hardware and software. The hardware upgrades include replacing old VAX Qbus and Multibus-based data acquisition systems with new systems based on VXI and VME. Hardware upgrades also include expanded detector banks and new detector electronics. Old VAX Fortran-based data acquisition and analysis software is being replaced with new software as part of the ISAW project. ISAW is written in Java for ease of development and portability, and is now used routinely for data visualization, reduction, and analysis on all upgraded instruments. ISAW provides the ability to process and visualize the data from thousands of detector pixels, each having thousands of time channels. These operations can be done interactively through a familiar graphical user interface or automatically through simple scripts. Scripts and operators provided by end users are automatically included in the ISAW menu structure, along with those distributed with ISAW, when the application is started

  11. Using Project Complexity Determinations to Establish Required Levels of Project Rigor

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Thomas D.

    2015-10-01

    This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexities; to fill the need for one procedure with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: Defense Experimentation, Global Security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, sensor development, environmental restoration/waste management, and disposal of radioactive waste, among others.

  12. Flap reconstruction for soft-tissue defects with exposed hardware following deep infection after internal fixation of ankle fractures.

    Science.gov (United States)

    Ovaska, Mikko T; Madanat, Rami; Tukiainen, Erkki; Pulliainen, Lea; Sintonen, Harri; Mäkinen, Tatu J

    2014-12-01

    The aim of the present study was to determine the outcome for patients treated with flap reconstruction following deep ankle fracture infection with exposed hardware. Out of 3041 consecutive ankle fracture operations in 3030 patients from 2006 to 2011, we identified 56 patients requiring flap reconstruction following deep infection. Thirty-two of these patients could be examined at a follow-up visit. Olerud-Molander Ankle (OMA) score, 15D score, Numeric Rating Scale (NRS), and clinical examination were used to assess the outcome. A total of 58 flap reconstructions were performed in 56 patients with a mean age of 57 years (range 25–93 years) and mean follow-up time of 52 months. The most commonly used reconstruction was a distally based peroneus brevis muscle flap with a split-thickness skin graft. A microvascular free flap was required in only one patient. 22 (39%) patients required subsequent surgical interventions because of a flap-related complication. With flap reconstruction, hardware could eventually be salvaged in 53% of patients with a non-consolidated fracture. The mean OMA score was fair or poor in 53% of the patients, and only 56% had recovered their pre-injury level of function. Half of the patients had shoe wear limitations. The 15D score showed a significantly poorer health-related quality of life compared to an age-standardised sample of the general population. The mean pain NRS was 2.1 (range 0–6), and the mean satisfaction NRS was 6.6 (range 0–10). Our study showed that successful treatment of a soft-tissue defect with exposed hardware following ankle fracture infections can be achieved with local flaps. Despite eventual reconstructive success, complications are common. Patients perceive a poorer health-related quality of life, have shoe wear limitations, and only half of them achieve their pre-injury level of function.

  13. Ck2-Dependent Phosphorylation Is Required to Maintain Pax7 Protein Levels in Proliferating Muscle Progenitors.

    Directory of Open Access Journals (Sweden)

    Natalia González

    Full Text Available Skeletal muscle regeneration and long-term maintenance are directly linked to the balance between self-renewal and differentiation of resident adult stem cells known as satellite cells. In turn, satellite cell fate is influenced by a functional interaction between the transcription factor Pax7 and members of the MyoD family of muscle regulatory factors. Thus, changes in the Pax7-to-MyoD protein ratio may act as a molecular rheostat fine-tuning acquisition of lineage identity while preventing precocious terminal differentiation. Pax7 is expressed in quiescent and proliferating satellite cells, while its levels decrease sharply in differentiating progenitors; Pax7 is maintained in cells (re)acquiring quiescence. While the mechanisms regulating Pax7 levels based on differentiation status are not well understood, we have recently described that Pax7 levels are directly regulated by the ubiquitin ligase Nedd4, thus promoting proteasome-dependent Pax7 degradation in differentiating satellite cells. Here we show that Pax7 levels are maintained in proliferating muscle progenitors by a mechanism involving casein kinase 2-dependent Pax7 phosphorylation at S201. Point mutations preventing S201 phosphorylation or casein kinase 2 inhibition result in decreased Pax7 protein in proliferating muscle progenitors. Accordingly, this correlates directly with increased Pax7 ubiquitination. Finally, Pax7 downregulation induced by casein kinase 2 inhibition results in precocious myogenic induction, indicating early commitment to terminal differentiation. These observations highlight the critical role of post-translational regulation of Pax7 as a molecular switch controlling muscle progenitor fate.

  14. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

    The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, percentage of stance phase or time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a significant portion of strikes. Potential problems with some of these methods were identified and discussed and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, using various methods from the literature, differed considerably for a significant portion of strikes. A hybrid method may yield improved results.
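
    One of the methods compared above, extracting the RCOF once the normal force exceeds a threshold, can be expressed in a few lines. The threshold value, the peak-based extraction and the toy force traces below are assumptions for illustration and do not reproduce any specific method from the paper.

        # Illustration of a normal-force-threshold method for the required coefficient of friction.
        # The 100 N threshold and the peak-based extraction are assumptions, not the paper's method.
        import numpy as np

        def rcof_normal_force_threshold(shear_force, normal_force, threshold_n=100.0):
            """Return the maximum shear/normal ratio over samples where the normal force exceeds the threshold."""
            shear = np.asarray(shear_force, dtype=float)
            normal = np.asarray(normal_force, dtype=float)
            valid = normal > threshold_n
            if not valid.any():
                raise ValueError("normal force never exceeds the threshold")
            return np.max(shear[valid] / normal[valid])

        if __name__ == "__main__":
            t = np.linspace(0, 0.6, 300)                      # one stance phase, seconds
            normal = 700 * np.sin(np.pi * t / 0.6) ** 2       # toy vertical ground reaction force
            shear = 0.18 * normal + 5 * np.sin(40 * t)        # toy shear force
            print(f"RCOF estimate: {rcof_normal_force_threshold(shear, normal):.3f}")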

  15. Review of important rock mechanics studies required for underground high level nuclear waste repository program

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, S.; Cho, W. J

    2007-01-15

    A disposal concept adopting the room-and-pillar method, a technique long established in mining and tunnel construction, has advantages from the standpoints of cost, safety, technical feasibility, flexibility, and international cooperation. The important rock mechanics principles and the in situ and laboratory tests for understanding the behavior of rock, buffer, and backfill, as well as their interactions, are therefore reviewed. An accurate understanding of these is important for developing a safe disposal concept and for the successful operation of an underground repository for the permanent disposal of radioactive wastes. First, the current status of rock mechanics studies for HLW disposal in foreign countries such as Sweden, the USA, Canada, Finland, Japan, and France is reviewed. The in situ and laboratory tests for site characterization are then summarized. Furthermore, the rock mechanics studies required throughout the disposal project, from repository design to final closure, are reviewed systematically. This study will help in developing a disposal system, including site selection, repository design, operation, maintenance, and closure of a repository in deep underground rock. By introducing the required rock mechanics tests at the different stages, it will be helpful from the planning stage to the operation stage of a radioactive waste disposal project.

  16. Review of important rock mechanics studies required for underground high level nuclear waste repository program

    International Nuclear Information System (INIS)

    Kwon, S.; Cho, W. J.

    2007-01-01

    A disposal concept adopting the room-and-pillar method, a technique long established in mining and tunnel construction, has advantages from the standpoints of cost, safety, technical feasibility, flexibility, and international cooperation. The important rock mechanics principles and the in situ and laboratory tests for understanding the behavior of rock, buffer, and backfill, as well as their interactions, are therefore reviewed. An accurate understanding of these is important for developing a safe disposal concept and for the successful operation of an underground repository for the permanent disposal of radioactive wastes. First, the current status of rock mechanics studies for HLW disposal in foreign countries such as Sweden, the USA, Canada, Finland, Japan, and France is reviewed. The in situ and laboratory tests for site characterization are then summarized. Furthermore, the rock mechanics studies required throughout the disposal project, from repository design to final closure, are reviewed systematically. This study will help in developing a disposal system, including site selection, repository design, operation, maintenance, and closure of a repository in deep underground rock. By introducing the required rock mechanics tests at the different stages, it will be helpful from the planning stage to the operation stage of a radioactive waste disposal project

  17. IDENTIFICATION REQUIREMENTS CUSTOMER SERVICE PROVIDED ON THE LEVEL OF LIGHT INDUSTRY COMPANIES

    Directory of Open Access Journals (Sweden)

    MALCOCI Marina

    2015-05-01

    Full Text Available Moldova is a small country whose territory extends 350 km from north to south and 150 km from west to east. Data from the 2012 Statistical Yearbook show that 437 enterprises dealing with textiles, footwear, etc. were active, compared with only 310 companies in 2005. The business motivation is an assured market and the demand for products and services, in volume and structure, on the domestic and foreign markets. Improving customer service is one of the main objectives of production enterprises. The service level directly affects the economic capacity of the enterprise through its contribution to increasing company profits. The level of service in shops can be raised by reducing factors that negatively influence the desire to purchase, i.e. the "scanning eyes"; lengthy conversations by the seller on the phone; excessive attention to the buyer; and an arrogant or indifferent gaze from the seller. A questionnaire served as the tool for gathering information and was distributed to 50 respondents in the 18-27 age group living in an urban environment. The questionnaire included questions that allow analysis of the efficiency of customer service and of the factors influencing the purchase decision in local shops in the light industry sector. The paper identifies measures to increase the level of customer service, which would help to increase sales.

  18. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU-capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilize the vector programming model from Num......CIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups....

  19. 25 CFR 39.219 - What happens if a residential program does not maintain residency levels required by this subpart?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What happens if a residential program does not maintain residency levels required by this subpart? 39.219 Section 39.219 Indians BUREAU OF INDIAN AFFAIRS..., Student Counts, and Verifications Residential Programs § 39.219 What happens if a residential program does...

  20. Changing conditions require a higher level of entrepreneurship by farmers: use of an interactive strategic management tool

    NARCIS (Netherlands)

    Beldman, A.C.G.; Lakner, D.; Smit, A.B.

    2013-01-01

    Changing conditions require a higher level of entrepreneurship by farmers. The method of interactive strategic management (ISM) has been developed to support farmers in developing strategic skills. The method is based on three principles: (1) emphasis is on the entrepreneur; (2) interaction with the

  1. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system, in order to obtain better calibration results, is to guarantee that both cameras are kept at the same vertical level. However, cameras may be displaced due to severe conditions during robot operation or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under a hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo system cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard and circle grid based calibration methods. The comparison of the two methods demonstrated that a circle grid based calibration should be preferred over a classical checkerboard calibration approach.
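
    For reference, the checkerboard-based calibration the authors evaluated is widely available in OpenCV. The snippet below shows the usual single-camera pipeline (corner detection followed by calibration) under the assumption that a set of checkerboard images and the board's inner-corner count and square size are supplied; the file names and board geometry are placeholders.

        # Typical OpenCV checkerboard calibration pipeline (single camera), shown for reference.
        # Assumes a list of image file paths and a board with 9x6 inner corners of 25 mm squares.
        import cv2
        import numpy as np

        BOARD = (9, 6)            # inner corners per row and column (assumed board geometry)
        SQUARE_MM = 25.0

        def calibrate(image_paths):
            objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM
            obj_points, img_points, size = [], [], None
            for path in image_paths:
                gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
                size = gray.shape[::-1]
                found, corners = cv2.findChessboardCorners(gray, BOARD)
                if found:
                    obj_points.append(objp)
                    img_points.append(corners)
            rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
                obj_points, img_points, size, None, None)
            return rms, camera_matrix, dist_coeffs

        # Example (paths are placeholders):
        # print(calibrate(["calib_01.png", "calib_02.png", "calib_03.png"]))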

  2. Space Shuttle Program (SSP) Shock Test and Specification Experience for Reusable Flight Hardware Equipment

    Science.gov (United States)

    Larsen, Curtis E.

    2012-01-01

    As commercial companies are nearing a preliminary design review level of design maturity, several companies are identifying the process for qualifying their multi-use electrical and mechanical components for various shock environments, including pyrotechnic, mortar firing, and water impact. The experience in quantifying the environments consists primarily of recommendations from Military Standard-1540, Product Verification Requirement for Launch, Upper Stage, and Space Vehicles. Therefore, the NASA Engineering and Safety Center (NESC) formed a team of NASA shock experts to share the NASA experience with qualifying hardware for the Space Shuttle Program (SSP) and other applicable programs and projects. Several team teleconferences were held to discuss past experience and to share ideas of possible methods for qualifying components for multiple missions. This document contains the information compiled from the discussions

  3. The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L; Vallentin, Matthias; Sommer, Robin; Lee, Jason; Leres, Craig; Paxson, Vern; Tierney, Brian

    2007-09-19

    In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.
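
    The first design challenge above, spreading traffic across analysis nodes while keeping both directions of a connection on the same node, is commonly solved with a symmetric flow hash. The sketch below illustrates that general idea only; it is not the actual distribution scheme used by the NIDS cluster.

        # Sketch of symmetric flow hashing: both directions of a connection map to the same node.
        # Illustrative only; not the distribution scheme actually used by the NIDS cluster.
        import hashlib

        def node_for_flow(src_ip, src_port, dst_ip, dst_port, num_nodes):
            # Sort the endpoints so (A->B) and (B->A) produce the same key.
            endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
            key = f"{endpoints[0]}|{endpoints[1]}".encode()
            digest = hashlib.sha1(key).digest()
            return int.from_bytes(digest[:4], "big") % num_nodes

        if __name__ == "__main__":
            a = node_for_flow("10.0.0.5", 51515, "192.0.2.7", 443, num_nodes=8)
            b = node_for_flow("192.0.2.7", 443, "10.0.0.5", 51515, num_nodes=8)
            print(a, b, a == b)    # both directions land on the same analysis node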

  4. Does Vitamin D Sufficiency Equate to a Single Serum 25-Hydroxyvitamin D Level or Are Different Levels Required for Non-Skeletal Diseases?

    Directory of Open Access Journals (Sweden)

    Simon Spedding

    2013-12-01

    Full Text Available Objective: Clarify the concept of vitamin D sufficiency, the relationship between efficacy and vitamin D status and the role of Vitamin D supplementation in the management of non-skeletal diseases. We outline reasons for anticipating different serum vitamin D levels are required for different diseases. Method: Review the literature for evidence of efficacy of supplementation and minimum effective 25-hydroxyvitamin D (25-OHD) levels in non-skeletal disease. Results: Evidence of efficacy of vitamin supplementation is graded according to levels of evidence. Minimum effective serum 25-OHD levels are lower for skeletal disease, e.g., rickets (25 nmol/L), osteoporosis and fractures (50 nmol/L), than for premature mortality (75 nmol/L) or non-skeletal diseases, e.g., depression (75 nmol/L), diabetes and cardiovascular disease (80 nmol/L), falls and respiratory infections (95 nmol/L) and cancer (100 nmol/L). Conclusions: Evidence for the efficacy of vitamin D supplementation at serum 25-OHD levels ranging from 25 to 100 nmol/L has been obtained from trials with vitamin D interventions that change vitamin D status by increasing serum 25-OHD to a level consistent with sufficiency for that disease. This evidence supports the hypothesis that just as vitamin D metabolism is tissue dependent, so the serum levels of 25-OHD signifying deficiency or sufficiency are disease dependent.

  5. Bridging the gap between actual and required mathematics background at undergraduate university level

    DEFF Research Database (Denmark)

    Triantafyllou, Eva; Timcenko, Olga

    courses of Medialogy, e.g. computer graphics programming. Moreover, this poor performance in mathematics is one of the main causes for dropout at university level. This paper presents our ongoing research aiming at tackling this problem by developing dynamic and multimodal media for mathematics...... teaching and learning which will make mathematics more attractive and easier to understand for undergraduate students. These tools realise an interactive educational method by giving mathematics learners opportunities to develop visualization skills, explore mathematical concepts, and obtain solutions...

  6. MRI - From basic knowledge to advanced strategies: Hardware

    International Nuclear Information System (INIS)

    Carpenter, T.A.; Williams, E.J.

    1999-01-01

    There have been remarkable advances in the hardware used for nuclear magnetic resonance imaging scanners. These advances have enabled an extraordinary range of sophisticated magnetic resonance MR sequences to be performed routinely. This paper focuses on the following particular aspects: (a) Magnet system. Advances in magnet technology have allowed superconducting magnets which are low maintenance and have excellent homogeneity and very small stray field footprints. (b) Gradient system. Optimisation of gradient design has allowed gradient coils which provide excellent field for spatial encoding, have reduced diameter and have technology to minimise the effects of eddy currents. These coils can now routinely provide the strength and switching rate required by modern imaging methods. (c) Radio-frequency (RF) system. The advances in digital electronics can now provide RF electronics which have low noise characteristics, high accuracy and improved stability, which are all essential to the formation of excellent images. The use of surface coils has increased with the availability of phased-array systems, which are ideal for spinal work. (d) Computer system. The largest advance in technology has been in the supporting computer hardware which is now affordable, reliable and with performance to match the processing requirements demanded by present imaging sequences. (orig.)

  7. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
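
    A hedged software sketch of the idea behind such channel compression, using synthetic data and an assumed noise covariance rather than the authors' hardware combiner design: whiten the channels with the measured noise covariance, take the dominant eigenvectors of the resulting channel covariance, and keep only the first few combined "eigen-channels". In the paper this combination is implemented in hardware by an RF signal combiner placed after the preamplifiers, before the receivers.

```python
# Illustrative sketch (not the authors' implementation): forming "eigen-channels"
# from an 8-channel array by whitening with the noise covariance and keeping the
# dominant eigenvectors of the channel covariance. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 8, 5000

sens = rng.standard_normal((n_ch, 1)) + 1j * rng.standard_normal((n_ch, 1))  # coil sensitivities (assumed)
Rn = np.eye(n_ch) + 0.2 * np.ones((n_ch, n_ch))                              # noise covariance (assumed)

# Simulated channel data: common signal scaled by sensitivities plus correlated noise
signal = rng.standard_normal((1, n_samp))
L = np.linalg.cholesky(Rn)
noise = L @ (rng.standard_normal((n_ch, n_samp)) + 1j * rng.standard_normal((n_ch, n_samp))) / np.sqrt(2)
data = sens @ signal + noise

# Noise-whitened channel covariance and its eigen-decomposition
W = np.linalg.inv(L)                       # whitening matrix
cov = np.cov(W @ data)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]

# Keep only the first few eigen-channels ("receiver channel reduction")
n_keep = 4
T = eigvec[:, order[:n_keep]].conj().T @ W  # fixed compression matrix
compressed = T @ data
print("channels in:", n_ch, "-> eigen-channels kept:", compressed.shape[0])
```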

  8. The FTK: A Hardware Track Finder for the ATLAS Trigger

    CERN Document Server

    Alison, J; Anderson, J; Andreani, A; Andreazza, A; Annovi, A; Antonelli, M; Atkinson, M; Auerbach, B; Baines, J; Barberio, E; Beccherle, R; Beretta, M; Biesuz, N V; Blair, R; Blazey, G; Bogdan, M; Boveia, A; Britzger, D; Bryant, P; Burghgrave, B; Calderini, G; Cavaliere, V; Cavasinni, V; Chakraborty, D; Chang, P; Cheng, Y; Cipriani, R; Citraro, S; Citterio, M; Crescioli, F; Dell'Orso, M; Donati, S; Dondero, P; Drake, G; Gadomski, S; Gatta, M; Gentsos, C; Giannetti, P; Giulini, M; Gkaitatzis, S; Howarth, J W; Iizawa, T; Kapliy, A; Kasten, M; Kim, Y K; Kimura, N; Klimkovich, T; Kordas, K; Korikawa, T; Krizka, K; Kubota, T; Lanza, A; Lasagni, F; Liberali, V; Li, H L; Love, J; Luciano, P; Luongo, C; Magalotti, D; Melachrinos, C; Meroni, C; Mitani, T; Negri, A; Neroutsos, P; Neubauer, M; Nikolaidis, S; Okumura, Y; Pandini, C; Penning, B; Petridou, C; Piendibene, M; Proudfoot, J; Rados, P; Roda, C; Rossi, E; Sakurai, Y; Sampsonidis, D; Sampsonidou, D; Schmitt, S; Schoening, A; Shochet, M; Shojaii, S; Soltveit, H; Sotiropoulou, C L; Stabile, A; Tang, F; Testa, M; Tompkins, L; Vercesi, V; Villa, M; Volpi, G; Webster, J; Wu, X; Yorita, K; Yurkewicz, A; Zeng, J C; Zhang, J

    2014-01-01

    The ATLAS experiment trigger system is designed to reduce the event rate, at the LHC design luminosity of 10^34 cm^-2 s^-1, from the nominal bunch crossing rate of 40 MHz to less than 1 kHz for permanent storage. During Run 1, the LHC has performed exceptionally well, routinely exceeding the design luminosity. From 2015 the LHC is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger system, both due to the need for more sophisticated algorithms to reject background, and from the larger data volumes that will need to be processed. The Fast TracKer is a hardware upgrade for Run 2, consisting of a custom electronics system that will operate at the full rate for Level-1 accepted events of 100 kHz and provide high quality tracks at the beginning of processing in the High Level Trigger. This will perform track reconstruction using hardware with massive parallelism using associative memories and FPGAs. The availability of the full tracking information will enable r...
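
    The associative-memory idea at the core of such a hardware track finder can be illustrated with a toy software model. Everything below (bin width, pattern bank, hit positions) is invented for illustration; the real system matches millions of patterns in parallel in custom ASICs, not in Python. Pre-computed coarse-resolution "roads" are compared against the coarse bins of an event's hits, and every road whose bins are all present fires.

```python
# Toy illustration (not the FTK firmware): associative-memory style pattern matching.
# Each pre-computed "road" is a tuple of coarse bin IDs, one per detector layer; an
# event's hits are reduced to coarse bins and any road whose bins are all present fires.
N_LAYERS = 4
BIN_WIDTH = 8  # coarse-resolution bin size in arbitrary position units (assumed)

pattern_bank = {
    "road-001": (3, 3, 4, 4),
    "road-002": (5, 6, 7, 7),
    "road-003": (1, 1, 2, 2),
}

def coarse_bins(hits_per_layer):
    """Reduce full-resolution hit positions to sets of coarse bins, one set per layer."""
    return [{int(h // BIN_WIDTH) for h in layer_hits} for layer_hits in hits_per_layer]

def match_roads(hits_per_layer):
    bins = coarse_bins(hits_per_layer)
    return [road_id for road_id, road in pattern_bank.items()
            if all(road[layer] in bins[layer] for layer in range(N_LAYERS))]

# Example event: hit positions in each of the four layers
event = [[25.1, 44.0], [26.7, 50.2], [35.3, 55.0], [38.9, 60.1]]
print(match_roads(event))   # ['road-001']: its bins (3, 3, 4, 4) are all present
```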

  9. Approach to integrate current safeguards measures with additional protocol requirements at national level

    International Nuclear Information System (INIS)

    Ramirez, R.

    2001-01-01

    Peru adhered to the Additional Protocol in March 2000, and the Congress approved it in May 2001. Once approved by law, the obligations derived from the Additional Protocol will enter into force after 180 days. After the signing of the Protocol, an approach was designed to help fulfill these requirements in an integrated way with the previous measures. As a first stage, a review of the current state of safeguards was undertaken. Under the current agreement (an INFCIRC/153 type agreement) reporting is relatively simple and inexpensive to carry out, because the reports cover only the declared nuclear material and the features of the declared facilities where that material is used. No other related facility, material or activity needs to be declared. In Peru there are only two MBAs where low enriched uranium (LEU) is used, and the record system includes general ledgers, inventory records and operational books. The results of national inspections and copies of reports and communications sent to the IAEA are also kept in this system. Under the agreement and subsidiary arrangements, material balance reports (MBR), physical inventory listings (PIL) and inventory change reports (ICR) are prepared and submitted to the IAEA at scheduled periods. The MBR and PIL reports are sent after the yearly regular inspections carried out by the IAEA. An ICR is sent whenever an import or export of nuclear material is made. The time devoted to these activities is limited for both the State System for Accountability and Control (SSAC) and the users because of the limited nuclear activities in the country. Because of the characteristics and limited quantities of nuclear material, the effort needed for inspection and reporting activities is small. Another subject under review was the procedure for controlling imports of nuclear material. Under the current agreement this subject was not a problem, as all of the radioactive and nuclear

  10. A Message-Passing Hardware/Software Cosimulation Environment for Reconfigurable Computing Systems

    Directory of Open Access Journals (Sweden)

    Manuel Saldaña

    2009-01-01

    Full Text Available High-performance reconfigurable computers (HPRCs) provide a mix of standard processors and FPGAs to collectively accelerate applications. This introduces new design challenges, such as the need for portable programming models across HPRCs and system-level verification tools. To address the need for cosimulating a complete heterogeneous application using both software and hardware in an HPRC, we have created a tool called the Message-passing Simulation Framework (MSF). We have used it to simulate and develop an interface enabling an MPI-based approach to exchange data between X86 processors and hardware engines inside FPGAs. The MSF can also be used as an application development tool that enables multiple FPGAs in simulation to exchange messages amongst themselves and with X86 processors. As an example, we simulate a LINPACK benchmark hardware core using an Intel-FSB-Xilinx-FPGA platform to quickly prototype the hardware, to test the communications, and to verify the benchmark results.
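
    A minimal sketch of the MPI-style exchange such a framework enables, written with mpi4py. The rank roles, message contents and tags below are hypothetical and are not the MSF API; rank 0 stands in for an X86 software process and the remaining ranks for hardware engines (which in the real framework are HDL simulations or the FPGAs themselves).

```python
# Minimal sketch (assumes mpi4py is installed; roles and tags are invented, this is
# not the MSF API). Run with e.g.:  mpirun -n 3 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # "Software" side: distribute work to the hardware-engine ranks, collect results.
    for engine in range(1, size):
        comm.send({"cmd": "square", "value": engine * 10}, dest=engine, tag=1)
    results = [comm.recv(source=engine, tag=2) for engine in range(1, size)]
    print("results from engines:", results)
else:
    # "Hardware engine" side: in the real framework this role is played by an HDL
    # simulation or the FPGA itself, exchanging messages through the framework.
    msg = comm.recv(source=0, tag=1)
    comm.send(msg["value"] ** 2, dest=0, tag=2)
```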

  11. Functions and requirements document, WESF decoupling project, low-level liquid waste system

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, J.H., Fluor Daniel Hanford

    1997-02-27

    The Waste Encapsulation and Storage Facility (WESF) was constructed in 1974 to encapsulate and store cesium and strontium which were isolated at B Plant from underground storage tank waste. The WESF, Building 225-B, is attached physically to the west end of B Plant, Building 221-B, 200 East area. The WESF currently utilizes B Plant facilities for disposing liquid and solid waste streams. With the deactivation of B Plant, the WESF Decoupling Project will provide replacement systems allowing WESF to continue operations independently from B Plant. Four major systems have been identified to be replaced by the WESF Decoupling Project, including the following: Low Level Liquid Waste System, Solid Waste Handling System, Liquid Effluent Control System, and Deionized Water System.

  12. A regulatory perspective on design and performance requirements for engineered systems in high-level waste

    International Nuclear Information System (INIS)

    Bernero, R.M.

    1992-01-01

    For engineered systems, this paper gives an overview of some of the current activities at the U.S. Nuclear Regulatory Commission (NRC), with the intent of elucidating how the regulatory process works in the management of high-level waste (HLW). Throughout the waste management cycle, starting with packaging and transportation, and continuing to final closure of a repository, these activities are directed at taking advantage of the prelicensing consultation period, a period in which the NRC, DOE and others can interact in ways that will reduce regulatory, technical and institutional uncertainties, and open the path to development and construction of a deep geologic repository for permanent disposal of HLW. Needed interactions in the HLW program are highlighted. Examples of HLW regulatory activities are given in discussions of a multipurpose-cask concept and of current NRC work on the meaning of the term substantially complete containment

  13. Prioritizing Chemicals and Data Requirements for Screening-Level Exposure and Risk Assessment

    Science.gov (United States)

    Brown, Trevor N.; Wania, Frank; Breivik, Knut; McLachlan, Michael S.

    2012-01-01

    Background: Scientists and regulatory agencies strive to identify chemicals that may cause harmful effects to humans and the environment; however, prioritization is challenging because of the large number of chemicals requiring evaluation and limited data and resources. Objectives: We aimed to prioritize chemicals for exposure and exposure potential and obtain a quantitative perspective on research needs to better address uncertainty in screening assessments. Methods: We used a multimedia mass balance model to prioritize > 12,000 organic chemicals using four far-field human exposure metrics. The propagation of variance (uncertainty) in key chemical information used as model input for calculating exposure metrics was quantified. Results: Modeled human concentrations and intake rates span approximately 17 and 15 orders of magnitude, respectively. Estimates of exposure potential using human concentrations and a unit emission rate span approximately 13 orders of magnitude, and intake fractions span 7 orders of magnitude. The actual chemical emission rate contributes the greatest variance (uncertainty) in exposure estimates. The human biotransformation half-life is the second greatest source of uncertainty in estimated concentrations. In general, biotransformation and biodegradation half-lives are greater sources of uncertainty in modeled exposure and exposure potential than chemical partition coefficients. Conclusions: Mechanistic exposure modeling is suitable for screening and prioritizing large numbers of chemicals. By including uncertainty analysis and uncertainty in chemical information in the exposure estimates, these methods can help identify and address the important sources of uncertainty in human exposure and risk assessment in a systematic manner. PMID:23008278

  14. Prioritizing chemicals and data requirements for screening-level exposure and risk assessment.

    Science.gov (United States)

    Arnot, Jon A; Brown, Trevor N; Wania, Frank; Breivik, Knut; McLachlan, Michael S

    2012-11-01

    Scientists and regulatory agencies strive to identify chemicals that may cause harmful effects to humans and the environment; however, prioritization is challenging because of the large number of chemicals requiring evaluation and limited data and resources. We aimed to prioritize chemicals for exposure and exposure potential and obtain a quantitative perspective on research needs to better address uncertainty in screening assessments. We used a multimedia mass balance model to prioritize > 12,000 organic chemicals using four far-field human exposure metrics. The propagation of variance (uncertainty) in key chemical information used as model input for calculating exposure metrics was quantified. Modeled human concentrations and intake rates span approximately 17 and 15 orders of magnitude, respectively. Estimates of exposure potential using human concentrations and a unit emission rate span approximately 13 orders of magnitude, and intake fractions span 7 orders of magnitude. The actual chemical emission rate contributes the greatest variance (uncertainty) in exposure estimates. The human biotransformation half-life is the second greatest source of uncertainty in estimated concentrations. In general, biotransformation and biodegradation half-lives are greater sources of uncertainty in modeled exposure and exposure potential than chemical partition coefficients. Mechanistic exposure modeling is suitable for screening and prioritizing large numbers of chemicals. By including uncertainty analysis and uncertainty in chemical information in the exposure estimates, these methods can help identify and address the important sources of uncertainty in human exposure and risk assessment in a systematic manner.
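
    A hedged sketch, with invented numbers rather than the authors' multimedia mass-balance model, of how variance in inputs such as the emission rate and the biotransformation half-life propagates into a modeled exposure metric. Working on a log10 scale is natural for quantities that span many orders of magnitude, as the modeled concentrations and intake rates do here.

```python
# Monte Carlo propagation of input uncertainty into a toy intake-rate model on a
# log10 scale. Means, spreads and coefficients are invented for illustration only;
# they mirror the abstract's ranking (emission rate > half-life > partitioning).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

log_emission = rng.normal(loc=2.0, scale=1.0, size=n)   # emission rate (assumed spread)
log_halflife = rng.normal(loc=1.5, scale=0.7, size=n)   # biotransformation half-life
log_kow      = rng.normal(loc=4.0, scale=0.3, size=n)   # partition coefficient

# Toy exposure model: intake grows with emission and persistence, weakly with Kow
log_intake = log_emission + 0.8 * log_halflife + 0.2 * log_kow - 6.0

contributions = {
    "emission":  (1.0 * 1.0) ** 2,
    "half-life": (0.8 * 0.7) ** 2,
    "log Kow":   (0.2 * 0.3) ** 2,
}
total = sum(contributions.values())
print("simulated spread of log10 intake:", log_intake.std().round(2))
for name, var in contributions.items():
    print(f"{name:10s} contributes {100 * var / total:4.1f}% of the modeled variance")
```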

  15. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
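
    A software reference model of a GFSR generator, x[n] = x[n-p] XOR x[n-q], can make the hardware's behaviour concrete. The taps, word width and seeding scheme below are illustrative choices, not necessarily those of the hardware described in the record.

```python
# Software reference model of a Generalized Feedback Shift Register generator:
# x[n] = x[n-p] XOR x[n-q]. Taps (p, q), word width and the seeding LCG are
# illustrative; the paper's hardware parameters may differ.
class GFSR:
    def __init__(self, p=98, q=27, width=32, seed=12345):
        self.p, self.q, self.mask = p, q, (1 << width) - 1
        state, self.buf = seed, []
        for _ in range(p):                      # fill the delay line with an auxiliary LCG
            state = (1103515245 * state + 12345) & 0x7FFFFFFF
            self.buf.append(state & self.mask)
        self.idx = 0                            # points at x[n-p], the oldest word

    def next(self):
        new = self.buf[self.idx] ^ self.buf[(self.idx + self.p - self.q) % self.p]
        self.buf[self.idx] = new                # overwrite the oldest word with x[n]
        self.idx = (self.idx + 1) % self.p
        return new

g = GFSR()
print([g.next() for _ in range(5)])
```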

  16. Requirement of trained first responders and national level preparedness for prevention and response to radiological terrorism

    International Nuclear Information System (INIS)

    Sharma, R.; Pradeepkumar, K.S.

    2011-01-01

    The increase in the usage of radioactive sources in various fields, together with the present scenario of terrorism, indicates a possible environment for the malicious use of radioactive sources. Many nations, India included, have to further strengthen their capability to deal with nuclear/radiological emergencies. The probable radiological emergency scenarios in the public domain involve inadvertent melting of radioactive material, transport accidents involving radioactive material/sources and the presence of orphan sources, as reported elsewhere. Explosion of Radiological Dispersal Devices (RDDs) or Improvised Nuclear Devices (INDs) leading to the spread of radioactive contamination in public places has been identified by the IAEA as a probable radiological threat. The IAEA documents put considerable emphasis, at the national level, on training and educational issues related to radiological emergencies. The agencies and institutions dealing with radioactive sources have few personnel trained in radiation protection. Experience so far indicates that public awareness is also not adequate in the field of radiological safety, which may create difficulties during emergency response in the public domain. The major challenges are associated with mitigation, monitoring methodology, contaminated and overexposed casualties, decontamination and media briefing. In this paper, we identify the educational needs for response to radiological emergencies in India, with a major thrust on training. The paper also enumerates the available educational and training infrastructure, the human resources, as well as the important stakeholders for the development of a sustainable education and training programme. (author)

  17. Balanced levels of nerve growth factor are required for normal pregnancy progression.

    Science.gov (United States)

    Frank, Pierre; Barrientos, Gabriela; Tirado-González, Irene; Cohen, Marie; Moschansky, Petra; Peters, Eva M; Klapp, Burghard F; Rose, Matthias; Tometten, Mareike; Blois, Sandra M

    2014-08-01

    Nerve growth factor (NGF), the first identified member of the family of neurotrophins, is thought to play a critical role in the initiation of the decidual response in stress-challenged pregnant mice. However, the contribution of this pathway to physiological events during the establishment and maintenance of pregnancy remains largely elusive. Using NGF depletion and supplementation strategies alternatively, in this study, we demonstrated that a successful pregnancy is sensitive to disturbances in NGF levels in mice. Treatment with NGF further boosted fetal loss rates in the high-abortion rate CBA/J x DBA/2J mouse model by amplifying a local inflammatory response through recruitment of NGF-expressing immune cells, increased decidual innervation with substance P(+) nerve fibres and a Th1 cytokine shift. Similarly, treatment with a NGF-neutralising antibody in BALB/c-mated CBA/J mice, a normal-pregnancy model, also induced abortions associated with increased infiltration of tropomyosin kinase receptor A-expressing NK cells to the decidua. Importantly, in neither of the models, pregnancy loss was associated with defective ovarian function, angiogenesis or placental development. We further demonstrated that spontaneous abortion in humans is associated with up-regulated synthesis and an aberrant distribution of NGF in placental tissue. Thus, a local threshold of NGF expression seems to be necessary to ensure maternal tolerance in healthy pregnancies, but when surpassed may result in fetal rejection due to exacerbated inflammation. © 2014 Society for Reproduction and Fertility.

  18. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.
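
    A hedged sketch of the receiving side of such a pipeline: parsing an SDI-12 style data response (sensor address followed by +/- delimited values; this response format is assumed here) and posting it to a REST endpoint. The URL and JSON field names are invented for illustration and are not part of the Arduino-SDI-12 library or the authors' web stack.

```python
# Hedged sketch: parse an SDI-12 style data response and post it to a REST endpoint.
# The endpoint URL and field names are invented; only the parsing is demonstrated.
import re
import json
import urllib.request

def parse_sdi12(response: str):
    address, rest = response[0], response[1:]
    values = [float(v) for v in re.findall(r"[+-]\d+(?:\.\d+)?", rest)]
    return {"sensor_address": address, "values": values}

def post_reading(reading: dict, url: str = "http://example.org/api/readings"):
    req = urllib.request.Request(
        url,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:   # network call against a stubbed URL
        return resp.status

print(parse_sdi12("0+25.4+7.2-0.3"))   # {'sensor_address': '0', 'values': [25.4, 7.2, -0.3]}
```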

  19. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  20. An Overview of Reconfigurable Hardware in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Wenyin Fu

    2006-09-01

    Full Text Available Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.

  1. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  2. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

    The present invention, as expressed in the statement of this descriptive memory, consists of mixed-signal hardware for computing the integral image in the focal plane by means of an array of basic sensing-processing cells whose interconnection can be reconfigured through peripheral circuitry, which makes possible a very efficient implementation of a processing task that is very useful in computer vision, namely the computation of the integral image, in scenarios such as monit...
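
    A software reference of the quantity the claimed hardware computes: the integral image II(x, y) is the sum of all pixels above and to the left of (x, y), which lets any rectangular box sum be read out with at most four accesses. The sketch below is a plain NumPy model, not the focal-plane circuit.

```python
# Software reference of the integral image and the O(1) box-sum query it enables.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] using the integral image (inclusive corners)."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```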

  3. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    Full Text Available The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed in order to control the light projection of the optical emitters, to switch the capacitance electrodes and to synchronize the operation of data acquisition. As a result, the developed system is able to provide a maximum of 529 data sets per second, received from the signal conditioning circuit, to the computer.

  4. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    is the far most time consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as a fast parallel processor provided that algorithms are reformulated in a parallel solution. The purpose of this work is to test the hypothesis, that a non-cartesian reconstruction can be efficiently...... implemented on graphics hardware giving a significant speedup compared to CPU based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that has limited the speed of previous GPU gridding algorithms [2]....
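
    A hedged one-dimensional sketch of the gridding (convolution) step referred to above: each non-Cartesian k-space sample is spread onto neighbouring Cartesian grid points, weighted by a Kaiser-Bessel kernel. Kernel width, beta and the sample data are invented, and density compensation and the GPU-specific parallel reformulation are omitted.

```python
# 1-D gridding sketch: spread each non-Cartesian sample onto nearby grid points
# with a Kaiser-Bessel kernel. Parameters and data are invented for illustration.
import numpy as np

def kaiser_bessel(d, width=4.0, beta=8.0):
    arg = 1.0 - (2.0 * d / width) ** 2
    return np.where(arg > 0, np.i0(beta * np.sqrt(np.clip(arg, 0, None))), 0.0)

def grid_1d(k_coords, samples, n_grid=64, width=4.0):
    grid = np.zeros(n_grid, dtype=complex)
    half = int(width // 2)
    for k, s in zip(k_coords, samples):
        centre = int(np.round(k))
        for g in range(centre - half, centre + half + 1):   # spread onto neighbours
            if 0 <= g < n_grid:
                grid[g] += s * kaiser_bessel(abs(g - k), width)
    return grid

k = np.array([10.3, 10.9, 30.5, 31.2])               # non-Cartesian sample positions
data = np.array([1.0, 0.5, 0.8, 0.2], dtype=complex)
print(np.abs(grid_1d(k, data)).round(2))
```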

  5. List search hardware for interpretive software

    CERN Document Server

    Altaber, Jacques; Mears, B; Rausch, R

    1979-01-01

    Interpreted languages, e.g. BASIC, are simple to learn, easy to use, quick to modify and in general 'user-friendly'. However, a critically time consuming process during interpretation is that of list searching. A special microprogrammed device for fast list searching has therefore been developed at the SPS Division of CERN. It uses bit-sliced hardware. Fast algorithms perform search, insert and delete of a six-character name and its value in a list of up to 1000 pairs. The prototype shows retrieval times of the order of 10-30 microseconds. (11 refs).

  6. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  7. Development of a hardware-in-loop attitude control simulator for a CubeSat satellite

    Science.gov (United States)

    Tapsawat, Wittawat; Sangpet, Teerawat; Kuntanapreeda, Suwat

    2018-01-01

    Attitude control is an important part of satellite on-orbit operation. It greatly affects the performance of satellites. Testing of an attitude determination and control subsystem (ADCS) is very challenging since it might require attitude dynamics and the space environment in orbit. This paper develops a low-cost hardware-in-loop (HIL) simulator for testing an ADCS of a CubeSat satellite. The simulator consists of a numerical simulation part, a hardware part, and a HIL interface hardware unit. The numerical simulation part includes orbital dynamics, attitude dynamics and Earth’s magnetic field. The hardware part is the real ADCS board of the satellite. The simulation part outputs the satellite’s angular velocity and geomagnetic field information to the HIL interface hardware. Then, based on this information, the HIL interface hardware generates I2C signals mimicking the signals of the on-board rate-gyros and magnetometers and consequently outputs the signals to the ADCS board. The ADCS board reads the rate-gyro and magnetometer signals, calculates control signals, and drives the attitude actuators, which are three magnetic torquers (MTQs). The responses of the MTQs sensed by a separate magnetometer are fed back to the numerical simulation part, completing the HIL simulation loop. Experimental studies are conducted to demonstrate the feasibility and effectiveness of the simulator.
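
    The abstract does not state which control law the ADCS board runs; a common choice for a CubeSat actuated only by magnetic torquers is the B-dot detumbling law, sketched below purely as an illustration of what the board could compute each HIL cycle. The gain, control period and dipole limit are invented.

```python
# B-dot detumbling sketch (a common choice, not necessarily the paper's law):
# command a magnetic dipole opposing the measured rate of change of the field.
import numpy as np

K_BDOT = 5.0e4      # control gain (invented)
DT = 0.1            # control period in seconds (invented)
M_MAX = 0.2         # dipole limit of one torquer, A*m^2 (invented)

_prev_b = None

def bdot_step(b_body):
    """One control step: finite-difference B-dot, commanded dipole for the 3 MTQs."""
    global _prev_b
    b_body = np.asarray(b_body, dtype=float)
    if _prev_b is None:
        _prev_b = b_body
        return np.zeros(3)
    b_dot = (b_body - _prev_b) / DT
    _prev_b = b_body
    m_cmd = -K_BDOT * b_dot                    # oppose the change in the field
    return np.clip(m_cmd, -M_MAX, M_MAX)       # saturate at the torquer limit

# Two consecutive magnetometer readings in the body frame (tesla)
print(bdot_step([2.0e-5, -1.0e-5, 3.0e-5]))
print(bdot_step([2.1e-5, -1.2e-5, 2.9e-5]))
```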

  8. A hardware overview of the RHIC LLRF platform

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.

    2011-01-01

    The RHIC Low Level RF (LLRF) platform is a flexible, modular system designed around a carrier board with six XMC daughter sites. The carrier board features a Xilinx FPGA with an embedded, hard core Power PC that is remotely reconfigurable. It serves as a front end computer (FEC) that interfaces with the RHIC control system. The carrier provides high speed serial data paths to each daughter site and between daughter sites as well as four generic external fiber optic links. It also distributes low noise clocks and serial data links to all daughter sites and monitors temperature, voltage and current. To date, two XMC cards have been designed: a four channel high speed ADC and a four channel high speed DAC. The new LLRF hardware was used to replace the old RHIC LLRF system for the 2009 run. For the 2010 run, the RHIC RF system operation was dramatically changed with the introduction of accelerating both beams in a new, common cavity instead of each ring having independent cavities. The flexibility of the new system was beneficial in allowing the low level system to be adapted to support this new configuration. This hardware was also used in 2009 to provide LLRF for the newly commissioned Electron Beam Ion Source.

  9. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC already started with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  10. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improve productivity, while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step to calculate the eigenpairs. Using a novel design to address the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvement over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.
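
    A plain CPU/NumPy reference of what a SYMV kernel computes: y = alpha*A*x + beta*y for symmetric A, touching only the stored lower triangle so that each loaded element serves two rows, which is exactly the property that makes the kernel memory-bound. The GPU blocking and latency-hiding scheme of the paper is not reproduced here.

```python
# Reference SYMV using only the lower triangle; each strictly-lower element is
# used for two output rows, so data reuse (not arithmetic) dominates performance.
import numpy as np

def symv_lower(A_lower, x, alpha=1.0, beta=0.0, y=None):
    n = A_lower.shape[0]
    y = np.zeros(n) if y is None else beta * y
    for j in range(n):
        y[j] += alpha * A_lower[j, j] * x[j]
        for i in range(j + 1, n):              # one stored element, two contributions
            y[i] += alpha * A_lower[i, j] * x[j]
            y[j] += alpha * A_lower[i, j] * x[i]
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = np.tril(A) + np.tril(A, -1).T              # full symmetric matrix for checking
x = rng.standard_normal(5)
assert np.allclose(symv_lower(np.tril(A), x), A @ x)
```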

  11. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 1034 cm-2 s-1. After a successful period of data taking from 2010 to early 2013, the LHC already started with much higher instantaneous luminosity. This will increase the load on High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  12. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Full Text Available Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.
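
    The scheduling side of the problem described above can be illustrated with a toy mixed-integer model: integer start slots chosen under precedence and deadline constraints so as to minimize the makespan. The sketch below uses the PuLP package with invented task data; the paper's full formulation, including placement on heterogeneous reconfigurable resources, reconfiguration overhead and pipelining across periods, is not reproduced.

```python
# Toy mixed-integer scheduling model (requires PuLP); data are invented and only
# precedence, deadline and makespan constraints are modeled.
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, LpStatus, value

duration = {"A": 2, "B": 3, "C": 2, "D": 4}           # task lengths in time slots
deadline = {"A": 6, "B": 8, "C": 10, "D": 12}
precedence = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

prob = LpProblem("hw_task_schedule", LpMinimize)
start = {t: LpVariable(f"start_{t}", lowBound=0, cat=LpInteger) for t in duration}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan                                      # objective: minimize makespan

for t in duration:
    prob += start[t] + duration[t] <= makespan        # makespan covers every task
    prob += start[t] + duration[t] <= deadline[t]     # per-task deadline
for a, b in precedence:
    prob += start[a] + duration[a] <= start[b]        # precedence constraint

prob.solve()
print(LpStatus[prob.status], {t: value(start[t]) for t in duration}, value(makespan))
```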

  13. Is Hardware Removal Recommended after Ankle Fracture Repair?

    Directory of Open Access Journals (Sweden)

    Hong-Geun Jung

    2016-01-01

    Full Text Available The indications and clinical necessity for routine hardware removal after treating ankle or distal tibia fracture with open reduction and internal fixation are disputed even when hardware-related pain is insignificant. Thus, we determined the clinical effects of routine hardware removal irrespective of the degree of hardware-related pain, especially in the perspective of patients’ daily activities. This study was conducted on 80 consecutive cases (78 patients) treated by surgery and hardware removal after bony union. There were 56 ankle and 24 distal tibia fractures. The hardware-related pain, ankle joint stiffness, discomfort on ambulation, and patient satisfaction were evaluated before and at least 6 months after hardware removal. Pain score before hardware removal was 3.4 (range 0 to 6) and decreased to 1.3 (range 0 to 6) after removal. 58 (72.5%) patients experienced improved ankle stiffness and 65 (81.3%) less discomfort while walking on uneven ground, and 63 (80.8%) patients were satisfied with hardware removal. These results suggest that routine hardware removal after ankle or distal tibia fracture could ameliorate hardware-related pain and improve daily activities and patient satisfaction even when the hardware-related pain is minimal.

  14. Current trends in hardware and software for brain-computer interfaces (BCIs).

    Science.gov (United States)

    Brunner, P; Bianchi, L; Guger, C; Cincotti, F; Schalk, G

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  15. Current trends in hardware and software for brain-computer interfaces (BCIs)

    Science.gov (United States)

    Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  16. Associations between Depressive State and Impaired Higher-Level Functional Capacity in the Elderly with Long-Term Care Requirements.

    Science.gov (United States)

    Ogata, Soshiro; Hayashi, Chisato; Sugiura, Keiko; Hayakawa, Kazuo

    2015-01-01

    Depressive state has been reported to be significantly associated with higher-level functional capacity among community-dwelling elderly. However, few studies have investigated the associations among people with long-term care requirements. We aimed to investigate the associations between depressive state and higher-level functional capacity and obtain marginal odds ratios using propensity score analyses in people with long-term care requirements. We conducted a cross-sectional study based on participants aged ≥ 65 years (n = 545) who were community dwelling and used outpatient care services for long-term preventive care. We measured higher-level functional capacity, depressive state, and possible confounders. Then, we estimated the marginal odds ratios (i.e., the change in odds of impaired higher-level functional capacity if all versus no participants were exposed to depressive state) by logistic models using generalized linear models with the inverse probability of treatment weighting (IPTW) for propensity score and design-based standard errors. Depressive state was used as the exposure variable and higher-level functional capacity as the outcome variable. All absolute standardized differences after the IPTW using the propensity scores were functional capacity.
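
    A hedged sketch of the estimation approach described above, using invented synthetic data rather than the study data: fit a propensity model for depressive state, build inverse-probability-of-treatment weights, then fit a weighted logistic outcome model whose exposure coefficient gives a marginal odds ratio. Variable names and effect sizes are illustrative only.

```python
# IPTW sketch with synthetic data: propensity model -> weights -> weighted outcome
# model -> marginal odds ratio. Not the study data or its exact modeling choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 545
age = rng.normal(75, 6, n)
comorbidity = rng.integers(0, 4, n)
X_conf = np.column_stack([age, comorbidity])

# Synthetic exposure (depressive state) and outcome (impaired functional capacity)
p_dep = 1 / (1 + np.exp(-(-8 + 0.08 * age + 0.3 * comorbidity)))
depressed = rng.binomial(1, p_dep)
p_out = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.2 * comorbidity + 0.9 * depressed)))
impaired = rng.binomial(1, p_out)

# 1) Propensity scores and inverse-probability-of-treatment weights
ps = LogisticRegression(max_iter=1000).fit(X_conf, depressed).predict_proba(X_conf)[:, 1]
w = np.where(depressed == 1, 1 / ps, 1 / (1 - ps))

# 2) Weighted outcome model: an exposure-only design gives the marginal effect
outcome_model = LogisticRegression(max_iter=1000)
outcome_model.fit(depressed.reshape(-1, 1), impaired, sample_weight=w)
print("marginal odds ratio:", np.exp(outcome_model.coef_[0][0]).round(2))
```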

  17. High Performance Motion-Planner Architecture for Hardware-In-the-Loop System Based on Position-Based-Admittance-Control

    Directory of Open Access Journals (Sweden)

    Francesco La Mura

    2018-02-01

    Full Text Available This article focuses on a Hardware-In-the-Loop application developed from the advanced energy field project LIFES50+. The aim is to replicate, inside a wind gallery test facility, the combined effect of aerodynamic and hydrodynamic loads on a floating wind turbine model for offshore energy production, using a force controlled robotic device emulating the floating substructure’s behaviour. In addition to well known real-time Hardware-In-the-Loop (HIL) issues, the particular application presented has stringent safety requirements on the HIL equipment and difficult-to-predict operating conditions, so that extra computational effort has to be spent running specific safety algorithms and achieving the desired performance. To meet the project requirements, a high performance software architecture based on Position-Based-Admittance-Control (PBAC) is presented, combining low level motion interpolation techniques, efficient motion planning based on buffer management and Time-base control, and advanced high level safety algorithms, implemented in a rapid real-time control architecture.
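
    The abstract does not give the admittance parameters; the sketch below shows the generic position-based admittance idea behind PBAC, in which a virtual mass-damper-spring M*x'' + D*x' + K*x = F_ext is integrated every control tick to turn the measured interaction force into a position correction added to the nominal motion command. Gains and time step are invented.

```python
# Position-based admittance sketch: the measured force drives a virtual
# mass-damper-spring whose state becomes a position correction. Invented gains.
import numpy as np

class Admittance1D:
    def __init__(self, mass=10.0, damping=80.0, stiffness=200.0, dt=0.001):
        self.m, self.d, self.k, self.dt = mass, damping, stiffness, dt
        self.x = 0.0      # position correction
        self.v = 0.0      # its velocity

    def step(self, f_ext):
        a = (f_ext - self.d * self.v - self.k * self.x) / self.m   # x'' from the law
        self.v += a * self.dt                                      # semi-implicit Euler
        self.x += self.v * self.dt
        return self.x

adm = Admittance1D()
nominal = 0.5                                   # nominal position setpoint (m)
for force in [0.0, 20.0, 20.0, 5.0, 0.0]:       # measured interaction force (N)
    command = nominal + adm.step(force)
    print(round(command, 5))
```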

  18. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using a Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real time clock configuration, binary input to 7-segment display, creating ...

  19. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range, including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  20. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and the way to find the appropriate algorithms. Finally, some results on computation time and the usefulness of median filtering in radiographic imaging are given.
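
    A plain software sketch of the view taken above: filtering a window of k pixels amounts to a complete sort of the window, so the same machinery yields the median or any other rank-order operator (rank 0 gives the minimum, rank k-1 the maximum). No parallel hardware is assumed; the loops simply make the per-window sort explicit.

```python
# Rank-order filter sketch: sort each k-pixel window and pick one rank.
import numpy as np

def rank_filter(img, size=3, rank=None):
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    k = size * size
    rank = k // 2 if rank is None else rank          # default rank: the median
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + size, c:c + size].ravel()
            out[r, c] = np.sort(window)[rank]        # the "complete sort" in software
    return out

img = np.array([[10, 10, 10, 10],
                [10, 99, 10, 10],      # impulse noise
                [10, 10, 10, 10],
                [10, 10, 10, 10]])
print(rank_filter(img, size=3))        # the outlier 99 is removed by the median
```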

  1. Hardware for dynamic quantum computing experiments: Part I

    Science.gov (United States)

    Johnson, Blake; Ryan, Colm; Riste, Diego; Donovan, Brian; Ohki, Thomas

    Static, pre-defined control sequences routinely achieve high-fidelity operation on superconducting quantum processors. Efforts toward dynamic experiments depending on real-time information have mostly proceeded through hardware duplication and triggers, requiring a combinatorial explosion in the number of channels. We provide a hardware efficient solution to dynamic control with a complete platform of specialized FPGA-based control and readout electronics; these components enable arbitrary control flow, low-latency feedback and/or feedforward, and scale far beyond single-qubit control and measurement. We will introduce the BBN Arbitrary Pulse Sequencer 2 (APS2) control system and the X6 QDSP readout platform. The BBN APS2 features: a sequencer built around implementing short quantum gates, a sequence cache to allow long sequences with branching structures, subroutines for code re-use, and a trigger distribution module to capture and distribute steering information. The X6 QDSP features a single-stage DSP pipeline that combines demodulation with arbitrary integration kernels, and multiple taps to inspect data flow for debugging and calibration. We will show system performance when putting it all together, including a latency budget for feedforward operations. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office Contract No. W911NF-10-1-0324.
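
    A hedged numerical model of the readout step described for the X6 QDSP: demodulate the digitized readout signal at its intermediate frequency, then integrate it against an arbitrary kernel to produce one complex point per measurement record. The sample rate, frequency and kernel below are invented and this is not the firmware's DSP pipeline.

```python
# Demodulation + kernel integration sketch (invented parameters, synthetic record).
import numpy as np

FS = 500e6            # sample rate (assumed)
F_IF = 10e6           # readout intermediate frequency (assumed)
N = 2048              # samples per measurement record

t = np.arange(N) / FS
kernel = np.hanning(N)                                 # stand-in for a calibrated kernel

def integrate_record(record):
    demod = record * np.exp(-2j * np.pi * F_IF * t)    # digital demodulation
    return np.sum(kernel * demod) / np.sum(kernel)     # weighted integration

# Synthetic record: readout tone with a state-dependent phase plus noise
rng = np.random.default_rng(0)
phase = 0.6                                            # pretend qubit-state-dependent phase
record = np.cos(2 * np.pi * F_IF * t + phase) + 0.05 * rng.standard_normal(N)
iq = integrate_record(record)
print("integrated I/Q point:", np.round(iq, 3), "phase:", round(float(np.angle(iq)), 2))
```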

  2. Advances in flexible optrode hardware for use in cybernetic insects

    Science.gov (United States)

    Register, Joseph; Callahan, Dennis M.; Segura, Carlos; LeBlanc, John; Lissandrello, Charles; Kumar, Parshant; Salthouse, Christopher; Wheeler, Jesse

    2017-08-01

    Optogenetic manipulation is widely used to selectively excite and silence neurons in laboratory experiments. Recent efforts to miniaturize the components of optogenetic systems have enabled experiments on freely moving animals, but further miniaturization is required for freely flying insects. In particular, miniaturization of high channel-count optical waveguides is needed for high-resolution interfaces. Thin flexible waveguide arrays are needed to bend light around tight turns to access small anatomical targets. We present the design of lightweight miniaturized optogenetic hardware and supporting electronics for the untethered steering of dragonfly flight. The system is designed to enable autonomous flight and includes processing, guidance sensors, solar power, and light stimulators. The system will weigh less than 200 mg and be worn by the dragonfly as a backpack. The flexible implant has been designed to provide stimuli around nerves through micron scale apertures of adjacent neural tissue without the use of heavy hardware. We address the challenges of lightweight optogenetics and the development of high contrast polymer waveguides for this purpose.

  3. Development of the Sixty Watt Heat-Source hardware components

    International Nuclear Information System (INIS)

    McNeil, D.C.; Wyder, W.C.

    1995-01-01

    The Sixty Watt Heat Source is a nonvented heat source designed to provide 60 thermal watts of power. The unit incorporates a plutonium-238 fuel pellet encapsulated in a hot isostatically pressed General Purpose Heat Source (GPHS) iridium clad vent set. A molybdenum liner sleeve and support components isolate the fueled iridium clad from the T-111 strength member. This strength member serves as the pressure vessel and fulfills the impact and hydrostatic strength requirements. The shell is manufactured from Hastelloy S which prevents the internal components from being oxidized. Conventional drawing operations were used to simplify processing and utilize existing equipment. The deep drawing requirements for the molybdenum, T-111, and Hastelloy S were developed from past heat source hardware fabrication experiences. This resulted in multiple step drawing processes with intermediate heat treatments between forming steps. The molybdenum processing included warm forming operations. This paper describes the fabrication of these components and the multiple draw tooling developed to produce hardware to the desired specifications. copyright 1995 American Institute of Physics

  4. Spinal fusion-hardware construct: Basic concepts and imaging review

    Science.gov (United States)

    Nouh, Mohamed Ragab

    2012-01-01

    The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially in his or her institute. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods and reports on the best yield for each modality and how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the baseline point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979

  5. A novel hardware implementation for detecting respiration rate using photoplethysmography.

    Science.gov (United States)

    Prinable, Joseph; Jones, Peter; Thamrin, Cindy; McEwan, Alistair

    2017-07-01

    Asthma is a serious public health problem. Continuous monitoring of breathing may offer an alternative way to assess disease status. In this paper we present a novel hardware implementation for the capture and storage of a photoplethysmography (PPG) signal. The LED duty cycle was altered to determine the effect on respiratory rate accuracy. The oximeter was mounted on the left index finger of ten healthy volunteers. The breathing rate derived from the oximeter was validated against a nasal airflow sensor. The duty cycle of the pulse oximeter was changed between 5%, 10% and 25% at a sample rate of 500 Hz. A PPG signal and reference signal were captured for each duty cycle. The PPG signals were post-processed in Matlab to derive a respiration rate using an existing Matlab toolbox. At a 25% duty cycle the RMSE was < 2 breaths per minute for the top performing algorithm. The RMSE increased to over 5 breaths per minute when the duty cycle was reduced to 5%. The power consumed by the hardware for a 5%, 10% and 25% duty cycle was 5.4 mW, 7.8 mW, and 15 mW respectively. For clinical assessment of respiratory rate, an RMSE of < 2 breaths per minute is recommended. Further work is required to determine utility in asthma management. However, for non-clinical applications such as fitness tracking, lower accuracy may be sufficient to allow a reduced duty cycle setting.
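
    A worked example using the three operating points reported above (5%, 10% and 25% duty cycle drawing 5.4 mW, 7.8 mW and 15 mW): fitting a simple linear model P = P_static + k * duty separates the static electronics power from the LED-duty-dependent part and lets intermediate settings be estimated. The linear form is an assumption made for illustration, not a claim from the paper.

```python
# Fit P = P_static + k * duty to the three reported operating points.
import numpy as np

duty = np.array([0.05, 0.10, 0.25])
power_mw = np.array([5.4, 7.8, 15.0])

k, p_static = np.polyfit(duty, power_mw, 1)
print(f"static power ~ {p_static:.1f} mW, duty-dependent slope ~ {k:.1f} mW per unit duty")
print(f"predicted power at 15% duty ~ {p_static + k * 0.15:.1f} mW")
```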

  6. Space station common module network topology and hardware development

    Science.gov (United States)

    Anderson, P.; Braunagel, L.; Chwirka, S.; Fishman, M.; Freeman, K.; Eason, D.; Landis, D.; Lech, L.; Martin, J.; Mccorkle, J.

    1990-01-01

    Conceptual space station common module power management and distribution (SSM/PMAD) network layouts and detailed network evaluations were developed. Individual pieces of hardware to be developed for the SSM/PMAD test bed were identified. A technology assessment was developed to identify pieces of equipment requiring development effort. Equipment lists were developed from the previously selected network schematics. Additionally, functional requirements for the network equipment as well as other requirements which affected the suitability of specific items for use on the Space Station Program were identified. Assembly requirements were derived based on the SSM/PMAD developed requirements and on the selected SSM/PMAD network concepts. Basic requirements and simplified design block diagrams are included. DC remote power controllers were successfully integrated into the DC Marshall Space Flight Center breadboard. Two DC remote power controller (RPC) boards experienced mechanical failure of UES 706 stud-mounted diodes during mechanical installation of the boards into the system. These broken diodes caused input to output shorting of the RPC's. The UES 706 diodes were replaced on these RPC's which eliminated the problem. The DC RPC's as existing in the present breadboard configuration do not provide ground fault protection because the RPC was designed to only switch the hot side current. If ground fault protection were to be implemented, it would be necessary to design the system so the RPC switched both the hot and the return sides of power.

  7. Performance and system flexibility of the CDF Hardware Event Builder

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, T.M.; Schurecht, K. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    The CDF Hardware Event Builder (1) is a flexible system which is built from a combination of three different 68020-based single width Fastbus modules. The system may contain as few as three boards or as many as fifteen, depending on the specific application. Functionally, the boards receive a command to read out the raw event data from a set of Fastbus based data buffers ("scanners"), reformat data and then write the data to a Level 3 trigger/processing farm which will decide to throw the event away or to write it to tape. The data acquisition system at CDF will utilize two nine board systems which will allow an event rate of up to 35 Hz into the Level 3 trigger. This paper will present detailed performance factors, system and individual board architecture, and possible system configurations.

  8. Health Maintenance System (HMS) Hardware Research, Design, and Collaboration

    Science.gov (United States)

    Gonzalez, Stefanie M.

    2010-01-01

    The Space Life Sciences Division (SLSD) concentrates on optimizing a crew member's health. Developments are translated into innovative engineering solutions, research growth, and community awareness. This internship incorporates all those areas by targeting various projects. The main project focuses on integrating clinical and biomedical engineering principles to design, develop, and test new medical kits scheduled for launch in the spring of 2011. Additionally, items will be tagged with Radio Frequency Identification (RFID) devices to keep track of the inventory. The tags will then be tested to optimize Radio Frequency feed and feed placement. Research growth will occur with ground-based experiments designed to measure calcium-encrusted deposits in the International Space Station (ISS). The tests will assess urine calcium levels with Portable Clinical Blood Analyzer (PCBA) technology. If effective, a model for urine calcium will be developed and expanded to microgravity environments. To support collaboration among the subdivisions of SLSD, the architecture of the Crew Healthcare Systems (CHeCS) SharePoint site has been redesigned for maximum efficiency. Community collaboration has also been established with the University of Southern California, Dept. of Aeronautical Engineering, and the Food and Drug Administration (FDA). Hardware disbursements will transpire within these communities to support planetary surface exploration and to serve as an educational tool demonstrating how ground-based medicine influenced the technological development of space hardware.

  9. Towards tributyltin quantification in natural water at the Environmental Quality Standard level required by the Water Framework Directive.

    Science.gov (United States)

    Alasonati, Enrica; Fettig, Ina; Richter, Janine; Philipp, Rosemarie; Milačič, Radmila; Sčančar, Janez; Zuliani, Tea; Tunç, Murat; Bilsel, Mine; Gören, Ahmet Ceyhan; Fisicaro, Paola

    2016-11-01

    The European Union (EU) has included tributyltin (TBT) and its compounds in the list of priority water pollutants. Quality standards demanded by the EU Water Framework Directive (WFD) require determination of TBT at such a low concentration level that chemical analysis is still difficult, and further research is needed to improve the sensitivity, accuracy and precision of existing methodologies. Within the framework of the joint research project "Traceable measurements for monitoring critical pollutants under the European Water Framework Directive" in the European Metrology Research Programme (EMRP), four metrological and designated institutes have developed a primary method to quantify TBT in natural water using liquid-liquid extraction (LLE) and species-specific isotope dilution mass spectrometry (SSIDMS). The procedure has been validated at the Environmental Quality Standard (EQS) level (0.2 ng L⁻¹ as cation) and at the WFD-required limit of quantification (LOQ) (0.06 ng L⁻¹ as cation). The LOQ of the methodology was 0.06 ng L⁻¹ and the average measurement uncertainty at the LOQ was 36%, which agreed with WFD requirements. The analytical difficulties of the method, namely the presence of TBT in blanks and the sources of measurement uncertainties, as well as the interlaboratory comparison results, are discussed in detail.

  10. To Predict the Requirement of Pharmacotherapy by OGTT Glucose Levels in Women with GDM Classified by the IADPSG Criteria

    Directory of Open Access Journals (Sweden)

    Gülen Yerlikaya

    2018-01-01

    The aim of this study was to assess the association between OGTT glucose levels and the requirement of pharmacotherapy in GDM patients classified by the IADPSG criteria. This study included 203 GDM patients (108 managed with lifestyle modification and 95 requiring pharmacotherapy). Clinical risk factors and OGTT glucose concentrations at 0 (G0), 60 (G60), and 120 min (G120) were collected. OGTT glucose levels were significantly associated with the later requirement of pharmacotherapy (ROC-AUC: 71.1, 95% CI: 63.8–78.3). Also, the combination of clinical risk factors (age, BMI, parity, and pharmacotherapy in previous gestation) showed an acceptable predictive accuracy (ROC-AUC: 72.1, 95% CI: 65.0–79.2), which was further improved when glycemic parameters were added (ROC-AUC: 77.5, 95% CI: 71.5–83.9). Random forest analysis revealed the highest variable importance for G0, G60, and age. OGTT glucose measures in addition to clinical risk factors showed promising properties for risk stratification in GDM patients classified by the recently established IADPSG criteria.
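    The kind of analysis described above can be reproduced in outline with standard tools: fit a classifier on clinical factors alone and on clinical factors plus the OGTT glucose values, then compare ROC-AUCs. The snippet below is an illustrative sketch on synthetic data; the variable layout and the logistic model are assumptions, not the authors' exact pipeline (which also included a random forest).

```python
# Illustrative only: compare ROC-AUC of clinical factors vs. clinical + OGTT glucose.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 203
X_clinical = rng.normal(size=(n, 4))        # age, BMI, parity, prior pharmacotherapy (synthetic)
X_glucose = rng.normal(size=(n, 3))         # G0, G60, G120 (synthetic)
y = (rng.random(n) < 95 / 203).astype(int)  # 1 = pharmacotherapy required

for label, X in [("clinical only", X_clinical),
                 ("clinical + OGTT", np.hstack([X_clinical, X_glucose]))]:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"{label}: in-sample ROC-AUC = {auc:.2f}")
```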

  11. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists them in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology covered in this handbook.

  12. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008 -- SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(d) TTL output; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10--30V) input module; Allen-Bradley 1771-OBD DC (10--60V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  13. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  14. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
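    For readers unfamiliar with the idea, the sketch below models a stochastic spiking neuron in software: inputs are Bernoulli spike trains, integration is leaky, and firing is probabilistic. The sigmoid firing rule and all parameter values are assumptions for illustration, not the authors' digital circuit.

```python
# Software sketch of a stochastic (probabilistic) spiking neuron.
import numpy as np

rng = np.random.default_rng(1)

def stochastic_lif_step(v, inputs, weights, leak=0.9, beta=2.0):
    """One time step: leaky integration of input spikes, then probabilistic firing."""
    v = leak * v + np.dot(weights, inputs)
    p_fire = 1.0 / (1.0 + np.exp(-beta * v))    # firing probability from membrane potential
    spike = rng.random() < p_fire
    if spike:
        v = 0.0                                 # reset after a spike
    return v, int(spike)

v = 0.0
weights = np.array([0.6, -0.3, 0.8])
for t in range(20):
    input_spikes = (rng.random(3) < 0.3).astype(float)  # Bernoulli input spike trains
    v, s = stochastic_lif_step(v, input_spikes, weights)
    print(t, s)
```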

  15. Chronic alcohol binging injures the liver and other organs by reducing NAD⁺ levels required for sirtuin's deacetylase activity.

    Science.gov (United States)

    French, Samuel W

    2016-04-01

    NAD(+) levels are markedly reduced when blood alcohol levels are high during binge drinking. This causes liver injury because enzymes that require NAD(+) as a cofactor, such as the sirtuin deacetylases, cannot deacetylate acetylated proteins such as acetylated histones. This prevents the epigenetic changes that regulate metabolic processes and that prevent organ injury, such as fatty liver, in response to alcohol abuse. Hyperacetylation of numerous regulatory proteins develops. Systemic multi-organ injury occurs when NAD(+) is reduced. For instance, the circadian clock is altered if NAD(+) is not available. Cell cycle arrest occurs due to upregulation of cell cycle inhibitors, leading to DNA damage, mutations, apoptosis and tumorigenesis. NAD(+) is linked to aging through the regulation of telomere stability. NAD(+) is required for mitochondrial renewal. Alcohol dehydrogenase is present in every visceral organ in the body, so there is a systemic reduction of NAD(+) levels in all of these organs during binge drinking.

  16. An assessment of issues related to determination of time periods required for isolation of high level waste

    International Nuclear Information System (INIS)

    Cohen, J.J.; Daer, G.R.; Vogt, D.K.; Woolfolk, S.W.

    1989-01-01

    A commonly held perception is that disposal of spent nuclear fuel or high-level waste presents a risk of unprecedented duration. In 40 CFR 191, the EPA requires that projected releases of radioactivity be limited for 10,000 years after disposal, with the intent that risks from the disposal repository be no greater than those from the uranium ore deposit from which the nuclear fuel was originally extracted. This study reviews issues involved in assessing compliance with the requirement. The determination of compliance is assumption-dependent, primarily due to uncertainties in dosimetric data and the relative availability of the radioactivity for environmental transport and eventual assimilation by humans. A conclusion of this study is that, in time, a spent fuel disposal repository such as the projected Yucca Mountain Project facility will become less hazardous than the original ore deposit

  17. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 7

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    This Requirements Identification Document (RID) describes an Occupational Health and Safety Program as defined through the Relevant DOE Orders, regulations, industry codes/standards, industry guidance documents and, as appropriate, good industry practice. The definition of an Occupational Health and Safety Program as specified by this document is intended to address Defense Nuclear Facilities Safety Board Recommendations 90-2 and 91-1, which call for the strengthening of DOE complex activities through the identification and application of relevant standards which supplement or exceed requirements mandated by DOE Orders. This RID applies to the activities, personnel, structures, systems, components, and programs involved in maintaining the facility and executing the mission of the High-Level Waste Storage Tank Farms.

  18. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 3

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The Safeguards and Security (S&S) Functional Area addresses the programmatic and technical requirements, controls, and standards which assure compliance with applicable S&S laws and regulations. Numerous S&S responsibilities are performed on behalf of the Tank Farm Facility by site level organizations. Certain other responsibilities are shared, and the remainder are the sole responsibility of the Tank Farm Facility. This Requirements Identification Document describes a complete functional Safeguards and Security Program that is presumed to be the responsibility of the Tank Farm Facility. The following list identifies the programmatic elements in the S&S Functional Area: Program Management, Protection Program Scope and Evaluation, Personnel Security, Physical Security Systems, Protection Program Operations, Material Control and Accountability, Information Security, and Key Program Interfaces.

  19. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 3

    International Nuclear Information System (INIS)

    1994-04-01

    The Safeguards and Security (S&S) Functional Area addresses the programmatic and technical requirements, controls, and standards which assure compliance with applicable S&S laws and regulations. Numerous S&S responsibilities are performed on behalf of the Tank Farm Facility by site level organizations. Certain other responsibilities are shared, and the remainder are the sole responsibility of the Tank Farm Facility. This Requirements Identification Document describes a complete functional Safeguards and Security Program that is presumed to be the responsibility of the Tank Farm Facility. The following list identifies the programmatic elements in the S&S Functional Area: Program Management, Protection Program Scope and Evaluation, Personnel Security, Physical Security Systems, Protection Program Operations, Material Control and Accountability, Information Security, and Key Program Interfaces

  20. High-level waste storage tank farms/242-A evaporator standards/requirements identification document (S/RID), Vol. 7

    International Nuclear Information System (INIS)

    1994-04-01

    This Requirements Identification Document (RID) describes an Occupational Health and Safety Program as defined through the Relevant DOE Orders, regulations, industry codes/standards, industry guidance documents and, as appropriate, good industry practice. The definition of an Occupational Health and Safety Program as specified by this document is intended to address Defense Nuclear Facilities Safety Board Recommendations 90-2 and 91-1, which call for the strengthening of DOE complex activities through the identification and application of relevant standards which supplement or exceed requirements mandated by DOE Orders. This RID applies to the activities, personnel, structures, systems, components, and programs involved in maintaining the facility and executing the mission of the High-Level Waste Storage Tank Farms

  1. Hardware authentication using transmission spectra modified optical fiber

    International Nuclear Information System (INIS)

    Grubbs, Robert K.; Romero, Juan A.

    2010-01-01

    The ability to authenticate the source and integrity of data is critical to the monitoring and inspection of special nuclear materials, including hardware related to weapons production. Current methods rely on electronic encryption/authentication codes housed in monitoring devices. This always invites the question of how authentication information is implemented and protected in an electronic component, necessitating EMI shielding and possibly an on-board power source to maintain the information in memory. By using atomic layer deposition (ALD) techniques on photonic band gap (PBG) optical fibers we will explore the potential to randomly manipulate the output spectrum and intensity of an input light source. This randomization could produce unique signatures for authenticating devices, with the potential to authenticate data. An external light source projected through the fiber, with a spectrometer at the exit, would 'read' the unique signature. No internal power or computational resources would be required.
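    Conceptually, authentication then reduces to comparing a measured transmission spectrum against the enrolled signature of the fiber. The sketch below uses a normalized correlation score and a fixed threshold; both the metric and the threshold are assumptions made for illustration, not part of the cited work.

```python
# Conceptual sketch: accept a device if its measured spectrum matches the enrolled one.
import numpy as np

def spectra_match(measured, enrolled, threshold=0.98):
    """Return (accepted, score) using a normalized correlation of the two spectra."""
    m = (measured - measured.mean()) / measured.std()
    e = (enrolled - enrolled.mean()) / enrolled.std()
    score = float(np.dot(m, e) / len(m))
    return score >= threshold, score

wavelengths = np.linspace(400, 900, 500)              # nm (hypothetical sampling grid)
enrolled = np.exp(-((wavelengths - 650) / 80) ** 2)   # stored spectral signature
measured = enrolled + np.random.default_rng(2).normal(0, 0.01, enrolled.size)
ok, score = spectra_match(measured, enrolled)
print(ok, round(score, 3))
```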

  2. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1995-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. The authors must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases. All six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. Large projects require great clarity and rigor, as poor communications can be costly. For rigor to be cost effective, it can't be pedantic. Clarity thrives best in a low-risk, team environment

  3. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
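    The selected detector is simple enough to sketch directly: the nonlinear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1], followed by a threshold. The threshold rule below (a multiple of the mean NEO value) is a common convention and is an assumption here, not necessarily the exact rule used in the survey.

```python
# Nonlinear energy operator (NEO) spike detection on a synthetic trace.
import numpy as np

def neo(x):
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]   # psi[n] = x[n]^2 - x[n-1]*x[n+1]
    return psi

def detect_spikes(x, k=8.0):
    psi = neo(x)
    threshold = k * psi.mean()                  # assumed thresholding rule
    return np.flatnonzero(psi > threshold)

rng = np.random.default_rng(3)
signal = rng.normal(0, 1, 2000)
signal[500:503] += [6.0, 9.0, 5.0]              # injected spike-like transient
print(detect_spikes(signal))
```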

  4. Hardware Testing for the Optical PAyload for Lasercomm Science (OPALS)

    Science.gov (United States)

    Slagle, Amanda

    2011-01-01

    Hardware for several subsystems of the proposed Optical PAyload for Lasercomm Science (OPALS), including the gimbal and avionics, was tested. Microswitches installed on the gimbal were evaluated to verify that their point of actuation would remain within the acceptable range even if the switches themselves move slightly during launch. An inspection of the power board was conducted to ensure that all power and ground signals were isolated, that polarized components were correctly oriented, and that all components were intact and securely soldered. Initial testing on the power board revealed several minor problems, but once they were fixed the power board was shown to function correctly. All tests and inspections were documented for future use in verifying launch requirements.

  5. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution video in gray scale.

  6. Hardware accelerator design for change detection in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in Human Computer Interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (like the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time, using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
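    As a rough software analogue of a clustering-based change-detection rule, the sketch below keeps a few intensity centroids per pixel and flags a change when a new value matches none of them. The update rule, the match threshold and the replacement policy are assumptions for illustration, not the accelerator's exact scheme.

```python
# Per-pixel clustering-based change detection (toy illustration).
import numpy as np

def update_pixel(centroids, weights, value, match_thresh=15.0, alpha=0.1):
    """Return (changed, centroids, weights) after observing one pixel value."""
    d = np.abs(centroids - value)
    j = int(np.argmin(d))
    if d[j] < match_thresh:                   # value explained by an existing cluster
        centroids[j] += alpha * (value - centroids[j])
        weights[j] += 1
        return False, centroids, weights
    k = int(np.argmin(weights))               # otherwise replace the weakest cluster
    centroids[k], weights[k] = value, 1
    return True, centroids, weights

centroids, weights = np.array([100.0, 102.0, 98.0]), np.array([5, 3, 2])
for v in [101, 99, 180, 181]:                 # sudden intensity jump -> change flagged
    changed, centroids, weights = update_pixel(centroids, weights, v)
    print(v, changed)
```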

  7. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Background: Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings: We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective

  8. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other
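    The computational core that this work accelerates can be sketched in a few lines: for one pair of SNPs, label each two-locus genotype cell as high-risk when its case/control ratio exceeds the overall ratio, then score how well those labels classify the samples. The code below is a CPU toy on synthetic data (a GPU implementation parallelizes this over all pairs); scoring by plain accuracy is a simplification of MDR's usual balanced-accuracy and cross-validation procedure.

```python
# Toy version of the core MDR step for a single SNP pair.
import numpy as np

def mdr_pair_accuracy(g1, g2, status):
    """g1, g2: genotypes coded 0/1/2; status: 1 = case, 0 = control."""
    overall_ratio = status.sum() / max(1, (status == 0).sum())
    correct = 0
    for a in range(3):
        for b in range(3):
            cell = (g1 == a) & (g2 == b)
            cases = int(status[cell].sum())
            controls = int(cell.sum()) - cases
            high_risk = cases > overall_ratio * controls   # high-risk cell rule
            predicted = 1 if high_risk else 0
            correct += int((status[cell] == predicted).sum())
    return correct / status.size

rng = np.random.default_rng(4)
g1, g2 = rng.integers(0, 3, 1000), rng.integers(0, 3, 1000)
status = rng.integers(0, 2, 1000)
print(mdr_pair_accuracy(g1, g2, status))
```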

  9. Compliance with the annual NO2 air quality standard in Athens. Required NOx levels and expected health implications

    Science.gov (United States)

    Chaloulakou, A.; Mavroidis, I.; Gavriil, I.

    Recent risk assessment studies have shown that high outdoor NO2 levels observed in residential areas contribute to increased respiratory and cardiovascular diseases and mortality. Detailed information on present NO2 levels as well as predictions of NO2 concentrations corresponding to reduced NOx levels in urban areas are very useful to decision and policy makers in order to protect the public health. In the present paper, monitoring stations of the Athens network are initially classified into two main groups, traffic affected and urban background, using effectively a criterion based on the ratio of annual mean NO:NO2 concentrations. Two empirical methodologies are then considered and compared for assessing the effect of different NOx levels on the attainment of the annual NO2 air quality standard at urban-background locations in the Athens area. An interesting finding is that these two methodologies, one more general and one both year and site dependent, give similar results for the specific study area and can be applied alternatively based on the length of available concentration time series. The results show that in order to meet the EU annual mean NO2 objective at all the urban-background locations of the Athens area, annual NOx concentrations should be reduced to approximately 60 μg m⁻³, requiring NOx emission reductions of up to 30%. An analysis of the health implications of the currently observed NO2 levels is conducted, based on a dose-response relationship, and is coupled with available health-related data for the Athens area. This analysis suggests that if NO2 concentrations were reduced to the levels of the annual EU air quality standard, then a decrease of hospital admissions of up to 2.6% would be observed, depending on the levels of NO2 measured at different monitoring sites of the Athens conurbation.
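    The empirical approach can be illustrated with a hedged sketch: fit annual-mean NO2 as a function of annual-mean NOx from monitoring data, then read off the NOx level at which the fitted curve crosses the 40 μg/m³ annual NO2 limit. The quadratic form and the station values below are invented for illustration and are not the two methodologies used in the paper.

```python
# Hypothetical empirical NO2-vs-NOx relationship, inverted to find a compliant NOx level.
import numpy as np

nox = np.array([40, 60, 80, 100, 120], dtype=float)   # annual mean NOx, ug/m3 (invented)
no2 = np.array([28, 38, 46, 52, 57], dtype=float)     # annual mean NO2, ug/m3 (invented)

coeffs = np.polyfit(nox, no2, 2)                      # empirical curve NO2 = f(NOx)
f = np.poly1d(coeffs)

candidates = np.linspace(20, 120, 1001)
required_nox = candidates[np.argmax(f(candidates) >= 40.0)]  # first NOx reaching NO2 = 40
print(f"NOx must stay below roughly {required_nox:.0f} ug/m3 to meet the annual NO2 standard")
```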

  10. An integrable low-cost hardware random number generator

    Science.gov (United States)

    Ranasinghe, Damith C.; Lim, Daihyun; Devadas, Srinivas; Jamali, Behnam; Zhu, Zheng; Cole, Peter H.

    2005-02-01

    A hardware random number generator is different from a pseudo-random number generator; a pseudo-random number generator approximates the assumed behavior of a real hardware random number generator. Simple pseudo-random number generators suffice for most applications; however, demanding situations such as the generation of cryptographic keys require an efficient and cost-effective source of random numbers. Arbiter-based Physical Unclonable Functions (PUFs), proposed for physical authentication of ICs, exploit the statistical delay variation of wires and transistors across integrated circuits, resulting from process variations, to build a secret key unique to each IC. Experimental results and theoretical studies show that a sufficient amount of variation exists across ICs. This variation enables each IC to be identified securely. It is possible to exploit the unreliability of these PUF responses to build a physical random number generator. There exists measurement noise, which comes from the instability of an arbiter when it is in a racing condition. There exist challenges whose responses are unpredictable. Without environmental variations, the responses to these challenges are random in repeated measurements. Compared to other physical random number generators, PUF-based random number generators can be a compact and low-power solution since the generator need only be turned on when required. A 64-stage PUF circuit costs less than 1000 gates and the circuit can be implemented using standard IC manufacturing processes. In this paper we present a fast and efficient random number generator and analyse the quality of the random numbers produced using an array of tests used by the National Institute of Standards and Technology to evaluate the randomness of random number generators designed for cryptographic applications.
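    A conceptual software model of the idea is shown below: an arbiter PUF response is the sign of an accumulated delay difference, and challenges whose nominal difference sits near zero are metastable, so repeated noisy evaluations of them yield random bits. The linear delay model, the noise level and the stability test are all assumptions for illustration, not the circuit described in the paper.

```python
# Toy model of harvesting random bits from unstable arbiter-PUF challenges.
import numpy as np

rng = np.random.default_rng(5)
N_STAGES = 64
stage_deltas = rng.normal(0.0, 1.0, N_STAGES)      # per-stage delay differences (arbitrary units)

def puf_response(challenge, noise_sigma=0.5):
    """challenge: array of 0/1 bits; returns one response bit (sign of total delay difference)."""
    signs = 1 - 2 * challenge                       # 0 -> +1, 1 -> -1 (simplified additive model)
    delay_diff = np.dot(signs, stage_deltas) + rng.normal(0.0, noise_sigma)
    return int(delay_diff > 0)

# Search for a challenge whose response flips across repeated evaluations (metastable),
# then use it as a random-bit source.
for _ in range(1000):
    challenge = rng.integers(0, 2, N_STAGES)
    bits = [puf_response(challenge) for _ in range(32)]
    if 0.3 < np.mean(bits) < 0.7:                   # unstable -> usable for randomness
        print("random bits:", bits[:16])
        break
```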

  11. Fides: Lightweight Authenticated Cipher with Side-Channel Resistance for Constrained Hardware

    DEFF Research Database (Denmark)

    Bilgin, Begul; Bogdanov, Andrey; Knezevic, Miroslav

    2013-01-01

    In this paper, we present a novel lightweight authenticated cipher optimized for hardware implementations called Fides. It is an online nonce-based authenticated encryption scheme with authenticated data whose area requirements are as low as 793 GE and 1001 GE for 80-bit and 96-bit security...

  12. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    Directory of Open Access Journals (Sweden)

    Wong Weng-Fai

    2011-01-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, an OpenGL ES application programming interface (API), a device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  13. Practical Considerations regarding Implementation of Wind Power Applications into Real-Time Hardware-In-The-Loop Framework

    DEFF Research Database (Denmark)

    Petersen, Lennart; Iov, Florin

    2017-01-01

    This paper addresses the system implementation of a voltage control architecture for wind power plants in a Real-Time Hardware-In-The-Loop framework, where the focus is laid on the model development in a real-time simulator. This enables verification of the functionality of the developed controls, which is one of the research priorities due to the increased complexity of large wind power plants requiring a high level of communication between plant control... The increasing amount of wind power penetration into the power systems has engaged the wind power plants to take over the responsibility for adequate control of the node voltages, which has previously been accomplished by conventional generation. Voltage support at the point of common coupling is realized by an overall wind power plant controller, which requires a high-performance and robust control solution. In most cases the system including all...

  14. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    Science.gov (United States)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with the Velocity and MIM deformation algorithms.
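    The error metric is straightforward to state in code: the voxel-wise magnitude of the vector difference between the known (applied) and algorithm-predicted deformation fields, summarized by its mean. The array shapes and the synthetic fields below are assumptions for illustration.

```python
# Mean deformation error between a known and a predicted deformation vector field.
import numpy as np

def mean_deformation_error(known_dvf, predicted_dvf):
    """Both fields have shape (nx, ny, nz, 3), displacements in millimetres."""
    diff = known_dvf - predicted_dvf
    magnitudes = np.linalg.norm(diff, axis=-1)   # per-voxel vector-difference magnitude
    return float(magnitudes.mean())

rng = np.random.default_rng(6)
known = rng.normal(0, 3, (32, 32, 16, 3))
predicted = known + rng.normal(0, 0.8, known.shape)  # hypothetical algorithm output
print(f"mean deformation error: {mean_deformation_error(known, predicted):.2f} mm")
```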

  15. Fed levels of amino acids are required for the somatotropin-induced increase in muscle protein synthesis.

    Science.gov (United States)

    Wilson, Fiona A; Suryawan, Agus; Orellana, Renán A; Nguyen, Hanh V; Jeyapalan, Asumthia S; Gazzaneo, Maria C; Davis, Teresa A

    2008-10-01

    Chronic somatotropin (pST) treatment in pigs increases muscle protein synthesis and circulating insulin, a known promoter of protein synthesis. Previously, we showed that the pST-mediated rise in insulin could not account for the pST-induced increase in muscle protein synthesis when amino acids were maintained at fasting levels. This study aimed to determine whether the pST-induced increase in insulin promotes skeletal muscle protein synthesis when amino acids are provided at fed levels and whether the response is associated with enhanced translation initiation factor activation. Growing pigs were treated with pST (0 or 180 µg·kg⁻¹·day⁻¹) for 7 days, and then pancreatic-glucose-amino acid clamps were performed. Amino acids were raised to fed levels in the presence of either fasted or fed insulin concentrations; glucose was maintained at fasting throughout. Muscle protein synthesis was increased by pST treatment and by amino acids (with or without insulin) (P<0.001). In pST-treated pigs, fed, but not fasting, amino acid concentrations further increased muscle protein synthesis rates irrespective of insulin level (P<0.02). Fed amino acids, with or without raised insulin concentrations, increased the phosphorylation of S6 kinase (S6K1) and eukaryotic initiation factor (eIF) 4E-binding protein 1 (4EBP1), decreased inactive 4EBP1·eIF4E complex association, and increased active eIF4E·eIF4G complex formation (P<0.02). pST treatment did not alter translation initiation factor activation. We conclude that the pST-induced stimulation of muscle protein synthesis requires fed amino acid levels, but not fed insulin levels. However, under the current conditions, the response to amino acids is not mediated by the activation of translation initiation factors that regulate mRNA binding to the ribosomal complex.

  16. Survey of hardware supported by the Control System at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Coulter, K.J.; Nawrocki, G.J.

    1993-01-01

    The Experimental Physics and Industrial Control System (EPICS) has been under development at Los Alamos and Argonne National Laboratories for over six years. A wide variety of instrumentation is now supported. This presentation will give an overview of the types of hardware and subsystems which are currently supported and will discuss future plans for addressing additional hardware requirements at the APS. Supported systems to be discussed include: motion control, vacuum pump control and system monitoring, standard laboratory instrumentation (ADCs, DVMs, pulse generators, etc.), image processing, discrete binary and analog I/O, and standard temperature, pressure and flow monitoring

  17. The Application of Hardware in the Loop Testing for Distributed Engine Control

    Science.gov (United States)

    Thomas, George L.; Culley, Dennis E.; Brand, Alex

    2016-01-01

    The essence of a distributed control system is the modular partitioning of control function across a hardware implementation. This type of control architecture requires embedding electronics in a multitude of control element nodes for the execution of those functions, and their integration as a unified system. As the field of distributed aeropropulsion control moves toward reality, questions about building and validating these systems remain. This paper focuses on the development of hardware-in-the-loop (HIL) test techniques for distributed aero engine control, and the application of HIL testing as it pertains to potential advanced engine control applications that may now be possible due to the intelligent capability embedded in the nodes.

  18. Hardware Locks with Priority Ceiling Emulation for a Java Chip-Multiprocessor

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Schoeberl, Martin

    2015-01-01

    According to the safety-critical Java specification, priority ceiling emulation is a requirement for implementations, as it has preferable properties, such as avoiding priority inversion and being deadlock free on uni-core systems. In this paper we explore our hardware-supported implementation of priority ceiling emulation on the multicore Java optimized processor, and compare it to the existing hardware locks on the Java optimized processor. We find that the additional overhead for priority ceiling emulation on a multicore processor is several times higher than that of simpler, non-preemptive locks, mainly...
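    As background, a plain software analogue of priority ceiling emulation is sketched below: on acquiring a resource, a task's active priority is immediately raised to the resource's ceiling (the highest priority of any task that may use it) and restored on release. This is a simplified uniprocessor illustration, not the Java Optimized Processor implementation being evaluated.

```python
# Simplified priority ceiling emulation: acquire raises priority to the ceiling, release restores it.
class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.active_priority = base_priority

class CeilingLock:
    def __init__(self, ceiling):
        self.ceiling = ceiling          # highest priority of any task that may use this lock
        self.owner = None

    def acquire(self, task):
        assert self.owner is None, "uniprocessor PCE: lock must be free when requested"
        self.owner = task
        task.active_priority = max(task.active_priority, self.ceiling)

    def release(self, task):
        assert self.owner is task
        self.owner = None
        task.active_priority = task.base_priority

low = Task("logger", base_priority=2)
lock = CeilingLock(ceiling=9)
lock.acquire(low)
print(low.active_priority)              # 9 while holding the lock -> no priority inversion
lock.release(low)
print(low.active_priority)              # back to 2
```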

  19. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high-performance stand-alone computer system based on the Motorola 68000 microprocessor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage, together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  20. Greater-than-Class C low-level radioactive waste shipping package/container identification and requirements study. National Low-Level Waste Management Program

    Energy Technology Data Exchange (ETDEWEB)

    Tyacke, M.

    1993-08-01

    This report identifies a variety of shipping packages (also referred to as casks) and waste containers currently available or being developed that could be used for greater-than-Class C (GTCC) low-level waste (LLW). Since GTCC LLW varies greatly in size, shape, and activity levels, the casks and waste containers that could be used range in size from small, to accommodate a single sealed radiation source, to very large-capacity casks/canisters used to transport or dry-store highly radioactive spent fuel. In some cases, the waste containers may serve directly as shipping packages, while in other cases, the containers would need to be placed in a transport cask. For the purpose of this report, it is assumed that the generator is responsible for transporting the waste to a Department of Energy (DOE) storage, treatment, or disposal facility. Unless DOE establishes specific acceptance criteria, the receiving facility would need the capability to accept any of the casks and waste containers identified in this report. In identifying potential casks and waste containers, no consideration was given to their adequacy relative to handling, storage, treatment, and disposal. Those considerations must be addressed separately as the capabilities of the receiving facility and the handling requirements and operations are better understood.

  1. Low-level waste management in the South. Task 4.2 - long-term care requirements

    International Nuclear Information System (INIS)

    1983-01-01

    This paper provides an analysis of the long-term care requirements of low-level radioactive waste disposal facilities. Among the topics considered are the technical requirements for long-term care, the experiences of the three inactive and three active commercial disposal facilities concerning perpetual care and maintenance, and the financial management of a perpetual care fund. In addition, certain recommendations for the establishment of a perpetual care fund are provided. The predominant method of disposing of low-level radioactive wastes is shallow land burial. After studying alternative methods of disposal, the U.S. Nuclear Regulatory Commission (NRC) concluded that there are no compelling reasons for abandoning this disposal method. Of the 22 shallow land burial facilities in the U.S., the federal government maintains 14 active and two inactive disposal sites. There are three active (Barnwell, South Carolina; Hanford, Washington; and Beatty, Nevada) and three inactive commercial disposal facilities (Maxey Flats, Kentucky; Sheffield, Illinois; and West Valley, New York). The life of a typical facility can be broken into five phases: preoperational, operational, closure, postclosure observation and maintenance, and institutional control. Long-term care of a shallow land burial facility will begin with the disposal site closure phase and continue through the postclosure observation and maintenance and institutional control phases. Since the postclosure observation and maintenance phase will last about five years and the institutional control phase 100 years, the importance of a well-planned long-term care program is apparent. 26 references, 1 table

  2. Relationship of Baseline Hemoglobin Level with Serum Ferritin, Postphlebotomy Hemoglobin Changes, and Phlebotomy Requirements among HFE C282Y Homozygotes

    Directory of Open Access Journals (Sweden)

    Seyed Ali Mousavi

    2015-01-01

    Objectives. We aimed to examine whether baseline hemoglobin levels in C282Y-homozygous patients are related to the degree of serum ferritin (SF) elevation and whether patients with different baseline hemoglobin have different phlebotomy requirements. Methods. A total of 196 patients (124 males and 72 females) who had undergone therapeutic phlebotomy and had SF and both pre- and posttreatment hemoglobin values were included in the study. Results. Bivariate correlation analysis suggested that baseline SF explains approximately 6 to 7% of the variation in baseline hemoglobin. The results also showed that males who had higher (≥150 g/L) baseline hemoglobin levels had a significantly greater reduction in their posttreatment hemoglobin despite requiring fewer phlebotomies to achieve iron depletion than those who had lower (<150 g/L) baseline hemoglobin, regardless of whether baseline SF was below or above 1000 µg/L. There were no significant differences between hemoglobin subgroups regarding baseline and treatment characteristics, except for transferrin saturation between male subgroups with SF above 1000 µg/L. Similar differences were observed when females with higher (≥138 g/L) baseline hemoglobin were compared with those with lower (<138 g/L) baseline hemoglobin. Conclusion. Dividing C282Y-homozygous patients into just two subgroups according to the degree of baseline SF elevation may obscure important subgroup variations.

  3. Relationship of Baseline Hemoglobin Level with Serum Ferritin, Postphlebotomy Hemoglobin Changes, and Phlebotomy Requirements among HFE C282Y Homozygotes

    Science.gov (United States)

    Mousavi, Seyed Ali; Mahmood, Faiza; Aandahl, Astrid; Knutsen, Teresa Risopatron; Llohn, Abid Hussain

    2015-01-01

    Objectives. We aimed to examine whether baseline hemoglobin levels in C282Y-homozygous patients are related to the degree of serum ferritin (SF) elevation and whether patients with different baseline hemoglobin have different phlebotomy requirements. Methods. A total of 196 patients (124 males and 72 females) who had undergone therapeutic phlebotomy and had SF and both pre- and posttreatment hemoglobin values were included in the study. Results. Bivariate correlation analysis suggested that baseline SF explains approximately 6 to 7% of the variation in baseline hemoglobin. The results also showed that males who had higher (≥150 g/L) baseline hemoglobin levels had a significantly greater reduction in their posttreatment hemoglobin despite requiring fewer phlebotomies to achieve iron depletion than those who had lower (<150 g/L) baseline hemoglobin, regardless of whether baseline SF was below or above 1000 µg/L. There were no significant differences between hemoglobin subgroups regarding baseline and treatment characteristics, except for transferrin saturation between male subgroups with SF above 1000 µg/L. Similar differences were observed when females with higher (≥138 g/L) baseline hemoglobin were compared with those with lower (<138 g/L) baseline hemoglobin. Conclusion. Dividing C282Y-homozygous patients into just two subgroups according to the degree of baseline SF elevation may obscure important subgroup variations. PMID:26380265

  4. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

    A new aggregated Hardware/Software (HW/SW) codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data, based on the concept of descriptive experiment design regularization (DEDR), is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results, which are indicative of the significantly increased performance efficiency, both in resolution enhancement and in computational complexity reduction metrics, gained with the proposed aggregated HW/SW co-design approach.
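    The POCS ingredient can be illustrated with a toy iteration: alternate a data-fit update with projections onto simple convex sets (here non-negativity and a bounded intensity range). The constraint sets, the update step and the image sizes are assumptions chosen for clarity and are not the DEDR operator itself.

```python
# Toy POCS-style iteration: data-fit update followed by projections onto convex sets.
import numpy as np

def project_nonnegative(img):
    return np.maximum(img, 0.0)

def project_intensity_bound(img, vmax=1.0):
    return np.minimum(img, vmax)

rng = np.random.default_rng(7)
observed = rng.normal(0.5, 0.6, (64, 64))     # noisy "observation" (hypothetical)
image = np.zeros((64, 64))

for _ in range(10):                           # fixed-point iteration with POCS steps
    image = image + 0.5 * (observed - image)  # placeholder data-fit update
    image = project_intensity_bound(project_nonnegative(image))

print(float(image.min()), float(image.max())) # constraints enforced: values stay in [0, 1]
```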

  5. Technical basis for high-level waste repository land control requirements for Palo Duro Basin, Paradox Basin, and Richton Dome

    International Nuclear Information System (INIS)

    Chen, C.P.; Raines, G.E.

    1987-02-01

    Three sites, the Palo Duro Basin in Texas, the Paradox Basin in Utah, and the Richton Dome in Mississippi, are being investigated by the US Department of Energy for high-level radioactive-waste disposal in mined, deep geologic repositories in salt. This report delineates the use of regulatory, engineering, and performance assessment information to establish the technical basis for controlled area requirements. Based on the size of the controlled area determined, plus that of the geologic repository operations area, recommendations of possible land control or ownership area requirements for each locale are provided. On a technical basis, the following minimum land control or ownership requirements are recommended, assuming repository operations area of 2240 ac (907 ha), or 3.5 mi² (9.1 km²): Palo Duro Basin - 4060 ac (1643 ha), or 6.3 mi² (16.4 km²); Paradox Basin - 4060 ac (1643 ha), or 6.3 mi² (16.4 km²); and Richton Dome - 5000 ac (2024 ha), or 7.8 mi² (20.2 km²). Of the factors used to determine the technically based recommendations, one was found to dominate each locale. For the Palo Duro and Paradox Basins, the dominant factor was the need to limit potential radionuclide release by ground-water flow to the accessible environment. For the Richton Dome, the dominant factor was the need to limit the potential effects of solution mining on dome and repository integrity
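    The area figures can be checked with the standard conversions (640 acres per square mile and about 2.59 km² per square mile), as in the short sketch below.

```python
# Sanity check of the quoted land-area figures.
ACRES_PER_SQ_MILE = 640.0
KM2_PER_SQ_MILE = 2.58999

for acres in (2240, 4060, 5000):
    sq_miles = acres / ACRES_PER_SQ_MILE
    print(f"{acres} ac = {sq_miles:.1f} mi^2 = {sq_miles * KM2_PER_SQ_MILE:.1f} km^2")
# Output matches the report: 3.5/9.1, 6.3/16.4, and 7.8/20.2.
```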

  6. Architecture and development of the CDF hardware event builder

    International Nuclear Information System (INIS)

    Shaw, T.M.; Booth, A.W.; Bowden, M.

    1989-01-01

    A hardware Event Builder (EVB) has been developed for use at the Collider Detector experiment at Fermi National Accelerator Laboratory (CDF). The Event Builder presently consists of five FASTBUS modules and has the task of reading out the front-end scanners, reformatting the data into the YBOS bank structure, and transmitting the data to a Level 3 (L3) trigger system which is composed of multiple VME processing nodes. The Event Builder receives its instructions from a VAX-based Buffer Manager (BFM) program via a Unibus Processor Interface (UPI). The Buffer Manager instructs the Event Builder to read out one of the four CDF front-end buffers. The Event Builder then informs the Buffer Manager when the event has been formatted and is then instructed to push it up to the L3 trigger system. Once in the L3 system, a decision is made as to whether to write the event to tape

  7. Operation and Monitoring of the CMS Regional Calorimeter Trigger Hardware

    CERN Document Server

    Klabbers, P

    2008-01-01

    The electronics for the Regional Calorimeter Trigger (RCT) of the Compact Muon Solenoid Experiment (CMS) have been produced, tested, and installed. The RCT hardware consists of one clock distribution crate and 18 double-sided crates containing custom boards, ASICs, and backplanes. The RCT receives 8-bit energies and a data quality bit from the HCAL and ECAL Trigger Primitive Generators (TPGs) and, after processing, sends them to the CMS Global Calorimeter Trigger (GCT). Integration tests with the TPG and GCT subsystems have been successful. Installation is complete and the RCT is integrated into the Level-1 Trigger chain. Data taking has begun using detector noise, cosmic rays, proton-beam debris, and beam-halo muons. The operation and configuration of the RCT is a completely automated process. The tools to monitor, operate, and debug the RCT are mature and will be described in detail, as well as the results from data taking with the RCT.

  8. Protein requirement of young adult Nigerian females on habitual Nigerian diet at the usual level of energy intake.

    Science.gov (United States)

    Egun, G N; Atinmo, T

    1993-09-01

    A short-term N balance study was conducted in twelve healthy female adults aged 21-32 years to determine their protein requirement. Four dietary protein levels (0.3, 0.4, 0.5 and 0.6 g protein/kg per d) were used. Energy intake of the subjects was kept constant at 0.18 MJ/kg per d. All subjects maintained their normal activity throughout the study period. N excretion was determined from measurements of N in a total collection of urine, faeces, sweat and menstrual fluid for each dietary period. N balances at the four protein levels were -15.15 (SD 5.95), -5.53 (SD 6.71), +6.15 (SD 4.76) and +12.05 (SD 8.63) mg N/kg per d for 0.3, 0.4, 0.5 and 0.6 g protein/kg per d respectively. The calculated average N requirement from regression analysis was 76.0 (SD 3.37) mg N/kg per d (0.48 g protein/kg per d). The estimated allowance for individual variation to cover 97.5% of the population was 95 mg N/kg per d (0.6 g protein/kg per d). The net protein utilization (NPU) of the diet was 0.55. When compared with a similar study with men, there was a significant difference in the protein requirement between sexes. Thus, the unjustifiable sex difference in the protein allowance recommended by the Food and Agriculture Organization/World Health Organization/United Nations University (1985) Expert Consultation group must be reviewed.
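    The regression step can be reconstructed approximately from the group means reported above: convert protein intake to nitrogen intake with the standard factor of 6.25 g protein per g N, regress mean balance on intake, and solve for zero balance. Applied to the group means this lands near the reported average requirement of 76 mg N/kg per d; the published figure comes from the individual balance data, so the match is only approximate.

```python
# Approximate reconstruction of the requirement estimate from the reported group means.
import numpy as np

protein_intake = np.array([0.3, 0.4, 0.5, 0.6])        # g protein/kg per d
n_intake = protein_intake * 1000 / 6.25                 # mg N/kg per d (6.25 g protein per g N)
n_balance = np.array([-15.15, -5.53, 6.15, 12.05])      # mean balances, mg N/kg per d

slope, intercept = np.polyfit(n_intake, n_balance, 1)
zero_balance_intake = -intercept / slope                # intake at which N balance is zero
print(f"estimated average requirement ~ {zero_balance_intake:.0f} mg N/kg per d")
```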

  9. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of the data source's applicability to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria relative to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  10. A Framework for Dynamically-Loaded Hardware Library (HLL) in FPGA Acceleration

    DEFF Research Database (Denmark)

    Cardarilli, Gian Carlo; Di Carlo, Leonardo; Nannarelli, Alberto

    2016-01-01

    Hardware acceleration is often used to address the need for speed and computing power in embedded systems. FPGAs have always represented a good solution for HW acceleration and, recently, new SoC platforms have extended the flexibility of FPGAs by combining on a single chip both high-performance CPUs and FPGA fabric. The aim of this work is the implementation of hardware accelerators for these new SoCs. The innovative feature of these accelerators is the on-the-fly reconfiguration of the hardware to dynamically adapt the accelerator's functionalities to the current CPU workload. The realization of the accelerators also requires preliminary profiling of both the SW (ARM CPU + NEON units) and HW (FPGA) performance, an evaluation of the partial reconfiguration times, and the development of an application-specific IP-core library. This paper focuses on the profiling aspect of both the SW and HW...

  11. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts; there is interest, and open source hardware has gained visible momentum recently, with several well-known universities, including UC Berkeley, Cambridge and ETH Zürich, actively working on large projects involving open source hardware and attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  12. Support for NUMA hardware in HelenOS

    OpenAIRE

    Horký, Vojtěch

    2011-01-01

    The goal of this master thesis is to extend HelenOS operating system with the support for ccNUMA hardware. The text of the thesis contains a brief introduction to ccNUMA hardware, an overview of NUMA features and relevant features of HelenOS (memory management, scheduling, etc.). The thesis analyses various design decisions of the implementation of NUMA support -- introducing the hardware topology into the kernel data structures, propagating this information to user space, thread affinity to ...
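
    This is not HelenOS code, but a toy sketch of the kind of topology bookkeeping the thesis discusses: NUMA nodes with local CPUs and memory plus a distance matrix, used to pick the closest node with enough free memory for an allocation. Node counts, distances and sizes are illustrative.

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    node_id: int
    cpus: list          # CPU ids local to this node
    free_pages: int     # free local memory, in pages

# Hypothetical two-node topology; distances follow the ACPI SLIT convention
# (10 = local, larger = more remote).
nodes = [NumaNode(0, [0, 1, 2, 3], free_pages=5000),
         NumaNode(1, [4, 5, 6, 7], free_pages=200)]
distance = [[10, 20],
            [20, 10]]

def best_node_for(cpu: int, pages_needed: int) -> NumaNode:
    """Prefer the node owning the CPU; fall back to the nearest node with enough memory.
    Assumes at least one node can satisfy the request."""
    home = next(n for n in nodes if cpu in n.cpus)
    candidates = [n for n in nodes if n.free_pages >= pages_needed]
    return min(candidates, key=lambda n: distance[home.node_id][n.node_id])

print(best_node_for(cpu=2, pages_needed=1000).node_id)   # -> 0
```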

  13. The Hardware Topological Trigger of ATLAS: Commissioning and Operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226165; The ATLAS collaboration

    2018-01-01

    The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, a muon trigger and a central trigger processor. To improve the physics reach of ATLAS, the Level-1 trigger system was upgraded at the hardware, firmware and software levels during the LHC shutdown after Run 1. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. On each blade, real-time information from the calorimeter and muon Level-1 trigger systems is processed by four individual state-of-the-art FPGAs. The system has to handle a large input bandwidth of up to 6 Tb/s, optical connectivity and low processing latency on the real-time data path. The L1Topo firmware applies measurements of angles between jets and/or leptons and several...
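
    As a generic illustration of the angular quantities such topological algorithms cut on (not the L1Topo firmware itself), the sketch below computes the azimuthal separation Δφ and the distance ΔR between two trigger objects; the η and φ values are made up.

```python
import math

def delta_phi(phi1: float, phi2: float) -> float:
    """Azimuthal separation wrapped into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

def delta_r(eta1: float, phi1: float, eta2: float, phi2: float) -> float:
    """Angular distance commonly used in trigger topology selections."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# Example: a jet and a muon candidate (eta, phi values are illustrative)
jet, muon = (1.2, 0.4), (0.8, -2.9)
print(f"dphi = {delta_phi(jet[1], muon[1]):.3f}, dR = {delta_r(*jet, *muon):.3f}")
```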

  14. Materials Science Research Hardware for Application on the International Space Station: an Overview of Typical Hardware Requirements and Features

    Science.gov (United States)

    Schaefer, D. A.; Cobb, S.; Fiske, M. R.; Srinivas, R.

    2000-01-01

    NASA's Marshall Space Flight Center (MSFC) is the lead center for Materials Science Microgravity Research. The Materials Science Research Facility (MSRF) is a key development effort underway at MSFC. The MSRF will be the primary facility for microgravity materials science research on board the International Space Station (ISS) and will implement the NASA Materials Science Microgravity Research Program. It will operate in the U.S. Laboratory Module and support U.S. Microgravity Materials Science Investigations. This facility is being designed to maintain the momentum of the U.S. role in microgravity materials science and support NASA's Human Exploration and Development of Space (HEDS) Enterprise goals and objectives for Materials Science. The MSRF as currently envisioned will consist of three Materials Science Research Racks (MSRR), which will be deployed to the ISS in phases. Each rack is being designed to accommodate various Experiment Modules, which comprise processing facilities for peer-selected Materials Science experiments. Phased deployment will enable early opportunities for the U.S. and International Partners, and support the timely incorporation of technology updates to the Experiment Modules and sensor devices.

  15. Environmental Friendly Coatings and Corrosion Prevention For Flight Hardware Project

    Science.gov (United States)

    Calle, Luz

    2014-01-01

    Identify, test, and develop qualification criteria for environmentally friendly corrosion-protective coatings and corrosion preventative compounds (CPCs) for flight hardware and ground support equipment.

  16. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rybovic, Michala [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: mryb6983@mail.usyd.edu.au; Halkett, Georgia K. [Western Australia Centre for Cancer and Palliative Care, Curtin University of Technology, Health Research Campus, GPO Box U1987, Perth, WA 6845 (Australia)], E-mail: g.halkett@curtin.edu.au; Banati, Richard B. [Faculty of Health Sciences, Brain and Mind Research Institute - Ramaciotti Centre for Brain Imaging, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: r.banati@usyd.edu.au; Cox, Jennifer [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: jenny.cox@usyd.edu.au

    2008-11-15

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour.

  17. Glucose is required to maintain high ATP-levels for the energy utilizing steps during PDT-induced apoptosis

    International Nuclear Information System (INIS)

    Oberdanner, C.; Plaetzer, K.; Kiesslich, T.; Krammer, B.

    2003-01-01

    Full text: Photodynamic therapy (PDT) may trigger apoptosis or necrosis in cancer cells. Several steps in the induction and execution of apoptosis require high amounts of adenosine-5'-triphosphate (ATP). Since the mitochondrial membrane potential (ΔΨ) decreases early in apoptosis, we raised the question of how a sufficiently high ATP level is maintained. We therefore monitored ΔΨ and the intracellular ATP level of apoptotic human epidermoid carcinoma cells (A431) after photodynamic treatment with aluminium (III) phthalocyanine tetrasulfonate chloride. A maximum of caspase-3 activation and nuclear fragmentation was found at fluences of about 4 J·cm⁻². Under these conditions apoptotic cells reduced ΔΨ rapidly, while the ATP level remained high for 4 to 6 hours after treatment for cells supplied with glucose. To analyze the contribution of glycolysis to the energy supply during apoptosis, experiments were carried out with cells deprived of glucose. These cells showed a rapid drop in ATP content, and neither caspase activation nor nuclear fragmentation could be detected. We conclude that the use of glucose as a source of ATP is obligatory for the execution of PDT-induced apoptosis. (author)

  18. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  19. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others]

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  20. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    This work presents a new approach, with details on the integrated platform and hardware architecture, for nanorobot applications in epidemic control, which should enable real-time in vivo prognosis of biohazard infection. Recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high-precision pervasive biomedical monitoring with real-time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that brings nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long-distance ubiquitous surveillance and health monitoring of troops in conflict zones. The current model can therefore also be used to help protect a population against a targeted epidemic disease.

  1. Hardware upgrade for A2 data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Ostrick, Michael; Gradl, Wolfgang; Otte, Peter-Bernd; Neiser, Andreas; Steffen, Oliver; Wolfes, Martin; Koerner, Tito [Institut fuer Kernphysik, Mainz (Germany); Collaboration: A2-Collaboration

    2014-07-01

    The A2 Collaboration uses an energy-tagged photon beam which is produced via bremsstrahlung off the MAMI electron beam. The detector system consists of Crystal Ball and TAPS and covers almost the whole solid angle. A frozen-spin polarized target allows high-precision measurements of polarization observables in meson photo-production. During the last summer, a major upgrade of the data acquisition system was performed, both on the hardware and the software side. The goal of this upgrade was increased reliability of the system and an improvement in the data rate to disk. By doubling the number of readout CPUs and employing special VME crates with a split backplane, the number of bus accesses per readout cycle and crate was cut by a factor of two, giving almost a factor of two gain in the readout rate. In the course of the upgrade, we also switched most of the detector control system to the distributed control system EPICS. For the upgraded control system, some new tools were developed to make full use of the capabilities of this decentralised slow control and monitoring system. The poster presents some of the major contributions to this project.

  2. Feasibility studies of a Level-1 Tracking Trigger for ATLAS

    CERN Document Server

    Warren, M; Brenner, R; Konstantinidis, N; Sutton, M

    2009-01-01

    The existing ATLAS Level-1 trigger system is seriously challenged at the SLHC's higher luminosity. A hardware tracking trigger might be needed, but requires a detailed understanding of the detector. Simulation of high pile-up events, with various data-reduction techniques applied, will be described. Two scenarios are envisaged: (a) regional readout - calorimeter and muon triggers are used to identify portions of the tracker; and (b) track-stub finding using special trigger layers. A proposed hardware system, including data reduction on the front-end ASICs, readout within a super-module and integration of regional triggering into all levels of the readout system, will be discussed.

  3. Hardware and software for physical assessment work and health students

    Directory of Open Access Journals (Sweden)

    Олександр Юрійович Азархов

    2016-11-01

    The article describes the hardware and software used to assess students' health by means of information technology, organized as a physical efficiency assessment channel (PEAC). A list of the diseases students most often suffer from was prepared, and a minimum set of informative primary biosignals was selected for it. The structural scheme of the PEAC is presented, and the ways to form and calculate the secondary parameters for evaluating students' health are shown. The resulting criteria, indices, indicators and parameters, grouped in a separate table for ease of use, are also presented. The list dictates the choice of vital-activity parameters to be used as criteria for primary express diagnostics of the health state, based on indicators such as the electrocardiogram, photoplethysmogram, spirogram, blood pressure, body mass and length, and dynamometry. These qualitative indicators should be supplemented with measurement methods that provide a quantitative component for each indicator. This approach makes it possible to obtain assessments of students' health with the desired properties. The physical state assessment channel, together with a channel for comprehensive activity evaluation and a decision-support subsystem, ensures assessment of the student's health across all aspects of activity and professional training, thereby forming an adequate behavioral algorithm that supports maximum health, longevity and professional activity. The basic requirements for the hardware are: a minimum number of information-measuring channels; high noise immunity of those channels; comfort that does not interfere with the student's normal activity; small dimensions, weight and power consumption; simplicity; and, in some cases, service authorization.

  4. Personal radiation detector at a high technology readiness level that satisfies DARPA's SN-13-47 and SIGMA program requirements

    Science.gov (United States)

    Ginzburg, D.; Knafo, Y.; Manor, A.; Seif, R.; Ghelman, M.; Ellenbogen, M.; Pushkarsky, V.; Ifergan, Y.; Semyonov, N.; Wengrowicz, U.; Mazor, T.; Kadmon, Y.; Cohen, Y.; Osovizky, A.

    2015-06-01

    There is a need to develop new personal radiation detector (PRD) technologies that can be mass produced. On August 2013, DARPA released a request for information (RFI) seeking innovative radiation detection technologies. In addition, on December 2013, a Broad Agency Announcement (BAA) for the SIGMA program was released. The RFI requirements focused on a sensor that should possess three main properties: low cost, high compactness and radioisotope identification capabilities. The identification performances should facilitate the detection of a hidden threat, ranging from special nuclear materials (SNM) to commonly used radiological sources. Subsequently, the BAA presented the specific requirements at an instrument level and provided a comparison between the current market status (state-of-the-art) and the SIGMA program objectives. This work presents an optional alternative for both the detection technology (sensor with communication output and without user interface) for DARPA's initial RFI and for the PRD required by the SIGMA program. A broad discussion is dedicated to the method proposed to fulfill the program objectives and to the selected alternative that is based on the PDS-GO design and technology. The PDS-GO is the first commercially available PRD that is based on a scintillation crystal optically coupled with a silicon photomultiplier (SiPM), a solid-state light sensor. This work presents the current performance of the instrument and possible future upgrades based on recent technological improvements in the SiPM design. The approach of utilizing the SiPM with a commonly available CsI(Tl) crystal is the key for achieving the program objectives. This approach provides the appropriate performance, low cost, mass production and small dimensions; however, it requires a creative approach to overcome the obstacles of the solid-state detector dark current (noise) and gain stabilization over a wide temperature range. Based on the presented results, we presume that
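
    The abstract does not spell out the gain-stabilization scheme; as a generic illustration, SiPM gain drifts with temperature mainly because the breakdown voltage rises roughly linearly with T, so a first-order correction either shifts the bias or rescales the measured amplitude. The coefficients below are placeholders, not values for the PDS-GO.

```python
# Generic first-order SiPM gain stabilization sketch (all coefficients are placeholders).
# SiPM breakdown voltage rises roughly linearly with temperature, so either the bias
# is adjusted or the measured amplitude is rescaled to a reference temperature.
T_REF = 25.0          # reference temperature, deg C
DVBR_DT = 0.021       # breakdown-voltage drift, V/K (device dependent)
OVERVOLTAGE = 3.0     # nominal overvoltage at T_REF, V

def corrected_bias(t_celsius: float, bias_at_ref: float) -> float:
    """Shift the bias so the overvoltage (and hence the gain) stays constant."""
    return bias_at_ref + DVBR_DT * (t_celsius - T_REF)

def gain_scale(t_celsius: float) -> float:
    """Approximate relative gain if the bias is *not* adjusted (gain roughly tracks overvoltage)."""
    return (OVERVOLTAGE - DVBR_DT * (t_celsius - T_REF)) / OVERVOLTAGE

print(corrected_bias(40.0, bias_at_ref=29.5))   # bias needed at 40 C
print(gain_scale(40.0))                          # uncorrected gain relative to 25 C
```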

  5. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    Science.gov (United States)

    Shiva, S. G.

    1978-01-01

    Several high-level languages that have evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  6. Web-Compatible Graphics Visualization Framework for Online Instruction and Assessment of Hardware Concepts

    Science.gov (United States)

    Chandramouli, Magesh; Chittamuru, Siva-Teja

    2016-01-01

    This paper explains the design of a graphics-based virtual environment for instructing computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…

  7. Hardware realization of a fast neural network algorithm for real-time tracking in HEP experiments

    International Nuclear Information System (INIS)

    Leimgruber, F.R.; Pavlopoulos, P.; Steinacher, M.; Tauscher, L.; Vlachos, S.; Wendler, H.

    1995-01-01

    A fast pattern recognition system for HEP experiments, based on artificial neural network algorithms (ANN), has been realized with standard electronics. The multiplicity and location of tracks in an event are determined in less than 75 ns. Hardware modules of this first level trigger were extensively tested for performance and reliability with data from the CPLEAR experiment. (orig.)

  8. Hardware-efficient Implementation of Half-Band IIR Filter for Interpolation and Decimation

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Pracný, Peter; Bruun, Erik

    2013-01-01

    This brief deals with a simple heuristic method for the hardware optimization of a half-band infinite-impulse response (IIR) filter. The optimization method that is proposed here is intended for a quick design selection at the system level, without the need for computationally intensive calculati...
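
    The brief's own optimization method is not reproduced here; the sketch below only shows the underlying structure being optimized, a polyphase half-band IIR decimator built from two parallel first-order all-pass branches (assuming NumPy/SciPy). The all-pass coefficients are illustrative placeholders, not the optimized values.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative all-pass coefficients for the two polyphase branches (placeholders).
A0_COEF, A1_COEF = 0.117, 0.584

def allpass1(alpha: float, x: np.ndarray) -> np.ndarray:
    """First-order all-pass section A(z) = (alpha + z^-1) / (1 + alpha * z^-1)."""
    return lfilter([alpha, 1.0], [1.0, alpha], x)

def halfband_decimate(x: np.ndarray) -> np.ndarray:
    """Decimate by 2 with a polyphase half-band IIR: average of two all-pass branches
    fed with the even and odd input samples."""
    even, odd = x[0::2], x[1::2]
    n = min(len(even), len(odd))
    return 0.5 * (allpass1(A0_COEF, even[:n]) + allpass1(A1_COEF, odd[:n]))

# Example: decimate a short test signal
x = np.sin(2 * np.pi * 0.02 * np.arange(256))
y = halfband_decimate(x)
print(len(x), "->", len(y))
```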

  9. Study on the design and manufacturing requirements of container for low level radioactive solid waste form KRR decommissioning

    International Nuclear Information System (INIS)

    Lee, D. K.; Kim, H. R.; Park, S. K.; Jung, K. H.; Jung, W. S.; Jung, K. J.

    2000-01-01

    Design requirements and manufacturing criteria have been proposed for the container for the storage and transportation of low-level radioactive solid waste from the decommissioning of KRR 1 and 2. A structural analysis was carried out based on the design criteria, and the safety of the container was assessed. An ISO container with a capacity of 4 m³ was selected for the radioactive solid waste storage. The proposed container satisfied the criteria of ISO 1496/1 and the packaging standard of the atomic energy act. Manufacturing and test standards of the IAEA were also applied to the container. Stress distribution and deformation were analyzed under the given conditions using the ANSYS code, and the maximum stress was verified to be within the yield stress, without any structural deformation. From the results of lifting tests, it was verified that the container was safe.

  10. Speed test results and hardware/software study of computational speed problem, appendix D

    Science.gov (United States)

    1984-01-01

    The HP9845C is a desktop computer that was tested and evaluated for processing speed. A study was made to determine the availability and approximate cost of computers and/or hardware accessories necessary to meet the 20 ms sample-period speed requirement. Additional requirements were that the control algorithm could be programmed in a high-level language and that the machine have sufficient storage to hold the data from a complete experiment.

  11. Hardware in the Loop Testing of an Iodine-Fed Hall Thruster

    Science.gov (United States)

    Polzin, Kurt A.; Peeples, Steven R.; Cecil, Jim; Lewis, Brandon L.; Molina Fraticelli, Jose C.; Clark, James P.

    2015-01-01

    initiated from an operator's workstation outside the vacuum chamber and passed through the Cortex 160 to exercise portions of the flight avionics. Two custom-designed pieces of electronics hardware have been designed to operate the propellant feed system. One piece of hardware is an auxiliary board that controls a latch valve, proportional flow control valves (PFCVs) and valve heaters as well as measuring pressures, temperatures and PFCV feedback voltage. An onboard FPGA provides a serial link for issuing commands and manages all lower level input-output functions. The other piece of hardware is a power distribution board, which accepts a standard bus voltage input and converts this voltage into all the different current-voltage types required to operate the auxiliary board. These electronics boards are located in the vacuum chamber near the thruster, exposing this hardware to both the vacuum and plasma environments they would encounter during a mission, with these components communicating to the flight computer through an RS-422 interface. The auxiliary board FPGA provides a 28V MOSFET switch circuit with a 20ms pulse to open or close the iodine propellant feed system latch valve. The FPGA provides a pulse width modulation (PWM) signal to a DC/DC boost converter to produce the 12-120V needed for control of the proportional flow control valve. There are eight MOSFET-switched heating circuits in the system. Heaters are 28V and located in the latch valve, PFCV, propellant tank and propellant feed lines. Both the latch valve and PFCV have thermistors built into them for temperature monitoring. There are also seven resistance temperature device (RTD) circuits on the auxiliary board that can be used to measure the propellant tank and feedline temperatures. The signals are conditioned and sent to an analog to digital converter (ADC), which is directly commanded and controlled by the FPGA.
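
    As a rough illustration of the PWM-driven boost stage mentioned above, the ideal continuous-conduction relation Vout = Vin/(1 - D) gives the duty cycle needed to reach the upper part of the quoted output range from the 28 V bus; losses and the converter's actual topology (which also covers outputs below 28 V) are ignored here.

```python
# Ideal continuous-conduction boost converter relation: Vout = Vin / (1 - D).
# Rough illustration only (lossless, CCM); valid for Vout >= Vin.
V_IN = 28.0

def duty_cycle(v_out: float, v_in: float = V_IN) -> float:
    if v_out < v_in:
        raise ValueError("a plain boost stage cannot step down below the input voltage")
    return 1.0 - v_in / v_out

for v in (40.0, 80.0, 120.0):
    print(f"Vout = {v:5.1f} V -> D = {duty_cycle(v):.2f}")
```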

  12. Maintenance of muscle myosin levels in adult C. elegans requires both the double bromodomain protein BET-1 and sumoylation

    Directory of Open Access Journals (Sweden)

    Kate Fisher

    2013-10-01

    Attenuation of RAS-mediated signalling is a conserved process essential to control cell proliferation, differentiation, and apoptosis. Cooperative interactions between histone modifications such as acetylation, methylation and sumoylation are crucial for proper attenuation in C. elegans, implying that the proteins recognising these histone modifications could also play an important role in attenuation of RAS-mediated signalling. We sought to systematically identify these proteins and found BET-1. BET-1 is a conserved double bromodomain protein that recognises acetyl-lysines on histone tails and maintains the stable fate of various lineages. Unexpectedly, adults lacking both BET-1 and SUMO-1 are depleted of muscle myosin, an essential component of myofibrils. We also show that this muscle myosin depletion does not occur in all animals at a specific time, but rather that the penetrance of the phenotype increases with age. To gain mechanistic insights into this process, we sought to delay the occurrence of the muscle myosin depletion phenotype and found that it requires caspase activity and MEK-dependent signalling. We also performed transcription profiling on these mutants and found an up-regulation of the FGF receptor, egl-15, a tyrosine kinase receptor acting upstream of MEK. Consistent with a MEK requirement, we could delay the muscle phenotype by systemic or hypodermal knock down of egl-15. Thus, this work uncovered a caspase- and MEK-dependent mechanism that acts specifically on ageing adults to maintain the appropriate net level of muscle myosin.

  13. Biosafety and Biosecurity in European Containment Level 3 Laboratories: Focus on French Recent Progress and Essential Requirements

    Directory of Open Access Journals (Sweden)

    Boris Pastorino

    2017-05-01

    Even though European Union (EU) Member States are obliged to implement EU Directive 2000/54/EC on the protection of workers from risks related to exposure to biological agents at work, national biosafety regulations and practices vary from country to country. In fact, EU legislation on biological agents and genetically modified microorganisms is often not specific enough to ensure harmonization, leading to difficulties in implementation for most laboratories. In the same way, biosecurity is a relatively new concept, and only a few EU Member States are known to have introduced national laboratory biosecurity legislation. In France, recent regulations have reinforced biosafety/biosecurity in containment level 3 (CL-3) laboratories, but they concern a specific list of pathogens with no equivalent in other European Member States. The objective of this review was to summarize European biosafety/biosecurity measures concerning CL-3 facilities, focusing on French specificities. Essential requirements needed to preserve efficient biosafety measures when manipulating risk group 3 biological agents are highlighted. In addition, international, European and French standards related to containment laboratory planning, operation or biosafety equipment are described to clarify optimal biosafety and biosecurity requirements.

  14. Biosafety and Biosecurity in European Containment Level 3 Laboratories: Focus on French Recent Progress and Essential Requirements.

    Science.gov (United States)

    Pastorino, Boris; de Lamballerie, Xavier; Charrel, Rémi

    2017-01-01

    Even though European Union (EU) Member States are obliged to implement EU Directive 2000/54/EC on the protection of workers from risks related to exposure to biological agents at work, national biosafety regulations and practices vary from country to country. In fact, EU legislation on biological agents and genetically modified microorganisms is often not specific enough to ensure harmonization, leading to difficulties in implementation for most laboratories. In the same way, biosecurity is a relatively new concept, and only a few EU Member States are known to have introduced national laboratory biosecurity legislation. In France, recent regulations have reinforced biosafety/biosecurity in containment level 3 (CL-3) laboratories, but they concern a specific list of pathogens with no equivalent in other European Member States. The objective of this review was to summarize European biosafety/biosecurity measures concerning CL-3 facilities, focusing on French specificities. Essential requirements needed to preserve efficient biosafety measures when manipulating risk group 3 biological agents are highlighted. In addition, international, European and French standards related to containment laboratory planning, operation or biosafety equipment are described to clarify optimal biosafety and biosecurity requirements.

  15. An assessment of issues related to determination of time periods required for isolation of high level waste

    International Nuclear Information System (INIS)

    Cohen, J.J.; Daer, G.R.; Smith, C.F.; Vogt, D.K.; Woolfolk, S.W.

    1989-01-01

    A commonly held perception is that disposal of spent nuclear fuel or high-level waste presents a risk of unprecedented duration. The EPA requires that projected releases of radioactivity be limited for 10,000 years after disposal, with the intent that risks from the disposal repository be no greater than those from the uranium ore deposit from which the nuclear fuel was originally extracted. This study reviews issues involved in assessing compliance with the requirement. The determination of compliance is assumption-dependent, primarily due to uncertainties in dosimetric data and in the relative availability of the radioactivity for environmental transport and eventual assimilation by humans. A conclusion of this study is that, in time, a spent fuel disposal repository such as the projected Yucca Mountain Project Facility will become less hazardous than the original ore deposit. Only the time it takes to do so is in question. Depending upon the assumptions selected, this time period could range from a few centuries to hundreds of thousands of years, considering only the inherent radiotoxicities. However, if it can be assumed that the spent fuel radioactivity emplaced in a waste repository is less than 1/10 as available for human assimilation as that in a uranium ore deposit, then even under the most pessimistic set of assumptions the EPA criteria can be considered to be met. 24 refs., 5 figs., 2 tabs

  16. Critical threshold levels of DNA methyltransferase 1 are required to maintain DNA methylation across the genome in human cancer cells.

    Science.gov (United States)

    Cai, Yi; Tsai, Hsing-Chen; Yen, Ray-Whay Chiu; Zhang, Yang W; Kong, Xiangqian; Wang, Wei; Xia, Limin; Baylin, Stephen B

    2017-04-01

    Reversing DNA methylation abnormalities and associated gene silencing, through inhibiting DNA methyltransferases (DNMTs) is an important potential cancer therapy paradigm. Maximizing this potential requires defining precisely how these enzymes maintain genome-wide, cancer-specific DNA methylation. To date, there is incomplete understanding of precisely how the three DNMTs, 1, 3A, and 3B, interact for maintaining DNA methylation abnormalities in cancer. By combining genetic and shRNA depletion strategies, we define not only a dominant role for DNA methyltransferase 1 (DNMT1) but also distinct roles of 3A and 3B in genome-wide DNA methylation maintenance. Lowering DNMT1 below a threshold level is required for maximal loss of DNA methylation at all genomic regions, including gene body and enhancer regions, and for maximally reversing abnormal promoter DNA hypermethylation and associated gene silencing to reexpress key genes. It is difficult to reach this threshold with patient-tolerable doses of current DNMT inhibitors (DNMTIs). We show that new approaches, like decreasing the DNMT targeting protein, UHRF1, can augment the DNA demethylation capacities of existing DNA methylation inhibitors for fully realizing their therapeutic potential. © 2017 Cai et al.; Published by Cold Spring Harbor Laboratory Press.

  17. Technical position on items and activities in the high-level waste geologic repository program subject to quality assurance requirements

    International Nuclear Information System (INIS)

    Duncan, A.B.; Bilhorn, S.G.; Kennedy, J.E.

    1988-04-01

    This document provides guidance on how to identify items and activities subject to Quality Assurance in the high-level nuclear waste repository program for pre-closure and post-closure phases of the repository. In the pre-closure phase, structures, systems and components essential to the prevention or mitigation of an accident that could result in an off-site radiation dose of 0.5 rem or greater are termed "important to safety". In the post-closure phase, the barriers which are relied on to meet the containment and isolation requirements are defined as "important to waste isolation". These structures, systems, components, and barriers, and the activities related to their characterization, design, construction, and operation are required to meet quality assurance (QA) criteria to provide confidence in the performance of the geologic repository. The list of structures, systems, and components important to safety and engineered barriers important to waste isolation is referred to as the "Q-List" and lies within the scope of the QA program. 10 refs

  18. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key is entered at the right time, and hence the vulnerability of the encryption to brute force attack is eliminated. Presently available encryption systems suffer from brute force attacks, in which case the time taken to break a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute force attacks more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. In addition, key-based function selection from a pool of functions enhances confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes brute force attack nearly impossible. The key scheduler is implemented on the FPGA and generates the right key at the right time intervals; it is connected to a NIOS-II processor (a soft processor instantiated in the Altera FPGA) that communicates the keys to a personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
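
    A high-level software sketch of the time-based key idea described above: the key is rotated continuously, four bits are drawn from it, and their value sets the delay before the next round key is valid. Bit positions, key width and timing units are illustrative; the actual design is an FPGA key scheduler, not Python.

```python
import time

def rotate_left(key: int, width: int, n: int = 1) -> int:
    """Circularly rotate an integer key of the given bit width."""
    n %= width
    return ((key << n) | (key >> (width - n))) & ((1 << width) - 1)

def key_schedule(key: int, width: int = 64, steps: int = 4, tick: float = 0.01):
    """Yield successive round keys; the low 4 bits of each key set the wait before the next one."""
    for _ in range(steps):
        yield key
        delay_units = key & 0xF          # four bits drawn from the key
        time.sleep(delay_units * tick)   # time becomes the second dimension of the key
        key = rotate_left(key, width)

for k in key_schedule(0xA5A5_5A5A_DEAD_BEEF):
    print(hex(k))
```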

  19. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to the low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
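
    The recursive Bayesian update the BEAST tracker solves online is the standard HMM forward recursion: predict with the transition model, then reweight by the observation likelihood and normalize. The sketch below (assuming NumPy) runs it on a synthetic 1-D tracking problem; the models and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                   # discrete 1-D positions

# Transition model: the target mostly stays put, sometimes steps left or right
T = np.zeros((N, N))
for i in range(N):
    for j, p in ((i - 1, 0.15), (i, 0.7), (i + 1, 0.15)):
        if 0 <= j < N:
            T[i, j] += p
T /= T.sum(axis=1, keepdims=True)

# Observation model: the sensor reports the true cell with prob 0.8, otherwise any other cell
def likelihood(obs: int) -> np.ndarray:
    lik = np.full(N, 0.2 / (N - 1))
    lik[obs] = 0.8
    return lik

belief = np.full(N, 1.0 / N)             # uniform prior
true_pos = 5
for step in range(15):
    true_pos = int(np.clip(true_pos + rng.choice([-1, 0, 1], p=[0.15, 0.7, 0.15]), 0, N - 1))
    obs = true_pos if rng.random() < 0.8 else int(rng.integers(N))
    belief = T.T @ belief                 # predict
    belief *= likelihood(obs)             # update with the observation likelihood
    belief /= belief.sum()                # normalize
    print(step, true_pos, int(belief.argmax()))
```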

  20. Hardware replacements and software tools for digital control computers

    International Nuclear Information System (INIS)

    Walker, R.A.P.; Wang, B-C.; Fung, J.

    1996-01-01

    Technological obsolescence is an on-going challenge for all computer use. By design, and to some extent good fortune, AECL has had a good track record with respect to the march of obsolescence in CANDU digital control computer technology. Recognizing obsolescence as a fact of life, AECL has undertaken a program of supporting the digital control technology of existing CANDU plants. Other AECL groups are developing complete replacement systems for the digital control computers, and more advanced systems for the digital control computers of future CANDU reactors. This paper presents the results of the efforts of AECL's DCC service support group to replace obsolete digital control computer and related components and to provide friendlier software technology related to the maintenance and use of digital control computers in CANDU. These efforts are expected to extend the current lifespan of existing digital control computers through their mandated life. This group applied two simple rules: the product, whether new or replacement, should have a generic basis, and the products should be applicable to both existing CANDU plants and to 'repeat' plant designs built using current design guidelines. While some exceptions do apply, the rules have been met. The generic requirement dictates that the product should not be dependent on any brand technology, and should back-fit to and interface with any such technology which remains in the control design. The application requirement dictates that the product should have universal use and be user friendly to the greatest extent possible. Furthermore, both requirements were designed to anticipate user involvement, modifications and alternate user-defined applications. The replacements for hardware components such as the paper tape reader/punch, moving arm disk, contact scanner and Ramtek are discussed. The development of these hardware replacements coincides with the development of a gateway system for selected CANDU digital control

  1. Greater-than-Class C low-level radioactive waste shipping package/container identification and requirements study

    International Nuclear Information System (INIS)

    Tyacke, M.

    1993-08-01

    This report identifies a variety of shipping packages (also referred to as casks) and waste containers currently available or being developed that could be used for greater-than-Class C (GTCC) low-level waste (LLW). Since GTCC LLW varies greatly in size, shape, and activity levels, the casks and waste containers that could be used range in size from small, to accommodate a single sealed radiation source, to very large-capacity casks/canisters used to transport or dry-store highly radioactive spent fuel. In some cases, the waste containers may serve directly as shipping packages, while in other cases, the containers would need to be placed in a transport cask. For the purpose of this report, it is assumed that the generator is responsible for transporting the waste to a Department of Energy (DOE) storage, treatment, or disposal facility. Unless DOE establishes specific acceptance criteria, the receiving facility would need the capability to accept any of the casks and waste containers identified in this report. In identifying potential casks and waste containers, no consideration was given to their adequacy relative to handling, storage, treatment, and disposal. Those considerations must be addressed separately as the capabilities of the receiving facility and the handling requirements and operations are better understood

  2. Sharing open hardware through ROP, the robotic open platform

    NARCIS (Netherlands)

    Lunenburg, J.; Soetens, R.P.T.; Schoenmakers, F.; Metsemakers, P.M.G.; van de Molengraft, M.J.G.; Steinbuch, M.; Behnke, S.; Veloso, M.; Visser, A.; Xiong, R.

    2014-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  3. Sharing open hardware through ROP, the Robotic Open Platform

    NARCIS (Netherlands)

    Lunenburg, J.J.M.; Soetens, R.P.T.; Schoenmakers, Ferry; Metsemakers, P.M.G.; Molengraft, van de M.J.G.; Steinbuch, M.

    2013-01-01

    The robot open source software community, in particular ROS, drastically boosted robotics research. However, a centralized place to exchange open hardware designs does not exist. Therefore we launched the Robotic Open Platform (ROP). A place to share and discuss open hardware designs. Among others

  4. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  5. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
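
    A simplified software analogue of the byte-token idea: a counter of bytes outstanding on the network, with further injection deferred once a cap is exceeded and resumed as acknowledgements return tokens. The cap and packet sizes are illustrative; the patented mechanism keeps this counter in DMA hardware.

```python
from collections import deque

class PacedInjector:
    """Toy byte-token pacer: track bytes in flight and defer sends above a cap."""
    def __init__(self, max_outstanding_bytes: int = 64 * 1024):
        self.cap = max_outstanding_bytes
        self.outstanding = 0                  # hardware would keep this in a token counter
        self.backlog = deque()

    def send(self, packet: bytes, inject) -> None:
        if self.outstanding + len(packet) <= self.cap:
            self.outstanding += len(packet)
            inject(packet)                    # put the packet on the network
        else:
            self.backlog.append(packet)       # pace: hold until acknowledgements free tokens

    def on_ack(self, nbytes: int, inject) -> None:
        self.outstanding -= nbytes
        while self.backlog and self.outstanding + len(self.backlog[0]) <= self.cap:
            pkt = self.backlog.popleft()
            self.outstanding += len(pkt)
            inject(pkt)

# Example usage
pacer = PacedInjector(max_outstanding_bytes=3000)
sent = []
for _ in range(5):
    pacer.send(b"x" * 1200, sent.append)
print(len(sent), "injected,", len(pacer.backlog), "paced")
pacer.on_ack(2400, sent.append)
print(len(sent), "injected after ack")
```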

  6. Hardware/software virtualization for the reconfigurable multicore platform.

    NARCIS (Netherlands)

    Ferger, M.; Al Kadi, M.; Hübner, M.; Koedam, M.L.P.J.; Sinha, S.S.; Goossens, K.G.W.; Marchesan Almeida, Gabriel; Rodrigo Azambuja, J.; Becker, Juergen

    2012-01-01

    This paper presents the Flex Tiles approach for the virtualization of hardware and software for a reconfigurable multicore architecture. The approach enables the virtualization of a dynamic tile-based hardware architecture consisting of processing tiles connected via a network-on-chip and a

  7. Flexible hardware design for RSA and Elliptic Curve Cryptosystems

    NARCIS (Netherlands)

    Batina, L.; Bruin - Muurling, G.; Örs, S.B.; Okamoto, T.

    2004-01-01

    This paper presents a scalable hardware implementation of both commonly used public key cryptosystems, RSA and Elliptic Curve Cryptosystem (ECC) on the same platform. The introduced hardware accelerator features a design which can be varied from very small (less than 20 Kgates) targeting wireless
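
    Not the paper's scalable datapath, but the core operation an RSA accelerator implements, left-to-right square-and-multiply modular exponentiation, shown in software for reference with a textbook-sized key.

```python
def mod_exp(base: int, exponent: int, modulus: int) -> int:
    """Left-to-right square-and-multiply: the core loop an RSA datapath implements."""
    result = 1
    base %= modulus
    for bit in bin(exponent)[2:]:                  # scan exponent bits MSB first
        result = (result * result) % modulus       # square every iteration
        if bit == "1":
            result = (result * base) % modulus     # conditional multiply
    return result

# Tiny textbook-sized example (real RSA moduli are 2048+ bits)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))                  # private exponent
m = 42
c = mod_exp(m, e, n)
assert mod_exp(c, d, n) == m
print(m, "->", c, "->", mod_exp(c, d, n))
```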

  8. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software related to acquisition. The hardware consists of an analog-to-digital conversion card developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  9. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware speci...

  10. Acceptance of non-fuel assembly hardware by the Federal Waste Management System

    International Nuclear Information System (INIS)

    1990-03-01

    This report is one of a series of eight prepared by E. R. Johnson Associates, Inc. (JAI) under ORNL's contract with DOE's OCRWM Systems Integration Program and in support of the Annual Capacity Report (ACR) Issue Resolution Process. The report topics relate specifically to the list of high-priority technical waste acceptance issues developed jointly by DOE and a utility-working group. JAI performed various analyses and studies on each topic to serve as starting points for further discussion and analysis leading eventually to finalizing the process by which DOE will accept spent fuel and waste into its waste management system. The eight reports are concerned with the conditions under which spent fuel and high-level waste will be accepted in the following categories: failed fuel; consolidated fuel and associated structural parts; non-fuel-assembly hardware; fuel in metal storage casks; fuel in multi-element sealed canisters; inspection and testing requirements for wastes; canister criteria; spent fuel selection for delivery; and defense and commercial high-level waste packages. 14 refs., 12 figs., 43 tabs

  11. Increased risk for complications following removal of hardware in patients with liver disease, pilon or pelvic fractures: A regression analysis.

    Science.gov (United States)

    Brown, Bryan D; Steinert, Justin N; Stelzer, John W; Yoon, Richard S; Langford, Joshua R; Koval, Kenneth J

    2017-12-01

    Indications for removing orthopedic hardware on an elective basis vary widely. Although viewed as a relatively benign procedure, there is a lack of data regarding overall complication rates after fracture fixation. The purpose of this study is to determine the overall short-term complication rate for elective removal of orthopedic hardware after fracture fixation and to identify associated risk factors. Adult patients indicated for elective hardware removal after fracture fixation between July 2012 and July 2016 were screened for inclusion. Inclusion criteria included patients with hardware-related pain and/or impaired cosmesis with complete medical and radiographic records and at least 3-month follow-up. Exclusion criteria were those patients indicated for hardware removal for a diagnosis of malunion, non-union, and/or infection. Data collected included patient age, gender, anatomic location of hardware removed, body mass index, ASA score, and comorbidities. Overall complications, as well as complications requiring revision surgery, were recorded. Statistical analysis was performed with SPSS 20.0 and included univariate and multivariate regression analysis. 391 patients (418 procedures) were included for analysis. Overall complication rates were 8.4%, with a 3.6% revision surgery rate. Univariate regression analysis revealed that patients who had liver disease were at significant risk for complication (p=0.001) and revision surgery (p=0.036). Multivariate regression analysis showed that: 1) patients who had liver disease were at significant risk of overall complication (p=0.001) and revision surgery (p=0.039); 2) removal of hardware following fixation for a pilon had significantly increased risk for complication (p=0.012), but not revision surgery (p=0.43); and 3) removal of hardware for pelvic fixation had a significantly increased risk for revision surgery (p=0.017). Removal of hardware following fracture fixation is not a risk-free procedure. Patients with
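
    The study's patient-level data and regression models are not reproduced here; the sketch below only illustrates the kind of univariate association involved, an odds ratio with Fisher's exact test on a synthetic 2×2 table (liver disease versus complication), assuming SciPy. The counts are invented for illustration.

```python
from scipy.stats import fisher_exact

# Synthetic 2x2 table (rows: liver disease yes/no, columns: complication yes/no).
# These counts are illustrative only, not the study data.
table = [[6, 10],     # liver disease: 6 complications, 10 without
         [29, 373]]   # no liver disease

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```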

  12. A Practical Introduction to Hardware/Software Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  13. FPGA-Based Efficient Hardware/Software Co-Design for Industrial Systems with Consideration of Output Selection

    Science.gov (United States)

    Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.

    2016-05-01

    This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) setup that is used to validate a systematic sensor selection framework. The framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique provides a significant speed-up in the required execution time compared to its software-based counterpart model.
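
    The platform's LQG controller for the maglev model is not reproduced; as a generic illustration of the control half of an LQG design, the sketch below computes an LQR state-feedback gain from the continuous-time algebraic Riccati equation for a toy double-integrator plant (assuming NumPy/SciPy). The matrices and weights are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator plant (not the maglev suspension model from the paper)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[0.1]])        # control effort weighting

P = solve_continuous_are(A, B, Q, R)       # solve A'P + PA - P B R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)            # LQR gain: u = -K x

closed_loop_poles = np.linalg.eigvals(A - B @ K)
print("K =", K, "\npoles =", closed_loop_poles)
```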

  14. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  15. ¹⁸F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

    Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

    Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using ¹⁸F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These patients underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in

  16. Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.

    Science.gov (United States)

    Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C

    2016-04-01

    As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7

  17. Optimal Incorporation Level of Dietary Alternative Phosphate (MgHPO4) and Requirement for Phosphorus in Juvenile Far Eastern Catfish (Silurus asotus)

    Directory of Open Access Journals (Sweden)

    Tae-Hyun Yoon

    2015-01-01

    Full Text Available A growth trial was conducted to determine the optimal incorporation level of dietary magnesium hydrogen phosphate (MHP, MgHPO4, manufactured from swine manure) and the phosphorus (P) requirement of juvenile far eastern catfish (Silurus asotus). Graded MHP at 0.5%, 1.0%, 1.5%, and 2.0%, and 2.0% monocalcium phosphate (MCP), were each added to the basal diet (control) in lieu of cellulose, giving a range of available P (AP) from 0.4% to 0.8%; the diets were designated control, MHP0.5, MHP1.0, MHP1.5, MHP2.0, and MCP, respectively. The control diet contained fish meal (20%), soybean meal (40%), wheat flour (27%), corn gluten meal (5%), fish oil (2%) and soy oil (2%) as the main ingredients. Following a 24-h fast, 540 fish with a mean body weight of 11.8 g were randomly allotted to 6 groups in triplicate in 18 tanks (0.4×0.6×0.36 m, water volume 66 L). The feeding experiment lasted for 8 weeks. The group fed the control diet showed the lowest weight gain (WG) and feed efficiency (FE) among treatments; its WG was, however, not significantly different (p>0.05) from that of the group fed MHP0.5. The group fed MHP2.0 showed the highest WG and FE, values not significantly different (p>0.05) from those of the groups fed MHP1.0, MHP1.5, and MCP, but different from the control and MHP0.5 groups. Aspartate aminotransferase decreased significantly with increasing available P, while alanine aminotransferase did not differ significantly among treatments. The highest inorganic P in plasma was observed in fish fed MHP2.0. A second-order regression analysis of these results indicated an optimal dietary MHP level of 1.62% and an AP requirement of 0.7%.

  18. Psychometric study of the Required Care Levels for People with Severe Mental Disorder Assessment Scale (ENAR-TMG).

    Science.gov (United States)

    Lascorz, David; López, Victoria; Pinedo, Carmen; Trujols, Joan; Vegué, Joan; Pérez, Víctor

    2016-03-08

    People with severe mental disorder have significant difficulties in everyday life that involve the need for continued support. These needs are not easily measurable with the currently available tools. Therefore, a multidimensional scale that assesses the different levels of need for care is proposed, including a study of its psychometric properties. One-hundred and thirty-nine patients (58% men) with a severe mental disorder were assessed using the Required Care Levels for People with Severe Mental Disorder Assessment Scale (ENAR-TMG), the Camberwell Assessment of Need scale, and the Health of the Nation Outcome Scales. ENAR-TMG's psychometric features were examined by: a) evaluating 2 sources of validity evidence (evidence based on internal structure and evidence based on relations to other variables), and b) estimating the internal consistency, temporal stability, inter-rater reliability, and sensitivity to change of scores of the ENAR-TMG's subscales. Exploratory factor analyses revealed a one-factor structure for each of the theoretical dimensions of the scale, in which all but one showed a significant and positive correlation with the Camberwell Assessment of Need (range of r: 0.143-0.557) and Health of the Nation Outcome Scales (range of r: 0.241-0.474) scales. ENAR-TMG subscale scores showed acceptable internal consistency (range of ordinal α coefficients: 0.682-0.804), excellent test-retest (range of intraclass correlation coefficients: 0.889-0.999) and inter-rater reliabilities (range of intraclass correlation coefficients: 0.926-0.972), and satisfactory sensitivity to treatment-related changes (range of η²: 0.003-0.103). The satisfactory psychometric behaviour of the ENAR-TMG makes the scale a promising tool to assess global functioning in people with a severe mental disorder. Copyright © 2016 SEP y SEPB. Published by Elsevier España. All rights reserved.

  19. Monitoring Particulate Matter with Commodity Hardware

    Science.gov (United States)

    Holstius, David

    Health effects attributed to outdoor fine particulate matter (PM2.5) rank it among the risk factors with the highest health burdens in the world, annually accounting for over 3.2 million premature deaths and over 76 million lost disability-adjusted life years. Existing PM2.5 monitoring infrastructure cannot, however, be used to resolve variations in ambient PM2.5 concentrations with adequate spatial and temporal density, or with adequate coverage of human time-activity patterns, such that the needs of modern exposure science and control can be met. Small, inexpensive, and portable devices, relying on newly available off-the-shelf sensors, may facilitate the creation of PM2.5 datasets with improved resolution and coverage, especially if many such devices can be deployed concurrently with low system cost. Datasets generated with such technology could be used to overcome many important problems associated with exposure misclassification in air pollution epidemiology. Chapter 2 presents an epidemiological study of PM2.5 that used data from ambient monitoring stations in the Los Angeles basin to observe a decrease of 6.1 g (95% CI: 3.5, 8.7) in population mean birthweight following in utero exposure to the Southern California wildfires of 2003, but was otherwise limited by the sparsity of the empirical basis for exposure assessment. Chapter 3 demonstrates technical potential for remedying PM2.5 monitoring deficiencies, beginning with the generation of low-cost yet useful estimates of hourly and daily PM2.5 concentrations at a regulatory monitoring site. The context (an urban neighborhood proximate to a major goods-movement corridor) and the method (an off-the-shelf sensor costing approximately US $10, combined with other low-cost, open-source, readily available hardware) were selected to have special significance among researchers and practitioners affiliated with contemporary communities of practice in public health and citizen science. As operationalized by
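    The hourly and daily estimates described above are, at their core, an aggregation of high-rate raw sensor readings into regulatory-style averages. The sketch below illustrates only that aggregation step, with hypothetical timestamped readings; the dissertation's actual sensor interface and calibration are not reproduced here.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical raw readings: (timestamp, uncalibrated value in ug/m^3-equivalents).
# A real deployment would first apply a sensor-specific calibration.
raw = [
    (datetime(2012, 3, 1, 14, 5), 18.2),
    (datetime(2012, 3, 1, 14, 35), 21.7),
    (datetime(2012, 3, 1, 15, 10), 19.4),
]

def hourly_means(readings):
    """Aggregate raw readings into hourly mean concentrations."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {hour: mean(values) for hour, values in sorted(buckets.items())}

for hour, avg in hourly_means(raw).items():
    print(f"{hour:%Y-%m-%d %H:00}  PM2.5 ~ {avg:.1f}")
```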

  20. Hardware-in-the-loop-based development methods for mechatronic light control; Hardware-in-the-loop basierte Entwicklungsmethodik fuer eine mechatronische Leuchtweiteregelung

    Energy Technology Data Exchange (ETDEWEB)

    Opgen-Rhein, P.

    2005-07-01

    A hardware-in-the-loop solution is presented that supports the validation of functional properties of mechatronic light control systems during the system integration phase. Validation is carried out not on the road but on a test rig with defined boundary conditions. This test stand, combined with objective assessment criteria developed for the specific requirements, helps to minimize the number of costly road tests still required. Using the example of an adaptive filter in a light control system, the author shows how the filter parameters are tuned on the test stand and how the subjective judgement of the driver is also taken into account in the evaluations. (orig.)

  1. Water system hardware and management rehabilitation: Qualitative evidence from Ghana, Kenya, and Zambia.

    Science.gov (United States)

    Klug, Tori; Shields, Katherine F; Cronk, Ryan; Kelly, Emma; Behnke, Nikki; Lee, Kristen; Bartram, Jamie

    2017-05-01

    Sufficient, safe, continuously available drinking water is important for human health and development, yet one in three handpumps in sub-Saharan Africa are non-functional at any given time. Community management, coupled with access to external technical expertise and spare parts, is a widely promoted model for rural water supply management. However, there is limited evidence describing how community management can address common hardware and management failures of rural water systems in sub-Saharan Africa. We identified hardware and management rehabilitation pathways using qualitative data from 267 interviews and 57 focus group discussions in Ghana, Kenya, and Zambia. Study participants were water committee members, community members, and local leaders in 18 communities (six in each study country) with water systems managed by a water committee and supported by World Vision (WV), an international non-governmental organization (NGO). Government, WV or private sector employees engaged in supporting the water systems were also interviewed. Inductive analysis was used to allow for pathways to emerge from the data, based on the perspectives and experiences of study participants. Four hardware rehabilitation pathways were identified, based on the types of support used in rehabilitation. Types of support were differentiated as community or external. External support includes financial and/or technical support from government or WV employees. Community actor understanding of who to contact when a hardware breakdown occurs and easy access to technical experts were consistent reasons for rapid rehabilitation for all hardware rehabilitation pathways. Three management rehabilitation pathways were identified. All require the involvement of community leaders and were best carried out when the action was participatory. The rehabilitation pathways show how available resources can be leveraged to restore hardware breakdowns and management failures for rural water systems in sub

  2. 100% energy supply coverage with renewable energy. Requirements for its implementation at the global, national and municipal level

    International Nuclear Information System (INIS)

    Rogall, Holger

    2014-01-01

    This book presents itself as a systematic, easily understandable introduction into the requirements for an energy supply based 100% on renewable energy. Its main focus is on the strategic paths that must be followed for this purpose in the realms of business, technology and governmental policy. It highlights the opportunities and impediments on the way, analysing in the process the roles of political, economic and civil society players from the global down to the municipal level. Starting out from the present state of discussion on the German energy transition it investigates the strengths and weak points of efficiency technologies and renewable energies available today and elaborates a strategic path for developing the necessary infrastructure. In awareness of the fact that 100% coverage will not come about from market mechanisms alone it explores the ecological crash barriers that need to be set up in addition. This is followed by chapters on the roles, interests and means of those players who can exert influence on the framing of the relevant political and legal instruments as well as their means of pursuing their interests. The book thus contributes to clarifying the possibilities of and impediments to achieving an energy supply system based 100% on renewable energy.

  3. Genome-health nutrigenomics and nutrigenetics: nutritional requirements or 'nutriomes' for chromosomal stability and telomere maintenance at the individual level.

    Science.gov (United States)

    Bull, Caroline; Fenech, Michael

    2008-05-01

    It is becoming increasingly evident that (a) risk for developmental and degenerative disease increases with more DNA damage, which in turn is dependent on nutritional status, and (b) the optimal concentration of micronutrients for prevention of genome damage is also dependent on genetic polymorphisms that alter the function of genes involved directly or indirectly in the uptake and metabolism of micronutrients required for DNA repair and DNA replication. The development of dietary patterns, functional foods and supplements that are designed to improve genome-health maintenance in individuals with specific genetic backgrounds may provide an important contribution to an optimum health strategy based on the diagnosis and individualised nutritional prevention of genome damage, i.e. genome health clinics. The present review summarises some of the recent knowledge relating to micronutrients that are associated with chromosomal stability and provides some initial insights into the likely nutritional factors that may be expected to have an impact on the maintenance of telomeres. It is evident that developing effective strategies for defining nutrient doses and combinations or 'nutriomes' for genome-health maintenance at the individual level is essential for further progress in this research field.

  4. Hardware Commissioning of the LHC Quality Assurance, follow-up and storing of the test results

    CERN Document Server

    Barbero, E

    2005-01-01

    During the commissioning of the LHC technical systems [1] (the so-called Hardware Commissioning) a large number of test sequences and procedures will be applied to the different systems and components of the accelerator. All the information related to the coordination of the Hardware Commissioning will be structured and managed towards the final objective of integrating all the data produced into the Manufacturing and Test Folders (MTF) [2] at both equipment level (i.e. individual system tests) and commissioning level (i.e. Hardware Commissioning). The MTF for Hardware Commissioning will mainly be used to archive the results of the tests (i.e. status, parameters and waveforms), which will later be used as reference during the operation with beam. It is also an indispensable tool for monitoring the progress of the different tests and ensuring the proper follow-up of the procedures described in the engineering specifications; in this way, the Quality Assurance process will be completed. This paper describes the spe...

  5. Is an absolute level of cortical beta suppression required for proper movement? Magnetoencephalographic evidence from healthy aging.

    Science.gov (United States)

    Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2016-07-01

    Previous research has connected a specific pattern of beta oscillatory activity to proper motor execution, but no study to date has directly examined how resting beta levels affect motor-related beta oscillatory activity in the motor cortex. Understanding this relationship is imperative to determining the basic mechanisms of motor control, as well as the impact of pathological beta oscillations on movement execution. In the current study, we used magnetoencephalography (MEG) and a complex movement paradigm to quantify resting beta activity and movement-related beta oscillations in the context of healthy aging. We chose healthy aging as a model because preliminary evidence suggests that beta activity is elevated in older adults, and thus by examining older and younger adults we were able to naturally vary resting beta levels. To this end, healthy younger and older participants were recorded during motor performance and at rest. Using beamforming, we imaged the peri-movement beta event-related desynchronization (ERD) and extracted virtual sensors from the peak voxels, which enabled absolute and relative beta power to be assessed. Interestingly, absolute beta power during the pre-movement baseline was much stronger in older relative to younger adults, and older adults also exhibited proportionally large beta desynchronization (ERD) responses during motor planning and execution compared to younger adults. Crucially, we found a significant relationship between spontaneous (resting) beta power and beta ERD magnitude in both primary motor cortices, above and beyond the effects of age. A similar link was found between beta ERD magnitude and movement duration. These findings suggest a direct linkage between beta reduction during movement and spontaneous activity in the motor cortex, such that as spontaneous beta power increases, a greater reduction in beta activity is required to execute movement. We propose that, on an individual level, the primary motor cortices have an

  6. An open-source hardware and software system for acquisition and real-time processing of electrophysiology during high field MRI.

    Science.gov (United States)

    Purdon, Patrick L; Millan, Hernan; Fuller, Peter L; Bonmassar, Giorgio

    2008-11-15

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet such tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low-noise performance (tested up to 7 T) and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 T examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 T fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level.
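    The user-programmable real-time processing mentioned above typically amounts to applying causal filters to each block of samples as it arrives, carrying the filter state between blocks. A minimal sketch of such a streaming band-pass stage, assuming SciPy and a simulated block source (this is not the authors' published code):

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

FS = 1000.0  # sampling rate in Hz (assumed)

# Causal band-pass filter, e.g. to isolate an EEG band of interest.
b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=FS)
zi = lfilter_zi(b, a) * 0.0  # filter state carried across blocks

def process_block(block, zi):
    """Filter one incoming block of samples, preserving filter state."""
    out, zi = lfilter(b, a, block, zi=zi)
    return out, zi

# Simulated acquisition loop: in a real system, blocks would come from the ADC.
rng = np.random.default_rng(0)
for _ in range(5):
    block = rng.standard_normal(256)
    filtered, zi = process_block(block, zi)
```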

  7. A Hardware Fast Tracker for the ATLAS trigger

    CERN Document Server

    Asbah, Nedaa; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz at the design luminosity of 10^{34} cm^{-2} s^{-1}. After a successful period of data taking from 2010 to early 2013, the LHC restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project; it is a hardware processor that will provide, for every level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in precise detection of the primary and secondar...

  8. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9×9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
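    As a software reference model of the operation described above (not the FPGA pipeline itself), the following sketch computes the weighted average over a 9×9 window, with weights that depend on spatial distance and pixel-value difference; the final lines show the subtraction step that gives the filter its name. Parameter values are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=4, sigma_s=3.0, sigma_r=25.0):
    """Reference bilateral filter over a (2*radius+1)^2 window (9x9 by default).

    Weights combine spatial distance (sigma_s) and pixel-value difference
    (sigma_r), so edges are preserved while low-frequency noise is smoothed.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((window - img[y, x]) ** 2) / (2.0 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out

# The *subtraction* filter then removes the smoothed baseline from the image.
noisy = np.random.default_rng(0).normal(128.0, 10.0, size=(16, 16))
detail = noisy - bilateral_filter(noisy)
```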

  9. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
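    As a rough illustration of the stream-cipher idea only, the sketch below uses a discrete logistic map as the chaotic keystream source; the thesis itself digitizes continuous chaotic systems and adds a generalized post-processing stage, neither of which is reproduced here.

```python
def logistic_keystream(x0, r=3.99, nbytes=16):
    """Generate keystream bytes from iterates of the logistic map x <- r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)  # crude quantization of the chaotic state
    return bytes(out)

def xor_cipher(data, key_seed):
    """Encrypt/decrypt by XOR with the chaotic keystream (symmetric operation)."""
    ks = logistic_keystream(key_seed, nbytes=len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"pixel block data"
ct = xor_cipher(msg, key_seed=0.612345)
assert xor_cipher(ct, key_seed=0.612345) == msg  # same seed recovers the plaintext
```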

  10. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Given a library of application-specific processors..., we load the specific processor into the FPGA on the fly, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high-quality audio, and an energy-efficient solution....
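    The central idea, loading an application-specific processor on demand and handing execution over to it, can be sketched as a simple dispatch policy. All names below (bitstream files, loader methods) are hypothetical; the abstract does not describe the HLL runtime interface.

```python
# Hypothetical sketch of the dispatch idea behind dynamically loaded hardware
# libraries: choose a bitstream for the requested kernel, configure the FPGA,
# then route the call to the accelerator instead of the CPU fallback.

BITSTREAMS = {"fir_filter": "fir_filter.bit", "reverb": "reverb.bit"}  # assumed names

class HardwareLibraryLoader:
    def __init__(self):
        self.loaded = None

    def load(self, kernel):
        """Configure the FPGA with the kernel's bitstream (placeholder only)."""
        # A real runtime would drive the device's reconfiguration interface here.
        self.loaded = kernel

    def run(self, kernel, samples, cpu_fallback):
        """Dispatch to the FPGA when a hardware library exists, else to the CPU."""
        if kernel in BITSTREAMS:
            if self.loaded != kernel:
                self.load(kernel)          # load the hardware library on the fly
            return self._run_on_fpga(samples)
        return cpu_fallback(samples)

    def _run_on_fpga(self, samples):
        # Placeholder for the DMA transfer to/from the accelerator.
        return samples

loader = HardwareLibraryLoader()
out = loader.run("fir_filter", [0.1, 0.2, 0.3], cpu_fallback=lambda s: s)
```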

  11. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.
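    The described mechanism, an address register, a copy-trigger flag, and a state machine that drains the counters to memory, can be modeled behaviorally to clarify the handshake. The sketch below is an illustrative software model with invented names; the patent abstract does not define a programming interface.

```python
# Behavioral model of the described mechanism: software programs a target
# address and sets a start flag; a hardware state machine then copies the
# performance counters to that memory location and clears the flag.

class CounterCopyEngine:
    def __init__(self, num_counters=4):
        self.counters = [0] * num_counters   # performance counters (selected activities)
        self.addr_reg = None                 # first storage element: destination address
        self.start_reg = 0                   # second storage element: copy trigger
        self.memory = {}                     # stand-in for system memory

    def tick(self):
        """One state-machine step: if the trigger is set, copy the counters and clear it."""
        if self.start_reg:
            for i, value in enumerate(self.counters):
                self.memory[self.addr_reg + i] = value
            self.start_reg = 0

engine = CounterCopyEngine()
engine.counters = [12, 7, 99, 3]   # counts collected by the hardware
engine.addr_reg = 0x1000           # software chooses where the snapshot lands
engine.start_reg = 1               # software requests the copy
engine.tick()                      # "hardware" performs it without CPU involvement
assert engine.memory[0x1003] == 3
```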

  12. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows... the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  13. A Systematic Hardware Sharing Method for Unified Architecture Design of H.264 Transforms

    Directory of Open Access Journals (Sweden)

    Po-Hung Chen

    2015-01-01

    Full Text Available Multitransform techniques have been widely used in modern video coding and have better compression efficiency than the single transform technique that is used conventionally. However, every transform needs a corresponding hardware implementation, which results in a high hardware cost for multiple transforms. A novel method that includes a five-step operation sharing synthesis and architecture-unification techniques is proposed to systematically share the hardware and reduce the cost of multitransform coding. In order to demonstrate the effectiveness of the method, a unified architecture is designed using the method for all of the six transforms involved in the H.264 video codec: 2D 4 × 4 forward and inverse integer transforms, 2D 4 × 4 and 2 × 2 Hadamard transforms, and 1D 8 × 8 forward and inverse integer transforms. Firstly, the six H.264 transform architectures are designed at a low cost using the proposed five-step operation sharing synthesis technique. Secondly, the proposed architecture-unification technique further unifies these six transform architectures into a low-cost unified hardware architecture. The unified architecture requires only 28 adders, 16 subtractors, 40 shifters, and a proposed mux-based routing network, and the gate count is only 16308. The unified architecture processes 8 pixels per clock cycle at up to 275 MHz, which corresponds to 707 Full-HD 1080p frames per second.
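    For context, the 2D 4 × 4 forward integer transform that the unified datapath must support (alongside its inverse and the Hadamard variants) is the standard matrix product Y = Cf·X·Cf^T with small integer coefficients, which is why it maps onto shared adders, subtractors, and shifters. A reference sketch using the well-known coefficient matrix follows; the paper's shared datapath itself is not reproduced.

```python
import numpy as np

# Core matrix Cf of the H.264 4x4 forward integer transform.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(block):
    """2D 4x4 forward integer transform, Y = Cf @ X @ Cf.T.

    Only additions, subtractions and shifts are needed in hardware; the
    normalization is folded into the quantization step of the codec.
    """
    return CF @ block @ CF.T

residual = np.arange(16).reshape(4, 4)  # example 4x4 residual block
coeffs = forward_4x4(residual)
```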

  14. Conceptual Design Approach to Implementing Hardware-based Security Controls in Data Communication Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Ahmad Salah; Jung, Jaecheon [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2016-10-15

    In the Korean Advanced Power Reactor (APR1400), the safety control systems network is electrically isolated and physically separated from the non-safety systems data network. Unidirectional gateways, which include data-diode fiber-optic cabling and computer-based servers, transmit the plant safety-critical parameters to the main control room (MCR) for control and monitoring processes. Data transmission is one-way only, from safety to non-safety. Reverse communication is blocked so that the safety systems network is protected from potential cyberattacks or intrusions from the non-safety side. Most commercial off-the-shelf (COTS) security devices are software-based solutions that require operating systems and processors to perform their functions. Field Programmable Gate Arrays (FPGAs) offer digital hardware solutions for implementing security controls such as data packet filtering and deep packet inspection. This paper presents a conceptual design for implementing hardware-based network security controls to maintain the availability of gateway servers. The proposed design aims to utilize the hardware-based capabilities of FPGAs together with the filtering and DPI functions of COTS software-based firewalls and intrusion detection and prevention systems (IDPS). The proposed design implements a network security perimeter between the DCN-I zone and the gateway servers zone. The security control functions protect the gateway servers from potential DoS attacks that could affect data availability and integrity.
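    The filtering and deep-packet-inspection functions discussed above reduce, at their core, to matching header fields and payload patterns against a rule set. The following is a minimal software model of that rule check; the paper targets FPGA logic, and the zone names, port number, and signature below are illustrative assumptions.

```python
# Minimal model of packet filtering plus a simple deep-packet-inspection
# check, as a software stand-in for the FPGA rule logic described above.

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination port) -- assumed values
    ("safety_gateway", "dcn_i", 502),
}
BLOCKED_PAYLOAD_PATTERNS = [b"\x90\x90\x90\x90"]  # example signature (assumed)

def inspect(packet):
    """Return True if the packet may pass the security perimeter."""
    flow = (packet["src_zone"], packet["dst_zone"], packet["dst_port"])
    if flow not in ALLOWED_FLOWS:
        return False                      # filtering: drop unknown flows
    payload = packet["payload"]
    return not any(sig in payload for sig in BLOCKED_PAYLOAD_PATTERNS)  # DPI check

pkt = {"src_zone": "safety_gateway", "dst_zone": "dcn_i",
       "dst_port": 502, "payload": b"plant parameter frame"}
print(inspect(pkt))  # True
```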

  16. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    Science.gov (United States)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field-programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex-4 and Virtex-5 families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
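    The general predict-and-code pattern behind such adaptive lossless compressors is sketched below: a simple adaptive linear predictor, a sign-to-unsigned residual mapping, and Golomb-Rice-style coding. This is a simplified one-dimensional stand-in for illustration only, not the JPL 'Fast Lossless' algorithm itself.

```python
def map_residual(r):
    """Map signed prediction residuals to non-negative integers (0, -1, 1, -2, ...)."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, a stop bit, then k remainder bits."""
    q, rem = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(rem, f"0{k}b") if k else "")

def compress_band(samples, lr=1e-4, k=2):
    """Predict each sample from the previous one with an adaptive weight,
    then Rice-code the mapped residual (simplified 1-D illustration)."""
    bits = [format(samples[0] & 0xFFFF, "016b")]   # first sample sent verbatim
    w = 1.0
    for prev, s in zip(samples, samples[1:]):
        pred = int(round(w * prev))
        residual = s - pred
        bits.append(rice_encode(map_residual(residual), k))
        w += lr * residual * prev                  # crude LMS-style weight update
    return "".join(bits)

encoded = compress_band([100, 102, 101, 105, 107, 106])
```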

  17. Hardware-in-the-Loop emulator for a hydrokinetic turbine

    Science.gov (United States)

    Rat, C. L.; Prostean, O.; Filip, I.

    2018-01-01

    Hydroelectric power has proven to be an efficient and reliable form of renewable energy, but its impact on the environment has long been a source of concern. Hydrokinetic turbines are an emerging class of renewable energy technology designed for deployment in small rivers and streams with minimal environmental impact on the local ecosystem. Hydrokinetic technology represents a truly clean source of energy and has the potential to become a highly efficient method of harvesting renewable energy; achieving this goal, however, requires extensive research. This paper presents a Hardware-in-the-Loop (HIL) emulator for a run-of-the-river hydrokinetic turbine. The HIL system uses an ABB ACS800 drive to control an induction machine that replicates the behavior of the real turbine. The induction machine is coupled to a permanent magnet synchronous generator and the corresponding load. The ACS800 drive is controlled by the software system, which comprises a real-time simulation of the hydrokinetic turbine, implemented through mathematical modeling in the LabVIEW programming environment and running on an NI CompactRIO (cRIO) platform. The advantage of this method is that many control configurations can be tested without requiring the real turbine. The paper covers the basic principles of a hydrokinetic turbine, particularly run-of-the-river configurations, along with the experimental results obtained from the HIL system.
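    At the core of such an emulator is a real-time model that converts the simulated water speed into the torque the drive must reproduce on the shaft. A minimal sketch of that model, using the standard hydrokinetic power relation P = 0.5·ρ·A·Cp·v³; the parameter values are placeholders, not those of the cited rig.

```python
RHO = 1000.0   # water density, kg/m^3
AREA = 1.2     # turbine swept area, m^2 (placeholder)
CP = 0.35      # power coefficient (placeholder; a real model would make
               # Cp a function of tip-speed ratio)

def turbine_torque(water_speed, shaft_speed):
    """Torque reference for the drive that emulates the turbine.

    P = 0.5 * rho * A * Cp * v^3, and T = P / omega for omega > 0.
    """
    power = 0.5 * RHO * AREA * CP * water_speed ** 3
    omega = max(shaft_speed, 1e-3)   # avoid division by zero at standstill
    return power / omega

# Example: 2 m/s river flow with the shaft turning at 15 rad/s.
print(f"torque setpoint: {turbine_torque(2.0, 15.0):.1f} N*m")
```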

  18. Hardware Implementation of Artificial Neural Network for Data Ciphering

    Directory of Open Access Journals (Sweden)

    Sahar L. Kadoory

    2016-10-01

    Full Text Available This paper introduces the design and realization of multiple-block ciphering techniques on an FPGA (Field Programmable Gate Array). Back-propagation neural networks were built for the substitution, permutation and XOR ciphering blocks using the Neural Network Toolbox in MATLAB. They are trained to encrypt the data after obtaining suitable weights, biases, activation function and layout. Afterward, they are described using VHDL and implemented on a Xilinx Spartan-3E FPGA using two approaches: serial and parallel versions. Simulation results were obtained with Xilinx ISE 9.2i software. The numerical precision was chosen carefully when implementing the neural network on the FPGA. Results obtained from the hardware designs show accurate numerical values for ciphering the data. As expected, the synthesis results indicate that the serial version requires fewer area resources than the parallel version, while the data throughput of the parallel version is higher than that of the serial version by a factor of about 1.13 to 1.5. A slight difference can also be observed in the maximum frequency.
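    As an illustration of how fixed weights and a hard-threshold activation can realize one of the trained ciphering blocks, the sketch below hard-codes a tiny two-layer network that computes the XOR of two bits; the weights are chosen by hand and are not taken from the paper.

```python
def step(x):
    """Hard-threshold activation, as used after quantizing a trained network."""
    return 1 if x >= 0 else 0

def xor_neuron_block(a, b):
    """Two-layer network computing a XOR b with hand-chosen fixed weights."""
    h_or  = step(a + b - 0.5)        # fires if at least one input is 1
    h_and = step(a + b - 1.5)        # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)  # OR and not AND  ==  XOR

# XOR a data nibble with a key nibble, bit by bit.
data, key = 0b1010, 0b0110
cipher = 0
for i in range(4):
    cipher |= xor_neuron_block((data >> i) & 1, (key >> i) & 1) << i
assert cipher == data ^ key
```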

  19. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to lower the event rate from the nominal bunch-crossing rate of 40 MHz to about 1 kHz for a design LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track finding at 100 kHz, based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGAs) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  20. Effect of spaceflight hardware on the skeletal properties of ground control mice

    Science.gov (United States)

    Bateman, Ted; Lloyd, Shane; Dunlap, Alex; Ferguson, Virginia; Simske, Steven; Stodieck, Louis; Livingston, Eric

    Introduction: Spaceflight experiments using mouse or rat models require habitats that are specifically designed for the microgravity environment. During spaceflight, rodents are housed in a specially designed stainless steel meshed cage with gravity-independent food and water delivery systems and constant airflow to push floating urine and feces towards a waste filter. Differences in the housing environment alone, not even considering the spaceflight environment itself, may lead to physiological changes in the animals contained within. It is important to characterize these cage differences so that results from spaceflight experiments can be more reliably compared to studies from other laboratories. Methods: For this study, we examined the effect of NASA's Animal Enclosure Module (AEM) spaceflight hardware on the skeletal properties of 8-week-old female C57BL/6J mice. This 13-day experiment, conducted on the ground, modeled the flight experiment profile of the CBTM-01 payload on STS-108, with standard vivarium-housed mice being compared to AEM-housed mice (n = 12/group). Functional differences were compared via mechanical testing, micro-hardness indentation, microcomputed tomography, and mineral/matrix composition. Cellular changes were examined by serum chemistry, histology, quantitative histomorphometry, and RT-PCR. A Student's t-test was utilized, with the level of Type I error set at 95... Results: There was no change in elastic, maximum, or fracture force mechanical properties at the femur mid-diaphysis; however, structural stiffness was -17.5... Conclusions: Housing mice in the AEM spaceflight hardware had minimal effects on femur cortical bone properties. However, trabecular bone at the proximal tibia in AEM mice experienced large increases in microarchitecture and mineral composition. Increases in bone density were accompanied by reductions in bone-forming osteoblasts and bone-resorbing osteoclasts, representing a general decline in bone turnover at this site