WorldWideScience

Sample records for finding optimum hardware

  1. Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design

    Science.gov (United States)

    Wu, K. Chauncey; Cerro, Jeffrey A.

    2010-01-01

    During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
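
    As an illustration of how such factors are formed, the sketch below computes panel non-optimum factors as the ratio of measured to modeled weight; the panel names and weights are invented stand-ins, not the paper's data.

    ```python
    # Sketch: non-optimum factor = measured weight / modeled (ideal) weight.
    # Panel identifiers and weights below are illustrative stand-ins.
    panels = {
        # panel_id: (measured_weight_kg, modeled_weight_kg)
        "acreage_01": (142.0, 114.8),
        "acreage_02": (138.5, 111.5),
        "weldland_01": (175.3, 121.0),
    }

    factors = {pid: meas / pred for pid, (meas, pred) in panels.items()}
    for pid, f in sorted(factors.items()):
        print(f"{pid}: non-optimum factor = {f:.2f}")

    # The average factor would then be applied to modeled weights of a
    # future vehicle to account for undefined design/manufacturing details.
    print(f"average = {sum(factors.values()) / len(factors):.2f}")
    ```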

  2. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

    Background: In risk-benefit assessment of food and nutrients, several studies so far have focused on comparison of two scenarios to weigh the health effects against each other. One obvious next step is finding the optimum scenario that provides maximum net health gains. Aim: This paper aims to show a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find an optimum scenario that provides maximum net health gain in health risk-benefit assessment of dietary exposure as expressed by serum vitamin D level. With regard to the vitamin D assessment, a considerable health gain is observed due to the reduction of risk of other-cause mortality, falls and hip fractures when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods, especially with regards ...
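
    A minimal sketch of the scenario scan at the heart of the method, with invented DALY numbers (the paper derives them with the QALIBRA software):

    ```python
    # Sketch: choose the intake scenario with the largest net health gain,
    # measured in DALYs averted. All numbers are invented for illustration.
    scenarios = {
        # scenario: (DALYs added by risks, DALYs averted by benefits)
        "reference": (0.0, 0.0),
        "+10 ug vitamin D/day": (120.0, 5300.0),
        "+20 ug vitamin D/day": (450.0, 7900.0),
        "+40 ug vitamin D/day": (2100.0, 8100.0),
    }

    def net_gain(risk_daly, benefit_daly):
        return benefit_daly - risk_daly

    best = max(scenarios, key=lambda s: net_gain(*scenarios[s]))
    print("optimum scenario:", best)          # -> "+20 ug vitamin D/day"
    ```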

  3. Finding an optimum immuno-histochemical feature set to distinguish benign phyllodes from fibroadenoma.

    Science.gov (United States)

    Maity, Priti Prasanna; Chatterjee, Subhamoy; Das, Raunak Kumar; Mukhopadhyay, Subhalaxmi; Maity, Ashok; Maulik, Dhrubajyoti; Ray, Ajoy Kumar; Dhara, Santanu; Chatterjee, Jyotirmoy

    2013-05-01

    Benign phyllodes and fibroadenoma are two well-known breast tumors with remarkable diagnostic ambiguity. The present study is aimed at determining an optimum set of immuno-histochemical features to distinguish them by analyzing observations on the expression of key genes in fibro-glandular tissue. Immuno-histochemically, the expressions of p63 and α-SMA in myoepithelial cells and of collagen I, III and CD105 in the stroma of the tumors and their normal counterpart were studied. Semi-quantified features were analyzed primarily by ANOVA and ranked through F-scores to understand the relative importance of groups of features in discriminating the three classes, followed by reduction of the F-score-ranked feature space dimension and application of inter-class Bhattacharyya distances to distinguish the tumors with an optimum set of features. Among the thirteen studied features, all except one differed significantly across the three study classes. F-score ranking revealed the highest discriminative potential for collagen III (initial region). The F-score-ranked feature space and the Bhattacharyya distance gave rise to a feature set of lower dimension which can discriminate benign phyllodes and fibroadenoma effectively. The work clearly separated normal breast, fibroadenoma and benign phyllodes through an optimal set of immuno-histochemical features, which are useful not only to address the diagnostic ambiguity of these tumors but also to indicate malignant potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
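
    The ranking pipeline can be sketched as follows; the data are random stand-ins for the semi-quantified marker expressions, and the 1-D Gaussian form of the Bhattacharyya distance is assumed:

    ```python
    import numpy as np

    # Sketch: rank features by a multi-class F-score, keep the top few, then
    # score class separability with a per-feature Bhattacharyya distance.
    rng = np.random.default_rng(0)
    classes = {  # class -> (n_samples, n_features) matrix of expressions
        "normal": rng.normal(0.0, 1.0, (30, 13)),
        "fibroadenoma": rng.normal(0.8, 1.0, (30, 13)),
        "benign_phyllodes": rng.normal(1.6, 1.2, (30, 13)),
    }

    def f_scores(groups):
        # between-class variance of means over mean within-class variance
        grand = np.mean([g.mean(0) for g in groups], axis=0)
        between = np.mean([(g.mean(0) - grand) ** 2 for g in groups], axis=0)
        within = np.mean([g.var(0, ddof=1) for g in groups], axis=0)
        return between / within

    def bhattacharyya_1d(a, b):  # distance between two 1-D Gaussians
        m1, m2, v1, v2 = a.mean(), b.mean(), a.var(ddof=1), b.var(ddof=1)
        return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
                + 0.5 * np.log(0.5 * (v1 + v2) / np.sqrt(v1 * v2)))

    rank = np.argsort(f_scores(list(classes.values())))[::-1]
    top = rank[:4]  # reduced feature set, most discriminative first
    dists = [bhattacharyya_1d(classes["fibroadenoma"][:, j],
                              classes["benign_phyllodes"][:, j]) for j in top]
    print("top features:", top.tolist())
    print("phyllodes-vs-fibroadenoma distances:", np.round(dists, 3).tolist())
    ```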

  4. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Science.gov (United States)

    Zhang, Zheng

    2017-10-01

    Modern radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding therefore depends significantly on the effectiveness of these algorithms. Direction of Arrival (DOA) algorithms estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm performs well in estimating the radio direction.
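
    A compact sketch of MUSIC on a uniform linear array in white noise (array geometry, noise level and source angles below are illustrative choices):

    ```python
    import numpy as np

    # Sketch: MUSIC pseudospectrum for K = 2 plane waves on an M-sensor ULA.
    rng = np.random.default_rng(1)
    M, K, d, snaps = 8, 2, 0.5, 200     # sensors, sources, spacing (wavelengths)
    doa_true = np.deg2rad([-20.0, 35.0])

    def steering(theta):                # (M, len(theta)) array manifold
        return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

    S = rng.normal(size=(K, snaps)) + 1j * rng.normal(size=(K, snaps))
    N = 0.1 * (rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))
    X = steering(doa_true) @ S + N      # received snapshots

    R = X @ X.conj().T / snaps          # sample covariance
    _, V = np.linalg.eigh(R)            # eigenvectors, ascending eigenvalues
    En = V[:, :M - K]                   # noise subspace

    grid = np.deg2rad(np.linspace(-90, 90, 721))
    P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    best = peaks[np.argsort(P[peaks])[-K:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])).round(1))
    ```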

  5. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    AUTHOR|(CDS)2090481

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the “MP7”, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough tran...

  6. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    Science.gov (United States)

    Cieri, D.; CMS Collaboration

    2016-10-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and it is currently being demonstrated in hardware, using the “MP7”, which is a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach.
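
    The core of such a Hough-transform track finder can be sketched in a few lines; the field constant, binning and toy stubs below are invented, and real implementations fill the accumulator in FPGA logic rather than in software:

    ```python
    import numpy as np

    # Sketch: Hough-transform track finding in the (phi0, q/pT) plane.
    # A stub at radius r and azimuth phi from a track with parameters
    # (phi0, q/pT) approximately satisfies phi = phi0 - K * (q/pT) * r,
    # so each stub votes along a line in Hough space; heavily hit bins
    # are track candidates. Constants and stubs are toy values.
    K, nb = 0.003, 32                              # toy field constant, bins
    qpt_edges = np.linspace(-0.5, 0.5, nb + 1)
    phi0_edges = np.linspace(-0.1, 0.1, nb + 1)
    qpt_c = 0.5 * (qpt_edges[:-1] + qpt_edges[1:])
    phi0_c = 0.5 * (phi0_edges[:-1] + phi0_edges[1:])

    rng = np.random.default_rng(2)
    true_qpt, true_phi0 = qpt_c[25], phi0_c[18]    # one track on bin centres
    radii = np.linspace(200.0, 1100.0, 6)          # six tracker layers (mm)
    stubs = [(r, true_phi0 - K * true_qpt * r) for r in radii]
    stubs += [(rng.uniform(200, 1100), rng.uniform(-0.1, 0.1))
              for _ in range(12)]                  # noise stubs

    acc = np.zeros((nb, nb), dtype=int)
    for r, phi in stubs:
        for i, qpt in enumerate(qpt_c):            # vote once per q/pT bin
            j = np.digitize(phi + K * qpt * r, phi0_edges) - 1
            if 0 <= j < nb:
                acc[i, j] += 1

    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print(f"candidate: q/pT~{qpt_c[i]:.3f}, phi0~{phi0_c[j]:.4f}, "
          f"votes={acc[i, j]}")
    ```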

  7. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to ...

  8. Introduction to Hardware Security

    OpenAIRE

    Yier Jin

    2015-01-01

    Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain an...

  9. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain and to help both academia and industry investigate countermeasures and solutions to solve hardware security problems, we will introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted through which the future trend will be elaborated, making this survey paper a good reference for the continuing research efforts in this area.

  10. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp...

  11. Optimum design of steel structures

    CERN Document Server

    Farkas, József

    2013-01-01

    This book helps designers and manufacturers to select and develop the most suitable and competitive steel structures, which are safe, fit for production and economic. An optimum design system is used to find the best characteristics of structural models, which guarantee the fulfilment of design and fabrication requirements and minimize the cost function. Realistic numerical models are used as main components of industrial steel structures. Chapter 1 contains some experiences with the optimum design of steel structures. Chapter 2 treats some newer mathematical optimization methods. Chapter 3 gives formulae for fabrication times and costs. Chapter 4 deals with beams and columns and summarizes the Eurocode rules for design. Chapter 5 deals with the design of tubular trusses. Chapter 6 gives the design of frame structures and fire-resistant design rules for a frame. In Chapter 7 some minimum cost design problems of stiffened and cellular plates and shells are worked out for cases of different stiffenings and loads...

  12. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred as “Hardware Trojans”, which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  13. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  14. The optimum spanning catenary cable

    Science.gov (United States)

    Wang, C. Y.

    2015-03-01

    A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
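
    The 1.258 figure is easy to verify numerically: for a catenary y = c·cosh(x/c) with half-span a, the cable length is L = 2c·sinh(a/c) and the support tension is T = w·c·cosh(a/c) (w = weight per unit length), so minimizing T over the parameter c gives the optimum length. A quick check (scipy assumed):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    a, w = 1.0, 1.0                             # half-span, weight per unit length
    tension = lambda c: w * c * np.cosh(a / c)  # tension at the supports

    c_opt = minimize_scalar(tension, bounds=(0.1, 10.0), method="bounded").x
    L_opt = 2.0 * c_opt * np.sinh(a / c_opt)    # cable length at minimum tension
    print(f"optimum length / span = {L_opt / (2 * a):.3f}")  # ~1.258
    ```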

  15. Choosing the optimum burnup

    International Nuclear Information System (INIS)

    Geller, L.; Goldstein, L.; Franks, W.A.

    1986-01-01

    This paper reviews some of the considerations utilities must evaluate when going to higher discharge burnups. The advantages and disadvantages of higher discharge burnups are described, as well as a consistent approach for evaluating optimum discharge burnup and its comparison to current practice. When an analysis is performed over the life of the plant, the design of the terminal cycles has a significant impact on the lifetime savings from higher burnups. Designs for high burnup cycles have a greater average inventory value in the core. As one goes to higher burnup, there is a greater likelihood of discarding a larger value in unused fuel unless the terminal cycles are designed carefully. This effect can be large enough in some cases to wipe out the lifetime cost savings relative to operating with a higher discharge burnup cycle.

  16. NASA HUNCH Hardware

    Science.gov (United States)

    Hall, Nancy R.; Wagner, James; Phelps, Amanda

    2014-01-01

    What is NASA HUNCH? High School Students United with NASA to Create Hardware - HUNCH - is an instructional partnership between NASA and educational institutions. This partnership benefits both NASA and students. NASA receives cost-effective hardware and soft goods, while students receive real-world, hands-on experience. 2014-2015 was the 12th year of the HUNCH Program. NASA Glenn Research Center joined the program, which already included NASA Johnson Space Flight Center, Marshall Space Flight Center, Langley Research Center and Goddard Space Flight Center. The program included 76 schools in 24 states, and NASA Glenn worked with the following five schools in the HUNCH Build to Print Hardware Program: Medina Career Center, Medina, OH; Cattaraugus Allegheny-BOCES, Olean, NY; Orleans Niagara-BOCES, Medina, NY; Apollo Career Center, Lima, OH; Romeo Engineering and Tech Center, Washington, MI. The schools built various parts of an International Space Station (ISS) middeck stowage locker and learned about the manufacturing process and how best to build these components to NASA specifications. For the 2015-2016 school year the schools will be part of a larger group of schools building flight hardware consisting of 20 ISS middeck stowage lockers for the ISS Program. The HUNCH Program consists of: Build to Print Hardware; Build to Print Soft Goods; Design and Prototyping; Culinary Challenge; Implementation: Web Page and Video Production.

  17. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  18. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  1. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  2. An Optimum Currency Crisis

    Directory of Open Access Journals (Sweden)

    Paolo Pasimeni

    2014-12-01

    This paper presents an ex-post assessment of the current situation of the EMU in light of the conditions prescribed by the theory of Optimum Currency Areas (OCA). The analysis shows that some of those conditions were satisfied at the inception of the EMU, others were missing at the beginning, but improved over time as expected by the endogenous approach to the OCA theory. The common fiscal capacity was the main missing element of the initial construction of the Eurozone, and still is. The common budget is so exiguous that its effectiveness as shock absorption mechanism is negligible. The analysis then shows how some of the concerns raised on the eve of the euro did actually materialize, even if not immediately. First, in its first decade the Eurozone did not experience major turbulences, because growing financial integration was compensating the need for fiscal transfers, channelling the excess of saving from the ‘core’ to the ‘periphery’. Second, the mechanism generated record-high private indebtedness in the ‘periphery’ and exposure of the banks in the ‘core’, making the whole system more fragile as it relied upon financial markets’ stability. Third, once the long-feared shock hit, the mechanism proved weak and non-resilient. The inherent weaknesses of the EMU became evident. Fourth, as it had been foreseen, the cost of the adjustment after the shock fell mainly on labour, with much higher and longer unemployment in the Eurozone than both non-Eurozone EU and the US. Fifth, as the theory suggested, the lack of common mechanisms of adjustment dramatically increased the socio-economic divergences within the EMU. The paper finally presents a simulation for a common budget of the Eurozone, linked to the relative current account positions of the member states.

  3. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  4. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book are: the system board, covering memory, performance, the system timer, the system clock and specifications; the coprocessor, including the programming interface and hardware interface; the power supply, covering input and output, protection for DC output and the Power Good signal; an explanation of the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and the communication and compatibility of the IBM personal computer with regard to application direction, multitasking and code for distinguishing systems.

  5. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...

  6. Open hardware for open science

    CERN Document Server

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  7. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  8. Optimum Design of Plasma Focus

    International Nuclear Information System (INIS)

    Ramos, Ruben; Gonzalez, Jose; Clausse, Alejandro

    2000-01-01

    The optimum design of Plasma Focus devices is presented, based on a lumped-parameter model of the MHD equations. Maps in the design parameter space are obtained, which determine the length and deuterium pressure required to produce a given neutron yield. Sensitivity analyses of the main effective numbers (sweeping efficiencies) were performed, and finally the optimum values were determined in order to set a basis for the conceptual design.

  9. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java.

  10. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. Furthermore, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experiment results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss our future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  11. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events which have been selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the limitation to 1 MHz. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is being presented, which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy data or simulated data. These data are seen by the DAQ as real data coming from sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated in the experiment control system. The architecture, ...

  12. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. Also, to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware designs. Reconfigurable em...

  13. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  14. Optimum Safety Levels for Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Sørensen, John Dalsgaard

    2005-01-01

    Optimum design safety levels for rock and cube armoured rubble mound breakwaters without superstructure are investigated by numerical simulations on the basis of minimization of the total costs over the service life of the structure, taking into account typical uncertainties related to wave ...

  15. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware component of a mobile device are described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and, ...

  16. Optimum operation cycle of nuclear plant in power system operation

    International Nuclear Information System (INIS)

    Kurihara, Ikuo; Matsumura, Tetsuo; Katayama, Noboru

    1989-01-01

    Extension of the nuclear power plant operation cycle improves its capacity factor and acts to suppress thermal plant generation, whose fuel cost is relatively high. On the other hand, the number of nuclear fuel assemblies to be exchanged at maintenance increases with the operation cycle extension, and this raises the fuel cost of nuclear generation. For this reason, there exists an optimum operation cycle from the viewpoint of power system operation. This report deals with the optimum operation cycle of a nuclear plant as an optimum sharing problem of generated energy between nuclear and thermal plants. The incremental fuel cost is considered to find the optimum value. The effects of the generation mix and high burn-up fuel on the optimum operation cycle are examined. (author)

  17. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

  18. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.

  19. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.
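
    The flavor of the problem can be illustrated with a much simpler greedy list scheduler (a toy stand-in for the paper's mixed-integer formulation; the task set, durations and resource numbers below are invented):

    ```python
    # Sketch: greedy scheduling of hardware tasks with precedence constraints
    # and a shared resource-capacity limit, as on a partially reconfigurable
    # FPGA. This is a toy baseline, not the paper's MIP/pipelined method.
    tasks = {  # name: (duration, resource_units, predecessors)
        "A": (4, 3, []),
        "B": (2, 2, ["A"]),
        "C": (3, 4, ["A"]),
        "D": (1, 2, ["B", "C"]),
    }
    CAPACITY = 6                      # resource units available simultaneously

    schedule, running, t = {}, [], 0  # running: list of (end_time, units)
    while len(schedule) < len(tasks):
        running = [(e, u) for e, u in running if e > t]
        used = sum(u for _, u in running)
        started = False
        for name, (dur, units, preds) in tasks.items():
            ready = name not in schedule and all(
                p in schedule and schedule[p][1] <= t for p in preds)
            if ready and used + units <= CAPACITY:
                schedule[name] = (t, t + dur)
                running.append((t + dur, units))
                used += units
                started = True
        if not started:               # nothing fits now: jump to next completion
            t = min(e for e, _ in running)

    for name, (s, e) in sorted(schedule.items(), key=lambda kv: kv[1]):
        print(f"{name}: start {s}, end {e}")
    ```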

  20. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. An examination of the trends leading to the consideration of PCs for HEP is given, and the status of the work being done at various HEP labs and universities is summarized.

  1. Optimum Maintenance Strategies for Highway Bridges

    DEFF Research Database (Denmark)

    Frangopol, Dan M.; Thoft-Christensen, Palle; Das, Parag C.

    As bridges become older and maintenance costs become higher, transportation agencies are facing challenges related to implementation of optimal bridge management programs based on life cycle cost considerations. A reliability-based approach is necessary to find optimal solutions based on minimum expected life-cycle costs or maximum life-cycle benefits. This is because many maintenance activities can be associated with significant costs, but their effects on bridge safety can be minor. In this paper, the program of an investigation on optimum maintenance strategies for different bridge types is described. The end result of this investigation will be a general reliability-based framework to be used by the UK Highways Agency in order to plan optimal strategies for the maintenance of its bridge network so as to optimize whole-life costs.

  2. On the optimum energy mix

    International Nuclear Information System (INIS)

    Fujii, Yasumasa

    2011-01-01

    After the Fukushima accident occurred in March 2011, reform of Japan's basic energy plan and energy supply system was reported to be under discussion, for example to reduce dependence on nuclear power. Planning of energy policy should be based on four evaluation indexes: 'economics', 'environmental effects', 'stable supply of energy' and 'sustainability'. 'Stable supply of energy' should include stability of the domestic energy supply infrastructure against natural disasters in addition to stable supply of overseas resources. 'Sustainability' means long-term availability of resources. Since there exists no almighty energy source or energy supply system superior in terms of every evaluation index above, it would be wise to combine various energy sources and supply systems in a rational way. This combination leads to an optimum energy mix, the so-called 'Energy Best Mix'. The author evaluated the characteristics of energy sources and energy supply systems in terms of the four indexes and showed the best energy mix from short-, medium- and long-term perspectives. Since fossil fuel resources will deplete in any case, it will be inevitable for human beings to depend on non-fossil energy resources regardless of greenhouse effects. At present it is difficult, with no guarantee of success, to establish a society fully dependent on renewable energy, so utilization of nuclear energy will probably be needed in the long term. (T. Tanaka)

  3. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  4. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed by that the system transmits information to the cells that the first cell has...

  5. Hunting for hardware changes in data centres

    Science.gov (United States)

    Coelho dos Santos, M.; Steers, I.; Szebenyi, I.; Xafi, A.; Barring, O.; Bonfillou, E.

    2012-12-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  6. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.
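
    In the same spirit, failure trending over such an inventory reduces to simple aggregation once the data set exists; the sketch below uses pandas with invented column names and records:

    ```python
    import pandas as pd

    # Sketch: per-model failure trending over a hardware inventory.
    # Schema and records are invented; a real inventory would also track
    # parts, locations and the full change history of each server.
    inventory = pd.DataFrame(
        [
            ("srv-001", "vendorA-X3", 0),
            ("srv-002", "vendorA-X3", 2),
            ("srv-003", "vendorB-R2", 1),
            ("srv-004", "vendorB-R2", 0),
            ("srv-005", "vendorB-R2", 3),
        ],
        columns=["serial", "model", "failures_last_quarter"],
    )

    trend = (inventory.groupby("model")["failures_last_quarter"]
             .agg(servers="count", failures="sum"))
    trend["failures_per_server"] = trend["failures"] / trend["servers"]
    print(trend.sort_values("failures_per_server", ascending=False))
    ```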

  7. Optimum Groove Location of Hydrodynamic Journal Bearing Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Lintu Roy

    2013-01-01

    This paper presents various arrangements of the groove locations of a two-groove oil journal bearing for optimum performance. An attempt has been made to find the effect of different configurations of a two-groove oil journal bearing by changing the groove locations. The groove angles considered are 10°, 20°, and 30°. The Reynolds equation is solved numerically on a finite difference grid satisfying the appropriate boundary conditions. Determination of optimum performance is based on maximization of the nondimensional load, flow coefficient, and mass parameter and minimization of the friction variable using a genetic algorithm. The results of the genetic algorithm are compared with sequential quadratic programming (SQP). Two-groove bearings in general have grooves placed in diametrically opposite directions; however, the optimum groove locations arrived at in the present work are not diametrically opposite.
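
    A tiny genetic algorithm of the kind used here can be sketched as follows; the fitness function is a made-up smooth surrogate, whereas the study evaluates load, flow, friction and mass parameter from the finite-difference Reynolds solution:

    ```python
    import numpy as np

    # Sketch: GA over two groove locations (degrees). Toy fitness only.
    rng = np.random.default_rng(3)

    def fitness(pop):                   # pop: (n, 2) groove centres in degrees
        a, b = pop[:, 0], pop[:, 1]
        sep = np.abs((a - b + 180.0) % 360.0 - 180.0)   # angular separation
        return np.sin(np.deg2rad(sep)) - 0.002 * np.abs(sep - 120.0)

    pop = rng.uniform(0.0, 360.0, (40, 2))
    for _ in range(60):
        fit = fitness(pop)
        parents = pop[np.argsort(fit)[-20:]]            # truncation selection
        picks = rng.integers(0, 20, (20, 2))            # parent index per gene
        children = parents[picks, [0, 1]]               # uniform crossover
        children = (children + rng.normal(0, 5, children.shape)) % 360.0
        pop = np.vstack([parents, children])            # elitism + offspring

    best = pop[np.argmax(fitness(pop))]
    print("best groove locations (deg):", np.round(best, 1))
    ```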

  8. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  9. Threats and Challenges in Reconfigurable Hardware Security

    OpenAIRE

    Kastner, Ryan; Huffmire, Ted

    2008-01-01

    Computing systems designed using reconfigurable hardware are now used in many sensitive applications, where security is of utmost importance. Unfortunately, a strong notion of security is not currently present in FPGA hardware and software design flows. In the following, we discuss the security implications of using reconfigurable hardware in sensitive applications, and outline problems, attacks, solutions and topics for future research.

  10. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time subtraction in DSA. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section on the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given.
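
    Viewing the median as the middle element of a full window sort generalizes directly to rank-order operators, as the abstract notes; a compact software sketch of that view (NumPy >= 1.20 assumed):

    ```python
    import numpy as np

    # Sketch: 2-D rank-order filter; the median is simply the middle rank
    # of the sorted k = size*size window values.
    def rank_filter(img, size=3, rank=None):
        pad = size // 2
        padded = np.pad(img, pad, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
        flat = windows.reshape(*img.shape, size * size)
        if rank is None:
            rank = (size * size) // 2          # middle rank -> median filter
        return np.sort(flat, axis=-1)[..., rank]

    noisy = np.zeros((6, 6))
    noisy[2, 3] = 255.0                        # single-pixel impulse
    print(rank_filter(noisy, size=3).max())    # 0.0 -- impulse removed
    ```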

  11. Optimum shapes for pump limiters

    International Nuclear Information System (INIS)

    Ulrickson, M.

    1982-05-01

    The design of a pump limiter depends strongly on the details of the plasma scrapeoff zone. A model has been developed which allows the transport coefficients in the scrapeoff to be functions of n and t. This model has been used to predict scrapeoff profiles for FED/INTOR. The profiles are used to find and analyze limiter profiles. The results suggest the use of limiter shapes which curve toward the plasma

  12. Hardware complications in scoliosis surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bagchi, Kaushik; Mohaideen, Ahamed [Department of Orthopaedic Surgery and Musculoskeletal Services, Maimonides Medical Center, Brooklyn, NY (United States); Thomson, Jeffrey D. [Connecticut Children' s Medical Center, Department of Orthopaedics, Hartford, CT (United States); Foley, Christopher L. [Department of Radiology, Connecticut Children' s Medical Center, Hartford, Connecticut (United States)

    2002-07-01

    Background: Scoliosis surgery has undergone a dramatic evolution over the past 20 years with the advent of new surgical techniques and sophisticated instrumentation. Surgeons have realized scoliosis is a complex multiplanar deformity that requires thorough knowledge of spinal anatomy and pathophysiology in order to manage patients afflicted by it. Nonoperative modalities such as bracing and casting still play roles in the treatment of scoliosis; however, it is the operative treatment that has revolutionized the treatment of this deformity that affects millions worldwide. As part of the evolution of scoliosis surgery, newer implants have resulted in improved outcomes with respect to deformity correction, reliability of fixation, and paucity of complications. Each technique and implant has its own set of unique complications, and the surgeon must appreciate these when planning surgery. Materials and methods: Various surgical techniques and types of instrumentation typically used in scoliosis surgery are briefly discussed. Though scoliosis surgery is associated with a wide variety of complications, only those that directly involve the hardware are discussed. The current literature is reviewed and several illustrative cases of patients treated for scoliosis at the Connecticut Children's Medical Center and the Newington Children's Hospital in Connecticut are briefly presented. Conclusion: Spine surgeons and radiologists should be familiar with the different types of instrumentation in the treatment of scoliosis. Furthermore, they should recognize the clinical and roentgenographic signs of hardware failure as part of prompt and effective treatment of such complications. (orig.)

  13. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be speeded up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeating computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
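
    The tiling idea for the point-to-point routine can be sketched in NumPy; the Coulomb-like kernel and particle data below are illustrative, not Travel's actual code:

    ```python
    import numpy as np

    # Sketch: O(N^2) point-to-point interactions computed tile by tile, so
    # only a (tile x N) slab of pair separations is resident at once --
    # the same staging a memory-limited GPU kernel would use.
    def forces_tiled(pos, q, tile=512, soft=1e-9):
        out = np.zeros_like(pos)
        for s in range(0, len(pos), tile):
            blk = slice(s, min(s + tile, len(pos)))
            d = pos[blk, None, :] - pos[None, :, :]    # (tile, N, 3)
            r2 = (d ** 2).sum(-1) + soft               # softened |d|^2
            w = q[blk, None] * q[None, :] / r2 ** 1.5  # Coulomb-like weight
            out[blk] = (w[..., None] * d).sum(axis=1)
        return out

    rng = np.random.default_rng(4)
    pos, q = rng.normal(size=(2000, 3)), rng.uniform(0.5, 1.0, 2000)
    # pairwise contributions cancel, so the net force should be ~0
    print("net force:", forces_tiled(pos, q).sum(axis=0))
    ```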

  14. Optimum Energy Window In Liver Scintigraphy

    Directory of Open Access Journals (Sweden)

    Alireza Sadremomtaz

    2015-07-01

    In liver scintigraphy, radioactive tracers accumulate not only in the liver but also in other organs such as the spleen. This leads to the presence of a secondary source which affects image quality. Knowing the influence of the noise arising from the secondary source, and trying to reduce these additional data, is therefore necessary. In nuclear medicine imaging, the use of an energy window is a useful way to reduce the noise. In this paper we try to find an optimum energy window to reduce the noise for two different low-energy collimators. Liver scintigraphy images with and without activity in the spleen were simulated with the SIMIND software for different energy window percentages and with Low-Energy High-Resolution (LEHR) and Low-Energy General-Purpose (LEGP) collimators. An activity of 190 MBq was used. The spleen was outside of the camera field of view so that only its noise effect on the liver image was examined. Finally, the images of the liver with activity in the spleen were compared with those without activity in the spleen using MATLAB code.
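
    For reference, a percentage energy window is just a symmetric band around the photopeak; the sketch below assumes the 140 keV photopeak of Tc-99m (the usual tracer energy in liver scintigraphy, not stated explicitly in this record):

    ```python
    # Sketch: discriminator limits for a symmetric percentage energy window.
    def window_limits(photopeak_kev, window_pct):
        half = photopeak_kev * window_pct / 200.0   # half-width in keV
        return photopeak_kev - half, photopeak_kev + half

    for pct in (10, 15, 20):
        lo, hi = window_limits(140.0, pct)          # 140 keV assumed (Tc-99m)
        print(f"{pct:>2d}% window: {lo:.1f}-{hi:.1f} keV")
    ```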

  15. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  16. On Optimum Safety Levels of Breakwaters

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Sørensen, John Dalsgaard

    2006-01-01

    The paper presents results from numerical simulations performed with the objective of identifying optimum design safety levels of conventional rubble mound and caisson breakwaters, corresponding to the lowest costs over the service life of the structures. The work is related to the PIANC Working Group 47 on "Selection of type of breakwater structures". The paper summarizes results given in Burcharth and Sorensen (2005) related to outer rubble mound breakwaters but focuses on optimum safety levels for outer caisson breakwaters on low and high rubble foundations placed on sea beds strong enough to resist geotechnical slip failures. Optimum safety levels formulated for use both in deterministic and probabilistic design procedures are given. Results obtained so far indicate that the optimum safety levels for caisson breakwaters are much higher than for rubble mound breakwaters.

  17. Optimum design for pipe-support allocation against seismic loading

    International Nuclear Information System (INIS)

    Hara, Fumio; Iwasaki, Akira

    1996-01-01

    This paper deals with an optimum design methodology for a piping system subjected to a seismic design loading, reducing its dynamic response by selecting the locations of pipe supports and thereby reducing the number of pipe supports needed. The authors employ the Genetic Algorithm to obtain a reasonably optimum solution for the pipe support locations, support capacities and number of supports. The design condition specified by the support location, support capacity and the number of supports to be used is encoded as an integer string over the support allocation candidates, and many such strings are prepared to express various pipe-support allocation states. For each string, the authors evaluate the seismic response of the piping system to the design seismic excitation and apply the Genetic Algorithm to select the next-generation candidates of support allocation, improving a seismic design performance measure specified by a weighted linear combination of seismic response magnitude, support capacity and the number of supports needed. Continuing this selection process, they find a reasonably optimum solution to the seismic design problem. They examine the feasibility of this optimum design method by investigating the optimum solutions for 5, 7 and 10 degree-of-freedom models of the piping system, and find that the method offers a theoretically feasible solution to the problem. Designers are thus liberated from the severe uncertainty in the damping value when the pipe support guarantees the design damping capacity. Finally, the authors discuss the usefulness of the Genetic Algorithm for the seismic design of piping systems and some sensitive points for its application to actual design problems.
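
    A compact sketch of the integer-string GA described above: each gene selects the support capacity at one candidate location (0 = no support), and the fitness is a weighted combination of a response proxy, total support capacity and support count. The "seismic response" function is a stand-in for the dynamic analysis, and all weights and settings are invented.

```python
# Integer-encoded GA for support allocation (illustrative model only).
import random
random.seed(1)

N_LOC, CAPS = 10, [0.0, 1.0, 2.0, 4.0]   # candidate locations, capacity levels
W_RESP, W_CAP, W_NUM = 1.0, 0.1, 0.3     # weights of the linear combination

def response(genes):
    # toy proxy: response drops as damping capacity is added, with
    # diminishing returns; a real study would run a dynamic analysis here
    c = sum(CAPS[g] for g in genes)
    return 10.0 / (1.0 + c)

def cost(genes):
    n_sup = sum(1 for g in genes if g > 0)
    return (W_RESP * response(genes)
            + W_CAP * sum(CAPS[g] for g in genes)
            + W_NUM * n_sup)

def evolve(pop=40, gens=80, pmut=0.1):
    people = [[random.randrange(len(CAPS)) for _ in range(N_LOC)]
              for _ in range(pop)]
    for _ in range(gens):
        people.sort(key=cost)
        parents = people[:pop // 2]                    # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_LOC)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_LOC):                     # mutation
                if random.random() < pmut:
                    child[i] = random.randrange(len(CAPS))
            children.append(child)
        people = parents + children
    return min(people, key=cost)

best = evolve()
print("best allocation:", best, "cost %.2f" % cost(best))
```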

  18. Global optimum path planning for a redundant space robot

    Science.gov (United States)

    Agrawal, Om P.; Xu, Yangsheng

    1991-12-01

    Robotic manipulators will play a significant role in the maintenance and repair of space stations and satellites, and in other future space missions. Robot path planning and control for these applications should be optimum, since any inefficiency in the planning may considerably risk the success of the space mission. This paper presents a global optimum path planning scheme for redundant space robotic manipulators to be used in such missions. In this formulation, a variational approach is used to minimize the objective functional. Two optimum path planning problems are considered: first, given the end-effector trajectory, find the optimum trajectories of the joints; and second, given the terminal conditions of the end-effector, find the optimum trajectories for the end-effector and the joints. It is explicitly assumed that gravity is zero, that the robotic manipulator is mounted on a completely free-flying base (spacecraft), and that the attitude control (reaction wheels or thrust jets) is off. Linear and angular momentum conditions for this system lead to a set of mixed holonomic and nonholonomic constraints. These equations are adjoined to the objective functional using a Lagrange multiplier technique. The formulation leads to a system of Differential and Algebraic Equations (DAEs) and a set of terminal conditions. A numerical scheme is presented for forward integration of the above system of DAEs, and an iterative shooting method is used to satisfy the terminal conditions. This approach is significant since most space robots developed so far are redundant. The kinematic redundancy of space robots offers efficient control and provides the necessary dexterity for extra-vehicular activity and avoidance of potential obstacles in space stations.
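
    The forward-integration-plus-shooting idea can be shown on a one-unknown toy problem: guess the unknown initial condition, integrate the system forward, and iterate until the terminal condition is met. The dynamics below are a trivial stand-in for the paper's DAE system.

```python
# Bare-bones shooting method: root-find on the terminal-condition error.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def integrate(v0):
    # toy dynamics x'' = -0.5 * x; state y = [x, x']
    sol = solve_ivp(lambda t, y: [y[1], -0.5 * y[0]],
                    (0.0, 2.0), [0.0, v0], rtol=1e-8)
    return sol.y[0, -1]                  # x at the terminal time

target = 1.0                             # terminal condition x(T) = 1
miss = lambda v0: integrate(v0) - target
v0_star = brentq(miss, 0.0, 5.0)         # shoot until the terminal error vanishes
print(f"initial rate {v0_star:.4f} gives x(T) = {integrate(v0_star):.4f}")
```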

  19. Is there an optimum level for renewable energy?

    International Nuclear Information System (INIS)

    Moriarty, Patrick; Honnery, Damon

    2011-01-01

    Because continued heavy use of fossil fuel will lead to both global climate change and resource depletion of easily accessible fuels, many researchers advocate a rapid transition to renewable energy (RE) sources. In this paper we examine whether RE can provide anywhere near the levels of primary energy forecast by various official organisations in a business-as-usual world. We find that the energy costs of energy will rise in a non-linear manner as total annual primary RE output increases. In addition, increasing levels of RE will lead to increasing levels of ecosystem maintenance energy costs per unit of primary energy output. The result is that there is an optimum level of primary energy output, in the sense that the sustainable level of energy available to the economy is maximised at that level. We further argue that this optimum occurs at levels well below the energy consumption forecasts for a few decades hence. - Highlights: → We need to shift to renewable energy for climate change and fuel depletion reasons. → We examine whether renewable energy can provide the primary energy levels forecast. → The energy costs of energy rise non-linearly with renewable energy output. → There is thus an optimum level of primary energy output. → This optimum occurs at levels well below future official energy use forecasts.
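
    A toy version of the paper's argument, with invented functional forms: as gross renewable output grows, the energy cost of energy and the ecosystem-maintenance energy rise non-linearly, so the net energy delivered to the economy peaks at an interior optimum.

```python
# Net energy = gross output - energy cost of energy - maintenance energy.
# Functional forms and coefficients are purely illustrative.
import numpy as np

gross = np.linspace(0.0, 1000.0, 2001)                       # gross RE [EJ/yr]
energy_cost = 0.05 * gross * (1.0 + (gross / 300.0) ** 2)    # non-linear rise
maintenance = 0.02 * gross ** 1.5 / 10.0                     # ecosystem upkeep
net = gross - energy_cost - maintenance

i = int(np.argmax(net))
print(f"optimum gross output ~ {gross[i]:.0f} EJ/yr, net {net[i]:.0f} EJ/yr")
```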

  20. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  1. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  2. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan; Knudsen, Peter Voigt

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  3. Optimum x-ray spectra for mammography.

    Science.gov (United States)

    Beaman, S A; Lillicrap, S C

    1982-10-01

    A number of authors have calculated x-ray energies for mammography using, as a criterion, the maximum signal-to-noise ratio (SNR) obtainable per unit dose to the breast, or conversely the minimum exposure for constant SNR. The predicted optimum energy increases with increasing breast thickness. Tungsten anode x-ray spectra have been measured with and without various added filter materials to determine how close the resultant spectra can be brought to the predicted optimum energies without reducing the x-ray output to unacceptable levels. The proportion of the total number of x-rays in a measured spectrum lying within a narrow energy band centred on the predicted optimum has been used as an optimum energy index. The effect of various filter materials on the measured x-ray spectra has been investigated both experimentally and theoretically. The resulting spectra have been compared with the molybdenum anode, molybdenum-filtered x-ray spectra normally used for mammography. It is shown that filters with K-absorption edges close to the predicted optimum energies are the most effective at producing the desired spectral shape. The choices of filter thickness and kVp are also explored in relation to their effect on the resultant x-ray spectral shape and intensity.
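
    The optimum energy index lends itself to a few lines of code: the fraction of all photons in a spectrum falling inside a narrow band centred on the predicted optimum energy. The spectra and the K-edge filter model below are synthetic placeholders, not measured tungsten spectra.

```python
# Compute the "optimum energy index" for two toy spectra and compare.
import numpy as np

def optimum_energy_index(energies, counts, e_opt, half_band=1.0):
    sel = np.abs(energies - e_opt) <= half_band
    return counts[sel].sum() / counts.sum()

e = np.arange(10.0, 40.0, 0.25)                     # photon energy [keV]
unfiltered = np.exp(-0.5 * ((e - 22.0) / 6.0) ** 2)
# a K-edge filter suppresses photons above its absorption edge (toy model)
filtered = unfiltered * np.where(e > 20.0, 0.2, 1.0)

for name, spec in [("unfiltered", unfiltered), ("K-edge filtered", filtered)]:
    print(name, "index = %.3f" % optimum_energy_index(e, spec, e_opt=19.0))
```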

  4. Optimum burnup of BAEC TRIGA research reactor

    International Nuclear Information System (INIS)

    Lyric, Zoairia Idris; Mahmood, Mohammad Sayem; Motalab, Mohammad Abdul; Khan, Jahirul Haque

    2013-01-01

    Highlights: ► Optimum loading scheme for BAEC TRIGA core is out-to-in loading with 10 fuels/cycle starting with 5 for the first reload. ► The discharge burnup ranges from 17% to 24% of U235 per fuel element for full power (3 MW) operation. ► Optimum extension of operating core life is 100 MWD per reload cycle. - Abstract: The TRIGA Mark II research reactor of BAEC (Bangladesh Atomic Energy Commission) has been operating since 1986 without any reshuffling or reloading yet. An optimum fuel burnup strategy has been investigated for the present BAEC TRIGA core, in which three out-to-in loading schemes have been inspected in terms of core life extension, burnup economy and safety. In considering different schemes of fuel loading, optimization was explored by varying only the number of fuel elements discharged and loaded. A cost function has been defined and evaluated based on the calculated core life and fuel load and discharge. The optimum loading scheme has been identified for the TRIGA core: outside-to-inside fuel loading with ten fuels for each cycle, starting with five fuels for the first reload. The discharge burnup has been found to range from 17% to 24% of U235 per fuel element, and the optimum extension of core operating life is 100 MWD for each loading cycle. This study will contribute to the in-core fuel management of the TRIGA reactor

  5. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performance. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and helps identify the improvements that the final accelerator could incorporate. Because of the R and D nature of the LIPAc, RAMI improvements could be the major difference between the prototype and the IFMIF accelerator design.
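
    A minimal steady-state sketch of the kind of bookkeeping such analyses rest on: each system is modelled by an MTBF/MTTR pair, and the availabilities of systems in series multiply. The systems and numbers below are invented placeholders, not IFMIF RAMI data.

```python
# Steady-state availability A = MTBF / (MTBF + MTTR); series systems multiply.
systems = {                      # name: (MTBF [h], MTTR [h]) -- illustrative
    "injector":  (500.0,  4.0),
    "RFQ":       (800.0,  8.0),
    "SRF linac": (300.0, 12.0),
    "HEBT":      (2000.0, 6.0),
}

total = 1.0
for name, (mtbf, mttr) in systems.items():
    a = mtbf / (mtbf + mttr)
    total *= a
    print(f"{name:10s} A = {a:.4f}")
print(f"series availability = {total:.4f} (compare with the requirement)")
```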

  6. Method for Determining Optimum Injector Inlet Geometry

    Science.gov (United States)

    Trinh, Huu P. (Inventor); Myers, W. Neill (Inventor)

    2015-01-01

    A method for determining the optimum inlet geometry of a liquid rocket engine swirl injector includes obtaining a throttleable level phase value, volume flow rate, chamber pressure, liquid propellant density, inlet injector pressure, desired target spray angle and desired target optimum delta pressure value between an inlet and a chamber for a plurality of engine stages. The method calculates the tangential inlet area for each throttleable stage. The method also uses correlation between the tangential inlet areas and delta pressure values to calculate the spring displacement and variable inlet geometry of a liquid rocket engine swirl injector.

  7. Optimum design of B-series marine propellers

    Directory of Open Access Journals (Sweden)

    M.M. Gaafary

    2011-03-01

    Full Text Available The choice of an optimum marine propeller is one of the most important problems in naval architecture. This problem can be handled using the propeller series diagrams or regression polynomials. This paper introduces a procedure to find the optimum characteristics of B-series marine propellers. The propeller design process is performed as a single-objective optimization subject to constraints imposed by cavitation, material strength and required propeller thrust. Although commercial optimization software can be adopted to solve the problem, the computer program specially developed for this task may be more useful for its flexibility and its ability to be incorporated, as a subroutine, into the complex ship design process.
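
    The structure of the problem (a single objective maximized subject to thrust and cavitation constraints) can be sketched with a generic constrained optimizer. The response functions below are toy surrogates, not the B-series regression polynomials.

```python
# Constrained propeller-style optimization with invented surrogate models.
import numpy as np
from scipy.optimize import minimize

def efficiency(x):
    d, p_over_d, bar = x                 # diameter, pitch ratio, blade-area ratio
    return 0.7 - 0.2 * (p_over_d - 1.0) ** 2 - 0.1 * (bar - 0.6) ** 2

def thrust(x):
    d, p_over_d, bar = x
    return 50.0 * d ** 2 * p_over_d * bar            # toy thrust model [kN]

res = minimize(
    lambda x: -efficiency(x),                        # maximize efficiency
    x0=np.array([4.0, 1.0, 0.6]),
    bounds=[(3.0, 6.0), (0.6, 1.4), (0.35, 1.05)],
    constraints=[
        {"type": "ineq", "fun": lambda x: thrust(x) - 700.0},  # required thrust
        {"type": "ineq", "fun": lambda x: 0.9 - x[2]},         # cavitation proxy
    ],
    method="SLSQP",
)
print(res.x, "efficiency %.3f" % -res.fun)
```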

  8. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.

  9. Determination of the Optimum Thickness of Approximately ...

    African Journals Online (AJOL)

    In an attempt to conserve the world's scarce energy and material resources, a balance between the cost of heating a material and the optimum thickness of the material becomes very essential. One of such materials is the local cast aluminium pot commonly used as cooking ware in Nigeria. This paper therefore sets up a ...

  10. Lighting bromeliads: the optimum differs per species

    NARCIS (Netherlands)

    Garcia Victoria, N.; Warmenhoven, M.G.

    2009-01-01

    Matching the amount of assimilation lighting and fertilization in bromeliad cultivation is skilled work. Extra fertilizer and light are better, but there is an optimum; a plant can also receive too much fertilizer and light. Follow-up research is needed to determine an adjusted cultivation recipe.

  11. Common Core: Teaching Optimum Topic Exploration (TOTE)

    Science.gov (United States)

    Karge, Belinda Dunnick; Moore, Roxane Kushner

    2015-01-01

    The Common Core has become a household term and yet many educators do not understand what it means. This article explains the historical perspectives of the Common Core and gives guidance to teachers in application of Teaching Optimum Topic Exploration (TOTE) necessary for full implementation of the Common Core State Standards. An effective…

  12. Genotype x environment interaction and optimum resource ...

    African Journals Online (AJOL)

    ... x E) interaction and to determine the optimum resource allocation for cassava yield trials. The effects of environment, genotype and G x E interaction were highly significant for all yield traits. Variations due to G x E interaction were greater than those due to genotypic differences for all yield traits. Genotype x location x year ...

  13. WHAT IS THE OPTIMUM SIZE OF GOVERNMENT: A SUGGESTION

    Directory of Open Access Journals (Sweden)

    Aykut Ekinci

    2011-01-01

    Full Text Available What is the optimum size of government? When the rule of law and the establishment of private property rights are taken into consideration, it is clear that the answer will not be 0%. On the other hand, when the experience of the old Soviet Union, East Germany and North Korea is considered, the answer will not be 100% either. Therefore, the extreme points cannot be the right answer. This study proposes using the normal distribution to answer this question. The study has revealed the following findings: (i) The total amount of public expenditure as % of GDP (a) is at its minimum level at 4.55%, (b) is at its optimum level at 13.4%, and (c) is at its maximum level at 31.7%. (ii) Thus, as a fiscal rule, countries should (a) keep total public expenditure as % of GDP ≤ 31.7% and (b) target 13.4%. (iii) The three-dimensional (3D) normal distribution demonstrates that a healthy market system can be built upon a healthy government system. (iv) This approach rejects Wagner's law. In a healthy growing economy, optimum government size can be kept at 13.4%. (v) The UK, the USA and the European countries have been in the Keynesian-Marxist area, which reduces their average growth.

  14. An optimum analysis sequence for environmental gamma-ray spectrometry

    International Nuclear Information System (INIS)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L.

    2010-10-01

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectroscopy by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, with or without the use of a library -based on evaluated nuclear data- of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma-ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectroscopy system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. In particular, the combination of second-derivative peak locate with simple peak area integration algorithms provides the greatest accuracy. Lower accuracy comes from the combination of the library-directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)
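
    For reference, this is how the two acceptance criteria named above are typically computed when comparing measured peak areas against reference values; the data below are illustrative, not the IAEA-2002 values.

```python
# z-scores and reduced chi-square for a peak-area intercomparison.
import numpy as np

ref_area = np.array([1000.0, 5200.0, 750.0])        # reference peak areas
ref_unc = np.array([30.0, 80.0, 25.0])
measured = np.array([1040.0, 5150.0, 735.0])
meas_unc = np.array([35.0, 90.0, 28.0])

z = (measured - ref_area) / np.sqrt(ref_unc**2 + meas_unc**2)
chi2_red = np.sum(z**2) / z.size                    # reduced chi-square

print("z-scores:", np.round(z, 2))                  # |z| <= 2 usually accepted
print("reduced chi2: %.2f" % chi2_red)              # ~1 indicates consistency
```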

  15. An optimum analysis sequence for environmental gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L., E-mail: fta777@hotmail.co [Universidad Autonoma de Zacatecas, Centro Regional de Estudis Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico)

    2010-10-15

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectroscopy by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak area percentages and different algorithms for: 1) peak finding, and 2) peak area determination, with or without the use of a library -based on evaluated nuclear data- of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma-ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectroscopy system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison. To test the performance of the analysis sequences, the IAEA-2002 reference spectrum was used. The z-score and the reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak area determinations and their corresponding uncertainties. In particular, the combination of second-derivative peak locate with simple peak area integration algorithms provides the greatest accuracy. Lower accuracy comes from the combination of the library-directed peak locate algorithm and Genie's Gamma-M peak area determination. (Author)

  16. Optimum Waveforms for Differential Ion Mobility Spectrometry (FAIMS)

    Science.gov (United States)

    Shvartsburg, Alexandre A.; Smith, Richard D.

    2009-01-01

    Differential mobility spectrometry or field asymmetric waveform ion mobility spectrometry (FAIMS) is a new tool for separation and identification of gas-phase ions, particularly in conjunction with mass spectrometry. In FAIMS, ions are filtered by the difference between their mobilities in gases (K) at high and low electric field intensity (E) using asymmetric waveforms. The infinite number of possible waveform profiles makes maximizing the performance within engineering constraints a major issue for FAIMS technology refinement. Earlier optimizations assumed the non-constant component of mobility to scale as E², producing the same result for all ions. Here we show that the optimum profiles are defined by the full series expansion of K(E), which includes terms beyond the first term proportional to E². For many ion/gas pairs, the first two terms have different signs, and the optimum profiles at sufficiently high E in FAIMS may differ substantially from those previously reported, improving the resolving power by up to 2.2 times. This situation arises for some ions in all FAIMS systems, but becomes more common in recent miniaturized devices that employ higher E. With realistic K(E) dependences, the maximum waveform amplitude is not necessarily optimum, and reducing it by up to about 20-30% is beneficial in some cases. The present findings are particularly relevant to targeted analyses where separation depends on the difference between the K(E) functions for specific ions. PMID:18585054
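
    A hedged sketch of the waveform-profile search, restricted to the leading K ~ E² term: at that order the separation merit is governed by the time-average of f(t)³ for a normalized waveform shape f(t) with zero mean and |f| ≤ 1. The higher-order terms that the abstract shows to matter are deliberately not modelled here, and the two-harmonic family scanned below is just one common parameterization.

```python
# Scan a two-harmonic waveform family for the shape maximizing the
# leading-order merit mean(f^3), with |f| <= 1 enforced by normalization.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

best = (None, -1.0)
for r in np.linspace(0.1, 1.0, 46):                 # 2nd-harmonic amplitude ratio
    for phi in np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False):
        f = np.sin(theta) + r * np.sin(2.0 * theta + phi)
        f /= np.abs(f).max()                        # enforce |f| <= 1
        m3 = np.mean(f ** 3)                        # mean(f) is zero by construction
        if m3 > best[1]:
            best = ((r, phi), m3)
(r, phi), m3 = best
print(f"best r = {r:.2f}, phi = {phi:.2f} rad, mean(f^3) = {m3:.3f}")
```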

  17. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  18. Optimum refuelling strategy in light water reactors

    International Nuclear Information System (INIS)

    Hermansky, B.

    1977-01-01

    Flow sheets are presented for refuelling schedules aimed at obtaining deep average fuel burnup while levelling the power output along the reactor radius in large PWRs. Zone refuelling is described, in which only one-third of the fuel elements are replaced; the fresh elements are placed in the outer zone of the core. Also described is distributed refuelling, in which fuel elements with different burnups are evenly spaced. A modified refuelling schedule is shown involving replacement from the outside to the inside, by which a uniform radial distribution of thermal output is achieved. Calculation methods for determining the optimum refuelling strategy are presented. Dynamic programming is one of the promising computer methods, and its general algorithm is indicated. A survey is made of some studies on the optimum refuelling strategy in pressurized water reactors. (J.B.)

  19. Optimum Operational Parameters for Yawed Wind Turbines

    Directory of Open Access Journals (Sweden)

    David A. Peters

    2011-01-01

    Full Text Available A set of systematic optimum operational parameters for wind turbines under various wind directions is derived by using combined momentum-energy and blade-element-energy concepts. The derivations are solved numerically by fixing some parameters at practical values. The interactions between the produced power and its influential factors are then presented in figures. It is shown that the maximum power produced is strongly affected by the wind direction, the tip speed, the pitch angle of the rotor, and the drag coefficient, as specifically indicated in the figures. It also turns out that the maximum power can occur at two different optimum tip speeds in some cases. The equations derived herein can also be used in the modeling of tethered wind turbines, which can stay aloft and deliver energy.

  20. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  1. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  2. LWH and ACH Helmet Hardware Study

    Science.gov (United States)

    2015-11-30

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/6355--15-9642: LWH & ACH Helmet Hardware Study, November 30, 2015. Ronald L. Holtz, Peter... [standard report documentation page fields omitted] ...screws and nuts used with the Light Weight Helmet (LWH) and Advanced Combat Helmet (ACH). The testing included basic dimensional measurements, Rockwell...

  3. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process by which image data is received at LLNL, processed, and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  4. Probabilistic studies for safety at optimum cost

    International Nuclear Information System (INIS)

    Pitner, P.

    1999-01-01

    By definition, the risk of failure of very reliable components is difficult to evaluate. How can the best strategies for in-service inspection and maintenance be defined to limit this risk to an acceptable level at optimum cost? It is not sufficient to design structures with margins; it is also essential to understand how they age. The probabilistic approach has made it possible to develop well-proven concepts. (author)

  5. Techniques for evaluating optimum data center operation

    Science.gov (United States)

    Hamann, Hendrik F.; Rodriguez, Sergio Adolfo Bermudez; Wehle, Hans-Dieter

    2017-06-14

    Techniques for modeling a data center are provided. In one aspect, a method for determining data center efficiency is provided. The method includes the following steps. Target parameters for the data center are obtained. Technology pre-requisite parameters for the data center are obtained. An optimum data center efficiency is determined given the target parameters for the data center and the technology pre-requisite parameters for the data center.

  6. Optimum Filters and Pulsed Signal Storage Devices,

    Science.gov (United States)

    1982-05-05

    ...but having any value of initial phase, i.e., invariant with respect to the initial phase. The property of invariance of an optimum filter is very... [table of delay-line material properties for magnesium alloys, steel, and aluminum not reproduced] To reduce overall dimensions, solid delay lines...

  7. Developed Hybrid Model for Propylene Polymerisation at Optimum Reaction Conditions

    Directory of Open Access Journals (Sweden)

    Mohammad Jakir Hossain Khan

    2016-02-01

    Full Text Available A statistical model combined with a CFD (computational fluid dynamics) method was used to explain in detail the influence of the process parameters, and a series of experiments was carried out for propylene polymerisation in a fluidised bed catalytic reactor by varying the feed gas composition, reaction initiation temperature, and system pressure. The propylene polymerisation rate per pass was considered the response of the analysis. Response surface methodology (RSM), with a full factorial central composite experimental design, was applied to develop the model. In this study, analysis of variance (ANOVA) indicated an acceptable value for the coefficient of determination and a suitable estimation of a second-order regression model. For better justification, results were also described through a three-dimensional (3D) response surface and a related two-dimensional (2D) contour plot. These 3D and 2D response analyses provided significant and easy-to-understand findings on the effect of all the considered process variables on the expected outcome. To diagnose the model adequacy, the mathematical relationship between the process variables and the extent of polymer conversion was established through the combination of CFD with statistical tools. All the tests showed that the model fits the experimental data well. The maximum extent of polymer conversion per pass was 5.98% at the set time period and with consistent catalyst and co-catalyst feed rates. The optimum conditions for maximum polymerisation were found at a reaction temperature (RT) of 75 °C, a system pressure (SP) of 25 bar, and 75% monomer concentration (MC); the hydrogen percentage was kept fixed at all times. The coefficient of correlation for reaction temperature, system pressure, and monomer concentration ratio was found to be 0.932. Thus, the experimental results and model-predicted values were a reliable fit at the optimum process conditions. Detailed and adaptable CFD results were capable...
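
    The RSM step can be sketched in a few lines: fit a second-order regression model to experimental runs in (temperature, pressure, monomer fraction) and locate the factor settings that maximize predicted conversion. The "measurements" below are synthetic stand-ins, not the paper's data.

```python
# Second-order response-surface fit by least squares, then a grid search
# over the fitted surface for the predicted optimum.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform([60, 15, 50], [90, 35, 90], size=(20, 3))   # RT, SP, MC runs
true = lambda x: 6.0 - 0.002*(x[:, 0]-75)**2 - 0.004*(x[:, 1]-25)**2 \
                     - 0.001*(x[:, 2]-75)**2
y = true(X) + rng.normal(0.0, 0.05, 20)                     # conversion [%]

def design(x):
    rt, sp, mc = x[:, 0], x[:, 1], x[:, 2]
    return np.column_stack([np.ones(len(x)), rt, sp, mc,
                            rt*sp, rt*mc, sp*mc, rt**2, sp**2, mc**2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)        # 2nd-order fit

grid = np.array(np.meshgrid(np.linspace(60, 90, 31),
                            np.linspace(15, 35, 21),
                            np.linspace(50, 90, 41))).reshape(3, -1).T
pred = design(grid) @ beta
print("predicted optimum (RT, SP, MC):", grid[np.argmax(pred)])
```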

  8. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  9. Thermal Comfort and Optimum Humidity Part 2

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    Full Text Available The hydrothermal microclimate is the main component of indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum, or at least an acceptable, globe temperature; an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature); and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes, which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  10. Thermal Comfort and Optimum Humidity Part 1

    Directory of Open Access Journals (Sweden)

    M. V. Jokl

    2002-01-01

    Full Text Available The hydrothermal microclimate is the main component of indoor comfort. The optimum hydrothermal level can be ensured by suitable changes in the sources of heat and water vapor within the building, changes in the environment (the interior of the building), and changes in the people exposed to the conditions inside the building. A change in the heat source and the source of water vapor involves improving the heat-insulating properties and the air permeability of the peripheral walls and especially of the windows. The change in the environment will bring human bodies into balance with the environment. This can be expressed in terms of an optimum, or at least an acceptable, globe temperature; an adequate proportion of radiant heat within the total amount of heat from the environment (defined by the difference between air and wall temperature); and uniform cooling of the human body by the environment, defined (a) by the acceptable temperature difference between head and ankles, and (b) by acceptable temperature variations during a shift (location unchanged) or during movement from one location to another without a change of clothing. Finally, a moisture balance between man and the environment is necessary (defined by acceptable relative air humidity). A change for human beings means a change of clothes, which, of course, is limited by social acceptance in summer and by inconvenient heaviness in winter. The principles of optimum heating and cooling, humidification and dehumidification are presented in this paper. Hydrothermal comfort in an environment depends on heat and humidity flows (heat and water vapors) occurring in a given space in a building interior and affecting the total state of the human organism.

  11. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  12. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  13. Hardware Accelerators for Elliptic Curve Cryptography

    Directory of Open Access Journals (Sweden)

    C. Puttmann

    2008-05-01

    Full Text Available In this paper we explore different hardware accelerators for cryptography based on elliptic curves. Furthermore, we present a hierarchical multiprocessor system-on-chip (MPSoC) platform that can be used for fast integration and evaluation of novel hardware accelerators. With respect to two application scenarios, the hardware accelerators are coupled at different hierarchy levels of the MPSoC platform. The whole system is implemented in a state-of-the-art 65-nm standard-cell technology. Moreover, an FPGA-based rapid prototyping system for fast system verification is presented. Finally, a metric to analyze the resource efficiency by means of chip area, execution time and energy consumption is introduced.
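
    As a software reference point for the core operation such accelerators speed up, here is scalar multiplication on a short Weierstrass curve via double-and-add; the tiny curve over F_97 is for illustration only and has no cryptographic strength.

```python
# Elliptic-curve scalar multiplication (double-and-add) on a toy curve.
P_MOD, A, B = 97, 2, 3                   # y^2 = x^3 + 2x + 3 over F_97

def ec_add(p, q):
    if p is None: return q               # None represents the point at infinity
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # P + (-P) = infinity
    if p == q:                           # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    acc = None                           # double-and-add, LSB first
    while k:
        if k & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        k >>= 1
    return acc

g = (3, 6)                               # on the curve: 6^2 = 27 + 6 + 3 mod 97
print(scalar_mult(20, g))
```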

  14. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  15. Optimum body size of Holstein replacement heifers.

    Science.gov (United States)

    Hoffman, P C

    1997-03-01

    Criteria that define optimum body size of replacement heifers are required by commercial dairy producers to evaluate replacement heifer management programs. Historically recommended body size criteria have been based on live BW measurements. Numerous research studies have observed a positive relationship between BW at first calving and first lactation milk yield, which has served as the impetus for using live BW to define body size of replacement heifers. Live BW is, however, not the only available measurement to define body size. Skeletal measurements such as wither height, length, and pelvic area have been demonstrated to be related to first lactation performance and (or) dystocia. Live BW measurements also do not define differences in body composition. Differences in body composition of replacement heifers at first calving are also related to key performance variables. An updated research data base is available for the modern Holstein genotype to incorporate measures of skeletal growth and body composition with BW when defining body size. These research projects also lend insight into the relative importance of measurements that define body size of replacement heifers. Incorporation of these measurements from current research into present BW recommendations should aid commercial dairy producers to better define replacement heifer growth and management practices. This article proposes enhancements in defining optimum body size and growth characteristics of Holstein replacement heifers.

  16. REGARDING "TRAGIC ECONOMIC OPTIMUM" FROM HOLISTIC+ PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Constantin Popescu

    2010-12-01

    Full Text Available This communication aims to discuss the new scientific vision of "the integrated whole", following the recent achievements of quantum physics, psychology and biology. From this perspective, the economy is seen as a living organism, part of the social organism, together with the wider ecology. The optimum of the economy as a living organism is based on dynamic compatibilities with all common living requirements. The evolution of economic life is organically linked to the unavoidable circumstances contained in V. Frankl's tragic triad of pain, guilt and death. In interaction with the holistic triad circumscribed by limitations, uncertainties and open interdependencies, the tragic economic optimum (TEO) is formed. It can be understood as that state of economic life in which freedom of choice of scarce resources under uncertainty finds, in the compatibility of rationality and hope, the development criterion of MEANING. TEO means saying YES to economic life even in conditions of resource limitations, bankruptcies and unemployment, negative externalities, stress, etc., through a respiritualization of responsibility using scientific knowledge. TEO involves multicriteria modeling of economic life by integrating human, community, environmental, spiritual and business development demands in the assessment, predicting human GDP as a variable wave aggregate.

  17. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high-resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  18. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault-tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  19. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    ...in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
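
    For comparison with the hardware design, here is a plain software reference for global alignment with traceback (Needleman-Wunsch). Note that it stores the full dynamic-programming matrix, which is exactly the memory cost the space-efficient architecture above avoids.

```python
# Global sequence alignment with traceback (reference implementation).
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): score[i][0] = i * gap
    for j in range(1, m + 1): score[0][j] = j * gap
    for i in range(1, n + 1):            # forward scan fills the DP matrix
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # traceback from the bottom-right corner recovers one optimal alignment
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + \
                (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

print(*needleman_wunsch("GATTACA", "GCATGCU"), sep="\n")
```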

  20. Universal Curve of Optimum Thermoelectric Figures of Merit for Bulk and Low-Dimensional Semiconductors

    Science.gov (United States)

    Hung, Nguyen T.; Nugraha, Ahmad R. T.; Saito, Riichiro

    2018-02-01

    This paper is a contribution to the Physical Review Applied collection in memory of Mildred S. Dresselhaus. Analytical formulas for thermoelectric figures of merit and power factors are derived based on the one-band model. We find that there is a direct relationship between the optimum figures of merit and the optimum power factors of semiconductors, despite the fact that the two quantities are generally given by different values of the chemical potential. By introducing a dimensionless parameter consisting of the optimum power factor and the lattice thermal conductivity (without the electronic thermal conductivity), it is possible to unify the optimum figures of merit of both bulk and low-dimensional semiconductors into a single universal curve that covers many materials with different dimensionalities.
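
    A numerical sketch of the one-band picture, using textbook 3D parabolic-band, constant-relaxation-time expressions: the dimensionless power factor, proportional to F_1/2(eta) times the squared reduced Seebeck coefficient, is scanned over the reduced chemical potential eta. Prefactors are omitted, and this scattering model is an assumption for illustration, not necessarily the one used in the paper.

```python
# One-band model: scan the reduced chemical potential for the power-factor
# optimum using numerically evaluated Fermi-Dirac integrals.
import numpy as np

x = np.linspace(0.0, 60.0, 20001)              # reduced carrier energy grid

def fermi_integral(j, eta):
    return np.trapz(x**j / (1.0 + np.exp(x - eta)), x)

etas = np.linspace(-4.0, 6.0, 201)
pf = []
for eta in etas:
    f12, f32 = fermi_integral(0.5, eta), fermi_integral(1.5, eta)
    seebeck = 5.0 * f32 / (3.0 * f12) - eta    # in units of kB/e
    pf.append(f12 * seebeck**2)                # dimensionless power factor

i = int(np.argmax(pf))
print(f"optimum reduced chemical potential eta ~ {etas[i]:.2f}")
```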

  1. Optimum motion track planning for avoiding obstacles

    International Nuclear Information System (INIS)

    Attia, A.A.A

    2008-01-01

    A genetic algorithm (GA) is a stochastic search and optimization technique based on the mechanism of natural selection. A population of candidate solutions (chromosomes) is held and interacts over a number of iterations (generations) to produce better solutions. In the canonical GA, the chromosomes are encoded as binary strings. Driving the process is the fitness of the chromosomes, which expresses the quality of a candidate in quantitative terms; the fitness function encapsulates the problem-specific knowledge. The fitness is used in a stochastic selection of pairs of chromosomes, which are 'reproduced' to generate new solution strings. Reproduction involves crossover, which generates new children by combining chromosomes in a process that swaps portions of each other's genes. The other reproduction operator is called mutation. Mutation randomly changes genes and is used to introduce new information into the search. Both crossover and mutation make heavy use of random numbers. The aim of this thesis is to investigate the hardware implementation of GA-based motion path planning for a robot. The potential benefit of using GA hardware is that it allows the massive parallelism suited to random number generation, crossover, mutation and fitness evaluation. For many real-world applications, a GA can run for days, even when executed on a high-performance workstation. Because of the extensive computation required by GAs, hardware-based GAs have been put forward. Several aspects of the GA approach make it attractive for hardware implementation: the operations of selection and reproduction are basically problem-independent and involve basic string manipulation tasks, which can be realised with logic circuits. The fitness evaluation task, which is problem-dependent, however, proves a major difficulty for hardware implementation. Another difficulty is that designs can only be used for the individual problem that their fitness function represents. Therefore, in this...
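
    A canonical-GA sketch in the spirit of the summary above: candidate robot paths are encoded as fixed-length strings of moves, and fitness combines distance-to-goal with an obstacle-collision penalty. The grid world and all GA settings below are invented for the example.

```python
# GA path planning on a toy grid with obstacles (lower fitness is better).
import random
random.seed(7)

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]            # N, S, E, W
OBSTACLES = {(2, 2), (2, 3), (3, 2), (4, 4)}
START, GOAL, LENGTH = (0, 0), (5, 5), 14

def fitness(genes):
    x, y = START
    hits = 0
    for g in genes:                                   # walk the encoded path
        dx, dy = MOVES[g]
        x, y = x + dx, y + dy
        if (x, y) in OBSTACLES:
            hits += 1
    return abs(x - GOAL[0]) + abs(y - GOAL[1]) + 10 * hits

pop = [[random.randrange(4) for _ in range(LENGTH)] for _ in range(60)]
for _ in range(150):
    pop.sort(key=fitness)
    kids = []
    while len(kids) < 30:
        a, b = random.sample(pop[:30], 2)             # select from best half
        cut = random.randrange(1, LENGTH)             # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                     # mutation
            child[random.randrange(LENGTH)] = random.randrange(4)
        kids.append(child)
    pop = pop[:30] + kids
print("best fitness:", fitness(pop[0]))
```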

  2. Design optimum frac jobs using virtual intelligence techniques

    Science.gov (United States)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress and thickness profiles of the pay layers as well as the overburden layers. Other essential information required for the design process includes fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration and frac job schedule. Some of the parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volumes, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will achieve the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These...
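
    The workflow can be sketched as follows: a fast surrogate stands in for the trained neural-network ensemble, and a search loop looks for the design whose predicted propped frac length matches the engineer's target. For brevity, a plain random search replaces the paper's GA here, and the surrogate function is invented.

```python
# Surrogate-assisted design search over (fluid volume, proppant mass, rate).
import random
random.seed(11)

def surrogate(fluid_m3, prop_t, rate_m3min):
    # pretend this is the trained network ensemble mimicking the simulator
    return 40.0 + 0.12 * fluid_m3 + 0.8 * prop_t - 2.0 * (rate_m3min - 5.0)**2

TARGET = 180.0                                        # desired frac half-length [m]
best = None
for _ in range(20000):
    x = (random.uniform(100, 1000),                   # fluid volume
         random.uniform(20, 200),                     # proppant mass
         random.uniform(2, 8))                        # injection rate
    err = abs(surrogate(*x) - TARGET)
    if best is None or err < best[0]:
        best = (err, x)
print("design: fluid %.0f m3, proppant %.0f t, rate %.1f m3/min (err %.2f m)"
      % (best[1] + (best[0],)))
```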

  3. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration

  4. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  5. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  6. Femoral neck fracture following hardware removal.

    Science.gov (United States)

    Shaer, James A; Hileman, Barbara M; Newcomer, Jill E; Hanes, Marina C

    2012-01-16

    It is uncommon for femoral neck fractures to occur after proximal femoral hardware removal because age, osteoporosis, and technical error are often noted as the causes for this type of fracture. However, excessive alcohol consumption and failure to comply with protected weight bearing for 6 weeks increase the risk of femoral neck fractures. This article describes a case of a 57-year-old man with a high-energy ipsilateral intertrochanteric hip fracture, comminuted distal third femoral shaft fracture, and displaced lateral tibial plateau fracture. Cephalomedullary fixation was used to fix the ipsilateral femur fractures after medical stabilization and evaluation of the patient. The patient healed clinically and radiographically at 6 months. Despite conservative treatment for painful proximal hardware, elective hip screw removal was performed 22.5 months after injury. Seven weeks later, he sustained a nontraumatic femoral neck fracture. In this case, it is unlikely that the femoral neck fracture occurred as a result of hardware removal. We assumed that, in addition to the patient's alcohol abuse and tobacco use, stress fractures may have contributed to the femoral neck fracture. We recommend using a shorter hip screw to minimize hardware prominence or possibly off-label use of an injectable bone filler, such as calcium phosphate cement. Copyright 2012, SLACK Incorporated.

  7. QCE : A Simulator for Quantum Computer Hardware

    NARCIS (Netherlands)

    Michielsen, Kristel; Raedt, Hans De

    2003-01-01

    The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms.

  8. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second-stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular modularity, processor communication, and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  9. Microprocessor Design Using Hardware Description Language

    Science.gov (United States)

    Mita, Rosario; Palumbo, Gaetano

    2008-01-01

    The following paper has been conceived to deal with the contents of some lectures aimed at enhancing courses on digital electronics, microelectronics, or VLSI systems. Those lectures show how to use a hardware description language (HDL), such as VHDL, to specify, design, and verify a custom microprocessor. The general goal of this work is to teach…

  10. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  11. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  12. Hardware Acceleration of Sparse Cognitive Algorithms

    Science.gov (United States)

    2016-05-01

    …is clear that these emerging algorithms, which can support unsupervised or lightly supervised learning as well as incremental learning, map poorly… Subject terms: cortical algorithms; machine learning; hardware; VLSI; ASIC.

  13. Optimum design of a nuclear heat supply

    International Nuclear Information System (INIS)

    Borel, J.P.

    1984-01-01

    This paper presents an economic analysis for the optimum design of a nuclear heat supply to a given district-heating network. First, a general description of the system is given, which includes a nuclear power plant, a heating power plant and a district-heating network. The heating power plant is fed with steam from the nuclear power plant. It is assumed that the heating network is already in operation and that the nuclear power plant was previously designed to supply electricity. Second, a technical definition of the heat production and transportation installations is given. The optimal power of these installations is examined. The main result is a relationship between the network capacity and the level of the nuclear heat supply as a substitute for oil under the best economic conditions. The analysis also presents information for choosing the best operating mode. Finally, the heating power plant is studied in more detail from the energy, technical and economic aspects. (author)

  14. Optimum Fuzzy Design of Ecological Pressurised Containers

    Directory of Open Access Journals (Sweden)

    Heikki Martikka

    2011-01-01

    Full Text Available In this study, the basic engineering principles, goals, and constraints are combined into a fuzzy methodology and applied to the design of optimally pressurised containers, emphasising the ecological and durability merits of various materials. The present fuzzy heuristics approach is derived by generalising the conventional analytical optimisation method to fuzzy multitechnical tasks. In the present approach, the goals and constraints of the end-user are identified first. Then decision variables are expressed as functions of the design variables. Their desirable ranges and biases are defined using the same fuzzy satisfaction function form. The optimal result has the highest total satisfaction. The designs are then checked and fine-tuned by the finite element method (FEM). The optimal solution was the ecoplastic vessel, with aluminium a close second. The method reveals that the optimum depends strongly on the preset goals and values of the producer, society, and end-user.
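
    To make the satisfaction-function idea concrete, here is a minimal sketch (my illustration, not the authors' formulation); the bell-shaped form, targets, tolerances, and bias weights are assumptions:

```python
import math

def satisfaction(x, target, tol, bias=1.0):
    """Score a decision variable on [0, 1].

    Hypothetical bell-shaped satisfaction form: 1 at `target`,
    decaying with distance scaled by `tol`; `bias` weights how
    sharply a stakeholder penalizes deviations.
    """
    return math.exp(-bias * ((x - target) / tol) ** 2)

def total_satisfaction(scores):
    """Aggregate with a product (a common fuzzy 'and'); the candidate
    design with the highest total satisfaction wins."""
    total = 1.0
    for s in scores:
        total *= s
    return total

# Hypothetical vessel candidate: wall thickness (mm) and an eco-score
scores = [satisfaction(4.2, target=4.0, tol=1.0),
          satisfaction(0.8, target=1.0, tol=0.5, bias=2.0)]
print(total_satisfaction(scores))
```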

  15. Choosing an optimum sand control method

    Directory of Open Access Journals (Sweden)

    Ehsan Khamehchi

    2015-06-01

    Full Text Available Formation sand control is always one of the main concerns of production engineers. There are several methods to prevent sand production. Choosing a method for preventing formation sand production depends on different reservoir parameters and on political and economic conditions. Sometimes, political and economic conditions are more decisive in choosing an optimum method than reservoir parameters. Often, investigating political and economic conditions simultaneously with reservoir parameters yields results different from what is expected. So, choosing the best sand control method is the result of a thorough study. Global oil price, the duration of the sand control project, and the costs of the necessary equipment for each method (as economic and political conditions), together with the well productivity index (as a reservoir parameter), are the main parameters studied in this paper.

  16. Impacts of optimum cost effective energy efficiency standards

    International Nuclear Information System (INIS)

    Brancic, A.B.; Peters, J.S.; Arch, M.

    1991-01-01

    Building codes are increasingly required to be responsive to social and economic policy concerns. In 1990 the State of Connecticut passed An Act Concerning Global Warming, Public Act 90-219, which mandates the revision of the state building code to require that buildings and building elements be designed to provide optimum cost-effective energy efficiency over the useful life of the building. Further, such revision must meet the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) Standard 90.1-1989. As the largest electric energy supplier in Connecticut, Northeast Utilities (NU) sponsored a pilot study of the cost effectiveness of alternative building code standards for commercial construction. This paper reports on this study, which analyzed design and construction means, building elements, incremental construction costs, and energy savings to determine the optimum cost-effective building code standard. The findings are that ASHRAE 90.1 results in 21% energy savings and that alternative standards above it result in significant additional savings. Benefit/cost analysis showed that both are cost effective.

  17. Optimum Water Quality Monitoring Network Design for Bidirectional River Systems

    Directory of Open Access Journals (Sweden)

    Xiaohui Zhu

    2018-01-01

    Full Text Available Affected by regular tides, bidirectional water flows play a crucial role in surface river systems. Using optimization theory to design a water quality monitoring network can reduce redundant monitoring nodes as well as save the costs of building and running a monitoring network. A novel algorithm is proposed to design an optimum water quality monitoring network for tidal rivers with bidirectional water flows. Two optimization objectives, minimum pollution detection time and maximum pollution detection probability, are used in our optimization algorithm. We modify the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm and develop new fitness functions to calculate pollution detection time and pollution detection probability in a discrete manner. In addition, the Storm Water Management Model (SWMM) is used to simulate hydraulic characteristics and pollution events based on a hypothetical river system studied in the literature. Experimental results show that our algorithm can obtain a better Pareto frontier. The influence of bidirectional water flows on the network design is also identified, which has not been studied in the literature. We also find that the probability of bidirectional water flows has no effect on the optimum monitoring network design but slightly changes the mean pollution detection time.
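
    At the heart of any such multi-objective search is a Pareto-dominance comparison over the two objectives named above; the sketch below (hypothetical scores, my illustration rather than the authors' code) extracts a non-dominated front.

```python
def dominates(a, b):
    """True if design `a` Pareto-dominates design `b`.

    Each design is scored as (detection_time, detection_probability):
    time is minimized, probability is maximized.
    """
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

def pareto_front(designs):
    """Keep only the non-dominated monitoring-network designs."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical (hours, probability) scores for four candidate networks
designs = [(3.0, 0.90), (2.5, 0.80), (4.0, 0.95), (3.5, 0.85)]
print(pareto_front(designs))   # (3.5, 0.85) is dominated by (3.0, 0.90)
```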

  18. Carbon sequestration, optimum forest rotation and their environmental impact

    International Nuclear Information System (INIS)

    Kula, Erhun; Gunalay, Yavuz

    2012-01-01

    Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost–benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying the optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO2. Consequently this finding must be considered in any carbon accounting calculations. - Highlights: ► Carbon sequestration in forestry is an environmental benefit. ► It moderates the problem of global warming. ► It prolongs the gestation period in harvesting. ► This paper uses British data in less favoured districts for growing the Sitka spruce species.
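
    As a numerical illustration of the economic logic (growth curve, carbon price, and discount rate below are hypothetical, not the paper's British data): credits earned while the stand grows add to the value of waiting, so the discounted optimum shifts to a later felling age.

```python
import math

def biomass(t):
    """Hypothetical standing biomass (tonnes of CO2) of a stand at age
    t years; logistic growth curve, purely illustrative numbers."""
    return 400.0 / (1.0 + math.exp(-0.15 * (t - 45)))

def timber_value(t):
    """Hypothetical timber revenue (GBP) if the stand is felled at age t."""
    return 50.0 * biomass(t)

def npv(T, r=0.035, carbon_price=0.0):
    """Discounted value of felling at age T: timber revenue at harvest
    plus carbon credits earned year by year while the stand grows."""
    carbon = sum(carbon_price * (biomass(t) - biomass(t - 1)) * math.exp(-r * t)
                 for t in range(1, T + 1))
    return timber_value(T) * math.exp(-r * T) + carbon

ages = range(20, 101)
print(max(ages, key=lambda T: npv(T)))                   # timber only
print(max(ages, key=lambda T: npv(T, carbon_price=15)))  # carbon shifts it later
```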

  19. Optimum heart sound signal selection based on the cyclostationary property.

    Science.gov (United States)

    Li, Ting; Qiu, Tianshuang; Tang, Hong

    2013-07-01

    Noise often appears in parts of heart sound recordings, which may be much longer than necessary for subsequent automated analysis. Thus, human intervention is needed to select the heart sound signal with the best quality or the least noise. This paper presents an automatic scheme for optimum sequence selection to avoid such human intervention. A quality index, based on the finding that sequences with less random noise contamination have a greater degree of periodicity, is defined on the basis of the cyclostationary property of heart beat events. The quality score indicates the overall quality of a sequence. No manual intervention is needed in the process of subsequence selection, thereby making this scheme useful in automatic analysis of heart sound signals. Copyright © 2013 Elsevier Ltd. All rights reserved.
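
    A sketch of the idea, assuming a simple normalized-autocorrelation measure of periodicity (the paper's exact cyclostationarity-based index may be defined differently):

```python
import numpy as np

def quality_index(x, fs, min_hr=40, max_hr=180):
    """Periodicity score for a heart-sound sequence (sketch).

    Cleaner recordings are more nearly cyclostationary, so their
    normalized autocorrelation shows a stronger peak at the beat
    period; the paper's exact index may differ.
    """
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]                      # normalize so ac[0] == 1
    lo = int(fs * 60.0 / max_hr)         # shortest plausible beat period
    hi = int(fs * 60.0 / min_hr)         # longest plausible beat period
    return float(ac[lo:hi].max())        # strong peak => high quality

# Synthetic check: a periodic burst train outscores white noise.
fs = 1000
t = np.arange(0, 5, 1.0 / fs)
beats = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
noise = np.random.randn(t.size)
print(quality_index(beats, fs) > quality_index(noise, fs))  # True
```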

  20. Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators

    DEFF Research Database (Denmark)

    Schou, Casper; Bøgh, Simon; Madsen, Ole

    2014-01-01

    This paper presents concepts and ideas of a hardware reconfiguration framework for modular industrial mobile manipulators. Mobile manipulators pose a highly flexible production resource due to their ability to autonomously navigate between workstations. However, due to this high flexibility, new approaches to the operation of the robots are needed. Reconfiguring the robot to a new task should be carried out by shop floor operators and, thus, be both quick and intuitive. Late research has already proposed a method for intuitive robot programming. However, this relies on a predetermined hardware configuration. Finding a single multi-purpose hardware configuration suited to all tasks is considered unrealistic. As a result, the need for reconfiguration of the hardware is inevitable. In this paper an outline of a framework for making hardware reconfiguration quick and intuitive is presented. Two main…

  1. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs for advanced error correcting techniques.

  2. Optimum aberration coefficients for recording high-resolution off-axis holograms in a Cs-corrected TEM

    International Nuclear Information System (INIS)

    Linck, Martin

    2013-01-01

    Amongst the impressive improvements in high-resolution electron microscopy, the Cs-corrector has also significantly enhanced the capabilities of off-axis electron holography. Recently, it has been shown that the signal above noise in the reconstructable phase can be significantly improved by combining holography and hardware aberration correction. Additionally, with a spherical aberration close to zero, the traditional optimum focus for recording high-resolution holograms (“Lichte's defocus”) has become less stringent, and both defocus and spherical aberration can be selected freely within a certain range. This new degree of freedom can be used to improve the signal resolution in the holographically reconstructed object wave locally, e.g. at the atomic positions. A brute-force simulation study for an aberration-corrected 200 kV TEM is performed to determine optimum values of defocus and spherical aberration for the best possible signal to noise in the reconstructed atomic phase signals. Compared to the optimum aberrations for conventional phase contrast imaging (NCSI), which produce “bright atoms” in the image intensity, the resulting optimum values of defocus and spherical aberration for off-axis holography enable “black atom contrast” in the hologram. However, they can significantly enhance the local signal resolution at the atomic positions. At the same time, the benefits of hardware aberration correction for high-resolution off-axis holography are preserved. It turns out that the optimum depends on the object and its thickness and is therefore not universal. -- Highlights: ► Optimized aberration parameters for high-resolution off-axis holography. ► Simulation and analysis of noise in high-resolution off-axis holograms. ► Improving signal resolution in the holographically reconstructed phase shift. ► Comparison of “black” and “white” atom contrast in off-axis holograms.

  3. An Optimum Solution for Electric Power Theft

    Directory of Open Access Journals (Sweden)

    Aamir Hussain Memon

    2013-07-01

    Full Text Available Electric power theft is a problem that continues to plague the power sector across the whole country. Every year, the electricity companies face line losses of 20-30% on average, and according to Power Ministry estimates, WAPDA companies lose more than Rs. 125 billion. This is enough to destroy the entire power sector of the country. According to sources, 20% losses mean that the masses have to pay an extra 20% in terms of electricity tariffs. In other words, innocent consumers pay the bills of those who steal electricity. For all that, no permanent solution for this major issue has ever been proposed. We propose an applicable and optimum solution for this intractable problem. In our research, we propose an electric power theft solution based on three stages: the transmission stage, the distribution stage, and the user stage. Without synchronization among all three, the complete solution cannot be achieved. The proposed solution is simulated in NI (National Instruments) Circuit Design Suite Multisim v10.0. Our research work is a workable approach to electric power theft, given the conditions in Pakistan, which is already bearing the brunt of a power crisis.

  4. Optimum harvest maturity for Leymus chinensis seed

    Directory of Open Access Journals (Sweden)

    Jixiang Lin

    2016-06-01

    Full Text Available Timely harvest is critical to achieve maximum seed viability and vigour in agricultural production. However, little information exists concerning how to reap the best quality seeds of Leymus chinensis, which is the dominant and most promising grass species in the Songnen Grassland of Northern China. The objective of this study was to investigate and evaluate possible quality indices of the seeds at different days after peak anthesis. Seed quality at different development stages was assessed by the colours of the seed and lemmas, seed weight, moisture content, electrical conductivity of seed leachate and germination indices. Two consecutive years of experimental results showed that the maximum seed quality was recorded at 39 days after peak anthesis. At this date, the colours of the seed and lemmas reached heavy brown and yellow, respectively. The seed weight was highest and the moisture content and the electrical conductivity of seed leachate were lowest. In addition, the seed also reached its maximum germination percentage and energy at this stage, determined using a standard germination test (SGT) and an accelerated ageing test (AAT). Thus, Leymus chinensis can be harvested at 39 days after peak anthesis based on the changes in parameters. Colour identification can be used as an additional indicator to provide a more rapid and reliable measure of optimum seed maturity; approximately 10 days after the colour of the lemmas reached yellow and the colour of the seed reached heavy brown, the seed of this species was suitable for harvest.

  5. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    …CPUs and GPGPUs. About the speaker: Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat…

  6. Hardware-Independent Proofs of Numerical Programs

    Science.gov (United States)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved
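
    The per-operation error statements rest on the standard floating-point model, fl(a op b) = (a op b)(1 + d) with |d| ≤ u; the sketch below (my Python illustration, not the paper's Frama-C annotations) checks that bound exactly for one addition.

```python
from fractions import Fraction

def addition_error(a, b):
    """Relative rounding error of one binary64 addition, computed exactly.

    Standard model: fl(a + b) = (a + b) * (1 + d) with |d| <= u,
    where u = 2**-53 under round-to-nearest binary64.
    """
    computed = a + b                    # rounded by the hardware
    exact = Fraction(a) + Fraction(b)   # floats convert to exact rationals
    return (Fraction(computed) - exact) / exact

u = Fraction(1, 2**53)
d = addition_error(0.1, 0.2)
# Holds under plain binary64 round-to-nearest; x87-style extended
# precision and fused operations are exactly the hardware variations
# the paper's hardware-independent proofs have to account for.
print(abs(d) <= u)
```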

  7. Reconfigurable Hardware Adapts to Changing Mission Demands

    Science.gov (United States)

    2003-01-01

    A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.

  8. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  9. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place. This lab nevertheless conveys a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, that are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003, the Mobile Hardware Lab has been offered in a completely web-based form.

  10. Implementation of optimum solar electricity generating system

    Science.gov (United States)

    Singh, Balbir Singh Mahinder; Sivapalan, Subarna; Najib, Nurul Syafiqah Mohd; Menon, Pradeep; Karim, Samsul Ariffin A.

    2014-10-01

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% of total electricity generation by the year 2015, which amounts to 98 MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia (SEDA). SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism, which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness of the implementation of the solar electricity generating system (SEGS). In Malaysia, technologies for harnessing solar energy resources have great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze, and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels.

  11. Implementation of optimum solar electricity generating system

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my; Karim, Samsul Ariffin A., E-mail: samsul-ariffin@petronas.com.my [Department of Fundamental and Applied Sciences, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia); Sivapalan, Subarna, E-mail: subarna-sivapalan@petronas.com.my [Department of Management and Humanities, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia); Najib, Nurul Syafiqah Mohd; Menon, Pradeep [Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, 31750 Bandar Seri Iskandar, Perak (Malaysia)

    2014-10-24

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% of total electricity generation by the year 2015, which amounts to 98 MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia (SEDA). SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism, which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness of the implementation of the solar electricity generating system (SEGS). In Malaysia, technologies for harnessing solar energy resources have great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze, and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels.

  12. Implementation of optimum solar electricity generating system

    International Nuclear Information System (INIS)

    Singh, Balbir Singh Mahinder; Karim, Samsul Ariffin A.; Sivapalan, Subarna; Najib, Nurul Syafiqah Mohd; Menon, Pradeep

    2014-01-01

    Under the 10th Malaysian Plan, the government is expecting renewable energy to contribute approximately 5.5% of total electricity generation by the year 2015, which amounts to 98 MW. One of the initiatives to ensure that the target is achievable was to establish the Sustainable Energy Development Authority of Malaysia (SEDA). SEDA is given the authority to administer and manage the implementation of the feed-in tariff (FiT) mechanism, which is mandated under the Renewable Energy Act 2011. The move to establish SEDA is commendable and the FiT seems to be attractive, but there is a need to create awareness of the implementation of the solar electricity generating system (SEGS). In Malaysia, technologies for harnessing solar energy resources have great potential for implementation. However, the main issue that plagues the implementation of SEGS is the intermittent nature of this source of energy. Sunlight is available only during the day, so an electrical energy storage system is needed to make electricity available during the night as well. Meteorological conditions such as clouds, haze, and pollution affect the SEGS as well. The PV-based SEGS seems to be a promising electricity generating system that can contribute towards achieving the 5.5% target and will be able to minimize the negative effects of utilizing fossil fuels for electricity generation on the environment. Malaysia is committed to the Kyoto Protocol, which emphasizes fighting global warming by achieving stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. In this paper, the technical aspects of the implementation of an optimum SEGS are discussed, especially pertaining to the positioning of the PV panels.

  13. Optimum community energy storage system for demand load shifting

    International Nuclear Information System (INIS)

    Parra, David; Norman, Stuart A.; Walker, Gavin S.; Gillott, Mark

    2016-01-01

    Highlights: • Lead-acid (PbA) and lithium-ion batteries are optimised up to a 100-home community. • A 4-period real-time-pricing tariff and Economy 7 (2-period time-of-use) are compared. • Li-ion batteries perform worse with Economy 7 for small communities and vice versa. • The community approach reduced the levelised cost by 56% compared to a single home. • Heat pumps reduced the levelised cost and increased the profitability of batteries. - Abstract: Community energy storage (CES) is becoming an attractive technological option to facilitate the use of distributed renewable energy generation, manage demand loads and decarbonise the residential sector. There is strong interest in understanding the techno-economic benefits of using CES systems, which energy storage technology is more suitable, and the optimum CES size. In this study, the performance, including equivalent full cycles and round-trip efficiency, of lead-acid (PbA) and lithium-ion (Li-ion) batteries performing demand load shifting is quantified as a function of the size of the community using simulation-based optimisation. Two different retail tariffs are compared: a time-of-use tariff (Economy 7) and a real-time-pricing tariff including four periods based on the electricity prices on the wholesale market. Additionally, the economic benefits are quantified when projected to two different years: 2020 and a hypothetical zero carbon year. The findings indicate that the optimum PbA capacity was approximately twice the optimum Li-ion capacity in the case of the real-time-pricing tariff and around 1.6 times for Economy 7 for any community size except a single home. The levelised cost followed a negative logarithmic trend while the internal rate of return followed a positive logarithmic trend as a function of the size of the community. PbA technology reduced the levelised cost down to 0.14 £/kWh when projected to the year 2020 for the retail tariff Economy 7. CES systems were sized according to the demand load and
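
    As a toy illustration of why the community approach lowers the levelised cost (my cost model and numbers, not the paper's): fixed hardware is shared across homes, so capex grows sublinearly with community size while energy throughput grows linearly.

```python
def levelised_cost(capex, opex_per_year, energy_per_year, years=20, r=0.05):
    """Levelised cost of stored electricity (GBP per kWh):
    discounted lifetime cost divided by discounted lifetime energy
    throughput. Formula and numbers are illustrative only."""
    disc = [(1 + r) ** -t for t in range(1, years + 1)]
    cost = capex + opex_per_year * sum(disc)
    energy = energy_per_year * sum(disc)
    return cost / energy

# Hypothetical scaling: capex per home falls with community size.
for homes in (1, 10, 100):
    print(homes, levelised_cost(capex=2000 * homes ** 0.8,
                                opex_per_year=20 * homes,
                                energy_per_year=1500 * homes))
```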

  14. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  15. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
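
    For reference, a GFSR generator produces x[n] = x[n-p] XOR x[n-q]; below is a minimal software model of such a generator. The classic (250, 103) lag pair is an assumption, since the paper's hardware parameters are not given here.

```python
import random

class GFSR:
    """Generalized Feedback Shift Register: x[n] = x[n-p] XOR x[n-q].

    A software model of the generator family the hardware implements;
    (p, q) = (250, 103) is the classic R250 lag pair.
    """
    def __init__(self, p=250, q=103, bits=32):
        self.p, self.q, self.i = p, q, 0
        # Seed with p random words (practical seeding schemes vary).
        self.state = [random.getrandbits(bits) for _ in range(p)]

    def next(self):
        # state[i] holds x[n-p]; x[n-q] sits p-q slots ahead of it
        # in the circular buffer.
        w = self.state[self.i] ^ self.state[(self.i + self.p - self.q) % self.p]
        self.state[self.i] = w              # overwrite oldest word with x[n]
        self.i = (self.i + 1) % self.p
        return w

rng = GFSR()
print([rng.next() for _ in range(4)])
```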

  16. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS hardware, and thus there is much more incentive to preserve it. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have had only a matter of hours of operation, yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications, including expanding the power infrastructure and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars' worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity, and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  17. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future.

  18. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like the motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the locations at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
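
    The capacity relation described in both versions of this abstract is a simple product of the geometry factors; a quick sketch with hypothetical drive geometry:

```python
def disk_capacity(sides, tracks_per_side, sectors_per_track, bytes_per_sector=512):
    """Classic CHS-style capacity estimate: the product of the factors
    listed in the abstract (sides x tracks x sectors x bytes/sector)."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Hypothetical geometry: 4 sides, 16383 tracks/side, 63 sectors of 512 bytes
print(disk_capacity(4, 16383, 63) / 1e9, "GB")  # ~2.1 GB
```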

  19. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  20. Methodology for Assessing Reusability of Spaceflight Hardware

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, L. Dale; Farrington, Phillip

    2017-01-01

    In 2011 the Space Shuttle, the only Reusable Launch Vehicle (RLV) in the world, returned to earth for the final time. Upon retirement of the Space Shuttle, the United States (U.S.) no longer possessed a reusable vehicle or the capability to send American astronauts to space. With the National Aeronautics and Space Administration (NASA) out of the RLV business and now only pursuing Expendable Launch Vehicles (ELV), not only did companies within the U.S. start to actively pursue the development of either RLVs or reusable components, but entities around the world began to venture into the reusable market. For example, SpaceX and Blue Origin are developing reusable vehicles and engines. The Indian Space Research Organization is developing a reusable space plane and Airbus is exploring the possibility of reusing its first stage engines and avionics housed in the flyback propulsion unit referred to as the Advanced Expendable Launcher with Innovative engine Economy (Adeline). Even United Launch Alliance (ULA) has announced plans for eventually replacing the Atlas and Delta expendable rockets with a family of RLVs called Vulcan. Reuse can be categorized as either fully reusable, the situation in which the entire vehicle is recovered, or partially reusable such as the National Space Transportation System (NSTS) where only the Space Shuttle, Space Shuttle Main Engines (SSME), and Solid Rocket Boosters (SRB) are reused. With this influx of renewed interest in reusability for space applications, it is imperative that a systematic approach be developed for assessing the reusability of spaceflight hardware. The partially reusable NSTS offered many opportunities to glean lessons learned; however, when it came to efficient operability for reuse the Space Shuttle and its associated hardware fell short primarily because of its two to four-month turnaround time. Although there have been several attempts at designing RLVs in the past with the X-33, Venture Star and Delta Clipper

  1. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  2. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  3. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon Spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.
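
    As a rough software model of the segment finder (not the FPGA firmware): each drift-tube hit constrains candidate track lines to be tangent to its drift circle, so every hit votes along r = x·cos(theta) + y·sin(theta) ± d in Legendre space, and the accumulator peak gives the segment. Binning and units below are assumptions.

```python
import numpy as np

def legendre_segment_finder(hits, n_theta=512, n_r=512, r_max=100.0):
    """Histogram-based Legendre-transform line finder for drift circles.

    Each hit is (x, y, d): wire position and drift radius in the same
    length units. Every hit votes along two curves (tangents on either
    side of the circle); the accumulator maximum is the best segment.
    """
    acc = np.zeros((n_theta, n_r), dtype=np.int32)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    idx = np.arange(n_theta)
    for x, y, d in hits:
        base = x * np.cos(thetas) + y * np.sin(thetas)
        for r in (base + d, base - d):
            bins = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
            ok = (bins >= 0) & (bins < n_r)
            acc[idx[ok], bins[ok]] += 1
    t, rb = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], -r_max + 2 * r_max * rb / (n_r - 1)

# Toy hits whose drift circles share a common tangent line
hits = [(0.0, 1.0, 0.5), (5.0, 3.5, 0.5), (10.0, 6.0, 0.5)]
print(legendre_segment_finder(hits))
```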

  4. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    Full Text Available The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed to control the light projection of the optical emitters, to switch the capacitance electrodes, and to synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  5. Hardware-efficient autonomous quantum memory protection.

    Science.gov (United States)

    Leghtas, Zaki; Kirchmair, Gerhard; Vlastakis, Brian; Schoelkopf, Robert J; Devoret, Michel H; Mirrahimi, Mazyar

    2013-09-20

    We propose to encode a quantum bit of information in a superposition of coherent states of an oscillator, with four different phases. Our encoding in a single cavity mode, together with a protection protocol, significantly reduces the error rate due to photon loss. This protection is ensured by an efficient quantum error correction scheme employing the nonlinearity provided by a single physical qubit coupled to the cavity. We describe in detail how to implement these operations in a circuit quantum electrodynamics system. This proposal directly addresses the task of building a hardware-efficient quantum memory and can lead to important shortcuts in quantum computing architectures.
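
    For orientation, the encoding in "a superposition of coherent states of an oscillator, with four different phases" can be written as a four-component cat code; a sketch in my notation (the paper's normalizations and conventions may differ):

```latex
% Sketch (notation mine; normalization factors omitted). The logical
% states are four-component cat states in photon-number sectors mod 4:
\[
  |0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle
                + |i\alpha\rangle + |{-i\alpha}\rangle
  \quad (n \equiv 0 \bmod 4),
\]
\[
  |1_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle
                - |i\alpha\rangle - |{-i\alpha}\rangle
  \quad (n \equiv 2 \bmod 4).
\]
% Losing a single photon shifts the photon number by one modulo 4, so
% jump events can be tracked by parity-type measurements without
% disturbing the encoded information.
```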

  6. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of
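
    Wavelet modules of this kind typically rely on reversible integer transforms so that the transform stage itself loses no information; the minimal Haar-type lifting step below illustrates that reversibility idea (it is not ICER-3D's actual filter bank).

```python
def haar_lift(x):
    """One level of a reversible integer Haar-type wavelet transform.

    Each pair (a, b) maps to a low-pass average l = floor((a + b) / 2)
    and a high-pass difference h = a - b; both are integers, so the
    transform is exactly invertible (lossless).
    """
    lo, hi = [], []
    for a, b in zip(x[0::2], x[1::2]):
        h = a - b
        l = b + (h >> 1)       # == floor((a + b) / 2)
        lo.append(l)
        hi.append(h)
    return lo, hi

def haar_unlift(lo, hi):
    """Exact inverse of haar_lift."""
    out = []
    for l, h in zip(lo, hi):
        b = l - (h >> 1)
        out += [b + h, b]
    return out

x = [12, 10, 9, 14, 200, 3, 7, 7]
assert haar_unlift(*haar_lift(x)) == x   # perfectly reversible
```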

  7. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors,...

  8. Visual basic application in computer hardware control and data ...

    African Journals Online (AJOL)

    ... hardware device control and data acquisition is experimented using Visual Basic and the Speech Application Programming Interface (SAPI) Software Development Kit. To control hardware using Visual Basic, all hardware requests were designed to go through Windows via the printer parallel ports which is accessed and ...

  9. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    …with a hardware area constraint and the problem of minimizing hardware area with a system execution time constraint. The target architecture consists of a single microprocessor and a single hardware chip (ASIC, FPGA, etc.) which are connected by a communication channel. The algorithm incorporates a realistic...
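
    Stripped of PACE's communication model, the first problem (minimize execution time under a hardware area constraint) reduces to a knapsack-style dynamic program; a sketch under that simplification, with hypothetical block profiles:

```python
def partition(blocks, area_budget):
    """Minimize total execution time under a hardware area constraint.

    Each block is (sw_time, hw_time, hw_area); moving a block to
    hardware costs area and (usually) saves time. This is only the
    knapsack core: PACE's actual formulation also models data
    transfers over the CPU/ASIC communication channel.
    """
    total_sw = sum(sw for sw, hw, area in blocks)
    # dp[a] = maximum time saved using at most `a` units of area
    dp = [0.0] * (area_budget + 1)
    for sw, hw, area in blocks:
        saving = sw - hw
        if saving <= 0:
            continue                       # hardware would not help here
        for a in range(area_budget, area - 1, -1):
            dp[a] = max(dp[a], dp[a - area] + saving)
    return total_sw - dp[area_budget]      # minimal total execution time

blocks = [(9.0, 2.0, 3), (5.0, 1.0, 2), (4.0, 3.5, 4)]  # hypothetical profile
print(partition(blocks, area_budget=5))   # -> 7.0 (first two blocks in HW)
```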

  10. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using TILM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. • Introduces number systems and signal transmission methods • Reviews logic gates, registers, multiplexers, decoders and memory • Provides an overview and examples of the ARM instruction set • Uses Keil development tools for writing and debugging ARM assembly language programs • Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real time clock configuration, binary input to 7-segment display, creating ...

  11. Hardware system for man-machine interface

    International Nuclear Information System (INIS)

    Niki, Kiyoshi; Tai, Ichirou; Hiromoto, Hiroshi; Inubushi, Hiroyuki; Makino, Teruyuki.

    1988-01-01

    Keeping pace with the recent advance of electronic technology, the adoption of systems that can present more information efficiently and in orderly form to operators has been promoted rapidly, in place of the man-machine interface for power stations, which comprises conventional indicators, switches and annunciators. With the introduction of new hardware and software, the design of the central control rooms of power stations and the sharing of roles between man and machine have been re-examined. In this report, the way the man-machine interface in power stations should be and the requirements for the role of operators are summarized, and based on them, the role of man-machine equipment is considered; thereafter, the features and functions of typical new man-machine equipment that is used in power stations at present or can be applied is described. Finally, an example of how this equipment is applied to power plants as an actual system is shown. The role of the man-machine system in power stations, recent operation monitoring and control, the sharing of roles between hardware and operators, the role of machines, the recent typical hardware of the man-machine interface, and examples of the latest applications are reported. (K.I.)

  12. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  13. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  14. Determination of the Optimum Collector Angle for Composite Solar ...

    African Journals Online (AJOL)

    A model for predicting the solar radiation available at any given time in the inhabited area of Ilorin was developed. From the equation developed, the optimum tilt angle of a south-facing collector was determined. The optimum angle of tilt of the collector and the orientation are dependent on the month of the year and the location ...
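
    As a concrete illustration of the month dependence the abstract notes (this is not the authors' model), a common first approximation tilts a south-facing collector to face the sun at solar noon, β ≈ φ − δ, with φ the latitude and δ the solar declination from Cooper's formula; the Ilorin latitude used below is approximate.

        # First-order monthly optimum tilt for a south-facing collector:
        # beta ~ latitude - solar declination at the month's recommended mean day.
        # Cooper's declination formula; Ilorin latitude ~8.5 N is approximate.
        import math

        LAT = 8.5                         # deg N, approximate for Ilorin
        MEAN_DAY = {1: 17, 2: 47, 3: 75, 4: 105, 5: 135, 6: 162,
                    7: 198, 8: 228, 9: 258, 10: 288, 11: 318, 12: 344}

        def declination(day_of_year):     # Cooper (1969), degrees
            return 23.45 * math.sin(math.radians(360 * (284 + day_of_year) / 365))

        for month, day in MEAN_DAY.items():
            tilt = LAT - declination(day)   # negative => tilt toward north instead
            print(month, round(tilt, 1))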

  15. Determination of Optimum Moisture Content of Palm Nut Cracking ...

    African Journals Online (AJOL)

    A study of the optimum drying time for sun-dried palm nuts, for efficient nut cracking was carried out by Okoli (2003) but the moisture content was not reported. The objective of this study was therefore to determine the optimum moisture content for the production of whole kernel from a palm nut cracked by impact in a static ...

  16. Optimum material gradient composition for the functionally graded ...

    African Journals Online (AJOL)

    This study investigates the relation between the material gradient properties and the optimum sensing/actuation design of the functionally graded piezoelectric beams. Three-dimensional (3D) finite element analysis has been employed for the prediction of an optimum composition profile in these types of sensors and ...

  17. Optimum conditions for carbonisation of coconut shell | Gimba ...

    African Journals Online (AJOL)

    The optimum conditions that are useful in the carbonization of coconut shell have been examined. The carbonization was effected using particle sizes (150–2000 μm) at carbonization temperatures between 200 and 900 °C in a laboratory muffle furnace. The study involved determination of yield, rate of weight loss, optimum ...

  18. Development of optimum preplanning for maxillofacial surgery using ...

    African Journals Online (AJOL)

    It discusses the development of optimum preplanning for maxillofacial surgery using selective laser sintering process. It involves identifying the optimum value of various parameters like threshold value, gantry tilt angle, resolution, layer thickness and interval thickness of CT scan image. The 3D model of the CT scan image ...

  19. Determining the optimum temperature for dry extrusion of | Palic ...

    African Journals Online (AJOL)

    In this study, the slope-ratio technique was used in two trials with broilers for determining the optimum treatment temperature for soyabeans. Average daily weight gain (ADWG) and feed conversion efficiency (FCE) were used as response parameters. The optimum temperature for dry extrusion of FFSB was 144 °C for Trial 1 ...

  20. Optimum mobility’ facelift. Part 2 – the technique

    OpenAIRE

    Fanous, Nabil; Karsan, Naznin; Zakhary, Kristina; Tawile, Carolyne

    2006-01-01

    In the first of this two-part article on the ‘optimum mobility’ facelift, facial tissue mobility was analyzed, and three theories or mechanisms emerged: ‘intrinsic mobility’, ‘surgically induced mobility’ and ‘optimum mobility points’.

  1. Determining optimum rates of mineral fertilizers for economic rice ...

    African Journals Online (AJOL)

    Nutrient input and output balances are essential not only for maintaining balanced soil nutrient management but also for preventing pollution and waste through excess use. A study was undertaken to determine the optimum levels of the major elements (N, P, K) required for optimum lowland rice yields under the ...

  2. Hardware development process for Human Research facility applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.

  3. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

    The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and dimensioning of components. The second part of this project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench in order to emulate the behavior of other system components (solar circuit with heat storage, recooling, buildings and cooling distribution/transfer). The chiller is controlled by a simulation of the system using MATLAB/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, the simulation model of the chiller can then be validated. Further tests are used to optimize the control of the chiller to the current cooling load. In addition, some changes in system configurations (for example cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  4. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists readers in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology from this handbook.

  5. Algorithms for Hardware-Based Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Müller Dietmar

    2004-01-01

    Full Text Available Nonlinear spatial transforms and fuzzy pattern classification with unimodal potential functions are established in signal processing. They have proved to be excellent tools in feature extraction and classification. In this paper, we will present a hardware-accelerated image processing and classification system which is implemented on one field-programmable gate array (FPGA). Nonlinear discrete circular transforms generate a feature vector. The features are analyzed by a fuzzy classifier. This principle can be used for feature extraction, pattern recognition, and classification tasks. Implementation in radix-2 structures is possible, allowing fast calculations with low computational complexity. Furthermore, the pattern separability properties of these transforms are better than those achieved with the well-known method based on the power spectrum of the Fourier Transform, or on several other transforms. Using different signal flow structures, the transforms can be adapted to different image and signal processing applications.

  6. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...

  7. SAMBA: hardware accelerator for biological sequence comparison.

    Science.gov (United States)

    Guerdoux-Jamet, P; Lavenier, D

    1997-12-01

    SAMBA (Systolic Accelerator for Molecular Biological Applications) is a 128 processor hardware accelerator for speeding up the sequence comparison process. The short-term objective is to provide a low-cost board to boost PC or workstation performance on this class of applications. This paper places SAMBA amongst other existing systems and highlights the original features. Real performance obtained from the prototype is demonstrated. For example, a sequence of 300 amino acids is scanned against SWISS-PROT-34 (21 210 389 residues) in 30 s using the Smith and Waterman algorithm. More time-consuming applications, like the bank-to-bank comparison, are computed in a few hours instead of days on standard workstations. Technology allows the prototype to fit onto a single PCI board for plugging into any PC or workstation. SAMBA can be tested on the WEB server at URL http://www.irisa.fr/SAMBA/.
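
    For readers unfamiliar with the recurrence SAMBA accelerates, here is a plain software reference for the Smith and Waterman local alignment score, with an assumed simple match/mismatch/gap scheme rather than the substitution matrices a real protein scan would use; SAMBA's contribution is evaluating this recurrence on a 128-processor systolic array.

        # Textbook Smith-Waterman local alignment score (software reference).
        # Scoring scheme is an illustrative assumption; protein scans such as
        # SAMBA's SWISS-PROT searches would use e.g. BLOSUM matrices instead.

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]   # DP matrix, zero boundary
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # small demo alignment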

  8. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, outline some future directions.

  9. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    Full Text Available This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is giving an overview on existing concepts in state-of-the-art systems and enabling the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS’ complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding measurement of needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  10. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008-SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(D) TTL output module; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10–30 V) input module; Allen-Bradley 1771-OBD DC (10–60 V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  11. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge of the θ13 mixing angle using two similar detectors placed at ~280 m and ~1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an outlook to the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance is shown as well as first results from a prototype system.

  12. Compressive Sensing Image Sensors-Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Shahram Shirani

    2013-04-01

    Full Text Available The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.
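
    As background to the hardware survey, the CS acquisition model itself is compact: instead of reading out all N pixels, the imager records M ≪ N random projections y = Φx of a scene that is sparse in some basis. The sketch below illustrates only this measurement step, with an assumed dense Gaussian Φ; real CS imagers realize the projections optically or in the readout circuitry, and reconstruction (e.g. by ℓ1 minimization) is a separate, heavier stage not shown here.

        # Hedged sketch of the CS measurement model y = Phi @ x (acquisition only).
        # A dense Gaussian Phi is an illustrative assumption; CS imagers realize
        # the projections optically or in pixel/readout circuitry, not as a matmul.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 1024          # number of pixels in the (vectorized) scene
        M = 128           # number of compressive measurements, M << N

        x = np.zeros(N)
        x[rng.choice(N, size=10, replace=False)] = rng.normal(size=10)  # sparse scene

        Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement matrix
        y = Phi @ x                                  # what the sensor actually records

        print(y.shape)   # (128,) -- 8x fewer values than the scene itself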

  13. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    The most commonly used algorithm for non-cartesian MRI reconstruction is the gridding algorithm [1]. It consists of three steps: 1) convolution with a gridding kernel and resampling on a cartesian grid, 2) inverse FFT, and 3) deapodization. On the CPU the convolution step is by far the most time consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as a fast parallel processor provided that algorithms are reformulated in a parallel solution. The purpose of this work is to test the hypothesis that a non-cartesian reconstruction can be efficiently implemented on graphics hardware giving a significant speedup compared to CPU based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that have limited the speed of previous GPU gridding algorithms [2].
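
    To make the three steps concrete, here is a deliberately minimal 1-D gridding sketch (the paper's setting is multi-dimensional MRI data on the GPU); the Gaussian kernel, its parameters and the omission of oversampling are illustrative simplifications, not the authors' implementation.

        # Minimal 1-D gridding: (1) convolve samples onto a cartesian grid,
        # (2) inverse FFT, (3) deapodize by the kernel's Fourier transform.
        # Gaussian kernel and all parameters are illustrative assumptions only.
        import numpy as np

        def gridding_1d(k_pos, k_data, n=256, width=4, sigma=0.8):
            grid = np.zeros(n, dtype=complex)
            centers = (k_pos + 0.5) * n                  # k in [-0.5, 0.5) -> grid units
            for c, d in zip(centers, k_data):
                lo = int(np.floor(c)) - width // 2
                for g in range(lo, lo + width + 1):      # step 1: kernel convolution
                    grid[g % n] += d * np.exp(-0.5 * ((g - c) / sigma) ** 2)
            img = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid)))  # step 2: inverse FFT
            x = np.arange(n) - n / 2
            apod = np.exp(-2 * (np.pi * sigma * x / n) ** 2)  # transform of the kernel
            return img / apod                            # step 3: deapodization

        # Demo: nonuniform sample positions of a constant k-space signal.
        pos = np.sort(np.random.default_rng(1).uniform(-0.5, 0.5, 2000))
        print(np.abs(gridding_1d(pos, np.ones(2000))).max())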

  14. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  15. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  16. Programming languages and compiler design for realistic quantum hardware.

    Science.gov (United States)

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  17. The optimum design of power distribution for pressurized water reactor

    International Nuclear Information System (INIS)

    Dai, Chunhui; Wei, Xinyu; Tai, Yun; Zhao, Fuyu

    2012-01-01

    Highlights: ► A two-level optimization method is developed. ► LP is optimized by backward diffusion calculation theory. ► Pontryagin’s maximum principle is used to investigate the optimum BP arrangement. ► NSGA-II is applied to coordinate the interrelationship between LP and BP. ► The optimized core saves fuel while providing a large power. -- Abstract: The aim of this work is to develop a two-level optimization method for designing the optimum initial fuel loading pattern and burnable poison placement in pressurized water reactors. At the lower level, based on the fuel loading pattern (LP) optimized by backward diffusion calculation theory, Pontryagin’s maximum principle is employed to investigate the optimum arrangement of burnable poison (BP) that can generate the lowest radial power peaking factor (PPF). At the upper level a multi-objective problem (MOP), with LP and BP as two objective functions, is proposed by coordinating the interrelationship of LP and BP, and optimized by the non-dominated sorting genetic algorithm (NSGA-II). The results of the optimum designs, called ‘Pareto optimum solutions’, are a set of multiple optimum solutions. After sensitivity analysis, the final optimum solution, chosen for a typical VVER-1000 reactor, reveals that the method can not only reduce fuel consumption but also lower the PPF in comparison with published data.

  18. Generalized Pareto optimum and semi-classical spinors

    Science.gov (United States)

    Rouleux, M.

    2018-02-01

    In 1971, S. Smale presented a generalization of Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 matrix Hamiltonian ℋ = ℋ(p), with entries in C∞(T*R²), to its normal form near a singular point of the Fresnel surface. Namely we say that ℋ has the Pareto property if it decomposes, locally, up to a conjugation with regular matrices, as ℋ(p) = u′(p) C(p) (u′(p))*, where u : R² → R² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix (“integrating factor”). In particular this applies in certain cases to the matrix Hamiltonian of Elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.

  19. Investigation of earthquake factor for optimum tuned mass dampers

    Science.gov (United States)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. An optimization strategy was carried out by using the Harmony Search (HS) algorithm. HS is a metaheuristic method inspired by musical performance. In addition, the results of the optimization objective are compared with those of another documented method, and the inferior results are discarded; in that way, the best optimum results are obtained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
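
    Harmony Search itself is compact enough to sketch. Below is a generic, minimal HS loop for a continuous parameter vector, imagined here as TMD mass ratio, frequency ratio and damping ratio; the bounds, algorithm constants and the stand-in objective are placeholders, not the values or the structural-response objective used in the study.

        # Minimal Harmony Search loop (generic continuous optimizer).
        # Bounds, constants and the toy objective are illustrative assumptions;
        # the study minimizes structural response under recorded earthquakes.
        import random

        def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iterations=5000, seed=0):
            rng = random.Random(seed)
            memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
            scores = [objective(h) for h in memory]
            for _ in range(iterations):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if rng.random() < hmcr:                  # memory consideration
                        val = memory[rng.randrange(hms)][d]
                        if rng.random() < par:               # pitch adjustment
                            val += rng.uniform(-bw, bw) * (hi - lo)
                    else:                                    # random selection
                        val = rng.uniform(lo, hi)
                    new.append(min(max(val, lo), hi))
                worst = max(range(hms), key=scores.__getitem__)
                s = objective(new)
                if s < scores[worst]:                        # replace worst harmony
                    memory[worst], scores[worst] = new, s
            best = min(range(hms), key=scores.__getitem__)
            return memory[best], scores[best]

        # Toy use: pretend (mu, f, zeta) of a TMD with a stand-in objective.
        print(harmony_search(lambda p: (p[0]-0.05)**2 + (p[1]-0.95)**2 + (p[2]-0.1)**2,
                             [(0.01, 0.1), (0.8, 1.2), (0.01, 0.4)]))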

  20. Optimum unambiguous discrimination of linearly independent pure state

    DEFF Research Database (Denmark)

    Pang, Shengshi; Wu, Shengjun

    2009-01-01

    Given n linearly independent pure states and their prior probabilities, we study the optimum unambiguous state discrimination problem. We derive the conditions for the optimum measurement strategy to achieve the maximum average success probability and establish two sets of equations that must be satisfied by the optimum solution; these yield an analytical relation between the optimum solution and the n states to be discriminated. We also solve a generalized equal-probability measurement problem analytically. Finally, as another application of our result, the unambiguous discrimination problem of three pure states is studied in detail and analytical...
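
    For orientation, the simplest instance of the problem treated in this record has a well-known closed form: the Ivanovic-Dieks-Peres result for two equiprobable pure states, reproduced below for context (it is not taken from the record).

        % Ivanovic-Dieks-Peres bound for two equiprobable pure states
        % (classic special case, supplied for context; not from the record):
        P_{\mathrm{succ}}^{\mathrm{opt}} \;=\; 1 - \left|\langle \psi_1 | \psi_2 \rangle\right|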

  1. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand alone computer system based on the Motorola 68000 microprocessor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  2. Hardware Locks with Priority Ceiling Emulation for a Java Chip-Multiprocessor

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Schoeberl, Martin

    2015-01-01

    According to the safety-critical Java specification, priority ceiling emulation is a requirement for implementations, as it has preferable properties, such as avoiding priority inversion and being deadlock free on uni-core systems. In this paper we explore our hardware-supported implementation of priority ceiling emulation on the multicore Java optimized processor, and compare it to the existing hardware locks on the Java optimized processor. We find that the additional overhead for priority ceiling emulation on a multicore processor is several times higher than simpler, non-preemptive locks, mainly...

  3. Theoretical optimum of implant positional index design.

    Science.gov (United States)

    Semper, W; Kraft, S; Krüger, T; Nelson, K

    2009-08-01

    Rotational freedom of the implant-abutment connection influences its screw joint stability; for optimization, influential factors need to be evaluated based on a previously developed closed formula. The underlying hypothesis is that the manufacturing tolerances, geometric pattern, and dimensions of the index do not influence positional stability. We used the dimensions of 5 commonly used implant systems with a clearance of 20 μm to calculate the extent of rotational freedom; a 3D simulation (SolidWorks) validated the analytical findings. Polygonal positional indices showed the highest degrees of rotational freedom. The polygonal profile displayed higher positional stability than the polygons, but less positional accuracy than the cam-groove connection. Features of a maximal rotation-safe positional index were determined. The analytical calculation of rotational freedom of implant positional indices is possible. Rotational freedom is dependent on the geometric design of the index and may be decreased by incorporating specific aspects into the positional index design.

  4. Optimum strategies for nuclear energy system development (method of synthesis)

    International Nuclear Information System (INIS)

    Belenky, V.Z.

    1983-01-01

    The problem of optimum long-term development of the nuclear energy system is considered. The optimum strategies (i.e. minimum total uranium consumption) for the transition phase leading to a stationary regime of development are found. For this purpose the author has elaborated a new method of solving linear problems of optimal control which can include jumps in trajectories. The method makes it possible to perform a complete synthesis of optimum strategies. A key characteristic of the problem is the productivity function of the nuclear energy system, which connects technological system parameters with its growth rate. There are only two types of optimum strategies, according to an increasing or decreasing productivity function. Both cases are illustrated with numerical examples. (orig.)

  5. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems focusing on a microprocessor and a communication interface, and repeated this for analog systems focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of the KNICS

  6. Energy - achieving an optimum through information

    International Nuclear Information System (INIS)

    Gitt, W.

    1986-01-01

    What do computer programs have in common with everyday human behaviour? Or the passage of birds, or photosynthesis, or the chemical reactions in a cell? They all primarily are information-controlled processes. The book under review deals with 'information' and 'energy', two main concepts in today's technological world. 'Energy' during the last few years has become a significant criterion with regard to technological progress. 'Information' is not only a main term in informatics terminology, but also a central concept for example in biology, linguistics, and communication science. The author shows that every 'information' is the result of an intellectual and purposeful process. The concept of information is taken as the red thread leading the author's journey through manifold strata of modern life, asking questions, finding answers, discussing problems. The wide spectrum of aspects discussed, including for instance a new approach to the Bible, and the remarkable examples presented by the author, make this book a treasure of knowledge, and of faith. (orig./HP)

  7. Optimum penstocks for low head microhydro schemes

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, K.V. [Department of Mechanical Engineering, University of Canterbury, Christchurch, P.O. Box 4800, Christchurch (New Zealand); Giddens, E.P. [Formally of Department of Civil Engineering, University of Canterbury, 81 Grange Street, Opawa, Christchurch (New Zealand)

    2008-03-15

    This paper presents an analysis for penstock optimization, for low head microhydro schemes. The intent of the optimization is to minimize capital cost per kilowatt rather than maximize the energy from the site. It is shown that site slope is an important consideration that affects the economics. While this work stands alone it has been generated as part of a research program that is in the final stages of developing a modular set of cost-effective low-head microhydro schemes for site heads below those currently serviced by Pelton Wheels. The rationale for the work has been that there is a multitude of viable low-head sites in isolated areas where microhydro is a realistic energy option, and where conventional economics are not appropriate. This is especially the case in third world countries. The goals of this paper have been to illustrate the issues and show how to decide on the most cost-effective penstock solutions that systematically cover the 0.2-20 kW supply. The paper presents the results as a matrix of the most cost-effective penstocks, and in the larger project it matches them to a modular set of turbines. It shows how to find the relative cost-effectiveness of alternative penstocks, and concludes with examples illustrating the results. (author)
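
    The stated optimization target, capital cost per kilowatt rather than energy yield, can be illustrated with a toy calculation that is not taken from the paper: as pipe diameter shrinks, friction losses erode the delivered power, while larger pipes cost more, so cost per kW has an interior minimum. The sketch below uses the Darcy-Weisbach head loss with an assumed constant friction factor and an invented cost model; all numbers are placeholders.

        # Toy cost-per-kW curve for a penstock, minimised over pipe diameter.
        # Darcy-Weisbach with a constant friction factor and the pipe cost model
        # are assumptions; head, flow and prices are placeholders, not paper data.
        import math

        RHO, G = 1000.0, 9.81    # water density (kg/m^3), gravity (m/s^2)

        def cost_per_kw(d, head=10.0, length=80.0, q=0.05, f=0.02, eff=0.7):
            v = q / (math.pi * d * d / 4)                 # flow velocity in the pipe, m/s
            h_loss = f * (length / d) * v * v / (2 * G)   # Darcy-Weisbach head loss, m
            p_kw = eff * RHO * G * q * max(head - h_loss, 0.0) / 1000
            pipe_cost = 120.0 * length * d ** 1.5         # assumed installed cost model
            return pipe_cost / p_kw if p_kw > 0 else math.inf

        best = min((cost_per_kw(d_cm / 100.0), d_cm / 100.0) for d_cm in range(5, 41))
        print("cheapest $/kW at diameter ~%.2f m (%.0f $/kW)" % (best[1], best[0]))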

  8. ¹⁸F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

    Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

    Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using ¹⁸F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These patients underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in ...

  9. Dietary energy level for optimum productivity and carcass ...

    African Journals Online (AJOL)

    A quadratic equation was used to determine dietary energy levels for optimum feed intake, growth rate, FCR and ME intake at both the starter and grower phases and the carcass characteristics of the birds at 91 days. Dietary energy levels of 12.91, 12.42, 12.34 and 12.62 MJ ME/kg DM feed supported optimum feed intake, ...

  10. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Full Text Available Abstract Background Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective

  11. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small, requiring a total recording noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms, therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement on the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This ...
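
    The 1/√N figure in the record follows from averaging N amplifier channels whose input-referred noises are uncorrelated while the signal (and the source's own thermal noise) is common to all channels. The small model below reproduces the ideal factor and shows why a finite source resistance caps the real improvement, which is one reading of the "or less depending on the source resistance" qualifier; the numbers are illustrative, not the paper's.

        # Ideal vs. source-limited noise reduction from hardware averaging.
        # Illustrative numbers only; the record reports a 46.1% reduction at
        # N = 8, below the ideal 1/sqrt(8), and a common (non-averaging)
        # source-noise term of the kind modeled here is one reason why.
        import math

        def averaged_noise(n, amp_noise_uvrms, source_noise_uvrms):
            """Uncorrelated amplifier noise averages down; common source noise does not."""
            return math.hypot(amp_noise_uvrms / math.sqrt(n), source_noise_uvrms)

        for n in (1, 2, 4, 8):
            total = averaged_noise(n, amp_noise_uvrms=2.0, source_noise_uvrms=0.5)
            print(n, round(total, 3), "uVrms")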

  12. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small, requiring a total recording noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms, therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement on the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and

  13. Swarm behavioral sorting based on robotic hardware variation

    OpenAIRE

    Shang, Beining; Crowder, Richard; Zauner, Klaus-Peter

    2014-01-01

    Swarm robotic systems can offer advantages of robustness, flexibility and scalability, just like social insects. One of the issues that researchers are facing is the hardware variation when implementing real robotic swarms. Identical software cannot guarantee identical behaviors among all robots due to hardware differences between swarm members. We propose a novel approach for sorting swarm robots according to their hardware differences. This method is based on the large number of interaction...

  14. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest and the open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  15. Hardware/Software Co-design using Primitive Interface

    OpenAIRE

    Navin Chourasia; Puran Gaur

    2011-01-01

    Most engineering designs can be viewed as systems, i.e., as collections of several components whose combined operation provides useful services. Components can be heterogeneous in nature and their interaction may be regulated by some simple or complex means. The interface between hardware and software plays a very important role in co-design of the embedded system. Hardware/software co-design means meeting system-level objectives by exploiting the synergism of hardware and software through their co...

  16. Finding Optimum Focal Point Position with Neural Networks in CO2 Laser Welding

    DEFF Research Database (Denmark)

    Gong, Hui; Olsen, Flemming Ove

    1997-01-01

    CO2 lasers are increasingly being utilized for quality welding in production. Considering the high equipment cost, the start-up time and set-up time should be minimized. Ideally the parameters should be set up and optimized more or less automatically. In this article neural networks are designed to optimize the focal point position, one of the most critical parameters in laser welding. The feasibility of automatically optimizing the focal point position is analyzed. Preliminary tests demonstrate that neural networks can be used to optimize the focal point position with good accuracy in CW CO2 laser...

  17. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error masking characteristics and error (stemming from soft errors, aging, and process variations) mitigations potential at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  18. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    Full Text Available This work presents a new approach with details on the integrated platform and hardware architecture for nanorobots application in epidemic control, which should enable real time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices are advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high precision pervasive biomedical monitoring with real time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that brings nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to protect a population against a targeted epidemic disease.

  19. Live HDR video streaming on commodity hardware

    Science.gov (United States)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.

  20. 8-Channel Broadband Laser Ranging Hardware Development

    Science.gov (United States)

    Bennett, Corey; La Lone, Brandon; Younk, Patrick; Daykin, Ed; Rhodes, Michelle; Perry, Daniel; Tran, Vu; Miller, Edward

    2017-06-01

    Broadband Laser Ranging (BLR) is a new diagnostic being developed to precisely measure the position vs. time of surfaces, shock break out, particle clouds, jets, and debris moving at kilometers per second speeds. The instrument uses interferometry to encode distance into a modulation in the spectrum of pulses from a mode-locked fiber laser and uses a dispersive Fourier transformation to map the spectral modulation into time. Range information is thereby recorded on a fast oscilloscope at the repetition rate of the laser, approximately every 50 ns. Current R&D is focused on developing a compact 8-channel system utilizing one laser and one high-speed oscilloscope. This talk will emphasize the hardware being developed for applications at the Contained Firing Facility at LLNL, but has a common architecture being developed in collaboration with NSTec and LANL for applications at multiple other facilities. Prepared by LLNL under Contract DE-AC52-07NA27344, by LANL under Contract DE-AC52-06NA25396, and by NSTec Contract DE-AC52-06NA25946.

  1. Hardware image assessment for wireless endoscopy capsules.

    Science.gov (United States)

    Khorsandi, M A; Karimi, N; Samavi, S; Hajabdollahi, M; Soroushmehr, S M R; Ward, K; Najarian, K

    2016-08-01

    Wireless capsule endoscopy is a new technology in the realm of telemedicine that has many advantages over the traditional endoscopy systems. Transmitted images should help diagnosis of diseases of the gastrointestinal tract. Two important technical challenges for the manufacturers of these capsules are power consumption and size of the circuitry. Also, the system must be fast enough for real-time processing of image or video data. To solve this problem, many hardware designs have been proposed for implementation of the image processing unit. In this paper we propose an architecture that could be used for the assessment of endoscopy images. The assessment allows avoidance of transmission of medically useless images. Hence, volume of data is reduced for more efficient transmission of images by the endoscopy capsule. This is done by color space conversion and moment calculation of images captured by the capsule. The inputs of the proposed architecture are RGB image frames and the outputs are images with converted colors and calculated image moments. Experimental results indicate that the proposed architecture has low complexity and is appropriate for a real-time application.
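
    The two operations the architecture implements, colour-space conversion and image moment calculation, are easy to state in software terms. The reference sketch below uses an assumed BT.601 luma conversion and the standard raw-moment definition m_pq = Σx Σy x^p y^q I(x,y); these are common conventions, not necessarily the exact pipeline of the paper.

        # Software reference for the two operations: colour conversion + raw moments.
        # BT.601 luma weights and the raw-moment definition are assumed conventions.
        import numpy as np

        def luma(rgb):                       # rgb: (H, W, 3) float array
            return rgb @ np.array([0.299, 0.587, 0.114])

        def raw_moment(img, p, q):
            h, w = img.shape
            y, x = np.mgrid[0:h, 0:w]        # row (y) and column (x) index grids
            return float(((x ** p) * (y ** q) * img).sum())

        frame = np.random.default_rng(2).random((64, 64, 3))   # stand-in capsule frame
        g = luma(frame)
        m00, m10, m01 = raw_moment(g, 0, 0), raw_moment(g, 1, 0), raw_moment(g, 0, 1)
        print("centroid:", m10 / m00, m01 / m00)               # a simple moment-based feature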

  2. A Hardware Track Finder for ATLAS Trigger

    CERN Document Server

    Volpi, G; The ATLAS collaboration; Andreazza, A; Citterio, M; Favareto, A; Liberali, V; Meroni, C; Riva, M; Sabatini, F; Stabile, A; Annovi, A; Beretta, M; Castegnaro, A; Bevacqua, V; Crescioli, F; Francesco, C; Dell'Orso, M; Giannetti, P; Magalotti, D; Piendibene, M; Roda, C; Sacco, I; Tripiccione, R; Fabbri, L; Franchini, M; Giorgi, F; Giannuzzi, F; Lasagni, F; Sbarra, C; Valentinetti, S; Villa, M; Zoccoli, A; Lanza, A; Negri, A; Vercesi, V; Bogdan, M; Boveia, A; Canelli, F; Cheng, Y; Dunford, M; Li, H L; Kapliy, A; Kim, Y K; Melachrinos, C; Shochet, M; Tang, F; Tang, J; Tuggle, J; Tompkins, L; Webster, J; Atkinson, M; Cavaliere, V; Chang, P; Kasten, M; McCarn, A; Neubauer, M; Hoff, J; Liu, T; Okumura, Y; Olsen, J; Penning, B; Todri, A; Wu, J; Drake, G; Proudfoot, J; Zhang, J; Blair, R; Anderson, J; Auerbach, B; Blazey, G; Kimura, N; Yorita, K; Sakurai, Y; Mitani, T; Iizawa, T

    2012-01-01

    The existing three level ATLAS trigger system is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to ~400 Hz for permanent storage at the LHC design luminosity of 10^34 cm^-2 s^-1. When the LHC reaches beyond the design luminosity, the load on the Level-2 trigger system will significantly increase due to both the need for more sophisticated algorithms to suppress background and the larger event sizes. The Fast TracKer (FTK) is a custom electronics system that will operate at the full Level-1 accepted rate of 100 kHz and provide high quality tracks at the beginning of processing in the Level-2 trigger, by performing track reconstruction in hardware with massive parallelism of associative memories and FPGAs. The performance in important physics areas including b-tagging, tau-tagging and lepton isolation will be demonstrated with the ATLAS MC simulation at different LHC luminosities. The system design will be overviewed. The latest R&D progress of individual components...
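
    The core mechanism FTK relies on, associative-memory pattern matching against precomputed coarse hit patterns ("roads"), can be caricatured in a few lines. In the hardware every stored pattern is compared to the incoming hits simultaneously; the sequential sketch below only emulates that, and the patterns, hit encoding and majority threshold are invented for illustration.

        # Toy emulation of associative-memory road finding (sequential stand-in
        # for what the AM chips do in parallel). Patterns/hits are invented.
        ROADS = {
            "road_a": (3, 7, 12, 18),     # coarse detector-layer bins for a track shape
            "road_b": (3, 8, 14, 21),
            "road_c": (5, 9, 12, 19),
        }

        def matched_roads(hits_per_layer, max_missing=1):
            """hits_per_layer: list of sets of fired bins, one set per detector layer."""
            out = []
            for name, pattern in ROADS.items():
                missing = sum(b not in layer for b, layer in zip(pattern, hits_per_layer))
                if missing <= max_missing:   # majority logic tolerates a missing hit
                    out.append(name)
            return out

        event = [{3, 5}, {7, 9}, {12}, {18, 19}]
        print(matched_roads(event))          # ['road_a', 'road_c']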

  3. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0⟩ and |1⟩ are the ground and first excited spin states S_z = S and S_z = S−1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0⟩, and antisymmetric, |1⟩, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0⟩ and |1⟩. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)
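
    For context, the physics behind the two cases is usually discussed in terms of the standard giant-spin anisotropy Hamiltonian; the form below is supplied as background and is not quoted from the record.

        % Standard giant-spin anisotropy Hamiltonian (background, not quoted
        % from the record): easy-axis term, transverse anisotropy, Zeeman term.
        \mathcal{H} \;=\; -D S_z^{2} \;+\; E\,\bigl(S_x^{2} - S_y^{2}\bigr) \;-\; g \mu_B\, \mathbf{B}\cdot\mathbf{S}, \qquad D > 0.
        % Case (1): qubit levels S_z = S, S-1, split by the FMR gap.
        % Case (2): a transverse field tunes the tunnel splitting \Delta between
        % the symmetric/antisymmetric combinations of S_z = \pm S.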

  4. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre-scale magnetic particles of large spin S and high-anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S_z = S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  5. Mechanics of Granular Materials labeled hardware

    Science.gov (United States)

    2000-01-01

    Mechanics of Granular Materials (MGM) flight hardware takes two twin double locker assemblies in the Space Shuttle middeck or the Spacehab module. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes, or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: NASA/MSFC).

  6. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  7. Optimum feeding rate of solid hazardous waste in a cement kiln burner

    OpenAIRE

    Ariyaratne, W. K. Hiromi; Melaaen, Morten Christian; Tokheim, Lars-André

    2013-01-01

    Solid hazardous waste mixed with wood chips (SHW) is a partly CO2 neutral fuel, and hence is a good candidate for substituting fossil fuels like pulverized coal in rotary kiln burners used in cement kiln systems. SHW is used in several cement plants, but the optimum substitution rate has apparently not yet been fully investigated. The present study aims to find the maximum possible replacement of coal by SHW, without negatively affecting the product quality, emissions and overall operation of...

  8. New hardware and software design for electrical impedance tomography

    Science.gov (United States)

    Goharian, Mehran

    find a regularization parameter. Our results show that the TRS algorithm has the advantage that it does not require any knowledge of the norm of the noise for its process. (4) The second part of the thesis discusses the design, implementation, and testing of a novel 48-channel multi-frequency EIT system. The system specifications proved to be comparable with those of existing EIT systems, with the capability of 3-D measurement over selectable frequencies. The proposed algorithms are finally tested under experimental conditions using the designed EIT hardware. The conductivity and permittivity images for different targets were reconstructed using four different approaches: dog-leg, principal component analysis (PCA), Gauss-Newton, and difference imaging. In the case of the multi-frequency analysis, the PCA-based approach provided a substantial improvement over the Gauss-Newton technique in terms of systematic error reduction. Our EIT system recovered a conductivity value of 0.08 S/m for the 0.07 S/m piece of cucumber (14% error).

  9. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    Full Text Available In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key is entered at the right time, and hence the vulnerability of the encryption to brute-force attack is eliminated. Presently available encryption systems suffer from brute-force attack, in which case the time taken for breaking a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute-force attack more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. Also, key-based function selection from a pool of functions enhances the confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes brute-force attack nearly impossible. In the proposed scheme, the key scheduler is implemented on an FPGA that generates the right key at the right time intervals; it is connected to a NIOS-II processor (a soft processor core realized on the Altera FPGA) that communicates the keys to a personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
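
    A minimal software model of the time-based key idea described above is sketched below: the key is rotated continuously, and a four-bit value drawn from it encodes the delay the scheduler must wait before releasing the key. The key width, the nibble position and the rotation step are illustrative assumptions; the paper realizes this logic in an FPGA key scheduler.

        def rotate_left(key, bits, width=128):
            """Circularly rotate a width-bit integer key left by `bits`."""
            bits %= width
            mask = (1 << width) - 1
            return ((key << bits) | (key >> (width - bits))) & mask

        def next_delay(key, width=128):
            """Draw four bits from the key (here: the top nibble, an assumption)
            and interpret their concatenated value as a wait count (0-15)."""
            return (key >> (width - 4)) & 0xF

        key = 0x0123456789ABCDEF0123456789ABCDEF
        for step in range(4):
            delay = next_delay(key)
            # a real scheduler would wait `delay` ticks before releasing the key
            print(f"step {step}: wait {delay} ticks, key {key:032x}")
            key = rotate_left(key, 1)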

  10. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
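
    The pacing mechanism can be modeled in a few lines: a token counter tracks the bytes injected into the network by remote-get replies and defers injection when the budget is exhausted. The class and method names below are hypothetical; the claim describes the mechanism only in general terms.

        class PacingDMA:
            """Toy model of hardware packet pacing: a token counter bounds
            the number of in-flight bytes produced by remote get operations."""

            def __init__(self, max_inflight_bytes):
                self.tokens = max_inflight_bytes   # hardware token counter

            def try_send(self, packet_bytes):
                if self.tokens >= packet_bytes:    # enough headroom on the network
                    self.tokens -= packet_bytes
                    return True
                return False                       # pace: defer this injection

            def on_drained(self, packet_bytes):
                self.tokens += packet_bytes        # bytes have left the network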

  11. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-to-digital conversion card, developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  12. Parametric Investigation of Optimum Thermal Insulation Thickness for External Walls

    Directory of Open Access Journals (Sweden)

    Omer Kaynakli

    2011-06-01

    Full Text Available Numerous studies have estimated the optimum thickness of thermal insulation materials used in building walls for different climate conditions. The economic parameters (inflation rate, discount rate, lifetime and energy costs), the heating/cooling loads of the building, the wall structure and the properties of the insulation material all affect the optimum insulation thickness. This study investigated these parameters and their effect on the optimum thermal insulation thickness for building walls. To determine the optimum thickness and payback period, an economic model based on life-cycle cost analysis was used. As a result, the optimum thermal insulation thickness increased with increasing heating and cooling energy requirements, lifetime of the building, inflation rate, energy costs and thermal conductivity of the insulation. However, the thickness decreased with increasing discount rate, insulation material cost, total wall resistance, coefficient of performance (COP) of the cooling system and solar radiation incident on the wall. In addition, the effects of these parameters on the total life-cycle cost, payback periods and energy savings were also investigated.
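
    As a numerical illustration of such a life-cycle cost model, the sketch below scans candidate thicknesses and picks the one minimizing discounted energy cost plus insulation cost. All parameter values are assumed for illustration and are not taken from the study.

        import numpy as np

        hdd = 3000.0      # heating degree-days per year (K*day), assumed
        eta = 0.9         # heating system efficiency, assumed
        c_f = 0.06        # energy price ($/kWh), assumed
        pwf = 10.0        # present-worth factor from inflation/discount rates
        c_ins = 120.0     # insulation cost ($/m^3), assumed
        k_ins = 0.035     # insulation conductivity (W/m*K)
        r_wall = 0.8      # resistance of the uninsulated wall (m^2*K/W)

        x = np.linspace(0.0, 0.30, 301)  # candidate thicknesses (m)
        annual_kwh = 24.0 * hdd / (1000.0 * (r_wall + x / k_ins) * eta)
        lcc = pwf * c_f * annual_kwh + c_ins * x   # life-cycle cost per m^2 of wall
        print(f"optimum thickness ~ {x[np.argmin(lcc)] * 100:.1f} cm")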

  13. A Practical Introduction to HardwareSoftware Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  14. Optimum Arrangement of Reactive Power Sources While Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    A. M. Gashimov

    2010-01-01

    Full Text Available Reduction of total losses in the distribution network is considered an important measure for improving the efficiency of electric power supply systems. This objective can be achieved by optimum placement of reactive power sources at proper places in the distribution network. The proposed methodology is based on the application of a genetic algorithm. Total expenses for the installation of capacitor banks and their operation, as well as expenses related to electric power losses, are considered as the efficiency function used for determining the places and optimum ratings of capacitor banks. The methodology is most efficient for selecting optimum places in the network where capacitor banks should be installed, with due account of their power control depending on the switched-on load value at the nodes.
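
    A toy version of such a genetic algorithm is sketched below: each chromosome holds the capacitor rating chosen at each node, and the fitness combines installation cost with the cost of energy losses. The loss model is a deliberately crude placeholder for the load-flow calculation a real study would run; all numeric values are assumed.

        import random

        BUSES, SIZES = 6, [0, 150, 300, 450]   # candidate kvar ratings, assumed
        COST_KVAR, COST_LOSS = 5.0, 168.0      # $/kvar installed, $/kW of losses

        def losses(kvar):
            # Placeholder for a power-flow loss evaluation: losses fall as the
            # reactive demand at each node is compensated.
            demand = [200, 120, 260, 90, 180, 150]
            return sum(0.001 * (d - q) ** 2 for d, q in zip(demand, kvar))

        def cost(kvar):
            return COST_LOSS * losses(kvar) + COST_KVAR * sum(kvar)

        def ga(pop_size=40, gens=200, pm=0.1):
            pop = [[random.choice(SIZES) for _ in range(BUSES)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=cost)
                elite = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, BUSES)
                    child = a[:cut] + b[cut:]                 # one-point crossover
                    if random.random() < pm:                  # mutation
                        child[random.randrange(BUSES)] = random.choice(SIZES)
                    children.append(child)
                pop = elite + children
            return min(pop, key=cost)

        best = ga()
        print(best, round(cost(best), 1))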

  15. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: from requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The sources of hardware requirements are the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data are compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  16. Generic Advertising Optimum Budget for Iran’s Milk Industry

    Directory of Open Access Journals (Sweden)

    H. Shahbazi

    2016-05-01

    Full Text Available Introduction: One of the main targets of planners, decision makers and governments is increasing society's health by promoting the production of suitable and healthy food. One of the basic commodities that plays an important role in meeting human food requirements is milk, so part of the government and producer health budget is allocated to promoting milk consumption through generic advertising. The more effective the advertising budget is on profitability, the more willing producers will be to spend on advertising. Determining the optimal generic advertising budget is an important managerial decision problem in producing firms, since it increases consumption and profit and decreases the waste and sub-optimality of the budget. Materials and Methods: In this study, the optimal generic advertising budget intensity index (the advertising budget share of production cost) was estimated under two different scenarios by using an equilibrium replacement model, in which producer surplus is maximized with respect to generic advertising at the retail level. For a market where the two levels of farm and processing precede retail and there is trade at the farm and retail levels, we present different models; the fixed and variable proportion hypotheses are a further distinction. Finally, eight relations are presented for determining the optimum generic advertising budget for milk. We use data from several sources, such as previous studies, national (Iran Statistics Center) and international (FAO) institutions, and our own estimation. Because there are several estimations in previous studies, we identify some scenarios (within two general scenarios) for calculating the optimum milk generic advertising budget. Results and Discussion: Estimation of the optimum milk generic advertising budget in scenario 1 shows that in the case of one market level, fixed supplies and no trade, the optimum budget is 0.4672539 percent. In the case of one market level and no trade, the optimum

  17. Hydrofoils: optimum lift-off speed for sailboats.

    Science.gov (United States)

    Baker, R M

    1968-12-13

    For a hydrofoil sailboat there is a unique optimum lift-off speed. Before this speed is reached, if there are no parasitic vertical hydrofoil appendages, the submerged or partially submerged hydrofoils increase drag and degrade performance. As soon as this speed is reached and the hydrofoils are fully and promptly deployed, the performance of a hydrofoil-borne craft is significantly improved. At speeds exceeding optimum lift-off speed, partially submerged hydrofoils impair performance if there is no significant effect of loading on the hydrofoil lift-to-drag ratio.

  18. Determination of Optimum Contact Time and pH for Methylene Blue Adsorption Using Rice Husk Ash

    Directory of Open Access Journals (Sweden)

    Anung Riapanitra

    2006-11-01

    Full Text Available Dyes are widely used for colouring in textile industries; significant losses occur during manufacture and processing of the product, and these lost chemicals are discharged in the surrounding effluent. Adsorption of dyes is an effective technology for the treatment of wastewater contaminated by different types of dyes. In this research, we investigated the potential of rice husk ash for removal of the methylene blue dyeing agent in an aqueous system. The aim of this research was to find the optimum contact time and pH for the adsorption of methylene blue using rice husk ash. Batch kinetics studies were carried out under varying experimental conditions of contact time and pH. Adsorption equilibrium was reached within 10 minutes, and the optimum condition for adsorption was at pH 3. The adsorption of methylene blue decreased with decreasing solution pH.

  19. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
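
    The preprocessor/JIT pattern mentioned above can be sketched as follows: one OpenCL kernel source carries both a scalar and an explicit float4 path, and the host chooses between them at JIT compile time through -D build options. The host side is shown in Python; the device-selection rule is an assumption for illustration, and with pyopencl the source would be built via cl.Program(ctx, src).build(options).

        KERNEL_SRC = r"""
        #ifndef VEC              /* vector width chosen per platform */
        #define VEC 1
        #endif
        __kernel void saxpy(const float a,
                            __global const float *x,
                            __global float *y) {
            size_t i = get_global_id(0);
        #if VEC == 4
            /* explicit vector path for devices with wide SIMD units;
               the kernel is then launched over N/4 work-items */
            float4 xv = vload4(i, x);
            float4 yv = vload4(i, y);
            vstore4(a * xv + yv, i, y);
        #else
            y[i] = a * x[i] + y[i];  /* scalar path; dead branch removed at build */
        #endif
        }
        """

        def build_options(prefers_wide_vectors):
            # Static decisions moved out of device code, per the practices above.
            return ["-DVEC=4" if prefers_wide_vectors else "-DVEC=1"]

        print(build_options(prefers_wide_vectors=True))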

  20. Modeling Budget Optimum Allocation of Khorasan Razavi Province Agriculture Sector

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad Fahimifard

    2016-09-01

    Full Text Available Introduction: Stock shortage is one of the development impasses in developing countries, and through it the agriculture sector has faced the most limitations. The share of Iran's agricultural sector in total investments after the Islamic revolution (1979) has been just 5.5 percent. This causes low efficiency in Iran's agriculture sector. For instance, per cubic meter of water used in Iran's agriculture sector, less than 1 kilogram of dry food is produced, and each Iranian farmer achieves less annual income and has less mechanization in comparison with similar countries in Iran's 1404 perspective document. Therefore, increasing investment in the agriculture sector and optimizing the budget allocation for this sector is mandatory; however, it has not been adequately and scientifically revised until now. Thus, in this research the optimum budget allocation of the agriculture sector of Iran's Khorasan Razavi province was modeled. Materials and Methods: The optimum budget allocation between the province's agriculture programs was first modeled by combining three indexes: 1. the priorities of the province's agriculture-sector experts, analyzed with the Analytical Hierarchy Process (AHP); 2. the average share of agriculture-sector programs in the 4th national development program for the province; and 3. the average share of agriculture-sector programs in the 5th national development program for the province. Then, using the Delphi technique, the potential indexes of each program were determined. After that, the determined potential indexes were weighted using the Analytical Hierarchy Process (AHP), and finally a numerical taxonomy model was used to optimally allocate each program's budget between cities under two scenarios. The required data were gathered from the budget and planning

  1. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9×9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
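
    A software model of the filter is sketched below, using the classic Gaussian spatial and range weights over a 9×9 window and then subtracting the smoothed image from the input. The FPGA design's exact weight function is not specified in the text, so the Gaussian choice is an assumption.

        import numpy as np

        def bilateral_subtract(img, radius=4, sigma_s=3.0, sigma_r=25.0):
            """Bilateral smoothing over a (2r+1)x(2r+1) window (9x9 for r=4),
            then subtraction of the smoothed image from the input."""
            h, w = img.shape
            pad = np.pad(img.astype(np.float64), radius, mode="edge")
            acc = np.zeros((h, w))
            norm = np.zeros((h, w))
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nbr = pad[radius + dy:radius + dy + h,
                              radius + dx:radius + dx + w]
                    w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    w_r = np.exp(-((nbr - img) ** 2) / (2 * sigma_r ** 2))
                    acc += w_s * w_r * nbr     # weights depend on pixel values
                    norm += w_s * w_r
            return img - acc / norm            # suppress the low-frequency part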

  2. FPGA Acceleration by Dynamically-Loaded Hardware Libraries

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Nannarelli, Alberto; Re, Marco

    Hardware acceleration is a viable solution for obtaining energy efficiency in data-intensive computation. In this work, we present a hardware framework to dynamically load hardware libraries, HLL, on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the specific processor on-the-fly in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up and energy efficiency can be obtained by HLL acceleration on system-on-chips where the reconfigurable fabric is placed next to the CPUs.

  3. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  4. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed, and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
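
    The stream-cipher principle can be sketched in a few lines. The thesis digitizes continuous chaotic systems and repairs the degraded dynamics with generalized post-processing; the discrete logistic map below is a simplified stand-in keystream generator and is not cryptographically secure.

        def logistic_keystream(n_bytes, x0, r=3.99):
            """Iterate the logistic map in its chaotic regime and quantize
            the state to bytes (a toy substitute for the thesis's PRNGs)."""
            x, out = x0, bytearray()
            for _ in range(n_bytes):
                x = r * x * (1.0 - x)
                out.append(int(x * 256) & 0xFF)  # crude state quantization
            return bytes(out)

        def xor_cipher(data, seed):
            ks = logistic_keystream(len(data), x0=seed)
            return bytes(d ^ k for d, k in zip(data, ks))

        msg = b"image block 0001"
        ct = xor_cipher(msg, 0.7391)
        assert xor_cipher(ct, 0.7391) == msg   # a stream cipher is its own inverse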

  5. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    Detection of optimum maturity of maize using image processing and artificial neural networks. ... The leaves of maize are also very good source of food for grazing livestock like cows, goats, sheep, etc. However, in Nigeria ... of maturity. Keywords: Maize, Maturity, CCD Camera, Image Processing, Artificial Neural Network ...

  6. Determination of optimum schools fees in Nigerian private schools ...

    African Journals Online (AJOL)

    There is the need to understand how private schools in Nigeria survive since government does not give out grants or subvention to them. Their major source of revenue is school fees. In this study, we derive a formula to determine the optimum school fees to be charged in private schools. The conceptual background of the ...

  7. Determination of optimum welding parameters in connecting high ...

    Indian Academy of Sciences (India)

    In this study, different welding parameters were applied to two different high-alloy steels, and mechanical and metallographic investigations were performed. Thus, the optimum welding parameters were determined for these materials and working conditions. 12.30 diameter steel bars made up of 1.4871 ...

  8. Assessment of sustainable yield and optimum fishing effort for the ...

    African Journals Online (AJOL)

    The tilapia (Oreochromis niloticus, L. 1758) stock of Lake Hawassa, Ethiopia, was assessed to estimate sustainable yield (MSY) and optimum fishing effort (fopt) using length-based analytical models (Jone's cohort analysis and Thompson and Bell). Pertinent data (length, weight, catch, effort, etc.) were collected on a daily ...

  9. Bud initiation and optimum harvest date in Brussels sprouts

    NARCIS (Netherlands)

    Everaarts, A.P.; Sukkel, W.

    1999-01-01

    For six cultivars of Brussels sprouts (Brassica oleracea var. gemmifera) with a decreasing degree of earliness, or optimum harvest date, the time of bud initiation was determined during two seasons. Fifty percent of the plants had initiated buds between 60 and 75 days after planting (DAP) in 1994

  10. Determining the optimum cell size of digital elevation model for ...

    Indian Academy of Sciences (India)

    Journal of Earth System Science, Volume 120, Issue 4. Determining the optimum cell size of digital elevation model for hydrologic ...

  11. Optimum conditions for cotton nitrate reductase extraction and ...

    African Journals Online (AJOL)

    mM of glutamine in the extraction buffer stimulates significantly, in vitro, the reduction of nitrate. Enzyme activity is moreover optimal when 1 M of exogenous nitrate, as substrate, is added to the reaction medium. At these optimum conditions of nitrate reductase activity determination, the substrate was completely reduced ...

  12. Optimum position for wells producing at constant wellbore pressure

    Energy Technology Data Exchange (ETDEWEB)

    Camacho-Velazquez, R.; Rodriguez de la Garza, F. [Univ. Nacional Autonoma de Mexico, Mexico City (Mexico); Galindo-Nava, A. [Inst. Mexicanos del Petroleo, Mexico City (Mexico)]|[Univ. Nacional de Mexico, Mexico City (Mexico); Prats, M.

    1994-12-31

    This paper deals with the determination of the optimum position of several wells, producing at constant but different wellbore pressures from a two-dimensional closed-boundary reservoir, to maximize the cumulative production or the total flow rate. To achieve this objective, the authors use an improved version of the analytical solution recently proposed by Rodriguez and Cinco-Ley and an optimization algorithm based on a quasi-Newton procedure with line search. At each iteration the algorithm approximates the negative of the objective function by a quadratic relation derived from a Taylor series. The improvement of Rodriguez and Cinco's solution is attained in four ways. First, an approximation is obtained which works better at earlier times (before the boundary-dominated period starts) than the previous solution. Second, the infinite sums that are present in the solution are expressed in a condensed form, which is relevant for reducing the computer time when the optimization algorithm is used. Third, the solution is modified to take into account the possibility of having wells that start to produce at different times; this allows the problem of finding the optimum positions for an infill drilling program to be addressed. Last, the solution is extended to include the possibility of changing the value of the wellbore pressure or stimulating any of the wells at any time. When the wells are producing at different wellbore pressures, it is found that the optimum position is a function of time; otherwise the optimum position is fixed.
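
    The optimization side of the approach can be illustrated with a quasi-Newton search over well coordinates. The objective below is a toy stand-in for the paper's analytical reservoir solution: it rewards separation between wells and penalizes proximity to the closed boundary.

        import numpy as np
        from scipy.optimize import minimize

        def neg_production(flat_xy, n_wells=3):
            """Negative of a surrogate total-rate objective (to be minimized)."""
            xy = flat_xy.reshape(n_wells, 2)
            rate = -4.0 * np.sum((xy - 0.5) ** 2)      # closed-boundary penalty
            for i in range(n_wells):
                for j in range(i + 1, n_wells):
                    d2 = np.sum((xy[i] - xy[j]) ** 2)
                    rate += np.log(1e-6 + d2)          # well-to-well interference
            return -rate

        x0 = np.random.default_rng(1).uniform(0.2, 0.8, size=6)
        res = minimize(neg_production, x0, method="BFGS")  # quasi-Newton search
        print(res.x.reshape(3, 2))                         # optimized well positions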

  13. Optimum culture medium composition for lipopeptide production by ...

    Indian Academy of Sciences (India)

    Optimum culture medium composition for lipopeptide production by Bacillus subtilis using response surface model-based ant colony optimization. J SATYA ESWARI1, M ANAND2,∗ and C VENKATESWARLU1. 1Chemical Engineering Sciences Division, Indian Institute of Chemical Technology,. Hyderabad 500007, India.

  14. An optimum linear receiver for multiple channel digital transmission systems

    NARCIS (Netherlands)

    van Etten, Wim

    2007-01-01

    An optimum linear receiver for multiple channel digital transmission systems is developed for the minimum error-probability criterion and for the zero-forcing criterion. A multidimensional Nyquist criterion is defined together with a theorem on the optimality of a finite-length multiple tapped delay line. Furthermore an

  15. Determination of Optimum Number of Trunk Lines for Corporate ...

    African Journals Online (AJOL)

    The problem of determining the optimum number of telecommunication trunk lines (subscribers' common equipment) for a given average traffic intensity contributes, among other things, to the problem encountered in providing cost-effective and high-quality telecommunication services to corporate network users.

  16. Optimum position of isolators within erbium-doped fibers

    DEFF Research Database (Denmark)

    Lumholt, Ole; Schüsler, Kim; Bjarklev, Anders Overgaard

    1992-01-01

    An isolator is used as an amplified spontaneous emission suppressing component within an erbium-doped fiber. The optimum isolator placement is both experimentally and theoretically determined and found to be slightly dependent upon pump power. Improvements of 4 dB in gain and 2 dB in noise figure...

  17. Optimum workforce-size model using dynamic programming approach

    African Journals Online (AJOL)

    This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...

  18. An Application of Calculus: Optimum Parabolic Path Problem

    Science.gov (United States)

    Atasever, Merve; Pakdemirli, Mehmet; Yurtsever, Hasan Ali

    2009-01-01

    A practical and technological application of calculus problem is posed to motivate freshman students or junior high school students. A variable coefficient of friction is used in modelling air friction. The case in which the coefficient of friction is a decreasing function of altitude is considered. The optimum parabolic path for a flying object…

  19. A Semi Analytical Solution for the Optimum Insulation Thickness Problem

    International Nuclear Information System (INIS)

    Abdullah, A.M.; Mina, A.R.

    1995-01-01

    The problem of optimizing the thickness of insulation installed on large hot vessels has been solved using a semi-analytical method. Unlike previous studies, the derived mathematical expressions for the optimum thickness and the total cost of burned fuel and insulating material have been formulated in a general form which facilitates their application to any fuel and insulation characteristics and any lifetime of the system. Moreover, the system analysis took into consideration the normally expected annual increase in fuel price. Also, an expression for the net saving in fuel cost due to the installation of insulation has been derived. The results showed that the required optimum insulation thickness increases with the lifetime of the vessel and the fuel price. Based on the results it is recommended to: (i) estimate the virtual lifetime of a vessel in order to calculate the corresponding correct optimum thickness of insulation, and (ii) install insulation somewhat thicker (say 10%) than the optimum one to compensate both for the expected annual increase in fuel price and for the natural deterioration in the thermal and mechanical characteristics of the insulation material. 4 figs

  20. Optimum workforce-size model using dynamic programming approach

    African Journals Online (AJOL)

    This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...

  1. Optimum commodity taxation with a non-renewable resource

    DEFF Research Database (Denmark)

    Daubanes, Julien Xavier; Lasserre, Pierre

    2017-01-01

    We examine optimum commodity taxation (OCT), including the taxation of non-renewable resources (NRRs), by a government that needs to rely on commodity taxes to raise revenues. NRRs should be taxed at higher rates than otherwise-identical conventional commodities, according to an augmented, dynamic...

  2. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Fibre-Reinforced Plastics (FRPs) have been suggested as suitable reinforcement for concrete structures among other solutions to combat corrosion problems in steel reinforced concrete. This paper presents the experimental validation of optimum resistance moment of concrete slabs reinforced with Carbon-Fibre ...

  3. Experimental validation of optimum resistance moment of concrete ...

    African Journals Online (AJOL)

    Fibre-Reinforced Plastics (FRPs) have been suggested as suitable reinforcement for concrete structures among other solutions to combat corrosion problems in steel reinforced concrete. This paper presents the experimental validation of optimum resistance moment of concrete slabs reinforced with Carbon-Fibre ...

  4. Optimum Resolution in X-Ray Energy-Dispersive Diffractometry

    DEFF Research Database (Denmark)

    Buras, B.; Niimura, N.; Staun Olsen, J.

    1978-01-01

    The resolution problem in X-ray energy-dispersive diffractometry is discussed. It is shown that for a given characteristic of the solid-state detector system and a given range of interplanar spacings, an optimum scattering angle can be easily found for any divergence of the incident and scattered...

  5. Applied orthogonal experiment design for the optimum microwave ...

    African Journals Online (AJOL)

    An experiment on polysaccharides from Rhodiolae Radix (PRR) extraction was carried out using the microwave-assisted extraction (MAE) method with the objective of establishing the optimum MAE conditions for PRR. Single-factor experiments were performed to determine the appropriate range of extraction conditions, and the ...

  6. Optimum dietary protein requirement of genetically male tilapia ...

    African Journals Online (AJOL)

    The study was conducted to investigate the optimum dietary protein level needed for growing genetically male tilapia, Oreochromis niloticus. Diets containing crude protein levels of 40, 42.5, 45, 47.5 and 50% were formulated and tried in triplicate. Test diets were fed to 20 fish per 1 m3 floating hapa at 5% of fish body weight daily ...

  7. On optimum dispatch of electric power generation via numerical ...

    African Journals Online (AJOL)

    In this work we develop an optimum dispatch/generation strategy by presenting the most economical load-flow configuration for supplying the load demand among the generators. The main aim is to minimize the total production/generation costs, with minimum losses, while at the same time satisfying the load flow equation without ...

  8. How do stem defects affect the capability of the optimum bucking method?

    Directory of Open Access Journals (Sweden)

    Abdullah Emin Akay

    2015-07-01

    Full Text Available In forest harvesting activities, the computer-assisted optimum bucking method increases the economic value of harvested trees. The bucking decision depends strongly on the log quality grades, which mainly vary with surface characteristics such as stem defects and the form of the stems. In this study, the effect of stem defects on the optimum bucking method was investigated by comparing bucking applications conducted during logging operations in two different Brutian pine (Pinus brutia Ten.) stands. In the applications, the first stand contained stems with relatively more defects than the stems in the second stand. The average number of defects per log for sample trees in the first and second stands was 3.64 and 2.70, respectively. The results indicated that the optimum bucking method increased the average economic value of harvested trees by 15.45% and 8.26% in the two stands, respectively. Therefore, the computer-assisted optimum bucking method potentially provides better results than the traditional bucking method, especially for harvested trees with more stem defects.

  9. Optimum length of finned pipe for waste heat recovery

    International Nuclear Information System (INIS)

    Soeylemez, M.S.

    2008-01-01

    A thermoeconomic feasibility analysis is presented yielding a simple algebraic optimization formula for estimating the optimum length of a finned pipe that is used for waste heat recovery. A simple economic optimization method is used in the present study by combining it with an integrated overall heat balance method based on fin effectiveness for calculating the maximum savings from a waste heat recovery system

  10. The effects of physical and chemical changes on the optimum ...

    African Journals Online (AJOL)

    The aim of this study was to determine the physical and chemical changes during fruit development and their relationship with optimum harvest maturity for the Bacon, Fuerte and Zutano avocado cultivars grown under Dörtyol ecological conditions. Fruits of cv. Bacon, Fuerte and Zutano were obtained from trees grafted on seedlings and ...

  11. Applicability Problem in Optimum Reinforced Concrete Structures Design

    Directory of Open Access Journals (Sweden)

    Ashara Assedeq

    2016-01-01

    Full Text Available Optimum reinforced concrete structures design is a very complex problem, not only considering the exactness of the calculus but also because of the questionable applicability of existing methods in practice. This paper presents the main theoretical, mathematical and physical features of the problem formulation, as well as a review and analysis of existing methods and solutions considering their exactness and applicability.

  12. Optimum design of Nd-doped fiber optical amplifiers

    DEFF Research Database (Denmark)

    Rasmussen, Thomas; Bjarklev, Anders Overgaard; Lumholt, Ole

    1992-01-01

    The waveguide parameters for a Nd-doped fluoride (Nd:ZBLANP) fiber amplifier have been optimized for small-signal and booster operation using an accurate numerical model. The optimum cutoff wavelength is shown to be 800 nm and the numerical aperture should be made as large as possible. Around 80...

  13. Procedure for determining the optimum rate of increasing shaft depth

    Energy Technology Data Exchange (ETDEWEB)

    Durov, E.M.

    1983-03-01

    Presented is an economic analysis of increasing shaft depth during mine modernization. Investigations carried out by the Yuzhgiproshakht Institute are analyzed. The investigations are aimed at determining the optimum shaft sinking rate (the rate which reduces investment to the minimum). The following factors are considered: coal output of a mine (0.9, 1.2, 1.5 and 1.8 Mt/year), depth at which the new mining level is situated (600, 800, 1200, 1400 and 1600 m), four schemes of increasing depth of 2 central shafts (rock hoisting to ground surface, rock hoisting to the existing level, rock haulage to the developed level, rock haulage to the level being developed using a large diameter borehole drilled from the new level to the shaft bottom and enlarged from shaft bottom to the new level), shaft sinking rate (10, 20, 30, 40, 50 and 60 m/month), range of increasing shaft depth (the difference between depth of the shaft before and after increasing its depth by 100, 200, 300 and 400 m). Comparative evaluations show that the optimum shaft sinking rate depends on the scheme for rock hoisting (one of 4 analyzed), range of increasing shaft depth and gas content in coal seams. The optimum shaft sinking rate ranges from 20 to 40 m/month in coal mines with low methane content and from 20 to 30 m/month in gassy coal mines. The planned coal output of a mine does not influence the optimum shaft sinking rate.

  14. The Optimum Conditions of Foreign Languages in Primary Education

    Science.gov (United States)

    Giannikas, Christina Nicole

    2014-01-01

    The aim of the paper is to review the primary language learning situation in Europe and shed light on the benefits it carries. Early language learning is the biggest policy development in education and has developed at rapid speed over the past 30 years; this article considers the effects and advantages of the optimum condition of an early start,…

  15. What Scientific Applications can Benefit from Hardware Transactional Memory?

    Energy Technology Data Exchange (ETDEWEB)

    Schindewolf, M; Bihari, B; Gyllenhaal, J; Schulz, M; Wang, A; Karl, W

    2012-06-04

    Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march towards extreme-scale computing, will be even more challenging when the resulting application is supposed to utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but it has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence its real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype system of BG/Q and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases an optimized TM version, executed with 64 threads on one node, outperforms a simple TM implementation. MCB with optimized TM yields a speedup of 27.45 over baseline.

  16. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track-finding at 100 kHz and based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGA) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  17. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data taking run the LHC is expected to run starting in 2015 with much higher instantaneous luminosities and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful, Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  18. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  19. Optimum gas turbine cycle for combined cycle power plant

    International Nuclear Information System (INIS)

    Polyzakis, A.L.; Koroneos, C.; Xydis, G.

    2008-01-01

    The gas turbine based power plant is characterized by its relatively low capital cost compared with the steam power plant. It has environmental advantages and a short construction lead time. However, conventional industrial engines have lower efficiencies, especially at part load. One of the technologies adopted nowadays for efficiency improvement is the 'combined cycle'. The combined cycle technology is now well established and offers superior efficiency to any of the competing gas turbine based systems that are likely to be available in the medium term for large-scale power generation applications. The objective of this paper is the optimization of a combined cycle power plant by describing and comparing four different gas turbine cycles: simple cycle, intercooled cycle, reheated cycle, and intercooled and reheated cycle. The proposed combined cycle plant would produce 300 MW of power (200 MW from the gas turbine and 100 MW from the steam turbine). The results showed that the reheated gas turbine is the most desirable overall, mainly because of its high turbine exhaust gas temperature and the resulting high thermal efficiency of the bottoming steam cycle. The optimal gas turbine (GT) cycle will lead to a more efficient combined cycle power plant (CCPP), and this will result in great savings. The initial approach adopted is to investigate independently the four theoretically possible configurations of the gas plant. On the basis of combining these with a single-pressure Rankine cycle, the optimum gas scheme is found. Once the gas turbine is selected, the next step is to investigate the impact of the steam cycle design and parameters on the overall performance of the plant, in order to choose the combined cycle offering the best fit with the objectives of the work as depicted above. Each alternative cycle was studied, aiming to find the best option from the standpoint of overall efficiency, installation and operational costs, maintainability and reliability for a combined power

  20. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers over a gigabit Ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  1. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  2. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  3. Towards hardware-intrinsic security foundations and practice

    CERN Document Server

    Sadeghi, Ahmad-Reza; Tuyls, Pim

    2010-01-01

    Hardware-intrinsic security is a young field dealing with secure secret key storage. This book features contributions from researchers and practitioners with backgrounds in physics, mathematics, cryptography, coding theory and processor theory.

  4. International Space Station (ISS) Addition of Hardware - Computer Generated Art

    Science.gov (United States)

    1995-01-01

    This computer-generated scene of the International Space Station (ISS) represents the first addition of hardware following the completion of Phase II. The 8-A phase shows the addition of the S-9 truss.

  5. Preventive Safety Measures: A Guide to Security Hardware.

    Science.gov (United States)

    Gottwalt, T. J.

    2003-01-01

    Emphasizes the importance of an annual security review of a school facility's door hardware and provides a description of the different types of locking devices typically used on schools and where they are best applied. (EV)

  6. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
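
    The binding scheme lends itself to a software illustration. The sketch below is a conceptual model only: real PUF values come from device physics, and the XOR-and-hash combination and the HMAC challenge-response used here are illustrative assumptions, not the patented circuit.

        import hashlib
        import hmac
        import os

        def binding_puf(internal_puf: bytes, external_puf: bytes) -> bytes:
            # "Binding logic": mix the device PUF with the structure PUF.
            mixed = bytes(a ^ b for a, b in zip(internal_puf, external_puf))
            return hashlib.sha256(mixed).digest()

        def respond(binding_value: bytes, challenge: bytes) -> bytes:
            # "Cryptographic unit": a keyed response proves knowledge of the binding.
            return hmac.new(binding_value, challenge, hashlib.sha256).digest()

        # Random stand-ins for measured PUF responses.
        internal, external = os.urandom(32), os.urandom(32)
        key = binding_puf(internal, external)
        challenge = os.urandom(16)
        assert respond(key, challenge) == respond(binding_puf(internal, external), challenge)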

  7. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  8. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The performance of computer graphics systems is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons have been studied for a long time and many research papers have been published. In spite of the latest graphical hardware developments and significant increases in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed, and a hardware implementation of the line clipping algorithm is presented, formulated, and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
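
    The positional code generator mentioned in the abstract corresponds to the outcode stage of classic line clipping. As a software illustration of what such a unit computes, the sketch below implements Cohen-Sutherland-style outcodes with trivial accept/reject tests; the window bounds are arbitrary, and the paper's exact algorithm may differ.

        # One bit per clipping-window edge.
        LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

        def outcode(x, y, xmin, ymin, xmax, ymax):
            code = 0
            if x < xmin: code |= LEFT
            elif x > xmax: code |= RIGHT
            if y < ymin: code |= BOTTOM
            elif y > ymax: code |= TOP
            return code

        def classify_segment(p0, p1, window=(0, 0, 100, 100)):
            c0 = outcode(*p0, *window)
            c1 = outcode(*p1, *window)
            if c0 == 0 and c1 == 0:
                return "trivially accepted"   # both endpoints inside
            if c0 & c1:
                return "trivially rejected"   # both outside the same edge
            return "needs clipping"           # must intersect window edges

        print(classify_segment((10, 10), (50, 50)))   # trivially accepted
        print(classify_segment((-5, -5), (-1, 200)))  # trivially rejected (both left)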

  9. Testing the LIGO inspiral analysis with hardware injections

    International Nuclear Information System (INIS)

    Brown, D A

    2004-01-01

    Injection of simulated binary inspiral signals into detector hardware provides an excellent test of the inspiral detection pipeline. By recovering the physical parameters of an injected signal, we test our understanding of both instrumental calibration and the data analysis pipeline. We describe an inspiral search code and results from hardware injection tests and demonstrate that injected signals can be recovered by the data analysis pipeline. The parameters of the recovered signals match those of the injected signals

  10. Fifty Years of Observing Hardware and Human Behavior

    Science.gov (United States)

    McMann, Joe

    2011-01-01

    During this half-day workshop, Joe McMann presented the lessons learned during his 50 years of experience in both industry and government, which included all U.S. manned space programs, from Mercury to the ISS. He shared his thoughts about hardware and people and what he has learned from first-hand experience. Included were such topics as design, testing, design changes, development, failures, crew expectations, hardware, requirements, and meetings.

  11. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  12. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology', with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa; 8.7.1 Fast Pulsed Systems (Kickers); 8.7.2 Electrostatic and Magnetic Septa.

  13. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science, and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be run in an automation mode, it is easy to forget the hardware of the NMR spectrometer, yet it is worth understanding the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer that is fully equipped with digital technology. (author)

  14. A Survey on Hardware Implementations of Visual Object Trackers

    OpenAIRE

    El-Shafie, Al-Hussein A.; Habib, S. E. D.

    2017-01-01

    Visual object tracking is an active topic in the computer vision domain with applications extending over numerous fields. The main sub-tasks required to build an object tracker (e.g. object detection, feature extraction and object tracking) are computation-intensive. In addition, real-time operation of the tracker is indispensable for almost all of its applications. Therefore, complete hardware or hardware/software co-design approaches are pursued for better tracker implementations. This pape...

  15. Finding Trapped Miners by Using a Prototype Seismic Recording System Made from Music-Recording Hardware

    Science.gov (United States)

    Pratt, Thomas L.

    2009-01-01

    The goal of this project was to use off-the-shelf music recording equipment to build and test a prototype seismic system to listen for people trapped in underground chambers (mines, caves, collapsed buildings). Previous workers found that an array of geophones is effective in locating trapped miners; displaying the data graphically, as well as playing it back into an audio device (headphones) at high speeds, was found to be effective for locating underground tapping. The desired system should record the data digitally to allow for further analysis, be capable of displaying the data graphically, allow for rudimentary analysis (bandpass filter, deconvolution), and allow the user to listen to the data at varying speeds. Although existing seismic reflection systems are adequate to record, display and analyze the data, they are relatively expensive and difficult to use and do not have an audio playback option. This makes it difficult for individual mines to have a system waiting on the shelf for an emergency. In contrast, music recording systems, like the one I used to construct the prototype system, can be purchased for about 20 percent of the cost of a seismic reflection system and are designed to be much easier to use. The prototype system makes use of an ~$3,000, 16-channel music recording system made by Presonus, Inc., of Baton Rouge, Louisiana. Other manufacturers make competitive systems that would serve equally well. Connecting the geophones to the recording system required the only custom part of this system - a connector that takes the output from the geophone cable and breaks it into 16 microphone inputs to be connected to the music recording system. The connector took about 1 day of technician time to build, using about $300 in off-the-shelf parts. Comparisons of the music recording system and a standard seismic reflection system (A 24-channel 'Geode' system manufactured by Geometrics, Inc., of San Jose, California) were carried out at two locations. Initial recordings of small hammer taps were carried out in a small field in Seattle, Washington; more elaborate tests were carried out at the San Juan Coal Mine in San Juan, New Mexico, in which miners underground were signaling. The comparisons demonstrate that the recordings made by the two systems are nearly identical, indicating that either system adequately records the data from the geophones. In either system the data can quickly be converted to a format (Society of Exploration Geophysicists 'Y' format; 'SEGY') to allow for filtering and other signal processing. With a modest software development effort, it is clear that either system could produce equivalent data products (SEGY data and audio data) within a few minutes of finishing the recording. The two systems both have significant advantages and drawbacks. With the seismograph, the tapping was distinctly visible when it occurred during a time window that was displayed. I have not identified or developed software for converting the resulting data to sound recordings that can be heard, but this limitation could be overcome with a trivial software development effort. The main drawbacks to the seismograph are that it does not allow for real-time listening, it is expensive to purchase, and it contains many features that are not utilized for this application. 
The music recording system is simple to use (it is designed for a general user, rather than a trained technician), allows for listening during recording, and has the advantage of using inexpensive, off-the-shelf components. It also allows for quick (within minutes) playback of the audio data at varying speeds. The data display by the software in the prototype system, however, is clearly inferior to the display on the seismograph. The music system also has the drawback of substantially oversampling the data by a factor of 24 (48,000 samples per second versus 2,000 samples per second) because the user interface only allows limited subsampling. This latte
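
    Given the factor-of-24 oversampling noted above, an obvious post-processing step is to decimate the music-recorder data to the seismograph rate; playing the decimated trace through a sound device at a higher sample rate then gives the sped-up audio the author describes. A sketch, assuming one channel has already been loaded as a NumPy array (a synthetic tone stands in for real geophone data):

        import numpy as np
        from scipy.signal import decimate

        fs_music = 48000   # music-recorder sampling rate, samples per second
        fs_seis = 2000     # typical seismograph rate

        # Stand-in for one geophone channel: a 30 Hz "tap-like" tone.
        t = np.arange(0, 10, 1 / fs_music)
        trace = np.sin(2 * np.pi * 30 * t)

        # Factor-24 decimation in two stages (scipy recommends factors <= 13);
        # decimate() applies an anti-aliasing filter before downsampling.
        down = decimate(decimate(trace, 6), 4)
        print(len(trace), len(down))   # 480000 -> 20000 samples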

  16. Tax Efficiency vs. Tax Equity – Points of View regarding Tax Optimum

    Directory of Open Access Journals (Sweden)

    Stela Aurelia Toader

    2011-10-01

    Full Text Available Objectives. Starting from the idea that tax equity requirements, administration costs and the tendency towards tax evasion determine the design of tax systems, it is important to identify a satisfactory efficiency/equity trade-off in order to build a tax system as close to optimum requirements as possible. Prior Work. Previous studies showed that an optimum tax system is one through which a level of tax revenues satisfying budgetary demands is collected while only a minimum ‘amount’ of welfare is lost. To what degree does the Romanian tax system meet these requirements? Approach. We analyze the possibilities of improving the Romanian tax system so as to come nearest to optimum requirements. Results. We conclude that the fiscal system can sustain important improvements as far as tax equity is concerned, raising the degree of voluntary compliance in tax payment and, implicitly, the degree of tax efficiency. Implications. Knowing to what extent the satisfactory efficiency/equity trade-off can be approached makes it possible to identify the blueprint of a tax system in which the loss of welfare is kept to a minimum. Value. For the Romanian institutions empowered to impose taxes, knowledge of the possibilities of making the tax system more efficient can be important when aiming to reduce the level of the evasion phenomenon.

  17. Technoeconomical Assessment of Optimum Design for Photovoltaic Water Pumping System for Rural Area in Oman

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2015-01-01

    Full Text Available Photovoltaic (PV) systems have long been used globally to supply electricity for water pumping systems for irrigation. System costs have dropped over time as PV technology, efficiency, and design methodology have improved, and the cost per watt fell dramatically in the last decade. In the present paper an optimum PV system design for water pumping is proposed for Oman. Intuitive and numerical methods were used to design the system. HOMER software was used as the numerical method to arrive at an optimum design for Oman; REPS.OM software was also used to find the optimum design based on hourly meteorological data. The daily solar energy in Sohar was found to be 6.182 kWh/m²·day, and the system annual yield factor is 2024.66 kWh/kWp. Furthermore, the capacity factor was found to be 23.05%, which is promising. The cost of energy and the system capital cost have been compared with those of a diesel generator and systems in the literature. The comparison shows that the cost of energy is 0.180, 0.309, and 0.790 USD/kWh for PV-REPS.OM, PV-HOMER, and diesel systems, respectively, which suggests that PV water pumping systems are promising in Oman.
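
    The quoted capacity factor follows directly from the annual yield factor: 2024.66 kWh per kWp spread over the 8760 hours of a year is roughly 23.1%, consistent with the reported 23.05%. A one-line check:

        yield_kwh_per_kwp = 2024.66                   # annual specific yield from the abstract
        capacity_factor = yield_kwh_per_kwp / 8760    # hours in a year
        print(f"{capacity_factor:.2%}")               # -> 23.11%, close to the quoted 23.05%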

  18. Annual Optimum Tilt Angle Prediction of Solar Collector using PSO Estimator

    Science.gov (United States)

    Dixit, T. V.; Yadav, Anamika

    2017-08-01

    The amount of solar flux falling on a solar collector depends on the tilt angle and orientation of the collector. By efficiently setting the tilt and orientation of the collector, unnecessary loss of potential power can be minimized. In general, for the northern hemisphere, facing the collector south is considered the optimum orientation. Several meteorological and geographical factors affect the optimum tilt angle. In this paper, a PSO estimator is proposed to find the optimum tilt angle on an annual basis. The results of the PSO estimator are compared with an ANN estimator and satellite (RETScreen software) data. To evaluate the performance of the proposed model, MBE, RMSE, error range, and percentage annual error were computed, along with a direct statistical comparison. For annual tilt angle prediction, the annual percentage errors of the proposed method and the RETScreen software data are 0.03% and 7.03%, respectively, with respect to the ANN results. Finally, the average percentage error indicates that the proposed estimator gives better predictions than the satellite-based results for collecting maximum solar flux at the collector surface.
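
    To give a flavour of how a PSO estimator searches for the optimum tilt angle, here is a minimal sketch. The irradiance model is a deliberately crude stand-in that peaks when the tilt equals the (hypothetical) site latitude, whereas the paper fits measured meteorological data; the swarm parameters are textbook defaults.

        import math
        import random

        def annual_irradiance(tilt_deg, latitude_deg=21.2):   # latitude is assumed
            # Toy model: collection peaks when tilt is near the site latitude.
            return math.cos(math.radians(tilt_deg - latitude_deg))

        def pso(objective, lo=0.0, hi=90.0, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            pos = [random.uniform(lo, hi) for _ in range(n)]
            vel = [0.0] * n
            pbest = pos[:]
            gbest = max(pos, key=objective)
            for _ in range(iters):
                for i in range(n):
                    r1, r2 = random.random(), random.random()
                    vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                              + c2 * r2 * (gbest - pos[i]))
                    pos[i] = min(hi, max(lo, pos[i] + vel[i]))
                    if objective(pos[i]) > objective(pbest[i]):
                        pbest[i] = pos[i]
                    if objective(pos[i]) > objective(gbest):
                        gbest = pos[i]
            return gbest

        print(f"estimated optimum tilt: {pso(annual_irradiance):.1f} degrees")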

  19. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains like surveillance and smart environments. This paper presents the development of a multiple-camera setup with a joint view that observes moving persons at a site. It focuses on a geometry-based approach to establish correspondence among the different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis measures network latencies, that is, the time traversing the TCP/IP stack, for both the software and the hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to a factor of 100 compared to the software ORB.

  20. MSAP Hardware Verification: Testing Multi-Mission System Architecture Platform Hardware Using Simulation and Bench Test Equipment

    Science.gov (United States)

    Crossin, Kent R.

    2005-01-01

    The Multi-Mission System Architecture Platform (MSAP) project aims to develop a system of hardware and software that provides the core functionality needed by many JPL missions and can be tailored to accommodate mission-specific requirements. The MSAP flight hardware is being developed in the Verilog hardware description language, allowing developers to simulate their design before releasing it to a field programmable gate array (FPGA). FPGAs can be updated in a matter of minutes, drastically reducing the time and expense required to produce traditional application-specific integrated circuits. Bench test equipment connected to the FPGAs can then probe the hardware and run Tcl scripts against it. The Verilog and Tcl code can be reused or modified with each design. These steps are effective in confirming that the design operates according to specifications.

  1. RECENT APPROACHES IN THE OPTIMUM CURRENCY AREAS THEORY

    Directory of Open Access Journals (Sweden)

    AURA SOCOL

    2011-04-01

    Full Text Available This study deals with the endogenous character of the OCA criteria, starting from the idea that a higher conformity of business cycles will result in a better synchronisation of economic cycles and, thus, in coming closer to the qualities of an optimum currency area. While the classical theory is focused on a static approach to the problem, the new theories assert that these conditions are dynamic and can be positively affected by the very establishment of the Economic and Monetary Union. The consequences are far-reaching, as the endogenous approach shows that a monetary union can be achieved even if all the conditions mentioned in Mundell's optimum currency areas theory are not met, some of them possibly being met subsequent to unification. Thus, a country joining a monetary union, although it does not meet the criteria for an optimum currency area, will ex post see an increase in its degree of integration and business cycle correlation.

  2. Determining optimum aging time using novel core flooding equipment

    DEFF Research Database (Denmark)

    Ahkami, Mehrdad; Chakravarty, Krishna Hara; Xiarchos, Ioannis

    2016-01-01

    New methods for enhanced oil recovery are typically developed using core flooding techniques. Establishing reservoir conditions is essential before the experimental campaign commences. The realistic oil-rock wettability can be obtained through optimum aging of the core. Aging time is affected by temperature, crude oil, formation brine, and coreplug lithology. Minimum time can significantly reduce the experimental cost while insufficient aging time can result in false conclusions. Real-time online resistivity measurements of coreplugs are presented and a novel method is introduced for determining the optimum aging time regardless of variations in crude oil, rock, and brine properties. State of the art core flooding equipment has been developed that can be used for consistently determining the resistivity of the coreplug during aging and waterflooding using advanced data acquisition software...
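
    The core idea, declaring aging complete once the online resistivity reading stabilises, can be sketched as a simple plateau test; the window length, tolerance, and synthetic readings below are illustrative, not the authors' criteria.

        def aging_complete(resistivity, window=24, rel_tol=0.01):
            """True once the last `window` readings vary by less than rel_tol."""
            if len(resistivity) < window:
                return False
            recent = resistivity[-window:]
            return (max(recent) - min(recent)) / max(recent) < rel_tol

        # Synthetic hourly readings that drift upward and then flatten out.
        readings = [10 + 5 * (1 - 0.9 ** h) for h in range(200)]
        for hour in range(len(readings)):
            if aging_complete(readings[: hour + 1]):
                print(f"optimum aging time reached after ~{hour} hours")
                break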

  3. Determination of optimum pressurizer level for kori unit 1

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong Soo; Lee, Chang Sup; Lee, Jae Yong; Kim, Yo Han; Lee, Dong Hyuk [Korea Electric Power Research Institute, Taejon (Korea, Republic of)]

    1997-12-31

    To determine the optimum pressurizer water level during normal operation for Kori unit 1, performance and safety analyses are performed. The methodology is developed by evaluating 'decrease in secondary heat removal' events such as the Loss of Normal Feedwater accident. To validate the optimum pressurizer level setpoint, the RETRAN-03 code is used for performance analysis. RETRAN results following reactor trip are compared with actual plant data to justify the RETRAN code modelling. The results of the performance and safety analyses show that the newly established level setpoints not only improve the performance of the pressurizer during transients, including reactor trip, but also meet the design bases for pressurizer volume and pressure. 6 refs., 5 figs. (Author)

  4. Parallel operation of NH3 screw compressors - the optimum way

    Science.gov (United States)

    Pijnenburg, B.; Ritmann, J.

    2015-08-01

    Using several smaller industrial NH3 screw compressors operating in parallel appears to be the optimum way to achieve maximum part-load efficiency, increased redundancy, and other features in high demand in the industrial refrigeration industry today. Parallel operation can be arranged to secure continuous operation and, in most applications, can be configured to lower overall operating costs. New compressors are developed to meet requirements for operational flexibility and are controlled intelligently. The intelligent control system keeps track of all external demands while always striving for the lowest possible absorbed power, including in future scenarios with connection to a smart grid.

  5. Determination of the Optimum Ozone Product on the Plasma Ozonizer

    International Nuclear Information System (INIS)

    Agus Purwadi; Widdi Usada; Suryadi; Isyuniarto; Sri Sukmajaya

    2002-01-01

    An experiment to determine the optimum ozone production of a cylindrical plasma ozonizer has been carried out. The experiment used an alternating high-voltage power supply, a CS-1577A oscilloscope, a flow meter, and a spectronik-20 instrument for measuring the absorbance of the solution samples, with the discharge alternating high voltage and the input oxygen gas flow rate as the varied physical parameters. The plasma ozonizer is made of a stainless steel cylinder as the electrode and a glass cylinder as the dielectric, with a 1.00 mm discharge gap and a 7.225 mm³ discharge tube volume. The results show that the optimum ozone production is 0.360 mg/s, obtained at a discharge alternating high voltage of 25.50 kV, a frequency of 1.00 kHz, and an oxygen input rate of 1.00 lpm. (author)

  6. Narratives of Optimum Currency Area theory and Eurozone Governance

    DEFF Research Database (Denmark)

    Snaith, Holly Grace

    2014-01-01

    Optimum Currency Area theory (OCA) is a body of research that has, since its inception in 1961, been highly influential for the discourse and design of Economic and Monetary Union, exercising a significant hermeneutical force. Nonetheless, there has been little acknowledgement that OCA is the sub... suppressed, capitalising upon the fundamental uncertainty in the theory itself. The final part of the paper goes on to consider the financial crisis, and how OCA theory might aid policy-makers’ attempts to induce ex-post convergence, demonstrating the continued relevance of the theory.

  7. Application of customer-interruption costs for optimum distribution planning

    International Nuclear Information System (INIS)

    Mok, Y.L.; Chung, T.S.

    1996-01-01

    We present a new methodology for obtaining optimum values of the integrated cost of utility investment and customer interruption in distribution planning for electric power systems, by determining the reliability cost and reliability worth of the distribution system. Reliability cost refers to the investment cost incurred by the utility in achieving a defined level of reliability. Reliability worth is the benefit gained by the utility customer from an increase in reliability. A computer program has been developed to determine comparative reliability indices for a typical distribution network. With the average interruption cost, outage duration, average disconnected load, cost data for distribution equipment, etc. known, the relation between reliability cost, reliability worth and reliability at the specified load point is obtained. The optimum reliability of the distribution system is then determined from the minimum combined cost of utility investment and customer interruption. The applicability of this approach is demonstrated on several practical networks. (Author)
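
    The optimisation described amounts to minimising the sum of utility investment cost and customer interruption cost over reliability. A schematic illustration with invented cost curves:

        import numpy as np

        reliability = np.linspace(0.90, 0.9999, 500)
        utility_cost = 50 / (1 - reliability)          # investment grows as outages shrink
        interruption_cost = 2e5 * (1 - reliability)    # customer losses shrink instead
        total = utility_cost + interruption_cost

        i = int(np.argmin(total))
        print(f"optimum reliability ~ {reliability[i]:.4f}")   # minimum of total cost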

  8. Optimum Conditions for Uricase Enzyme Production by Gliomastix gueg

    Directory of Open Access Journals (Sweden)

    Atalla, M. M.

    2009-01-01

    Full Text Available Nineteen strains of microorganisms were screened for uricase production. Gliomastix gueg was recognized to produce high levels of the enzyme, and the optimum fermentation conditions for uricase production by Gliomastix gueg were examined. Results showed that uric acid medium was the most favorable one; the optimum temperature was 30ºC; and the incubation period required for maximum production was 8 days, with aeration at 150 rpm and pH 8.0. Sucrose proved to be the best carbon source and uric acid the best nitrogen source. Dipotassium hydrogen phosphate and ferrous chloride, as well as some vitamins, gave the highest uricase yields from Gliomastix gueg.

  9. Managerial implications of calculating optimum nurse staffing in medical units.

    Science.gov (United States)

    Bordoloi, S K; Weatherby, E J

    1999-01-01

    A critical managerial decision in health care organizations is the staffing decision. We offer a model to derive an optimum mix of different staff categories that minimizes total cost subject to constraints imposed by the patient acuity system and minimum staffing policies in a medical unit of Fairbanks Memorial Hospital, Alaska. We also indicate several managerial implications on how our results and their sensitivity analyses can be used effectively in decision making in a variety of categories.
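
    A staffing model of this kind is naturally a small linear program: minimise wage cost subject to acuity-driven coverage and minimum-staffing constraints. Below is a toy instance with invented coefficients (two staff categories, one coverage constraint); a real model would also enforce integer head counts.

        from scipy.optimize import linprog

        # Decision variables: x = [RNs, nursing aides] on a shift.
        cost = [40, 18]            # hypothetical hourly cost per category

        # Acuity: each RN covers 5 workload units, each aide 2; need >= 30 units.
        # linprog uses <= constraints, so negate both sides to express >=.
        A_ub = [[-5, -2]]
        b_ub = [-30]

        # Policy: at least 2 RNs and 1 aide on duty at all times.
        bounds = [(2, None), (1, None)]

        res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print(res.x, res.fun)      # optimum mix and its hourly cost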

  10. Deuterium–tritium catalytic reaction in fast ignition: Optimum ...

    Indian Academy of Sciences (India)

    of fuel parameters are: areal density ρR ≥ 5 g·cm⁻² and initial tritium fraction x ≤ 0.025. For the proton beam, the corresponding optimum interval values are proton average energy 3 ≤ Ep ≤ 10 MeV, pulse duration 5 ≤ tp ≤ 15 ps, and power 5 ≤ Wp ≤ 12 × 10²² keV·cm⁻³·ps⁻¹. It was proved that under the above ...

  11. Theoretical and numerical study of an optimum design algorithm

    International Nuclear Information System (INIS)

    Destuynder, Philippe.

    1976-08-01

    This work can be separated into two main parts. First, the behavior of the solution of an elliptic variational equation is analyzed when the domain is subjected to a small perturbation; the case of variational inequalities is also considered. Secondly, the previous results are used to derive an optimum design algorithm. This algorithm was suggested by the center method proposed by Huard. Numerical results show the superiority of the method over other optimization techniques. [fr]

  12. Optimum design of dual pressure heat recovery steam generator using non-dimensional parameters based on thermodynamic and thermoeconomic approaches

    International Nuclear Information System (INIS)

    Naemi, Sanaz; Saffar-Avval, Majid; Behboodi Kalhori, Sahand; Mansoori, Zohreh

    2013-01-01

    The thermodynamic and thermoeconomic analyses are investigated to achieve the optimum operating parameters of a dual pressure heat recovery steam generator (HRSG), coupled with a heavy duty gas turbine. In this regard, the thermodynamic objective function including the exergy waste and the exergy destruction, is defined in such a way to find the optimum pinch point, and consequently to minimize the objective function by using non-dimensional operating parameters. The results indicated that, the optimum pinch point from thermodynamic viewpoint is 2.5 °C and 2.1 °C for HRSGs with live steam at 75 bar and 90 bar respectively. Since thermodynamic analysis is not able to consider economic factors, another objective function including annualized installation cost and annual cost of irreversibilities is proposed. To find the irreversibility cost, electricity price and also fuel price are considered independently. The optimum pinch point from thermoeconomic viewpoint on basis of electricity price is 20.6 °C (75 bar) and 19.2 °C (90 bar), whereas according to the fuel price it is 25.4 °C and 23.7 °C. Finally, an extensive sensitivity analysis is performed to compare optimum pinch point for different electricity and fuel prices. -- Highlights: ► Presenting thermodynamic and thermoeconomic optimization of a heat recovery steam generator. ► Defining an objective function consists of exergy waste and exergy destruction. ► Defining an objective function including capital cost and cost of irreversibilities. ► Obtaining the optimized operating parameters of a dual pressure heat recovery boiler. ► Computing the optimum pinch point using non-dimensional operating parameters

  13. An integrated expert system for optimum in core fuel management

    International Nuclear Information System (INIS)

    Abd Elmoatty, Mona S.; Nagy, M.S.; Aly, Mohamed N.; Shaat, M.K.

    2011-01-01

    Highlights: → An integrated expert system constructed for optimum in core fuel management. → Brief discussion of the ESOIFM Package modules, inputs and outputs. → Package was applied on the DALAT Nuclear Research Reactor (0.5 MW). → The Package verification showed good agreement. - Abstract: An integrated expert system called Efficient and Safe Optimum In-core Fuel Management (ESOIFM Package) has been constructed to achieve an optimum in core fuel management and automate the process of data analysis. The Package combines the constructed mathematical models with the adopted artificial intelligence techniques. The paper gives a brief discussion of the ESOIFM Package modules, inputs and outputs. The Package was applied on the DALAT Nuclear Research Reactor (0.5 MW). Moreover, the data of DNRR have been used as a case study for testing and evaluation of ESOIFM Package. This paper shows the comparison between the ESOIFM Package burn-up results, the DNRR experimental burn-up data, and other DNRR Codes burn-up results. The results showed good agreement.

  14. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    Science.gov (United States)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  15. A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm

    Science.gov (United States)

    Rogers, James L.

    2000-01-01

    Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
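
    A minimal genetic-algorithm skeleton for the actuator-selection idea: each individual is a bit-mask over candidate locations, and the fitness rewards total moment authority while penalising actuator count. The effectiveness matrix and weights are invented, and the paper's multi-objective formulation (including coupling minimisation) is richer than this sketch.

        import random

        random.seed(1)
        N = 16   # candidate actuator locations (hypothetical)

        # Invented per-actuator contributions to pitch/roll/yaw moments.
        effect = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]

        def fitness(mask):
            moments = [sum(effect[i][axis] for i in range(N) if mask[i])
                       for axis in range(3)]
            return sum(abs(m) for m in moments) - 0.2 * sum(mask)

        def evolve(pop_size=40, gens=60, p_mut=0.05):
            pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N)           # one-point crossover
                    child = a[:cut] + b[cut:]
                    children.append([1 - g if random.random() < p_mut else g
                                     for g in child])
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print(best, fitness(best))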

  16. Optimum conditions for enzymatic degradation of some oilseed proteins

    Directory of Open Access Journals (Sweden)

    El-Zanaty, E. A.

    2002-09-01

    Full Text Available Soybean, sesame seed, and rice bran meal proteins were hydrolyzed with two enzymes, namely papain and bromelain. Experiments were carried out to elucidate the optimum conditions for each enzyme when acting on each substrate separately. Results revealed that the highest relative activities for papain were achieved with E/S of 0.06, 0.29, and 0.19 and pH of 7.2, 7.0, and 7.0 for soybean, sesame, and rice bran meal proteins, respectively. The optimum temperature for papain on all three substrates was 50 ºC. For bromelain, the optimum E/S values giving the highest relative activities were 0.067, 0.058, and 0.21 for soybean, sesame, and rice bran meal proteins, respectively; the optimum pH was 6.0 and the optimum temperature 45 ºC on all three substrates. A numerical correlation of enzymatic behaviour for the different substrates was calculated.

  17. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  18. Industrial hardware and software verification with ACL2.

    Science.gov (United States)

    Hunt, Warren A; Kaufmann, Matt; Moore, J Strother; Slobodova, Anna

    2017-10-13

    The ACL2 theorem prover has seen sustained industrial use since the mid-1990s. Companies that have used ACL2 regularly include AMD, Centaur Technology, IBM, Intel, Kestrel Institute, Motorola/Freescale, Oracle and Rockwell Collins. This paper introduces ACL2 and focuses on how and why ACL2 is used in industry. ACL2 is well-suited to its industrial application to numerous software and hardware systems, because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language ACL2 permits the coding of efficient and robust programs; as a prover ACL2 can be fully automatic but provides many features permitting domain-specific human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. Crucially, ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanized verification routine when appropriate and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much). This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).

  19. OS friendly microprocessor architecture: Hardware level computer security

    Science.gov (United States)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and have depended on the operating system for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance, secure microprocessor and OS system. Cyber security, information technology (IT), and SCADA control professionals may be interested in reviewing the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware: by extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.

  20. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10⁶ voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher-resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.

  1. GOSH! A roadmap for open-source science hardware

    CERN Document Server

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  2. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  3. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  4. DAQ Hardware and software development for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    In 2014, the Pixel Detector of the ATLAS experiment was extended by about 12 million pixels with the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented by employing newly designed read-out hardware, which supports the full detector bandwidth even for calibration. The hardware is supported by an embedded software stack running on the read-out boards. The same boards will be used to upgrade the read-out bandwidth for the two outermost layers of the ATLAS Pixel Barrel (54 million pixels). We present the IBL read-out hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  5. High-performance free-space optical modem hardware

    Science.gov (United States)

    Sluz, Joseph E.; Juarez, Juan C.; Bair, Chun-Huei; Oberc, Rachel L.; Venkat, Radha A.; Rollend, Derek; Young, David W.

    2012-06-01

    This paper describes key aspects of modem hardware designed to operate in free space optical (FSO) links of up to 200 km. The hardware serves as a bridge between 10 gigabit Ethernet client data systems and FSO terminals. The modem hardware alters the client data rate and format for optimal transmission and reception over the FSO link by applying forward error correction (FEC) processing and differential phase shift keying (DPSK) modulation. Optical automatic gain control (OAGC) is also used. Together these features provide sensitivities approaching -48 dBm with 60 dB of error-free dynamic range in the presence of turbulent optical conditions, which is needed to cope with large optical power fades.

  6. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware specifications of common sensors reveals, however, that other equally important culprits exist, such as the reception and processing energy. Hence, there is a need for a more complete hardware abstraction of a sensor node to reduce effectively the total energy consumption of the network by designing energy-efficient protocols that use such an abstraction, as well as mechanisms to optimize a communication protocol in terms of energy consumption. The problem is modeled for different feedback-based techniques, where sensors are connected to a base station, either directly or through relays. We show that for four example...
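
    The paper's point, that transmitted bits are not the whole energy story, can be made concrete with a simple per-node energy budget; the figures below are invented placeholders rather than datasheet values.

        # Per-operation energy costs in microjoules (hypothetical values).
        E_TX, E_RX, E_PROC = 0.6, 0.7, 0.1

        def node_energy(tx_bits, rx_bits, coding_ops):
            return E_TX * tx_bits + E_RX * rx_bits + E_PROC * coding_ops

        # Transmission-only model vs. a fuller hardware abstraction.
        naive = E_TX * 10_000
        full = node_energy(tx_bits=10_000, rx_bits=8_000, coding_ops=12_000)
        print(f"tx-only estimate: {naive:.0f} uJ, full estimate: {full:.0f} uJ")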

  7. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed yielding closed form expressions for both PGS and IGS based transmission schemes. HWD systems that employ IGS is proven to efficiently combat the self interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Eventually, non-ideal hardware transceivers degradation and IGS scheme acquired compensation are quantified through suitable numerical results.

  8. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and the circuit aspect of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.
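
    The thesis works with continuous chaotic systems and analyses their digital degradation; as a much-simplified stand-in, the sketch below derives a keystream from a discrete logistic map and XORs it with the payload. It illustrates the chaos-as-PRNG principle only, not the thesis design, and a bare logistic map like this is not cryptographically secure.

        def chaotic_keystream(n, x=0.6180339887, r=3.99):
            """Byte stream from a logistic map (illustrative only, not secure)."""
            out = bytearray()
            for _ in range(n):
                x = r * x * (1 - x)            # logistic map iteration
                out.append(int(x * 256) % 256)
            return bytes(out)

        def xor_cipher(data: bytes, key_x: float) -> bytes:
            ks = chaotic_keystream(len(data), x=key_x)
            return bytes(a ^ b for a, b in zip(data, ks))

        frame = b"example MPEG-2 payload bytes"
        enc = xor_cipher(frame, 0.3141592653)
        assert xor_cipher(enc, 0.3141592653) == frame   # same key decrypts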

  9. XOR-FREE Implementation of Convolutional Encoder for Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Gaurav Purohit

    2016-01-01

    Full Text Available This paper presents a novel XOR-FREE algorithm to implement the convolutional encoder using reconfigurable hardware. The approach completely removes the XOR processing of a chosen nonsystematic, feedforward generator polynomial of larger constraint length. The hardware (HW) implementation of the new architecture uses a Lookup Table (LUT) for storing the parity bits. The design implements architectural reconfigurability by modifying the generator polynomial of the same constraint length and code rate to reduce the design complexity. The proposed architecture reduces the dynamic power up to 30% and improves the hardware cost and propagation delay up to 20% and 32%, respectively. The performance of the proposed architecture is validated in MATLAB Simulink and tested on Zynq-7 series FPGA.
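
    The XOR-free idea, replacing runtime parity computation with a lookup table indexed by encoder state and input bit, can be prototyped in a few lines. The rate-1/2, constraint-length-3 generators (7, 5 in octal) are a standard textbook choice, not necessarily the paper's larger-constraint-length code.

        # Rate-1/2 convolutional encoder, constraint length K = 3, generators 7, 5.
        K = 3
        GENS = (0b111, 0b101)

        def parity(v):
            return bin(v).count("1") & 1

        # Precompute the LUT once: (state, input bit) -> pair of output bits.
        LUT = {}
        for state in range(1 << (K - 1)):
            for bit in (0, 1):
                reg = (bit << (K - 1)) | state
                LUT[(state, bit)] = tuple(parity(reg & g) for g in GENS)

        def encode(bits):
            state, out = 0, []
            for b in bits:
                out.extend(LUT[(state, b)])             # table lookup, no runtime XORs
                state = ((b << (K - 1)) | state) >> 1   # shift-register update
            return out

        print(encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]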

  10. Performance Analysis of the Reksa Dana Nusantara Fund of PT Bhakti Asset Management and Construction of a Theoretical Optimum Portfolio, January–April 2005

    Directory of Open Access Journals (Sweden)

    Tomy Gurtama S.

    2007-03-01

    Full Text Available The research objective is to find the theoretical optimum portfolio of the Reksa Dana Nusantara (RDN) fund of BAM, using data from January–April 2005 (60 workdays). The coefficient of variation (CV) is used to rank the stocks, which are then optimized with three methods: linear programming, trial and error, and the tableau method. The research found that RDN has a Sharpe Performance Index (SPI) of 1.15, whereas the market reaches 1.60. Linear programming cannot find an optimum solution for portfolios of 9 down to 5 stocks, but it finds the best portfolio of 3 stocks (TLKM, TKIM and CMNP) with an SPI of 2.19. Overall, the research concludes that RDN is still below market performance.
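
    The two scoring tools named in the abstract are easy to reproduce: the coefficient of variation for ranking stocks and a Sharpe-style index for performance. A sketch with invented daily returns (the study's actual inputs are the January–April 2005 prices); the risk-free rate is a hypothetical placeholder.

        import statistics

        def coefficient_of_variation(returns):
            return statistics.stdev(returns) / statistics.mean(returns)

        def sharpe_index(returns, risk_free=0.0003):   # assumed daily risk-free rate
            excess = [r - risk_free for r in returns]
            return statistics.mean(excess) / statistics.stdev(excess)

        # Invented daily returns for the three tickers named in the abstract.
        daily = {
            "TLKM": [0.012, 0.004, -0.002, 0.009, 0.006],
            "TKIM": [0.007, 0.010, 0.001, 0.004, 0.008],
            "CMNP": [0.020, -0.015, 0.025, -0.010, 0.018],
        }

        # Lower CV = more return per unit of dispersion = better rank.
        for name in sorted(daily, key=lambda k: coefficient_of_variation(daily[k])):
            r = daily[name]
            print(name, round(coefficient_of_variation(r), 2), round(sharpe_index(r), 2))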

  11. Alternative, Green Processes for the Precision Cleaning of Aerospace Hardware

    Science.gov (United States)

    Maloney, Phillip R.; Grandelli, Heather Eilenfield; Devor, Robert; Hintze, Paul E.; Loftin, Kathleen B.; Tomlin, Douglas J.

    2014-01-01

    Precision cleaning is necessary to ensure the proper functioning of aerospace hardware, particularly those systems that come in contact with liquid oxygen or hypergolic fuels. Components that have not been cleaned to the appropriate levels may experience problems ranging from impaired performance to catastrophic failure. Traditionally, this has been achieved using various halogenated solvents. However, as information on the toxicological and/or environmental impacts of each came to light, they were subsequently regulated out of use. The solvent currently used in Kennedy Space Center (KSC) precision cleaning operations is Vertrel MCA. Environmental sampling at KSC indicates that continued use of this or similar solvents may lead to high remediation costs that must be borne by the Program for years to come. In response to this problem, the Green Solvents Project seeks to develop state-of-the-art, green technologies designed to meet KSC's precision cleaning needs. Initially, 23 solvents were identified as potential replacements for the current Vertrel MCA-based process. Highly halogenated solvents were deliberately omitted, since historical precedents indicate that as the long-term consequences of these solvents become known, they will eventually be regulated out of practical use, often with significant financial burdens for the user. Three solvent-less cleaning processes (plasma, supercritical carbon dioxide, and carbon dioxide snow) were also chosen, since they produce essentially no waste stream. Next, experimental and analytical procedures were developed to compare the relative effectiveness of these solvents and technologies against the current KSC standard of Vertrel MCA. Individually numbered Swagelok fittings were used to represent the hardware in the cleaning process. First, the fittings were cleaned using Vertrel MCA in order to determine their true cleaned mass. Next, the fittings were dipped into stock solutions of five commonly encountered contaminants and were

  12. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  13. Mini-O, simple Omega receiver hardware for user education

    Science.gov (United States)

    Burhans, R. W.

    1976-01-01

    A problem with the Omega system is the lack of suitable low-cost hardware for the small user community. A collection of do-it-yourself circuit modules is under development, intended for use by educational institutions, small boat owners, aviation enthusiasts, and others who have some skill in fabricating their own electronic equipment. Applications of the hardware to time and frequency standards measurements, signal propagation monitoring, and navigation experiments are presented. A family of Mini-O systems has been constructed, varying from the simplest RF preamplifier and narrowband filter front-ends to sophisticated microcomputer interface adapters.

  15. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  16. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposal is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.

  17. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  18. Automated power distribution system hardware. [for space station power supplies

    Science.gov (United States)

    Anderson, Paul M.; Martin, James A.; Thomason, Cindy

    1989-01-01

    An automated power distribution system testbed for the space station common modules has been developed. It incorporates automated control and monitoring of a utility-type power system. Automated power system switchgear, control and sensor hardware requirements, hardware design, test results, and potential applications are discussed. The system is designed so that the automated control and monitoring of the power system is compatible with both a 208-V, 20-kHz single-phase AC system and a high-voltage (120 to 150 V) DC system.

  19. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  20. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers. Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  1. Hardware Design for a Smart Lock System for Home Automation

    OpenAIRE

    Javierre, Sergio

    2016-01-01

    Developing a system that can be controlled by a portable device and easily implemented on any door is the main goal of the Smart Lock System. Its purpose is to avoid the use of a hardware key; the new key is an Android app on the mobile device, which provides security to the user and to the specific area, since only authorized personnel are permitted access. The design of the embedded system and its implementation, focusing on the system hardware part, are t...

  2. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  3. Search for an optimum time response of spark counters

    International Nuclear Information System (INIS)

    Devismes, A.; Finck, Ch.; Kress, T.; Gobbi, A.; Eschke, J.; Herrmann, N.; Hildenbrand, K.D.; Koczon, P.; Petrovici, M.

    2002-01-01

    A spark counter of the type developed by Pestov has been tested with the aim of searching for an optimum time response function, varying the voltage, the content of noble and quencher gases, the pressure and the energy loss. Replacing the usual argon with neon improved the resolution and significantly reduced the tails of the time response function. It has been proven that a counter as long as 90 cm can deliver, using a neon gas mixture, a time resolution σ<60 ps with about 1% absolute tail and an efficiency of about 90%.

  4. Optimum filter-based discrimination of neutrons and gamma rays

    International Nuclear Information System (INIS)

    Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek

    2015-01-01

    An optimum filter-based method for discrimination of neutrons and gamma-rays in a mixed radiation field is presented. Existing filter-based implementations of discriminators require sample pulse responses in advance of the experiment run to build the filter coefficients, which makes them less practical. Our novel technique creates the coefficients during the experiment and improves their quality gradually. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using a stilbene scintillator, this approach is analyzed and its discrimination quality is measured. (authors)
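
    As a rough illustration of the filter-based discrimination idea (not the authors' implementation), the sketch below builds class templates as running means of assigned pulses, so the coefficients improve during the run, and classifies each new pulse by normalized correlation. The pulse shapes, seeding, and assignment rule are assumptions.

        import numpy as np

        def update_template(template, pulse, n):
            # Running mean of the pulses assigned to a class so far (n = count);
            # this is how the filter coefficients improve during the experiment.
            return template + (pulse - template) / (n + 1)

        def discriminate(pulse, t_neutron, t_gamma):
            # Correlate against each normalized class template and pick the best.
            def score(t):
                return np.dot(pulse, t / np.linalg.norm(t))
            return "neutron" if score(t_neutron) > score(t_gamma) else "gamma"

        # Usage: seed the two templates with a few labeled pulses, then call
        # discriminate() on each digitized pulse and update_template() on the
        # winning class to refine the coefficients online.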

  5. Optimum policies for a system with general imperfect maintenance

    International Nuclear Information System (INIS)

    Sheu, S.-H.; Lin, Y.-B.; Liao, G.-L.

    2006-01-01

    This study considers periodic preventive maintenance policies that maximize the availability of a repairable system with major repair at failure. Three types of preventive maintenance are performed, namely: imperfect preventive maintenance (IPM), perfect preventive maintenance (PPM) and failed preventive maintenance (FPM). The probability that preventive maintenance is perfect depends on the number of imperfect maintenance actions conducted since the previous renewal cycle, and the probability that preventive maintenance remains imperfect is non-increasing. The optimum preventive maintenance time that maximizes availability is derived. Various special cases are considered, and a numerical example is given.
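
    The abstract does not spell out its model, so the following is only a generic sketch of how such an availability optimum can be located numerically: an age-replacement-style model with assumed Weibull failures and fixed PM and repair durations, scanned over candidate PM intervals.

        import numpy as np

        beta, eta = 2.0, 1000.0        # assumed Weibull shape/scale (hours)
        t_pm, t_repair = 5.0, 50.0     # assumed downtime per PM / per failure

        def availability(T, n=10000):
            # Expected uptime per cycle = integral of the survival function.
            t = np.linspace(1e-6, T, n)
            R = np.exp(-(t / eta) ** beta)
            uptime = np.sum((R[1:] + R[:-1]) * np.diff(t)) / 2.0
            p_fail = 1.0 - np.exp(-(T / eta) ** beta)
            downtime = t_pm + p_fail * t_repair
            return uptime / (uptime + downtime)

        Ts = np.linspace(50, 2000, 200)
        A = [availability(T) for T in Ts]
        print("optimum PM interval ~", round(float(Ts[int(np.argmax(A))])), "h")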

  6. Optimum amount of an insurance sum in life insurance

    Directory of Open Access Journals (Sweden)

    Janez Balkovec

    2001-01-01

    Personal insurance represents one of the sources of personal social security as a category of personal property. How to get proper life insurance is a frequently asked question. When insuring material objects (car, house, ...), the problem usually does not lie in the amount of the insurance taken. With life insurance (abstract goods), such problems do occur. In this paper, we present a model that, according to the financial situation and the anticipated future, makes it possible to calculate the optimum insurance sum in life insurance.

  7. Is optimum and effective work done in administrative jurisdiction

    International Nuclear Information System (INIS)

    Hoecht, H.

    1980-01-01

    Is optimum and effective work done in administrative jurisdiction? The author describes the general situation prevailing in administrative jurisdiction, giving tables on the number of cases received per annum and the number of judges administering justice, along with figures on concluded and pending proceedings. He reports on districts of jurisdiction, personnel, court administration and the amount of work. The investigation into administrative jurisdiction shows accomplishments for 1978 which are not bad at all. Sporadic administrative shortcomings are to be identified and remedied. (HSCH)

  8. Optimum value of original events on the PEPT technique

    International Nuclear Information System (INIS)

    Sadremomtaz, Alireza; Taherparvar, Payvand

    2011-01-01

    Positron emission particle tracking (PEPT) has been used to track the motion of a single radioactively labeled tracer particle within a bed of similar particles. In this paper, the effect of the fraction of original events on the precision of the results in two experiments is reviewed. The results showed that the algorithm can no longer distinguish some corrupt trajectories; in addition, further iteration reduces the statistical significance of the sample without improving its quality. The results show that the optimum value of original events depends on the type of experiment.

  9. Modified loss coefficients in the determination of optimum generation scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Hazarika, D.; Bordoloi, P.K. (Assam Engineering Coll. (IN))

    1991-03-01

    A modified method has been evolved to form the loss coefficients of an electrical power system network by decoupling load and generation and thereby creating additional fictitious load buses. The system losses are then calculated and coordinated to arrive at an optimum scheduling of generation using the standard coordination equation. The method presented is superior to the ones currently available, in that it is applicable to a multimachine system with random variation of load and it accounts for limits in plant generations and line losses. The precise nature of the results and the economy in the cost of energy production obtained by this method are quantified and presented. (author).
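
    For readers unfamiliar with loss coefficients, the sketch below shows the classical coordination-equation dispatch that the paper refines; the quadratic cost data, B-matrix, and iteration step are hypothetical, and the paper's decoupling of load and generation is not reproduced here.

        import numpy as np

        a = np.array([0.008, 0.009, 0.007])   # cost model: C_i = a_i P_i^2 + b_i P_i
        b = np.array([7.0, 6.3, 6.8])
        B = np.array([[3e-5, 1e-5, 1e-5],
                      [1e-5, 4e-5, 1e-5],
                      [1e-5, 1e-5, 5e-5]])    # assumed loss coefficient matrix
        demand = 400.0                        # MW

        lam, P = 8.0, np.full(3, demand / 3)
        for _ in range(2000):
            # Coordination equation: dC_i/dP_i = lam * (1 - dP_loss/dP_i)
            P = np.clip((lam * (1 - 2 * B @ P) - b) / (2 * a), 0.0, None)
            lam += 1e-4 * (demand + P @ B @ P - P.sum())  # drive power balance
        print(P.round(1), "MW; losses:", round(float(P @ B @ P), 2), "MW")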

  10. Optimum discrimination problem and one solution to it

    Science.gov (United States)

    Caulfield, H. John

    1998-10-01

    Fourier optical pattern recognition has wonderful properties (high speed, space-invariant operation, low power consumption, target location) and one terrible property (it can work perfectly only for the rare, uninteresting case of two linearly separable categories). More powerful discrimination methods lack the other wonderful properties. I show here how to have it both ways at once. By using, in parallel, roughly as many properly trained Fourier filters as the Vapnik-Chervonenkis (VC) dimension and performing pixel-by-pixel thresholding on the output planes, we can assemble a net output plane which achieves provably optimum, predictable discrimination on any data sets.

  11. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Directory of Open Access Journals (Sweden)

    Marcello Benedetti

    2017-11-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  13. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    Science.gov (United States)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced by the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements; this was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to the processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, both with and without RFI. The environment permits quick evaluation of RFI mitigation algorithms and shows that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis detector is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis detectors is compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC
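
    A software model of the detection statistic itself is compact; the sketch below (illustrative block size and threshold, not the SMAP or ROACH implementation) computes the complex-signal kurtosis, which equals 2 for RFI-free circular Gaussian noise, and flags blocks that deviate from that value.

        import numpy as np

        def complex_kurtosis(z):
            # E|z|^4 / (E|z|^2)^2 for a zero-mean complex sample block.
            z = z - z.mean()
            return np.mean(np.abs(z) ** 4) / np.mean(np.abs(z) ** 2) ** 2

        rng = np.random.default_rng(0)
        n = 4096
        noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        cw = noise + np.exp(2j * np.pi * 0.1 * np.arange(n))   # CW interferer

        for name, x in [("noise only", noise), ("noise + CW", cw)]:
            k = complex_kurtosis(x)
            flag = "RFI" if abs(k - 2.0) > 0.1 else "clean"    # assumed threshold
            print(name, round(float(k), 3), flag)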

  14. Implementation of a Hardware-in-the-Loop System Using Scale Model Hardware for Hybrid Electric Vehicle Development

    OpenAIRE

    Janczak, John

    2007-01-01

    Hardware-in-the-loop (HIL) testing and simulation for components and control strategies can reduce both the time and cost of development. HIL testing focuses on one component or control system rather than the entire vehicle. The rest of the system is simulated by computers, which use real-time data acquisition to read outputs and respond as the systems in the actual vehicle would. The hardware for the system is on a scaled-down level to save both time and money during tes...

  15. Optimum distributed generation placement with voltage sag effect minimization

    International Nuclear Information System (INIS)

    Biswas, Soma; Goswami, Swapan Kumar; Chatterjee, Amitava

    2012-01-01

    Highlights: A new optimal distributed generation placement algorithm is proposed. The optimal number, sizes and locations of the DGs are determined. Technical factors like loss and the voltage sag problem are minimized. The percentage savings are optimized. Abstract: The present paper proposes a new formulation for the optimum distributed generator (DG) placement problem which considers a hybrid combination of technical factors, like minimization of the line loss, reduction in the voltage sag problem, etc., and economical factors, like installation and maintenance cost of the DGs. The new formulation proposed is inspired by the idea that the optimum placement of the DGs can help in reducing and mitigating voltage dips in low voltage distribution networks. The problem is configured as a multi-objective, constrained optimization problem, where the optimal number of DGs, along with their sizes and bus locations, are simultaneously obtained. This problem has been solved using a genetic algorithm, a traditionally popular stochastic optimization algorithm. A few benchmark systems, both radial and networked (the 34-bus radial distribution system, the 30-bus loop distribution system and the IEEE 14-bus system), are considered as case studies, where the effectiveness of the proposed algorithm is aptly demonstrated.
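
    As a toy illustration of the GA formulation (chromosome = candidate DG size per bus, fitness = combined technical and economic penalty), the sketch below uses placeholder proxies for the loss, sag and cost terms; the real objective requires a power-flow solver, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(2)
        n_bus, pop_size, gens = 14, 40, 100
        sizes = np.array([0.0, 0.5, 1.0, 2.0])          # MW options per bus (assumed)

        def fitness(chrom):
            loss = 1.0 / (1.0 + chrom.sum())            # placeholder loss proxy
            sag = np.exp(-np.count_nonzero(chrom) / 3)  # placeholder sag proxy
            cost = 0.05 * chrom.sum() + 0.02 * np.count_nonzero(chrom)
            return -(loss + sag + cost)                 # maximize = minimize penalty

        pop = rng.choice(sizes, size=(pop_size, n_bus))
        for _ in range(gens):
            f = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(f)][pop_size // 2:]   # keep the better half
            kids = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
            mask = rng.random(kids.shape) < 0.05           # mutation
            kids[mask] = rng.choice(sizes, size=int(mask.sum()))
            pop = np.vstack([parents, kids])
        best = pop[np.argmax([fitness(c) for c in pop])]
        print("DG sizes per bus (MW):", best)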

  16. Refining mimicry: phenotypic variation tracks the local optimum.

    Science.gov (United States)

    Mérot, Claire; Le Poul, Yann; Théry, Marc; Joron, Mathieu

    2016-07-01

    Müllerian mimicry between chemically defended prey is a textbook example of natural selection favouring phenotypic convergence onto a shared warning signal. Studies of mimicry have concentrated on deciphering the ecological and genetic underpinnings of dramatic switches in mimicry association, producing a well-known mosaic distribution of mimicry patterns across geography. However, little is known about the accuracy of resemblance between natural comimics when the local phenotypic optimum varies. In this study, using analyses of wing shape, pattern and hue, we quantify multimodal phenotypic similarity between butterfly comimics sharing the so-called postman pattern in different localities with varying species composition. We show that subtle but consistent variation between populations of the localized species, Heliconius timareta thelxinoe, enhances resemblance to the abundant comimics which drive the mimicry in each locality. These results suggest that rarer comimics track the changes in the phenotypic optimum caused by gradual changes in the composition of the mimicry community, providing insights into the process by which intraspecific diversity of mimetic pattern may arise. Furthermore, our results suggest a multimodal evolution of similarity, with coordinated convergence in different features of the phenotype such as wing outline, pattern and hue. Finally, multilocus genotyping allows estimating local hybridization rates between H. timareta and comimic H. melpomene in different populations, raising the hypothesis that mimicry refinement between closely related comimics may be enhanced by adaptive introgression at loci modifying the accuracy of resemblance.

  17. Investigation of Various Essential Factors for Optimum Infrared Thermography

    Science.gov (United States)

    OKADA, Keiji; TAKEMURA, Kei; SATO, Shigeru

    2013-01-01

    We investigated various essential factors for optimum infrared thermography for cattle clinics. The effect of various factors on the detection of surface temperature was investigated in an experimental room with a fixed ambient temperature using a square positioned on a wall. Various factors of animal objects were examined using cattle to determine the relationships among presence of hair, body surface temperature, surface temperature of the eyeball, the highest temperature of the eye circle, rectum temperature and ambient temperature. Also, the surface temperature of the flank at different time points after eating was examined. The best conditions of thermography for cattle clinics were determined and were as follows: (1) The distance between a thermal camera and an object should be fixed, and the camera should be set within a 45-degree angle with respect to the objects using the optimum focal length. (2) Factors that affect the camera temperature, such as extreme cold or heat, direct sunshine, high humidity and wind, should be avoided. (3) For the comparison of thermographs, imaging should be performed under identical conditions. If this is not achievable, hairless parts should be used. PMID:23759714

  18. Toward optimum efficiency in a quantum receiver for coded ppm

    Science.gov (United States)

    Boroson, D. M.

    2017-09-01

    Communications systems builders continue to search for signal formats and receiver architectures that can provide the most efficient utilization of their subsystems, which include power amplifiers as well as transmit and receive apertures. Receivers requiring very small amounts of received power are of particular interest in communications links where transmission distances are very long and losses are large, such as links from deep space. Helstrom and others ([1],[2],[3]) initiated the study of optimum signal reception using quantum mechanical signal models. They derived the mathematical description and predicted performance of receivers that optimize certain criteria, such as Minimum Probability of Error (MPE). Unfortunately, practical implementation of their proposed receivers has still not been achieved. In parallel, technology has advanced to where noiseless photon counters can be used to achieve quite good performance ([4]). We show here that, when an end-to-end error correction code is added, such a system can in fact outperform the "optimum" MPE system at low signal powers. In this report, we derive the formulation of a quantum receiver that is shown to be uniformly better than either the MPE or photon-counting receiver.
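
    The intuition behind the coded photon-counting result can be stated with a standard textbook model (a hedged sketch, not the paper's derivation): for M-ary PPM with mean signal count n-bar in the pulsed slot and no background, an ideal photon counter never errs when it detects, so the channel reduces to a pure erasure channel:

        \[
          P_{\mathrm{erasure}} = e^{-\bar{n}}, \qquad
          C = \bigl(1 - e^{-\bar{n}}\bigr)\,\log_2 M \ \text{bits per PPM symbol}
        \]

    An outer erasure-correcting code can approach this capacity, which is the mechanism by which end-to-end coding lets the photon-counting receiver overtake an uncoded MPE receiver at low signal powers.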

  19. Optimum concrete compression strength using bio-enzyme

    Directory of Open Access Journals (Sweden)

    Bagio Tony Hartono

    2017-01-01

    Producing concrete with high compressive strength to a given specification requires, in addition to the main concrete materials, quality control of the concrete mix and, in line with current concrete-mix technology, added materials that give the concrete specific characteristics. Bio-enzyme was added to five concrete mixtures, which were compared with normal concrete in order to determine the optimum bio-enzyme content for increasing concrete strength: concrete with bio-enzyme at 200 ml/m3, 400 ml/m3, 600 ml/m3, 800 ml/m3 and 1000 ml/m3, plus normal concrete. The crushing test results follow a mathematical model based on 4th-degree polynomial regression (least-squares quartic), as represented in the attached data series; for the design mix fc′ = 25 MPa, the model yields an optimum value of 33.98 MPa at a bio-enzyme dosage of 509 ml/m3.
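
    The curve-fitting step translates directly into a few lines of code; the strength values below are hypothetical placeholders (only the dosages and the reported optimum are taken from the abstract), so the output will not reproduce the paper's numbers exactly.

        import numpy as np

        dosage = np.array([0, 200, 400, 600, 800, 1000])            # ml/m3
        strength = np.array([25.0, 30.1, 33.0, 33.5, 31.0, 27.0])   # MPa, assumed

        coeffs = np.polyfit(dosage, strength, 4)    # 4th-degree (quartic) fit
        grid = np.linspace(0, 1000, 1001)
        fit = np.polyval(coeffs, grid)
        i = int(np.argmax(fit))
        print(f"optimum ~ {fit[i]:.2f} MPa at {grid[i]:.0f} ml/m3")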

  20. The optimum content of rubber ash in concrete: flexural strength

    Science.gov (United States)

    Senin, M. S.; Shahidan, S.; Shamsuddin, S. M.; Ariffin, S. F. A.; Othman, N. H.; Rahman, R.; Khalid, F. S.; Nazri, F. M.

    2017-11-01

    Discarded scrap tyres have become one of the major environmental problems nowadays. Several studies have been carried out to reuse waste tyres as an additive or sand replacement in concrete with appropriate percentages of tyre rubber, called rubberized concrete, to solve this problem. The main objectives of this study are to investigate the flexural strength performance of concrete when rubber ash is added and to analyse the optimum content of rubber ash in concrete prisms. A total of 30 concrete prisms of size 100 mm x 100 mm x 500 mm were investigated, with rubber ash partially replacing 0%, 3%, 5%, 7% and 9% of the sand by volume. The flexural strength increased at 3% rubber ash relative to the control prism (RA 0) at both ages, 7 and 28 days, by 1.21% and 0.976%, respectively. However, for RA 5, RA 7 and RA 9, the flexural strength decreased compared to the control at both ages, 7 and 28 days. In conclusion, 3% is the optimum content of rubber ash in concrete prisms for both concrete ages.

  1. Determination of the Optimum Conditions for Production of Chitosan Nanoparticles

    Directory of Open Access Journals (Sweden)

    A. Dustgani

    2007-12-01

    Biodegradable nanoparticles are intensively investigated for their potential applications in drug delivery systems. Being a biocompatible and biodegradable polymer, chitosan holds great promise for use in this area. This investigation was concerned with the determination and optimization of the effective parameters involved in the production of chitosan nanoparticles using the ionic gelation method. The studied variables were the concentration and pH of the chitosan solution, the ratio of chitosan to sodium tripolyphosphate therein and the molecular weight of chitosan. For this purpose, the Taguchi statistical method was used for design of experiments at three levels. The size of the chitosan nanoparticles was determined using laser light scattering. The experimental results showed that the concentration of the chitosan solution was the most important parameter and chitosan molecular weight the least effective parameter. The optimum conditions for preparation of nanoparticles were found to be a 1 mg/mL chitosan solution with pH=5, a chitosan to sodium tripolyphosphate ratio of 3 and a chitosan molecular weight of 200,000 daltons. The average nanoparticle size at optimum conditions was found to be about 150 nm.
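
    A three-level Taguchi design with four factors is conventionally run on the standard L9(3^4) orthogonal array; the sketch below lays out such a plan. The pH, ratio and molecular-weight levels echo values stated in the abstract, while the remaining levels are assumptions for illustration.

        import numpy as np

        L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
                       [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
                       [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]]) - 1

        factors = {
            "chitosan conc. (mg/mL)": [0.5, 1.0, 2.0],   # assumed levels
            "pH":                     [4.0, 5.0, 6.0],   # 5 appears in the abstract
            "chitosan:TPP ratio":     [2, 3, 4],         # 3 appears in the abstract
            "MW (kDa)":               [100, 200, 400],   # 200 appears in the abstract
        }

        for run in L9:   # nine runs cover the 3^4 factor space orthogonally
            print({name: levels[i] for (name, levels), i in zip(factors.items(), run)})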

  2. Optimum Tower Crane Selection and Supporting Design Management

    Directory of Open Access Journals (Sweden)

    Hyo Won Sohn

    2014-08-01

    To optimize tower crane selection and supporting design, lifting requirements (as well as stability) should be examined, followed by a review of economic feasibility. However, construction engineers establish plans based on data provided by equipment suppliers, since there are no tools with which to thoroughly examine a support design's suitability for various crane types, and such plans lack the necessary supporting data. In such cases it is impossible to optimize tower crane selection to satisfy lifting requirements in terms of cost, or to perform lateral support and foundation design. Thus, this study develops an optimum tower crane selection and supporting design management method based on stability. All cases, approximately 3,000 to 15,000 combinations, are evaluated to identify and examine the candidate cranes with minimized cost. The optimization method developed in the study is expected to support engineers in determining the optimum lifting equipment.
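
    In code, the described exhaustive review reduces to enumerating configurations, filtering by the stability and lifting checks, and taking the cheapest survivor; everything below (models, costs, the feasibility rule) is hypothetical.

        from itertools import product

        cranes = [("TC-A", 120.0), ("TC-B", 150.0), ("TC-C", 200.0)]  # (model, base cost)
        heights = range(30, 80, 10)                                    # mast heights (m)
        supports = ["wall-brace", "guy-wire", "freestanding"]

        def feasible(model, h, support):
            # Placeholder for the lifting-capacity and lateral-stability review.
            return not (support == "freestanding" and h > 50)

        candidates = [(cost + 0.5 * h, model, h, s)
                      for (model, cost), h, s in product(cranes, heights, supports)
                      if feasible(model, h, s)]
        print(min(candidates))   # minimum-cost feasible configuration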

  3. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for hardware realization of space vector modulation (SVM) of state-function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), while the converter requires a large number of power transistors (18 transistors with 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since operations may be executed much faster and more efficiently owing to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to the trigonometric operations. Furthermore, the arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters and sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of the logic implementation targeting a Xilinx FPGA (specifically, a low-cost device from the Artix-7 family) are also presented.
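
    The CORDIC kernel the abstract leans on is simple enough to sketch in software; the version below runs in rotation mode, with floating point standing in for the FPGA's fixed-point shifts and adds, and is a generic textbook form rather than the paper's Verilog design.

        import math

        def cordic_sin_cos(theta, iterations=24):
            # Rotation-mode CORDIC for |theta| <= pi/2: only shifts and adds
            # are needed in hardware; the gain K is folded into the start value.
            angles = [math.atan(2.0 ** -i) for i in range(iterations)]
            K = 1.0
            for i in range(iterations):
                K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = K, 0.0, theta
            for i, a in enumerate(angles):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * a
            return x, y          # (cos(theta), sin(theta))

        print(cordic_sin_cos(math.pi / 6))   # approx (0.866, 0.500)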

  4. Improving Reliability, Security, and Efficiency of Reconfigurable Hardware Systems (Habilitation)

    NARCIS (Netherlands)

    Ziener, Daniel

    2017-01-01

    In this treatise, my research on methods to improve efficiency, reliability, and security of reconfigurable hardware systems, i.e., FPGAs, through partial dynamic reconfiguration is outlined. The efficiency of reconfigurable systems can be improved by loading optimized data paths on-the-fly on an

  5. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA...

  6. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. The new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at each node to determine the branch taken, and the path the feature region image takes is saved as its descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating-point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
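
    The matching primitive is easy to mimic in software; the sketch below (descriptor widths and values are arbitrary) shows the integer-only Hamming comparison that makes such descriptors attractive for hardware, where the XOR-and-popcount maps to a handful of gates.

        def hamming(a: int, b: int) -> int:
            # Bit-level Hamming distance; int.bit_count() needs Python >= 3.10.
            return (a ^ b).bit_count()

        def best_match(query, database):
            # Index of the stored descriptor closest to the query.
            return min(range(len(database)), key=lambda i: hamming(query, database[i]))

        db = [0b10110010, 0b01101100, 0b10110011]
        print(best_match(0b10110001, db))   # -> 2 (differs by a single bit)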

  7. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  8. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  9. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    A method for detecting a nested hardware virtual machine monitor (HVM) is proposed in this work. The method is based on an HVM timing attack: when an HVM is present in the system, the number of distinct execution-time values observed for instruction sequences increases. We use this property as the indicator in our detection.
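
    A user-space approximation of the indicator is shown below; a real detector would time a carefully chosen instruction sequence (e.g., one forcing VM exits) with cycle counters, so treat this portable Python version, with its arbitrary probe loop, as a shape of the idea only.

        import time

        def distinct_timings(reps=10000):
            # Count how many distinct execution times the probe exhibits;
            # a nested HVM tends to widen this set.
            seen = set()
            for _ in range(reps):
                t0 = time.perf_counter_ns()
                x = 0
                for i in range(100):     # stand-in for the probed sequence
                    x += i
                seen.add(time.perf_counter_ns() - t0)
            return len(seen)

        print("distinct timing values:", distinct_timings())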

  10. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  11. Hardware support for the tumult real-time scheduler

    NARCIS (Netherlands)

    van der Bij, H.C.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1989-01-01

    This article describes the hardware which is designed for speeding up and supporting the schedule routines of the TUMULT multi-tasking operating system. TUMULT uses a “priority running up” schedule algorithm which automatically increases the priority of a process when (part of) it must be finished

  12. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like the very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  13. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.

  14. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is that this transitioning should be done by the shop floor operators, not highly specialized ...

  15. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors share the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation. PMID:24189331
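
    For the feature-extraction half, the GHA update is compact enough to sketch; the code below applies Sanger's rule to random stand-in spike snippets (the FCM clustering stage and all fixed-point details are omitted).

        import numpy as np

        rng = np.random.default_rng(1)
        spikes = rng.standard_normal((1000, 32))   # stand-in aligned snippets
        spikes -= spikes.mean(axis=0)

        n_components, lr = 3, 1e-3
        W = rng.standard_normal((n_components, 32)) * 0.01

        for x in spikes:
            y = W @ x
            # Sanger's rule: Hebbian term minus lower-triangular decorrelation.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

        # Rows of W converge toward the leading principal components, which
        # then feed the clustering stage.
        print(W.shape)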

  16. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support-weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
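
    The edge-directed restriction is easy to express in software; the sketch below (window size, disparity range, and the crude gradient edge test are all illustrative) runs SAD matching only where the edge mask fires, which is exactly the search-space reduction the abstract describes.

        import numpy as np

        def sad_disparity_at_edges(left, right, max_d=16, w=3):
            h, wd = left.shape
            gx = np.abs(np.diff(left, axis=1, prepend=left[:, :1]))
            edges = gx > gx.mean() + 2 * gx.std()        # crude edge mask
            disp = np.full((h, wd), -1, dtype=int)       # -1 = not computed
            for y in range(w, h - w):
                for x in range(w + max_d, wd - w):
                    if not edges[y, x]:
                        continue                          # skip non-edge pixels
                    patch = left[y - w:y + w + 1, x - w:x + w + 1]
                    costs = [np.abs(patch - right[y - w:y + w + 1,
                                                  x - d - w:x - d + w + 1]).sum()
                             for d in range(max_d)]
                    disp[y, x] = int(np.argmin(costs))
            return disp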

  18. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Timing is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as heartbeats or breathing. The use of hardware-implemented devices in such a system has advantages, given the high speed of information processing combined with low cost on the market. This article describes a hardware system based on programmable logic, an FPGA (an ALTERA Cyclone II), implemented on the ALTERA UP3 kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp-Logan phantom created in MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0 to 65535. The normalization factor must also be observed so as not to saturate the image during the reconstruction and filtering process. The test demonstrates the possibility, in principle, of building CT image reconstruction systems for any reasonable amount of input data by arranging hardware units to work in parallel as tested here. However, further studies are necessary to better understand the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)

  19. Hardware prototype with component specification and usage description

    NARCIS (Netherlands)

    Azam, Tre; Aswat, Soyeb; Klemke, Roland; Sharma, Puneet; Wild, Fridolin

    2017-01-01

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture taking into account the current limitations of the Hololens not being available until early next month in time for

  20. Hardware Synchronization for Embedded Multi-Core Processors

    DEFF Research Database (Denmark)

    Stoif, Christian; Schoeberl, Martin; Liccardi, Benito

    2011-01-01

    -core systems, using an FPGA-development board with two hard PowerPC processor cores. Best- and worst-case results, together with intensive benchmarking of all synchronization primitives implemented, show the expected superiority of the hardware solutions. It is also shown that dual-ported memory outperforms...

  1. Evaluation of In-House versus Contract Computer Hardware Maintenance

    International Nuclear Information System (INIS)

    Wright, H.P.

    1981-09-01

    The issue of In-House versus Contract Computer Hardware Maintenance is one which every organization who uses computers must resolve. This report discusses the advantages and disadvantages of both approaches to computer maintenance, the costs involved (based on the current AGNS computer inventory), and the AGNS maintenance experience to date. A recommendation on an appropriate approach for AGNS is made

  2. Use of Heritage Hardware on MPCV Exploration Flight Test One

    Science.gov (United States)

    Rains, George Edward; Cross, Cynthia D.

    2011-01-01

    Due to an aggressive schedule for the first orbital test flight of an unmanned Orion capsule, known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so the inventory of many flight-certified Shuttle and MPLM components are available for other purposes. Two of these items are the Shuttle Ground Support Equipment Heat Exchanger (GSE Hx) and the MPLM cabin Positive Pressure Relief Assembly (PPRA). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be significantly more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE Hx had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the certification of the use of heritage hardware for EFT1.

  3. Choropleth Mapping on Personal Computers: Software Sources and Hardware Requirements.

    Science.gov (United States)

    Lewis, Lawrence T.

    1986-01-01

    Describes the hardware and some of the choropleth mapping software available for the IBM-PC, PC compatible and Apple II microcomputers. Reviewed are: Micromap II, Documap, Desktop Information Display System (DIDS) , Multimap, Execuvision, Iris Gis, Mapmaker, PC Map, Statmap, and Atlas Map. Vendors' addresses are provided. (JDH)

  4. Hardware methods in cosmetology. Programs of face care

    OpenAIRE

    Chuhraev, N.; Zukow, W.; Samosiuk, N.; Chuhraeva, E.; Tereshchenko, A.; Gunko, M.; Unichenko, A.; Paramonova, A.

    2016-01-01

    Medical Innovative Technologies, Kiev, Ukraine; Radom University in Radom, Poland. Hardware Methods in Cosmetology: Programs of Face Care. N. Chuhraev, W. Zukow, N. Samosiuk, E. Chuhraeva, A. Tereshchenko, M. Gunko, A. Unichenko, A. Paramonova.

  5. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  6. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  7. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
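
    The warp-and-sum formulation is worth making concrete; the sketch below is a plain NumPy backprojector (ramp filtering omitted, detector assumed as wide as the image), in which the per-view coordinate computation is precisely the texture warp the hardware performs.

        import numpy as np

        def backproject(sinogram, angles, size):
            # sinogram: one filtered projection per angle, each `size` samples.
            img = np.zeros((size, size))
            c = (size - 1) / 2.0
            ys, xs = np.mgrid[0:size, 0:size] - c
            for proj, theta in zip(sinogram, angles):
                # Detector coordinate of every pixel for this view (the warp).
                t = xs * np.cos(theta) + ys * np.sin(theta) + c
                t = np.clip(t, 0, size - 1).astype(int)
                img += proj[t]             # accumulate, as the GPU would blend
            return img * np.pi / len(angles)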

  8. On-Chip Reconfigurable Hardware Accelerators for Popcount Computations

    Directory of Open Access Journals (Sweden)

    Valery Sklyarov

    2016-01-01

    Popcount computations are widely used in such areas as combinatorial search, data processing, statistical analysis, and bio- and chemical informatics. In many practical problems the size of the initial data is very large, and an increase in throughput is important. The paper suggests two types of hardware accelerators: (1) designed in FPGAs, and (2) implemented in Zynq-7000 all-programmable systems-on-chip, with partitioning of the algorithms that use popcounts between software on the ARM Cortex-A9 processing system and advanced programmable logic. A three-level system architecture that includes a general-purpose computer, the problem-specific ARM, and reconfigurable hardware is then proposed. The results of experiments and comparisons with existing benchmarks demonstrate that although the throughput of popcount computations is increased in FPGA-based designs interacting with general-purpose computers, the communication overheads (with PCI Express, in the experiments) are significant, and actual advantages can be gained only if not only popcount but also other types of relevant computations are implemented in hardware. The comparison of software/hardware designs for Zynq-7000 all-programmable systems-on-chip with pure software implementations on the same Zynq-7000 devices demonstrates an increase in performance by a factor ranging from 5 to 19 (taking into account all the involved communication overheads between the programmable logic and the processing system).
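
    As a software baseline for what the accelerators offload, the sketch below counts set bits over a large random bit vector in NumPy; the array sizes are arbitrary, and it is this inner counting that the paper moves into programmable logic.

        import numpy as np

        rng = np.random.default_rng(0)
        words = rng.integers(0, 2 ** 64, size=1_000_000, dtype=np.uint64)

        # Per-byte unpacking route: view the 64-bit words as bytes, expand to
        # bits, and sum; hardware does the same reduction with popcount trees.
        bits = np.unpackbits(words.view(np.uint8))
        print("total set bits:", int(bits.sum()))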

  9. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  10. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  11. Detection of hardware backdoor through microcontroller read time ...

    African Journals Online (AJOL)

    The objective of this work, christened “HABA” (Hardware Backdoor Aware), is to collect data samples of series of read times of microcontrollers embedded in military-grade equipment and correlate them with previously stored expected-behaviour read-time samples so as to detect abnormality or otherwise. I was motivated by the ...

  12. Prediction of optimum catalysts and cocatalysts for chemical growth of carbon nanotubes

    Science.gov (United States)

    Alekseev, N. I.; Afanas'ev, D. V.; Charykov, N. A.

    2008-05-01

    A practically important problem of the growth of different kinds of carbon nanotubes from nanodrops of a metal catalyst oversaturated with carbon is solved by finding cocatalysts that provide a minimum nucleation energy for the critical nucleus of a nanotube. For pure catalysts, it turns out that the optimum is achieved using atoms of well-known elements of the iron group, which have a minimum energy of the van der Waals interaction with graphene islands and a certain energy E_Me-C of the interaction with a carbon atom. It is also possible to obtain even more effective catalysts by finding an appropriate ratio of the components by trial and error. In particular, the experimentally found combinations nickel-yttrium and cobalt-molybdenum are among the most effective ones.

  13. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then wrap with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage it, or modify its chassis design, so that it has a better chance of meeting the new project's radiated emissions requirements.

  14. SpecCert: Specifying and Verifying Hardware-based Security Enforcement

    OpenAIRE

    Letan , Thomas; Chifflier , Pierre; Hiet , Guillaume; Néron , Pierre; Morin , Benjamin

    2016-01-01

    Over time, hardware designs have constantly grown in complexity and modern platforms involve multiple interconnected hardware components. During the last decade, several vulnerability disclosures have proven that trust in hardware can be misplaced. In this article, we give a formal definition of Hardware-based Security Enforcement (HSE) mechanisms, a class of security enforcement mechanisms such that a software component relies on the underlying hardware platform to enforce a security policy....

  15. How to create successful Open Hardware projects - About White Rabbits and open fields

    CERN Document Server

    van der Bij, E; Lewis, J; Stana, T; Wlostowski, T; Gousiou, E; Serrano, J; Arruat, M; Lipinski, M M; Daniluk, G; Voumard, N; Cattin, M

    2013-01-01

    CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.

  16. Hardware/software co-design and optimization for cyberphysical integration in digital microfluidic biochips

    CERN Document Server

    Luo, Yan; Ho, Tsung-Yi

    2015-01-01

    This book describes a comprehensive framework for hardware/software co-design, optimization, and use of robust, low-cost, and cyberphysical digital microfluidic systems. Readers with a background in electronic design automation will find this book to be a valuable reference for leveraging conventional VLSI CAD techniques for emerging technologies, e.g., biochips or bioMEMS. Readers from the circuit/system design community will benefit from methods presented to extend design and testing techniques from microelectronics to mixed-technology microsystems. For readers from the microfluidics domain,

  17. A Decision Support System for Optimum Use of Fertilizers

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Hoskinson; J. R. Hess; R. K. Fink

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems' infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  18. A Decision Support System for Optimum Use of Fertilizers

    Energy Technology Data Exchange (ETDEWEB)

    Hoskinson, Reed Louis; Hess, John Richard; Fink, Raymond Keith

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems’ infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  19. Optimum conditions for aging of stainless maraging steels

    International Nuclear Information System (INIS)

    Mironenko, P.A.; Krasnikova, S.I.; Drobot, A.V.

    1980-01-01

    Aging kinetics of two 0Kh11N10M2T-type steels, in which 3% Mo (steel 1) and 3% Mo plus 11% Co (steel 2) had been additionally introduced instead of titanium, were investigated using electron microscopy and X-ray methods. It was ascertained that steel aging proceeds in three stages. Steel 2 hardened more intensively during aging, had higher hardness and strength after aging, and weakened more slowly when overaged than steel 1. The intermetallic hcp phase Fe2Mo was the hardening phase during extended aging of the steels. The optimum combination of impact strength and strength was achieved using two-stage aging: in the first stage, aging for maximum strength; in the second stage, aging at minimum temperatures of the two-phase α+γ region

  20. Optimum radars and filters for the passive sphere system

    Science.gov (United States)

    Luers, J. K.; Soltes, A.

    1971-01-01

    Studies have been conducted to determine the influence of the tracking radar and data reduction technique on the accuracy of the meteorological measurements made in the 30 to 100 kilometer altitude region by the ROBIN passive falling sphere. A survey of accuracy requirements was made of agencies interested in data from this region of the atmosphere. In light of these requirements, various types of radars were evaluated to determine the tracking system most applicable to the ROBIN, and methods were developed to compute the errors in wind and density that arise from noise errors in the radar supplied data. The effects of launch conditions on the measurements were also examined. Conclusions and recommendations have been made concerning the optimum tracking and data reduction techniques for the ROBIN falling sphere system.

  1. Optimum selection of an energy resource using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Abouelnaga, Ayah E., E-mail: ayahabouelnaga@hotmail.co [Nuclear Engineering Department, Faculty of Engineering, Alexandria University, 21544 Alexandria (Egypt); Metwally, Abdelmohsen; Nagy, Mohammad E.; Agamy, Saeed [Nuclear Engineering Department, Faculty of Engineering, Alexandria University, 21544 Alexandria (Egypt)

    2009-12-15

    Optimum selection of an energy resource is a vital issue in developed countries. Considering energy resources as alternatives (nuclear, hydroelectric, gas/oil, and solar) and factors upon which the proper decision will be taken as attributes (economics, availability, environmental impact, and proliferation), one can use the multi-attribute utility theory (MAUT) to optimize the selection process. Recently, fuzzy logic is extensively applied to the MAUT as it expresses the linguistic appraisal for all attributes in wide and reliable manners. The rise in oil prices and the increased concern about environmental protection from CO2 emissions have promoted the attention to the use of nuclear power as a viable energy source for power generation. For Egypt, as a case study, the nuclear option is found to be an appropriate choice. Following the introduction of innovative designs of nuclear power plants, improvements in the proliferation resistance, environmental impacts, and economics will enhance the selection of the nuclear option.
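
    As an illustration of the additive multi-attribute utility scoring that underlies such a selection (the paper's fuzzy treatment of linguistic appraisals is not reproduced here), with invented weights and utilities:

    ```python
    # Illustrative additive MAUT scoring. All weights and attribute
    # utilities below are invented for demonstration purposes only.
    attributes = ["economics", "availability", "environment", "proliferation"]
    weights = {"economics": 0.35, "availability": 0.25,
               "environment": 0.25, "proliferation": 0.15}

    # Hypothetical utilities on a 0..1 scale for each alternative.
    alternatives = {
        "nuclear":       {"economics": 0.8, "availability": 0.9, "environment": 0.7, "proliferation": 0.4},
        "hydroelectric": {"economics": 0.6, "availability": 0.5, "environment": 0.9, "proliferation": 1.0},
        "gas/oil":       {"economics": 0.7, "availability": 0.8, "environment": 0.3, "proliferation": 1.0},
        "solar":         {"economics": 0.4, "availability": 0.6, "environment": 1.0, "proliferation": 1.0},
    }

    def utility(scores):
        # Weighted sum of per-attribute utilities.
        return sum(weights[a] * scores[a] for a in attributes)

    best = max(alternatives, key=lambda k: utility(alternatives[k]))
    print({k: round(utility(v), 3) for k, v in alternatives.items()}, "->", best)
    ```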

  2. Optimum investment strategy in the power industry mathematical models

    CERN Document Server

    Bartnik, Ryszard; Hnydiuk-Stefan, Anna

    2016-01-01

    This book presents an innovative methodology for identifying optimum investment strategies in the power industry. To do so, it examines results including, among others, the impact of oxy-fuel technology on CO2 emissions prices, and the specific cost of electricity production. The technical and economic analysis presented here extends the available knowledge in the field of investment optimization in energy engineering, while also enabling investors to make decisions involving its application. Individual chapters explore the potential impacts of different factors like environmental charges on costs connected with investments in the power sector, as well as discussing the available technologies for heat and power generation. The book offers a valuable resource for researchers, market analysts, decision makers, power engineers and students alike.

  3. Optimum energy management of a photovoltaic water pumping system

    International Nuclear Information System (INIS)

    Sallem, Souhir; Chaabene, Maher; Kamoun, M.B.A.

    2009-01-01

    This paper presents a new management approach which makes decisions on the optimum connection times of the elements of a photovoltaic water pumping installation: battery, water pump and photovoltaic panel. The decision is made by fuzzy rules considering, on the one hand, battery safety and, on the other, the Photovoltaic Panel Generation (PVPG) forecast for the considered day and the power required by the load. The optimization approach consists of extending the operation time of the water pump with respect to multi-objective management criteria. Compared to the stand-alone management method, the new approach's effectiveness is confirmed by the extension of the pumping period by more than 5 h a day.

  4. Parametrization of optimum filter passbands for rotational Raman temperature measurements.

    Science.gov (United States)

    Hammann, Eva; Behrendt, Andreas

    2015-11-30

    We revisit the methodology of rotational Raman temperature measurements covering both lidar and non-range-resolved measurements, e.g., for aircraft control. The results of detailed optimization calculations are presented for the commonly used extraction of signals from the anti-Stokes branch. Different background conditions and realistic shapes of the filter transmission curves are taken into account. Practical uncertainties of the central passbands and widths are discussed. We found a simple parametrization for the optimum filter passband shifts depending on the atmospheric temperature range of interest and the background. The approximation errors of this parametrization are smaller than 2% for temperatures between 200 and 300 K and smaller than 4% between 180 and 200 K.

  5. Optimum selection of an energy resource using fuzzy logic

    International Nuclear Information System (INIS)

    Abouelnaga, Ayah E.; Metwally, Abdelmohsen; Nagy, Mohammad E.; Agamy, Saeed

    2009-01-01

    Optimum selection of an energy resource is a vital issue in developed countries. Considering energy resources as alternatives (nuclear, hydroelectric, gas/oil, and solar) and factors upon which the proper decision will be taken as attributes (economics, availability, environmental impact, and proliferation), one can use the multi-attribute utility theory (MAUT) to optimize the selection process. Recently, fuzzy logic is extensively applied to the MAUT as it expresses the linguistic appraisal for all attributes in wide and reliable manners. The rise in oil prices and the increased concern about environmental protection from CO2 emissions have promoted the attention to the use of nuclear power as a viable energy source for power generation. For Egypt, as a case study, the nuclear option is found to be an appropriate choice. Following the introduction of innovative designs of nuclear power plants, improvements in the proliferation resistance, environmental impacts, and economics will enhance the selection of the nuclear option.

  6. Optimum design of band-gap beam structures

    DEFF Research Database (Denmark)

    Olhoff, Niels; Niu, Bin; Cheng, Gengdong

    2012-01-01

    The design of band-gap structures receives increasing attention for many applications in mitigation of undesirable vibration and noise emission levels. A band-gap structure usually consists of a periodic distribution of elastic materials or segments, where the propagation of waves is impeded or significantly suppressed for a range of external excitation frequencies. Maximization of the band-gap is therefore an obvious objective for optimum design. This problem is sometimes formulated by optimizing a parameterized design model which assumes multiple periodicity in the design. However, it is shown in the present paper that such an a priori assumption is not necessary since, in general, just the maximization of the gap between two consecutive natural frequencies leads to significant design periodicity. The aim of this paper is to maximize frequency gaps by shape optimization of transversely vibrating...

  7. Optimum synthesis of oscillating slide actuators for mechatronic applications

    Directory of Open Access Journals (Sweden)

    P.A. Simionescu

    2018-04-01

    The oscillating-slide inversion of the slider-crank mechanism, commonly symbolized RPRR, is widely used to convert the displacement of an input linear motor (either electric, hydraulic or pneumatic) into the swing motion of a rocker. This paper discusses the optimum kinematic synthesis of centric RPRR mechanisms for prescribed limit positions, while simultaneously satisfying either (i) minimum deviation from 90° of the transmission angle, (ii) maximum mechanical advantage, or (iii) linear correlation between the input- and output-link motions. To assist practicing engineers, step-by-step design procedures, together with performance charts and parametric design charts, are also provided in the paper. Keywords: Slider-crank inversion, Limit positions, Transmission angle, Mechanical advantage, Uniform motion, Optimization

  8. Optimum energy management of a photovoltaic water pumping system

    Energy Technology Data Exchange (ETDEWEB)

    Sallem, Souhir; Chaabene, Maher; Kamoun, M.B.A. [Unite de Commande de Machines et Reseaux de Puissance CMERP-ENIS, Route de Soukra, km 3.5, BP W, 3038 Sfax (Tunisia)

    2009-11-15

    This paper presents a new management approach which makes decisions on the optimum connection times of the elements of a photovoltaic water pumping installation: battery, water pump and photovoltaic panel. The decision is made by fuzzy rules considering, on the one hand, battery safety and, on the other, the Photovoltaic Panel Generation (PVPG) forecast for the considered day and the power required by the load. The optimization approach consists of extending the operation time of the water pump with respect to multi-objective management criteria. Compared to the stand-alone management method, the new approach's effectiveness is confirmed by the extension of the pumping period by more than 5 h a day. (author)

  9. Optimum Identification Method of Sorting Green Household Waste

    Directory of Open Access Journals (Sweden)

    Daud Mohd Hisam

    2016-01-01

    This project relates to the design of a sorting facility for reducing, reusing and recycling green waste material, and in particular to an automatic system that distinguishes household waste in order to separate it from the main waste stream. The project focuses on a thorough analysis of the properties of green household waste. The identification method uses a capacitive sensor, with characteristic data taken at three different sensor drive frequencies. Three types of material were chosen as the media of this research, to be separated using the selected method. Based on the capacitance characteristics and the sensor's ability to penetrate green objects, an optimum identification method is expected to be established in this project. The capacitive sensor output is an analogue value. The results demonstrate that the information from the sensor is sufficient to recognize the selected materials.

  10. Optimum survival strategies against zombie infestations - a population dynamics approach

    Science.gov (United States)

    Mota, Bruno

    2014-03-01

    We model a zombie infestation by three coupled ODEs that jointly describe the time evolution of three populations: regular humans, zombies, and survivors (humans that have survived at least one zombie encounter). This can be generalized to take into account more levels of expertise and/or skill degradation. We compute the fixed points, and stability thereof, that correspond to one of three possible outcomes: human extinction, zombie extermination or, if one allows for a human non-zero birth-rate, co-habitation. We obtain analytically the optimum strategy for humans in terms of the model's parameters (essentially, whether to flee and hide, or fight). Zombies notwithstanding, this can also be seen as a toy model for infections of immune system cells, such as CD4+ T cells in AIDS, and macrophages in tuberculosis, whereby cells are both the target of infection, and mediate the acquired immunity response against the same infection. I thank FAPERJ for financial support.
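
    The abstract does not print the equations, so the following is only a plausible reconstruction of such a three-population system (regular humans H, zombies Z, survivors S) with invented rate constants, integrated with SciPy.

    ```python
    # Hypothetical SZR-style model: every encounter removes a regular
    # human, who either becomes a survivor or a zombie; veteran survivors
    # fare better and also exterminate zombies. All rates are invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta, alpha, kill = 0.02, 0.01, 0.03   # encounter and kill rates
    p_new, p_vet = 0.3, 0.7                # survival odds: first-timers vs. veterans

    def rhs(t, y):
        H, Z, S = y
        enc_H, enc_S = beta * H * Z, alpha * S * Z
        dH = -enc_H                                  # any encounter removes a regular human
        dS = p_new * enc_H - (1 - p_vet) * enc_S     # survivors gained and lost in encounters
        dZ = (1 - p_new) * enc_H + (1 - p_vet) * enc_S - kill * S * Z
        return [dH, dZ, dS]

    sol = solve_ivp(rhs, (0.0, 50.0), [500.0, 10.0, 0.0])
    print("final (H, Z, S):", sol.y[:, -1].round(1))
    ```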

  11. On the optimum area-balanced filters for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Ripamonti, G.; Pullia, A.

    1996-01-01

    The minimum-noise area-balanced (A-B) filters for nuclear spectroscopy are disentangled into the sum of two optimized individual filters. The former is the unipolar finite cusp filter, used for pulse amplitude estimation but affected by baseline shift errors; the latter is a specific filter used for baseline estimation. Each of them is optimized so as to give the minimum noise in the estimation of the pulse amplitude or of its baseline level. It is shown that this double optimisation produces an overall optimum filter exhibiting a total noise V_n^2 equal to the sum of the noises V_n1^2 and V_n2^2 exhibited by each filter individually. This is a consequence of the orthogonality of the individual filter weight-functions in a function space where the norm is defined as √(V_n^2). (orig.)

  12. Optimum conditions for radiation curing of polyester/epoxy compositions

    International Nuclear Information System (INIS)

    Brzostowski, A.; Pietrzak, M.

    1982-01-01

    The effects of the dose rate of γ-radiation from 60Co, heat removal conditions, and the formulation of the polyester/epoxy compositions on their curing were investigated. It was found that the optimum dose rate was between 0.1×10⁴ Gy/h and 0.6×10⁴ Gy/h for the following composition: 100 parts by weight of polyester resin, 10 to 15 parts by weight of an unsaturated epoxy resin and 100 parts by weight of silicon dioxide. The mixture was well cooled during the curing process. Curing is completed after the absorption of a dose from 0.3×10⁴ Gy to 0.45×10⁴ Gy. (author)

  13. On convergence of differential evolution over a class of continuous functions with unique global optimum.

    Science.gov (United States)

    Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik

    2012-02-01

    Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
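
    A minimal sketch of the canonical DE/rand/1/bin variant analysed in the paper, run here on the sphere function (a continuous objective with a unique global optimum); the population size and the control parameters F and CR are arbitrary textbook values, not the paper's settings.

    ```python
    # DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy selection.
    import numpy as np

    def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
        rng = np.random.default_rng(seed)
        dim = len(bounds)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fit = np.apply_along_axis(f, 1, pop)
        for _ in range(generations):
            for i in range(pop_size):
                r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                         3, replace=False)
                mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE-type mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True              # force one gene from mutant
                trial = np.where(cross, mutant, pop[i])      # binomial crossover
                ft = f(trial)
                if ft <= fit[i]:                             # greedy one-to-one selection
                    pop[i], fit[i] = trial, ft
        return pop[fit.argmin()], fit.min()

    best, val = de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 4)
    print(best.round(4), val)   # should approach the optimum at the origin
    ```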

  14. Multidisciplinary Aerodynamic Design of a Rotor Blade for an Optimum Rotor Speed Helicopter

    Directory of Open Access Journals (Sweden)

    Jiayi Xie

    2017-06-01

    The aerodynamic design of rotor blades is challenging, and is crucial for the development of helicopter technology. Previous aerodynamic optimizations that focused only on limited design points found it difficult to balance flight performance across the entire flight envelope. This study develops a global optimum envelope (GOE) method for determining blade parameters (blade twist, taper ratio, tip sweep) for optimum rotor speed helicopters (ORS-helicopters), balancing performance improvements in hover and at various freestream velocities. The GOE method implements aerodynamic blade design by a bi-level optimization, composed of a global optimization step and a secondary optimization step. Power loss as a measure of rotor performance is chosen as the objective function, referred to as direct power loss (DPL) in this study. A rotorcraft comprehensive code for trim simulation with a prescribed wake method is developed. With the application of the GOE method, a DPL reduction of as high as 16.7% can be achieved in hover, and 24% at high freestream velocity.

  15. Optimum frequency and gradient for the CLIC main linac accelerating structure

    CERN Document Server

    Grudiev, A; Wuensch, Walter

    2006-01-01

    A novel procedure for the optimization of CLIC main linac parameters including operating frequency and the accelerating gradient is presented. The optimization procedure takes into account both beam dynamics and high power rf constraints. Beam dynamics constraints are given by emittance growth due to short- and long-range transverse wakefields. RF constraints are given by rf breakdown and pulsed surface heating limitations of the accelerating structure. Interpolation of beam and structure parameters in a wide range allows hundreds of millions of accelerating structures to be analyzed to find the structure with the highest ratio of luminosity to main linac input power, which is used as the figure of merit. The frequency and gradient have been varied in the ranges 12-30 GHz and 90-150 MV/m respectively. It is shown that the optimum frequency lies in the range from 16 to 20 GHz depending on the accelerating gradient and that the optimum gradient is below 100 MV/m. Based on our current understanding of the constr...

  16. Determination of optimum processing temperature for transformation of glyceryl monostearate.

    Science.gov (United States)

    Yajima, Toshio; Itai, Shigeru; Takeuchi, Hirofumi; Kawashima, Yoshiaki

    2002-11-01

    The purpose of this study was to clarify the mechanism of the transformation from the alpha-form to the beta-form via the beta'-form of glyceryl monostearate (GM) and to determine the optimum heat-treatment conditions for physically stabilizing GM in a pharmaceutical formulation. Thermal analyses, repeated twice using a differential scanning calorimeter (DSC), were performed on mixtures of the two crystal forms. In the first run (enthalpy of melting: ΔH1), two endothermic peaks, of the alpha-form and the beta-form, were observed. However, in the second run (enthalpy of melting: ΔH2), only the endothermic peak of the alpha-form was observed. From the strong correlation observed between the beta-form content in the mixture of alpha-form and beta-form and the enthalpy change (ΔH1 - ΔH2)/ΔH2, the beta-form content was expressed as a function of the enthalpy change. Using this relation, the stable beta-form content during heat-treatment could be determined, and the maximum beta-form content was obtained when the heat-treatment was carried out at 50 °C. An inflection point existed in the time course of the transformation of the alpha-form to the beta-form. It was assumed that almost all of the alpha-form had transformed to the beta'-form at this point, and that subsequently only the transformation from the beta'-form to the beta-form occurred. Based on this, the transformation rate equations were derived as a consecutive reaction. Experimental data coincided well with the theoretical curve. In conclusion, GM transformed via a consecutive reaction, and 50 °C was the optimum heat-treatment temperature for transforming GM from the alpha-form to the stable beta-form.
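
    The consecutive-reaction kinetics assumed above (alpha → beta' → beta as two first-order steps) have a standard closed-form solution; the rate constants below are invented for illustration and are not the paper's fitted values.

    ```python
    # First-order consecutive reaction alpha -> beta' -> beta with the
    # standard analytic solution for initial alpha content of 1.
    import numpy as np

    k1, k2 = 0.15, 0.05            # hypothetical rate constants, 1/h
    t = np.linspace(0.0, 48.0, 200)

    alpha = np.exp(-k1 * t)
    beta_prime = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    beta = 1.0 - alpha - beta_prime   # mass balance closes the system

    # The inflection point of beta(t) marks where the alpha -> beta' step
    # is essentially complete, matching the behaviour described above.
    print("beta content at 48 h:", round(float(beta[-1]), 3))
    ```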

  17. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  18. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth with...

  19. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.

    2017-12-13

    MTCAM (Memristor Ternary Content Addressable Memory) is a special-purpose storage medium in which data can be retrieved based on the stored content. Using memristors as the main storage element provides the potential for achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the widespread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on a 2-Transistor-2-Memristor (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on a Xilinx Virtex® FPGA.

  20. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  1. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for nonmating applications such as sensors and instruments evaluations requiring proximity or short range motion operations. The motion base is a hydraulic powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for the NASA Docking System testing and has also been used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  2. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows testing the quality of the neuron model implementation and explaining significant deviations from the expected reference output.
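
    As a non-spiking reference for the network being benchmarked, a minimal binary (Willshaw-type) associative memory can be written in NumPy; the pattern sizes and the activity level k below are arbitrary, and the spiking implementation and HBP platforms are of course not reproduced.

    ```python
    # Binary associative memory: clipped Hebbian learning (OR of outer
    # products) plus k-winners-take-all recall.
    import numpy as np

    def train(pairs, n_in, n_out):
        W = np.zeros((n_in, n_out), dtype=bool)
        for x, y in pairs:                       # store each pattern pair
            W |= np.outer(x, y).astype(bool)
        return W

    def recall(W, x, k):
        s = x @ W                                # dendritic sums
        thresh = np.sort(s)[-k:].min()           # k-winners-take-all threshold
        return (s >= thresh).astype(int)

    rng = np.random.default_rng(1)
    n, m, k = 128, 128, 8
    pairs = [(rng.permutation(np.r_[np.ones(k, int), np.zeros(n - k, int)]),
              rng.permutation(np.r_[np.ones(k, int), np.zeros(m - k, int)]))
             for _ in range(20)]
    W = train(pairs, n, m)
    x0, y0 = pairs[0]
    print("recall matches stored pattern:", bool((recall(W, x0, k) == y0).all()))
    ```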

  3. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
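
    A plain NumPy version of the object-space ray-triangle test that the paper runs inside a geometry shader is shown below, in the standard Moller-Trumbore formulation; the multi-layer GPU machinery itself is not reproduced.

    ```python
    # Moller-Trumbore ray-triangle intersection in object space.
    import numpy as np

    def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1 @ p
        if abs(det) < eps:                 # ray parallel to triangle plane
            return None
        inv = 1.0 / det
        t_vec = orig - v0
        u = (t_vec @ p) * inv              # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(t_vec, e1)
        v = (direction @ q) * inv          # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = (e2 @ q) * inv
        return t if t > eps else None      # distance along the ray, or None

    hit = ray_triangle(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                       np.array([-1.0, -1.0, 5.0]),
                       np.array([1.0, -1.0, 5.0]),
                       np.array([0.0, 1.0, 5.0]))
    print("hit at t =", hit)   # expected: 5.0
    ```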

  4. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  5. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to a further performance increase for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific to our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...

  6. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests, including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model, give no evidence of correlation in the pseudo-random sequences. Because the new random number generator suppresses the correlation between neighboring cells that is usually observed in cellular automaton implementations, it is applicable to extended-time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long-time simulations of spatially resolved molecular evolution.
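
    For flavour, a naive cellular-automaton bit generator is sketched below using elementary rule 30 on a ring of cells; the paper's generator differs precisely in adding smarter inter-cell connections to suppress the neighbour correlations such a naive scheme exhibits.

    ```python
    # Toy CA PRNG: update a ring of cells with elementary rule 30 and
    # harvest the centre column as the bit stream.
    import numpy as np

    def rule30_bits(n_bits, width=64, seed=12345):
        rng = np.random.default_rng(seed)
        state = rng.integers(0, 2, size=width, dtype=np.uint8)
        out = np.empty(n_bits, dtype=np.uint8)
        for i in range(n_bits):
            left, right = np.roll(state, 1), np.roll(state, -1)
            state = left ^ (state | right)   # rule 30 on a periodic ring
            out[i] = state[width // 2]       # sample the centre cell
        return out

    bits = rule30_bits(10_000)
    print("mean bit value:", bits.mean())    # should hover near 0.5
    ```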

  7. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicated this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
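
    A hypothetical reconstruction of such a pipeline is sketched below, with synthetic stand-ins for hardware-event counts and a generic scikit-learn classifier; the feature set, class labels, and model choice are assumptions, not the authors' setup.

    ```python
    # Hardware-event counts as features, protocol version as the label.
    # The synthetic Gaussian data stands in for real counter traces.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_class, n_events = 500, 8     # e.g. cache misses, branch mispredicts, ...
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_events))
                   for c in range(3)]) # three protocol versions, separable means
    y = np.repeat(["ssl3", "tls1.0", "tls1.2"], n_per_class)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
    ```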

  8. Outline of a fast hardware implementation of Winograd's DFT algorithm

    Science.gov (United States)

    Zohar, S.

    1980-01-01

    The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, with attention given to a pipelining scheme in which five consecutive data batches are operated on simultaneously, each batch undergoing one of five processing phases.

  9. Implementation of a Hardware Ray Tracer for digital design education

    OpenAIRE

    Eggen, Jonas Agentoft

    2017-01-01

    Digital design is a large and complex field of electronic engineering, and learning digital design requires maturing over time. The learning process can be facilitated by making use of a single learning platform throughout a whole course. A learning platform built around a hardware ray tracer can be used in illustrating many important aspects of digital design. A unified learning platform allows students to delve into intricate details of digital design while still seeing the bigger pictur...

  10. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort, and to successfully integrate a fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  11. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software code is recoded manually from the system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of system synthesis from high-level specifications. In this paper, we propo...

  12. BCI meeting 2005--workshop on technology: hardware and software.

    Science.gov (United States)

    Cincotti, Febo; Bianchi, Luigi; Birch, Gary; Guger, Christoph; Mellinger, Jürgen; Scherer, Reinhold; Schmidt, Robert N; Yáñez Suárez, Oscar; Schalk, Gerwin

    2006-06-01

    This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop to review and evaluate the current state of BCI-related hardware and software. Technical requirements and current technologies, standardization procedures and future trends are covered. The main conclusion was recognition of the need to focus technical requirements on the users' needs and the need for consistent standards in BCI research.

  13. 10161 Executive Summary -- Decision Procedures in Software, Hardware and Bioware

    OpenAIRE

    Bjorner, Nikolaj; Nieuwenhuis, Robert; Veith, Helmut; Voronkov, Andrei

    2010-01-01

    The main goal of the seminar Decision Procedures in Soft, Hard and Bio-ware was to bring together renowned as well as young aspiring researchers from two groups: the first formed by researchers who develop both theory and efficient implementations of decision procedures, the second comprising researchers from application areas such as program analysis and testing, crypto-analysis, hardware verification, industrial planning and scheduling, and bio-inform...

  14. Toward Composable Hardware Agnostic Communications Blocks Lessons Learned

    Science.gov (United States)

    2016-11-01

    Processing through a common threading, scheduling, IPC, and memory management approach; hardware-specific optimization abstraction; flow-based block composition, where each block may receive multiple inputs and generate multiple outputs to different blocks, enabling flow-based usage; ... with a high-level block complexity analysis, under assumptions such as infinite memory / all accesses in L1 cache and hand assembly (no function call overhead/stack

  15. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested against known cryptanalysis attacks and for different block sizes. When implemented on a Virtex-IV, the system showed high throughput and utilized a small area. Passing all tests successfully, our system proved to be secure for all block sizes. © 2011 IEEE.
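
    To illustrate the general chaos-based construction only (this is not the paper's cipher, and it makes no security claims), a toy logistic-map keystream XORed over an image block looks like this:

    ```python
    # Toy chaotic keystream from a logistic map, XORed over image data.
    # Purely illustrative; offers no real cryptographic security.
    import numpy as np

    def logistic_keystream(n_bytes, x0=0.7139, r=3.9999):
        out = np.empty(n_bytes, dtype=np.uint8)
        x = x0
        for i in range(n_bytes):
            x = r * x * (1.0 - x)              # chaotic logistic iteration
            out[i] = int(x * 256) & 0xFF       # quantise the state to a byte
        return out

    def crypt(block: np.ndarray, key=(0.7139, 3.9999)) -> np.ndarray:
        ks = logistic_keystream(block.size, *key)
        return (block.ravel() ^ ks).reshape(block.shape)   # XOR en/decrypts

    img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)  # stand-in image block
    assert (crypt(crypt(img)) == img).all()    # decryption restores the block
    ```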

  16. Introduction to hardware for nuclear medicine data systems

    International Nuclear Information System (INIS)

    Erickson, J.J.

    1976-01-01

    The hardware included in a computer-based data system for nuclear medicine imaging studies is discussed. The report is written for the newcomer to computer-based collection and analysis. Emphasis is placed on the effect of the various portions of the system on the final application in the nuclear medicine clinic. While an attempt is made to familiarize the user with some of the terms he will encounter, no attempt is made to make him a computer expert. 1 figure, 2 tables

  17. Hardware of automation systems of isotope mass spectrometers

    International Nuclear Information System (INIS)

    Manojlov, V.V.; Meleshkin, A.S.; Novikov, L.V.; Kornil'ev, S.O.; Voronin, B.M.

    1997-01-01

    The modernized hardware of isotope mass spectrometers is described. The modern control systems for the mass spectrometers are implemented on the basis of IBM PC/AT computers. Versions of mass spectrometer control subsystems operating through a standard bus and through a digital-to-analog converter are considered. The characteristics of an electrometric amplifier and of the interface cards developed for the modernized automation systems of the isotope mass spectrometers are presented

  18. Low extractable wipers for cleaning space flight hardware

    Science.gov (United States)

    Tijerina, Veronica; Gross, Frederick C.

    1986-01-01

    There is a need for low extractable wipers for solvent cleaning of space flight hardware. Soxhlet extraction is the method utilized today by most NASA subcontractors, but there may be alternate methods to achieve the same results. The need for low non-volatile residue materials, the history of Soxhlet extraction, and proposed alternate methods are discussed, as well as different types of wipers, test methods, and current standards.

  19. Corrosion Testing of Stainless Steel Fuel Cell Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, M.S.; Zawodzinski, C.; Gottesfeld, S.

    1998-11-01

    Metal hardware is gaining increasing interest in polymer electrolyte fuel cell (PEFC) development as a possible alternative to machined graphite hardware because of its potential for low-cost manufacturing combined with its intrinsic high conductivity, minimal permeability and advantageous mechanical properties. A major barrier to more widespread use of metal hardware has been the susceptibility of various metals to corrosion. Few pure metals can withstand the relatively aggressive environment of a fuel cell and thus the choices for hardware are quite limited. Precious metals such as platinum or gold are prohibitively expensive and so tend to be utilized as coatings on inexpensive substrates such as aluminum or stainless steel. The main challenge with coatings has been to achieve pin-hole free surfaces that will remain so after years of use. Titanium has been used to some extent and though it is very corrosion-resistant, it is also relatively expensive and often still requires some manner of surface coating to prevent the formation of a poorly conducting oxide layer. In contrast, metal alloys may hold promise as potentially low-cost, corrosion-resistant materials for bipolar plates. The dozens of commercially available stainless steel and nickel based alloys have been specifically formulated to offer a particular advantage depending upon their application. In the case of austenitic stainless steels, for example, 316 SS contains molybdenum and a higher chromium content than its more common counterpart, 304 SS, that makes it more noble and increases its corrosion resistance. Likewise, 316L SS contains less carbon than 316 SS to make it easier to weld. A number of promising corrosion-resistant, highly noble alloys such as Hastelloy™ or Duplex™ (a stainless steel developed for seawater service) are available commercially, but are expensive and difficult to obtain in various forms (i.e. wire screen, foil, etc.) or in small amounts for R and D

  20. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  1. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    International Nuclear Information System (INIS)

    Church, Jennifer A.; Kashgarian, Michaele; Wooddy, Todd; Haslett, Bob; Torretto, Phil

    2016-01-01

    Hardware expansion and detector calibrations were the focus of FY16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems, including two Low Energy Photon Spectrometers (LEPS); 2) re-calibration and validation of 3 previously installed gamma-ray detectors; 3) integration of the new systems into the NCF IT infrastructure; and 4) QA/QC and maintenance of current detector systems.

  2. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    Energy Technology Data Exchange (ETDEWEB)

    Church, Jennifer A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wooddy, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslett, Bob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Torretto, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-15

    Hardware expansion and detector calibrations were the focus of FY16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems, including two Low Energy Photon Spectrometers (LEPS); 2) re-calibration and validation of 3 previously installed gamma-ray detectors; 3) integration of the new systems into the NCF IT infrastructure; and 4) QA/QC and maintenance of current detector systems.

  3. Treatment alternatives for non-fuel-bearing hardware

    Energy Technology Data Exchange (ETDEWEB)

    Ross, W.A.; Clark, L.L.; Oma, K.H.

    1987-01-01

    This evaluation compared four alternatives for the treatment or processing of non-fuel bearing hardware (NFBH) to reduce its volume and prepare it for disposal. These treatment alternatives are: shredding; shredding and low pressure compaction; shredding and supercompaction; and melting. These alternatives are compared on the basis of system costs, waste form characteristics, and process considerations. The study recommends that melting and supercompaction alternatives be further considered and that additional testing be conducted for these two alternatives.

  4. Peculiarities of hardware implementation of generalized cellular tetra automaton

    OpenAIRE

    Аноприенко, Александр Яковлевич; Федоров, Евгений Евгениевич; Иваница, Сергей Васильевич; Альрабаба, Хамза

    2015-01-01

    Cellular automata are widely used in many fields of knowledge for the study of a variety of complex real-world processes: computer engineering and computer science, cryptography, mathematics, physics, chemistry, ecology, biology, medicine, epidemiology, geology, architecture, sociology, and the theory of neural networks. Cellular automata (CA) and tetra-automata are thus gaining relevance in terms of both hardware and software solutions. A trend towards an increase in the number of p...

  5. Hardware Trojans - Prevention, Detection, Countermeasures (A Literature Review)

    Science.gov (United States)

    2011-07-01

    ... the manufacturing process in-house is infeasible for all but the smallest Application Specific Integrated Circuit (ASIC) designs. Our reliance on the globalisation of the electronics industry is critical for developing both our commercial and ... Depending on the detection mechanism used, a Hardware Trojan may be either definitively identified, or a statistical measure may be provided indicating the

  6. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication system, including orthogonal frequency-division multiplexing (OFDM), single-carrier cyclic-prefix (SCCP), multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA) systems. A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that, after the block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore, hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities required to perform the reconfiguration and realize the transceiver are explained.

  7. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e., the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, with the area and the delay related to the neurons' fan-in and the weights' precision. Based on all of these, it will be shown why hardware implementations cannot cope with their biological inspiration with respect to their power of computation: the mapping onto silicon lacks the third dimension of biological nets. This translates into reduced fan-in, and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow one to use the third dimension, e.g., using optical interconnections.

  8. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  9. Hardware Design Improvements to the Major Constituent Analyzer

    Science.gov (United States)

    Combs, Scott; Schwietert, Daniel; Anaya, Marcial; DeWolf, Shannon; Merrill, Dave; Gardner, Ben D.; Thoresen, Souzan; Granahan, John; Belcher, Paul; Matty, Chris

    2011-01-01

    The Major Constituent Analyzer (MCA) onboard the International Space Station (ISS) is designed to monitor the major constituents of the ISS's internal atmosphere. This mass spectrometer-based system is an integral part of the Environmental Control and Life Support System (ECLSS) and is a primary tool for the management of ISS atmosphere composition. As part of NASA Change Request CR10773A, several alterations to the hardware have been made to accommodate improved MCA logistics. First, the ORU 08 verification gas assembly has been modified to allow the verification gas cylinder to be installed on orbit. The verification gas is an essential MCA consumable that requires periodic replenishment. Designing the cylinder for subassembly transport reduces the size and weight of the maintained item for launch. The redesign of the ORU 08 assembly includes a redesigned housing, cylinder mounting apparatus, and pneumatic connection. The second hardware change is a redesigned wiring harness for the ORU 02 analyzer. The ORU 02 electrical connector interface was damaged in a previous on-orbit installation, and this necessitated the development of a temporary fix while a more permanent solution was developed. The new wiring harness design includes flexible cable as well as indexing fasteners and guide-pins, and provides better accessibility during on-orbit maintenance operations. This presentation will describe the hardware improvements being implemented for the MCA as well as the expected improvements to logistics and maintenance.

  10. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high-level (Java-based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages: the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, the latter two sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor reading and interaction with other sensors is done within a self-contained thread. Weather decision-making is therefore totally decoupled from the core control software to avoid deadlocks in the core module.
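
    The reply-splitting protocol described above maps naturally onto a tiny client routine. The following is a minimal sketch, not STELLA code: the `device` transport, the `replies` queue, the timeouts, and the message literals are all assumptions made for illustration.

        import queue

        ACK_TIMEOUT = 2.0    # seconds to wait for the immediate acknowledge
        DONE_TIMEOUT = 60.0  # seconds to wait for the final done message

        def send_command(device, command, replies):
            """Send one command and classify failures per the reply-splitting
            scheme: a missing acknowledge indicates a communication problem,
            a missing or delayed done message indicates a hardware problem."""
            device.write(command)
            try:
                msg = replies.get(timeout=ACK_TIMEOUT)
            except queue.Empty:
                raise IOError("no acknowledge: communication problem")
            if msg != "ack":
                raise IOError("unexpected reply: %r" % msg)
            try:
                msg = replies.get(timeout=DONE_TIMEOUT)
            except queue.Empty:
                raise RuntimeError("no/late done message: hardware problem")
            if msg != "done":
                raise RuntimeError("command failed: %r" % msg)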

  11. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in a virtual machine, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve 2 objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve 2 purposes. First, they provide boundary resource threshold verification to the control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which changes the utilization threshold. Second, they provide a platform for response-time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.

  12. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant, since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed a screening criterion designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination.

  13. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criterion designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  14. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort demonstrated the transport of application-layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  15. Hardware implementation of on-chip learning using reconfigurable FPGAs

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is widely applied in solving diverse problems. A supervised training is necessary before the use of the neural network. A highly popular learning algorithm called back-propagation is used to train this neural network model. Once trained, the MLP can be used to solve classification problems. An interesting method to increase the performance of the model is to use hardware implementations, since hardware can perform the arithmetical operations much faster than software. In this paper, a design and implementation of the sequential (stochastic) mode of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGAs) is presented, and a pipelined adaptation of the on-line back-propagation (BP) algorithm is shown. The hardware implementation of the forward stage, backward stage, and weight update of the back-propagation algorithm is also presented. The implementation is based on a SIMD parallel architecture of the forward propagation. The diagnosis of accidents at Egypt's multi-purpose research reactor is used to test the proposed system.
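
    As context for the stages named above (forward, backward, weight update), here is a minimal software sketch of the sequential (stochastic) training mode the paper maps to hardware, with weights updated after every pattern. This is plain NumPy rather than the FPGA pipeline, and the network size, learning rate, and data shapes are illustrative assumptions.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_stochastic(X, Y, n_hidden=4, lr=0.5, epochs=1000, seed=0):
            """One-hidden-layer MLP trained with on-line (per-pattern)
            back-propagation: forward stage, backward stage, weight update."""
            rng = np.random.default_rng(seed)
            W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
            W2 = rng.normal(0.0, 0.5, (n_hidden, Y.shape[1]))
            for _ in range(epochs):
                for x, y in zip(X, Y):            # sequential (stochastic) mode
                    h = sigmoid(x @ W1)           # forward stage
                    o = sigmoid(h @ W2)
                    do = (o - y) * o * (1.0 - o)  # backward stage: output deltas
                    dh = (do @ W2.T) * h * (1.0 - h)
                    W2 -= lr * np.outer(h, do)    # update-weight stage
                    W1 -= lr * np.outer(x, dh)
            return W1, W2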

  16. Optimum conditions for prebiotic evolution in extraterrestrial environments

    Science.gov (United States)

    Abbas, Ousama H.

    The overall goal of the dissertation was to devise synthetic pathways leading to the production of peptides and amino acids from smaller organic precursors. To this end, eight different zeolites were tested in order to determine their catalytic potential in the conversion of amino acids to peptides. The zeolites tested were either synthetic or naturally occurring. Acidic solutions of amino acids were prepared with or without zeolites and their reactivity was monitored over a four-week time interval. The kinetics and feasibility of peptide synthesis from selected amino acid combinations was investigated via the paper chromatography technique. Nine different amino acids were tested. The nature and extent of product were measured at constant time intervals. It was found that two ZSM-5 synthetic zeolites as well as the Fisher Scientific zeolite mix without alumina salts may have a catalytic potential in the conversion of amino acids to peptides. The conversion was verified by matching the paper chromatogram of the experimental product with that of a known peptide. The experimental results demonstrate that the optimum solvent system for paper chromatographic analysis of the zeolite-catalyzed self-assembly of the amino acids L-aspartic acid, L-asparagine, L-histidine, and L-serine is a 50:50 mixture of 1-butanol and acetone by volume. For the amino acids L-alanine, L-glycine, and L-valine, the optimum solvent was found to be a 30:70 mixture of ammonia and propanol by volume. A mathematical model describing the distance traveled (spot position) versus reaction time was constructed for the zeolite-catalyzed conversion of L-leucine and L-tyrosine and was found to approximately follow the function f(t) = 25 ln t. Two case studies for prebiotic synthesis leading to the production of amino acids or peptides in extraterrestrial environments were discussed: one involving Saturn's moon Titan, and the other involving Jupiter's moon Europa. In the Titan study, it was determined
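
    The fitted spot-position model quoted above can be evaluated directly; a small sketch, assuming the natural logarithm and keeping units as in the abstract (the function name is ours):

        import math

        def spot_position(t):
            """Distance traveled (spot position) after reaction time t,
            per the fitted model f(t) = 25 ln t."""
            return 25.0 * math.log(t)

        # e.g. predicted spot positions at a few reaction times
        for t in (1.5, 2.0, 4.0):
            print(t, round(spot_position(t), 2))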

  17. Determination of Optimum Cross-section for Oran Highway Revetment

    Science.gov (United States)

    Velioglu, Deniz; Sogut, Erdinc; Guler, Isikhan

    2017-04-01

    Revetments are shore parallel, sloping coastal structures which are built to provide protection from the negative effects of the sea. The revetment mentioned in this study is located in the City of Oran, Algeria and is currently under construction. This study investigates the determination of the optimum revetment cross section for Oran highway, considering both the hydraulic stability of the revetment and economy. The existence of cliffs in the region and the settlement of the City of Oran created a necessity to re-align Oran highway; therefore, it was shifted towards the Gulf of Oran. Approximately 1 km of the highway is to be constructed on the Mediterranean Sea due to the new alignment. In order to protect the sea side of the road from the adverse effects of the sea, a revetment was designed. The proposed cross section had an armour layer composed of 23 tons of antifer units and regular placement of armour units was recommended. In order to check the hydraulic stability of the proposed section, physical model tests were performed in the laboratory of LEM (Laboratoire d'Etudes Maritimes) in Algeria, using the pre-determined design wave conditions. The physical model tests revealed that the trunk of the revetment was totally damaged. Accordingly, the proposed section was found insufficient and certain modifications were required. The first modification was made in the arrangement of armour units, changing them from regular to irregular. After testing the new cross section, it was observed that the revetment was vulnerable to breaking wave attack due to the toe geometry and thus the toe of the revetment had to be re-shaped. Therefore, the second option was to reduce the toe elevation. It was observed that even though the revetment trunk was safe, the damage in the toe was not within acceptable limits. The new cross section was found insufficient and as the final option, the weight of the antifer units used in the armour layer was increased, the toe length of the

  18. An Approach to Optimum Joint Beamforming Design in a MIMO-OFDM Multiuser System

    Directory of Open Access Journals (Sweden)

    Pascual-Iserte Antonio

    2004-01-01

    Full Text Available This paper describes a multiuser scenario with several terminals accessing the same frequency channel simultaneously. The objective is to design an optimal multiuser system that may be used as a comparative framework when evaluating other suboptimal solutions, and to contribute to the already published works on this topic. The present work assumes that a centralized manager knows perfectly all the channel responses between all the terminals. Accordingly, the transmitters and receivers, which use antenna arrays and lead to so-called multiple-input multiple-output (MIMO) channels, are designed in a joint beamforming approach, attempting to minimize the total transmit power subject to quality-of-service (QoS) constraints. Since this optimization problem is not convex, the use of the simulated annealing (SA) technique is proposed to find the optimum solution.
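
    Since the abstract names simulated annealing as the optimizer, a generic sketch of the SA loop may be useful for orientation. The cost function is a placeholder standing in for total transmit power plus penalties for violated QoS constraints; no identifier below comes from the paper.

        import math
        import random

        def anneal(cost, x0, neighbor, t0=1.0, t_min=1e-4, alpha=0.95, iters=100):
            """Generic simulated annealing: accept worse candidates with
            probability exp(-delta/T) so the search can escape local minima."""
            x, fx = x0, cost(x0)
            best, fbest = x, fx
            t = t0
            while t > t_min:
                for _ in range(iters):
                    y = neighbor(x)
                    fy = cost(y)
                    if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                        x, fx = y, fy
                        if fx < fbest:
                            best, fbest = x, fx
                t *= alpha  # geometric cooling schedule
            return best, fbest

    In the beamforming setting, cost(x) would return the total transmit power of a candidate set of beamformers plus a large penalty whenever any user's QoS constraint is violated.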

  19. Achieving optimum mechanical performance in metallic nanolayered Cu/X (X = Zr, Cr) micropillars

    Science.gov (United States)

    Zhang, J. Y.; Li, J.; Liang, X. Q.; Liu, G.; Sun, J.

    2014-03-01

    The selection and design of modern high-performance structural engineering materials such as nanostructured metallic multilayers (NMMs) is driven by optimizing combinations of mechanical properties and requirements for predictable and noncatastrophic failure in service. Here, Cu/X (X = Zr, Cr) nanolayered micropillars with equal layer thickness (h) spanning 5-125 nm are uniaxially compressed, and it is found that these NMMs exhibit a maximum strain-hardening capability and simultaneously display a transition from bulk-like to small-volume materials behavior, associated with the strength, at a critical intrinsic size h ~ 20 nm. We develop a deformation-mode map to bridge the gap between the interface characteristics of NMMs and their failure phenomena, which, as the intrinsic size shrinks, transition from localized interface debonding/extrusion to interface shearing. Our findings demonstrate that optimum robust performance can be achieved in NMMs and provide guidance for their microstructure-sensitive design for performance optimization.

  20. Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load

    Science.gov (United States)

    Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.

    2017-12-01

    Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control, has previously been proposed in the literature for reducing vibration in wind turbines and several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white-noise spectrum. In this paper, a global direct search optimization algorithm to reduce vibration with a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic loads with different wind power spectral densities. A verification is made by considering the analytical solution of an undamped primary system under white-noise excitation and comparing with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.

  1. Optimum hypersonic airfoil with power law shock waves

    International Nuclear Information System (INIS)

    Wagner, B.A.

    1990-01-01

    In the present paper the flow field over a class of two-dimensional lifting surfaces is examined from the viewpoint of inviscid, hypersonic small-disturbance theory (HSDT). It is well known that a flow field in which the shock shape S(x) is similar to the body shape F(x) is only possible for F(x) = x^k and the freestream Mach number M_∞ = ∞. This self-similar flow has been studied for several decades as it represents one of the few existing exact solutions of the equations of HSDT. Detailed discussions are found for example in papers by Cole, Mirels, Chernyi and Gersten and Nicolai but they are limited to convex body shapes, that is, k ≤ 1. The only study of concave body shapes was attempted by Sullivan where only special cases were considered. The method used here shows that similarity also exists for concave shapes and a complete solution of the flow field for any k > 2/3 is given. The effect of varying k on C_L^(3/2)/C_D is then determined and an optimum shape is found. Furthermore, a wider class of lifting surfaces is constructed using the streamlines of the basic flow field and analysed with respect to the effect on C_L^(3/2)/C_D. 9 refs., 3 figs

  2. Fixation identification: the optimum threshold for a dispersion algorithm.

    Science.gov (United States)

    Blignaut, Pieter

    2009-05-01

    It is hypothesized that the number, position, size, and duration of fixations are functions of the metric used for dispersion in a dispersion-based fixation detection algorithm, as well as of the threshold value. The sensitivity of the I-DT algorithm for the various independent variables was determined through the analysis of gaze data from chess players during a memory recall experiment. A procedure was followed in which scan paths were generated at distinct intervals in a range of threshold values for each of five different metrics of dispersion. The percentage of points of regard (PORs) used, the number of fixations returned, the spatial dispersion of PORs within fixations, and the difference between the scan paths were used as indicators to determine an optimum threshold value. It was found that a fixation radius of 1 degree provides a threshold that will ensure replicable results in terms of the number and position of fixations while utilizing about 90% of the gaze data captured.
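
    For reference, a minimal sketch of the I-DT loop whose threshold the study calibrates. The dispersion metric below (horizontal range plus vertical range) is only one of the five metrics the paper compares, and all names are ours:

        def idt_fixations(points, dispersion_threshold, min_duration):
            """points: list of (t, x, y) gaze samples sorted by time.
            Returns (start, end) index pairs of detected fixations."""
            def dispersion(window):
                xs = [p[1] for p in window]
                ys = [p[2] for p in window]
                return (max(xs) - min(xs)) + (max(ys) - min(ys))

            fixations, i = [], 0
            while i < len(points):
                j = i
                # grow an initial window covering the minimum fixation duration
                while j < len(points) and points[j][0] - points[i][0] < min_duration:
                    j += 1
                if j >= len(points):
                    break
                if dispersion(points[i:j + 1]) <= dispersion_threshold:
                    # extend the window until dispersion exceeds the threshold
                    while j + 1 < len(points) and dispersion(points[i:j + 2]) <= dispersion_threshold:
                        j += 1
                    fixations.append((i, j))
                    i = j + 1
                else:
                    i += 1
            return fixations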

  3. Optimum Water Chemistry in radiation field buildup control

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chien, C. [Vallecitos Nuclear Center, Pleasanton, CA (United States)

    1995-03-01

    Nuclear utilities continue to face the challenge of reducing the exposure of plant maintenance personnel. GE Nuclear Energy has developed the concept of Optimum Water Chemistry (OWC) to reduce radiation field buildup and minimize radioactive waste production. It is believed that reduction of radioactive sources and improvement of water chemistry quality should significantly reduce both radiation exposure and radwaste production. The most important source of radioactivity is cobalt, and replacement of cobalt-containing alloys in the core region as well as in the entire primary system is considered the first priority in achieving the goal of low exposure and minimized waste production. A plant-specific computerized cobalt transport model has been developed to evaluate various options in a BWR system under specific conditions. Reduction of iron input and maintaining low ionic impurities in the coolant have been identified as the two major tasks for operators. Addition of depleted zinc is a proven technique to reduce Co-60 in reactor water and on out-of-core piping surfaces. The effect of HWC on Co-60 transport in the primary system will also be discussed.

  4. Optimum Drafting Conditions Of Polyester And Viscose Blend Yarns

    Directory of Open Access Journals (Sweden)

    Hatamvand Mohammad

    2017-09-01

    Full Text Available In this study, we used an experimental design to investigate the influence of the total draft, break draft, distance between the aprons (clips), and production roller pressure on yarn quality in order to obtain optimum drafting conditions for polyester/viscose (PES/CV) blend yarns on a ring spinning frame. We used PES fibers (1.4 dtex × 38 mm long) and CV fibers (1.6 dtex × 38 mm long) to spin a 20 tex blend yarn with a PES (70%)/CV (30%) blend ratio. When the break draft, the distance between the aprons, and the roller pressure are not set appropriately, the control and guidance of the fibers is insufficient for proper orientation of the fibers in the yarn structure, and a high-quality yarn cannot be produced. Experimental results and statistical analysis show that the best yarn quality is obtained under drafting conditions of a total draft of 38, a break draft of 1.2, a 2.8 mm distance between the aprons, and maximum pressure of the production top roller (18 daN).

  5. A methodological approach for optimum preservation results: The packaging paradigm

    Directory of Open Access Journals (Sweden)

    Antonios Kanavouras

    2017-04-01

    Full Text Available The food preservation hypothesis as impacted by overall packaging applications is considered in this work. The objective was to devise a decision-support method for the selection of "just-right" packaging materials, techniques, and procedures. To that end, food preservation was critically approached in order to identify the optimum outcome at the experimental and packaging-selection decision-making levels. A mathematically supported and proven knowledge classification, and the establishment of a straightforward coherence mode among the principles of natural systemic phenomena, were used. The ultimate aim of this work was to justifiably surpass a simple description of packaging according to its measurable specifications and, instead, engage its inherent properties in a cyclic 8-step process for eventually understanding its potential to support any particular preservation hypothesis in question. The proposed methodology includes, primarily, consideration of the study hypothesis and, in parallel, the conclusive remarks and claims with respect to the experimental factors involved (properties, parameters, relations, and conditions). Considering the experimentally controlled set-ups that a researcher has to expose the food system to and the role of packaging in obtaining its preservation potential, our method supports experimenters in selecting the experimental conditions under which the preservation hypothesis can be disclaimed; furthermore, it could indicate the way to reduce experimental research waste.

  6. An Optimum Solution for Electric-Power Theft

    International Nuclear Information System (INIS)

    Memon, A.H.; Memon, F.

    2013-01-01

    Electric power theft is a problem that continues to plague the power sector across the whole country. Every year, electricity companies face line losses averaging 20-30%, and according to Power Ministry estimates, WAPDA companies lose more than Rs. 125 billion. This is enough to destroy the entire power sector of the country. A 20% loss means the masses have to pay an extra 20% in electricity tariffs; in other words, innocent consumers pay the bills of those who steal electricity. For all that, no permanent solution for this major issue has ever been proposed. We propose an applicable and optimum solution for this intractable problem. In our research, we propose an electric power theft solution based on three stages: the transmission stage, the distribution stage, and the user stage. Without synchronization among all three, a complete solution cannot be achieved. The proposed solution is simulated in NI (National Instruments) Circuit Design Suite Multisim v10.0. Our research work is a workable approach to electric power theft, given the conditions in Pakistan, which is already bearing the brunt of power crises. (author)

  7. Analysis of optimum density of forest roads in rural properties

    Directory of Open Access Journals (Sweden)

    Flávio Cipriano de Assis do Carmo

    2013-09-01

    Full Text Available This study analyzed the density of roads on rural properties in the south of Espírito Santo and compared it with the optimal density calculated for forestry companies in steep areas. The work was carried out on six small rural properties, based on the costs of forest roads, wood extraction, and the loss of productive area. The technical analysis included a time and motion study and productivity. The economic analysis included operational costs, production costs, and returns for different productivity scenarios (180 m.ha-1, 220 m.ha-1, and 250 m.ha-1). According to the results, all the properties have road densities well above the optimum, which reflects the lack of criteria in the planning of the forest stands, resulting in inadequate use of the plantation area. Property 1 had the highest road density (373.92 m.ha-1) and property 5 the lowest (111.56 m.ha-1).

  8. Evolution of the optimum bidirectional (+/- biphasic) wave for defibrillation.

    Science.gov (United States)

    Geddes, L A; Havel, W

    2000-01-01

    Introduction of the asymmetric bidirectional (+/- biphasic) current waveform has made it possible to achieve ventricular defibrillation with less energy and current than are needed with a unidirectional (monophasic) waveform. The symmetrical bidirectional (sinusoidal) waveform was used for the first human-heart defibrillation. Subsequent studies employed the underdamped and overdamped sine waves, then the trapezoidal (monophasic) wave. Studies were then undertaken to investigate the benefit of adding a second identical and inverted wave; little success rewarded these efforts until it was discovered that the second inverted wave needed to be much less in amplitude to lower the threshold for defibrillation. However, there is no physiologic theory that explains the mechanism of action of the bidirectional wave, nor does any theory predict the optimum amplitude and time dimensions for the second inverted wave. The authors analyze the research that shows that the threshold defibrillation energy is lowest when the charge in the second, inverted phase is slightly more than a third of that in the first phase. An ion-flux, spatial-K+ summation hypothesis is presented that shows the effect on myocardial cells of adding the second inverted current pulse.

  9. Determination of Optimum Compression Ratio: A Tribological Aspect

    Directory of Open Access Journals (Sweden)

    L. Yüksek

    2013-12-01

    Full Text Available Internal combustion engines are the primary energy conversion machines both in industry and transportation. Modern technologies are being implemented in engines to meet today's low fuel consumption demands. Friction energy consumed by the rubbing parts of an engine is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon the rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03, and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP, and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined to be 18.8-19.6.

  10. Optimum Conditions for Artificial Fruiting Body Formation of Cordyceps cardinalis

    Science.gov (United States)

    Kim, Soo-Young; Shrestha, Bhushan; Sung, Gi-Ho; Han, Sang-Kuk

    2010-01-01

    Stromatal fruiting bodies of Cordyceps cardinalis were successfully produced on cereals. Brown rice, German millet, and standard millet produced the longest stromata, followed by Chinese pearl barley, Indian millet, black rice, and standard barley. Oatmeal produced the shortest fruiting bodies. Supplementation of the grains with pupae and larvae resulted in slightly enhanced production of fruiting bodies, with pupae giving better production than larvae. 50~60 g of brown rice and 10~20 g of pupae mixed with 50~60 mL of water in a 1,000 mL polypropylene (PP) bottle was found to be optimum for fruiting body production. Liquid inoculation of 15~20 mL per PP bottle produced the best fruiting bodies. The optimal temperature for the formation of fruiting bodies was 25℃ under continuous light. Few fruiting bodies were produced in complete darkness, and their fresh weight was considerably low compared to that under light. PMID:23956641

  11. Derivative load voltage and particle swarm optimization to determine optimum sizing and placement of shunt capacitor in improving line losses

    Directory of Open Access Journals (Sweden)

    Mohamed Milad Baiek

    2016-12-01

    Full Text Available The purpose of this research is to study the optimal size and placement of a shunt capacitor in order to minimize line loss. The derivative of the load bus voltage was calculated to determine the sensitive load buses, which were then candidates for shunt capacitor placement. Particle swarm optimization (PSO) was demonstrated on the IEEE 14-bus power system to find the optimum size of the shunt capacitor for reducing line loss. The objective function was applied to determine the proper placement of the capacitor and to obtain solutions satisfying the constraints. The simulation was run in Matlab under two scenarios, namely a base case and a 100% load increase. The derivative of the load bus voltage was simulated to determine the most sensitive load bus, and PSO was carried out to determine the optimum sizing of the shunt capacitor at that bus. The results showed that the most sensitive bus was bus 14 for both the base case and the 100% load increase. The optimum sizing was 8.17 Mvar for the base case and 23.98 Mvar for the 100% load increase. Line losses were reduced by approximately 0.98% for the base case and by about 3.16% for the 100% load increase. The proposed method also proved better than the harmony search algorithm (HSA) method: HSA recorded a loss reduction ratio of about 0.44% for the base case and 2.67% when the load was increased by 100%, while PSO achieved loss reduction ratios of about 1.12% and 4.02%, respectively. The results of this study support the previous study, and it is concluded that PSO is able to solve such engineering problems and to determine shunt capacitor sizing on a power system simply and accurately compared with other evolutionary optimization methods.
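
    A minimal sketch of the PSO stage for sizing the capacitor at the chosen bus. The objective is stubbed out: in the study, the cost of a candidate size in Mvar would come from a load-flow line-loss calculation on the IEEE 14-bus system, which is not reproduced here.

        import random

        def pso_minimize(cost, lo, hi, n_particles=20, iters=100,
                         w=0.7, c1=1.5, c2=1.5):
            """Scalar PSO: find x in [lo, hi] minimizing cost(x),
            e.g. line loss as a function of shunt-capacitor size."""
            xs = [random.uniform(lo, hi) for _ in range(n_particles)]
            vs = [0.0] * n_particles
            pbest, pcost = xs[:], [cost(x) for x in xs]
            g = min(range(n_particles), key=lambda i: pcost[i])
            gbest, gcost = pbest[g], pcost[g]
            for _ in range(iters):
                for i in range(n_particles):
                    r1, r2 = random.random(), random.random()
                    vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                             + c2 * r2 * (gbest - xs[i]))
                    xs[i] = min(max(xs[i] + vs[i], lo), hi)  # keep in bounds
                    c = cost(xs[i])
                    if c < pcost[i]:
                        pbest[i], pcost[i] = xs[i], c
                        if c < gcost:
                            gbest, gcost = xs[i], c
            return gbest, gcost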

  12. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  13. Analysis of Systems Hardware Flown on LDEF-Results of the Systems Special Investigation Group

    National Research Council Canada - National Science Library

    Dursch, H

    1992-01-01

    The Systems Special Investigation Group (Systems SIG) was formed to investigate the effects of long-term exposure to LEO on systems-related hardware and to coordinate and collate all systems analysis of LDEF hardware...

  14. NCERA-101 STATION REPORT - KENNEDY SPACE CENTER: Large Plant Growth Hardware for the International Space Station

    Science.gov (United States)

    Massa, Gioia D.

    2013-01-01

    This is the station report for the national controlled environments meeting. Topics to be discussed will include the Veggie and Advanced Plant Habitat ISS hardware. The goal is to introduce this hardware to a potential user community.

  15. Parameter Validation for Evaluation of Spaceflight Hardware Reusability

    Science.gov (United States)

    Childress-Thompson, Rhonda; Dale, Thomas L.; Farrington, Phillip

    2017-01-01

    Within recent years, there has been an influx of companies around the world pursuing reusable systems for space flight. Much like NASA, many of these new entrants are learning that reusable systems are complex and difficult to achieve. For instance, in its first attempts to retrieve spaceflight hardware for future reuse, SpaceX unsuccessfully tried to land on a barge at sea, resulting in a crash-landing. As this new generation of launch developers continues to develop concepts for reusable systems, having a systematic approach for determining the most effective systems for reuse is paramount. Three factors that influence the effective implementation of reusability are cost, operability, and reliability. Therefore, a method that integrates these factors into the decision-making process must be utilized to adequately determine whether hardware used in space flight should be reused or discarded. Previous research has identified seven features that contribute to the successful implementation of reusability for space flight applications, defined reusability for space flight applications, highlighted the importance of reusability, and presented areas that hinder successful implementation of reusability. The next step is to ensure that the list of reusability parameters previously identified is comprehensive, and that any duplication is either removed or consolidated. The characteristics to judge the seven features as good indicators for successful reuse are identified and then assessed using multiattribute decision making. Next, discriminators in the form of metrics or descriptors are assigned to each parameter. This paper explains the approach used to evaluate these parameters, define the Measures of Effectiveness (MOEs) for reusability, and quantify these parameters. Using the MOEs, each parameter is assessed for its contribution to the reusability of the hardware. Potential data sources needed to validate the approach will be identified.

  16. Optimum Design Of Grid Connected Photovoltaic System Using Concentrators

    Directory of Open Access Journals (Sweden)

    Eng. Mohammed Fawzy

    2015-08-01

    Full Text Available Abstract Due to the increasing demand for electrical energy in Egypt, and also in many neighboring countries around the world, the main problem facing electrical energy production using classical methods, such as steam power stations, is the depletion of fossil fuels. The gap between electrical energy demand and the continuous increase in fossil fuel cost makes the problem of electricity generation more sophisticated. With the continuous decrease in the cost of photovoltaic (PV) technologies, the importance of electricity production using solar PV cannot be neglected, especially since the annual average daily energy received is about 6 kWh/m2/day in Cairo, Egypt (30°N). In this work, a detailed simulation model including PV module characteristics and the climatic conditions of Cairo, Egypt is developed. The model compares the electrical energy output of fixed PV systems with PV systems using concentrators and double-axis tracker systems. The comparison includes the energy generated, the area required, and the cost per kWh generated. The optimality criterion is the cost per kWh generated: the system that gives the minimum cost per kWh is the optimum system. To verify the developed model, the simulation results for fixed PV modules and CPV with a tracking system obtained by the model are compared with practical measurements from a 40 kW peak station erected in Cairo, Egypt (30°N). Very good agreement is found between measured values and results obtained from the detailed simulation model. The detailed economic analysis showed that the fixed PV system gives the minimum cost per kWh generated. Comparisons among these systems are presented. For Cairo, results showed that a cost of about 6 to 9 US cents/kWh is attainable.

  17. A theoretical analysis of optimum consumer population and its control.

    Science.gov (United States)

    Jiang, Z; Mao, Z; Wang, H

    1994-01-01

    consumption structure elasticity. This model was used in the correlation analysis of the coordinated healthy development of optimum consumer population and the economy.

  18. Optimum coagulant forecasting by modeling jar test experiments using ANNs

    Science.gov (United States)

    Haghiri, Sadaf; Daghighi, Amin; Moharramzadeh, Sina

    2018-01-01

    Currently, the proper utilization of water treatment plants and optimization of their use is of particular importance. Coagulation and flocculation in water treatment are the common processes through which the use of coagulants destabilizes particles and forms larger and heavier particles, resulting in improved sedimentation and filtration. Determination of the optimum dose of such a coagulant is of particular significance. A high dose, in addition to adding costs, can cause sediment to remain in the filtrate, a dangerous condition according to the standards, while an inadequate dose of coagulants can reduce the required quality and acceptable performance of the coagulation process. Although jar tests are used for testing coagulants, such experiments face many constraints with respect to evaluating the results produced by sudden changes in input water, because of their significant costs, long time requirements, and the complex relationships among the many factors (turbidity, temperature, pH, alkalinity, etc.) that can influence the efficiency of the coagulant and the test results. Modeling can be used to overcome these limitations; in this research study, an artificial neural network (ANN) multi-layer perceptron (MLP) with one hidden layer has been used to model the jar test and determine the dosage level of coagulant used in water treatment processes. The data in this research were obtained from the drinking water treatment plant located in Ardabil province in Iran. To evaluate the performance of the model, the mean squared error (MSE) and correlation coefficient (R2) parameters were used. The obtained values are within an acceptable range, demonstrating the high accuracy of the models with respect to the estimation of water-quality characteristics and the optimal dosages of coagulants; using these models will allow operators to not only reduce costs and the time taken to perform experimental jar tests
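
    A minimal sketch of such a one-hidden-layer MLP using scikit-learn; the feature set (turbidity, temperature, pH, alkalinity) follows the abstract, while the library choice, the hidden-layer width, and the data rows are illustrative assumptions rather than the paper's actual setup.

        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # columns: raw-water turbidity, temperature, pH, alkalinity (placeholders)
        X_train = [[120.0, 14.5, 7.8, 180.0],
                   [35.0, 21.0, 7.2, 150.0],
                   [80.0, 18.0, 7.5, 160.0]]
        y_train = [28.0, 14.0, 21.0]  # optimum coagulant dose from jar tests

        model = make_pipeline(
            StandardScaler(),                       # scale inputs for stable training
            MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer, as in the paper
                         max_iter=5000, random_state=0),
        )
        model.fit(X_train, y_train)
        predicted_dose = model.predict([[60.0, 16.0, 7.4, 170.0]])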

  19. A Newly Proposed Method to Predict Optimum Occlusal Vertical Dimension.

    Science.gov (United States)

    Yamashita, Shuichiro; Shimizu, Mariko; Katada, Hidenori

    2015-06-01

    Establishing the optimum occlusal vertical dimension (OVD) in prosthetic treatment is an important clinical procedure. No methods are considered to be scientifically accurate in determining the reduced OVD in patients with missing posterior teeth. The purpose of this study was to derive a new formula to predict the lower facial height (LFH) using cephalometric analysis. Fifty-eight lateral cephalometric radiographs of Japanese clinical residents (mean age, 28.6 years) with complete natural dentition were used for this study. Conventional skeletal landmarks were traced. Not only the LFH, but six angular parameters and four linear parameters, which did not vary with reduced OVD, were selected. Multiple linear regression analysis with a stepwise forward approach was used to develop a prediction formula for the LFH using other measured parameters as independent variables. The LFH was significantly correlated with Gonial angle, SNA, N-S, Go-Me, Nasal floor to FH, Nasal floor to SN, and FH to SN. By stepwise multiple linear regression analysis, the following formula was obtained: LFH (degrees) = 65.38 + 0.30 × Gonial angle (degrees) − 0.49 × SNA (degrees) − 0.41 × N-S (mm) + 0.21 × Go-Me (mm) − 15.45 × Nasal floor to FH (degrees) + 15.22 × Nasal floor to SN (degrees) − 15.40 × FH to SN (degrees). Within the limitations of this study for one racial group, our prediction formula is valid in every LFH range (37 to 59°), and it may also be applicable to patients in whom the LFH deviates greatly from the average. © 2014 by the American College of Prosthodontists.
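
    The regression formula translates directly into code; a sketch with the coefficients copied from the abstract (function and argument names are ours; angles in degrees, lengths in mm):

        def predict_lfh(gonial_angle, sna, n_s, go_me, nf_fh, nf_sn, fh_sn):
            """Predicted lower facial height (degrees) from the published
            stepwise regression on cephalometric measurements."""
            return (65.38
                    + 0.30 * gonial_angle   # Gonial angle, degrees
                    - 0.49 * sna            # SNA, degrees
                    - 0.41 * n_s            # N-S, mm
                    + 0.21 * go_me          # Go-Me, mm
                    - 15.45 * nf_fh         # Nasal floor to FH, degrees
                    + 15.22 * nf_sn         # Nasal floor to SN, degrees
                    - 15.40 * fh_sn)        # FH to SN, degrees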

  20. Optimum design of Cassegrain antenna for space laser communication

    Science.gov (United States)

    Hu, Yuan; Jiang, Lun; Wang, Chao; Li, Yingchao

    2016-10-01

    The divergence angle is a very important index in space laser communication for energy transfer. Typically, a large-aperture telescope is used as the optical antenna for angle compression, and the divergence angle of the communication beam is usually calculated by the diffraction-limit equation 1.22λ/D. This equation describes the diffraction of a spherical wave through a circular aperture. However, the light source is commonly a laser with a Gaussian distribution, and the optical antenna has a central obscuration. The antenna parameters, namely the obscuration ratio and the Gaussian beam apodization, are significantly related to the far-field energy. In this study, we obtain the mathematical relation between the divergence angle, the energy loss, and the antenna parameters. From this relationship, we know that the divergence angle becomes smaller as the antenna obscuration ratio increases, which tends to enhance the far-field energy density; but a larger obscuration ratio also increases the energy loss. At the same time, an increase in the Gaussian beam apodization raises the energy of the first diffraction ring but also increases the radius of the first ring. These effects conflict. Trade-off antenna parameters were therefore found from the curves of obscuration ratio and divergence angle. The parameters of a Cassegrain antenna were optimally designed for energy maximization, considering the apodization from mechanical structure blocking. Long-distance laser communications were successful in airborne tests, and stable communication was demonstrated. The energy gain is sufficient for the SNR of high-bandwidth transmission in an atmospheric channel.
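
    The diffraction-limit relation quoted above is a one-liner; a sketch, with the wavelength and aperture values chosen purely for illustration:

        def diffraction_limited_divergence(wavelength_m, aperture_m):
            """Divergence angle (radians) of a spherical wave diffracted by a
            circular aperture: theta = 1.22 * lambda / D."""
            return 1.22 * wavelength_m / aperture_m

        # e.g. a 1550 nm beam through a 100 mm antenna: ~18.9 microradians
        theta = diffraction_limited_divergence(1550e-9, 0.100)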

  1. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1996-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. We must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and with vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases; all six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. (author)

  2. Use of Hardware Battery Drill in Orthopedic Surgery.

    Science.gov (United States)

    Satish, Bhava R J; Shahdi, Masood; Ramarao, Duddupudi; Ranganadham, Atmakuri V; Kalamegam, Sundaresan

    2017-03-01

    Among the power drills (electrical/pneumatic/battery) used in Orthopedic surgery, the battery drill has several advantages. Surgeons in low-resource settings could not routinely use Orthopedic battery drills (OBDs) due to the prohibitive cost of good drills or the poor quality of other drills. A "hardware" or engineering battery drill (HBD) is a viable alternative to an OBD. The HBD is easy to procure, rugged in nature, easy to maintain, durable, easily serviceable, and 70 to 75 times cheaper than a standard high-end OBD. We consider the HBD one of the most cost-effective pieces of equipment in Orthopedic operation theatres.

  3. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from particle collisions confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require the transport of large amounts of data (TB) at high transfer rates (Gb/s) and high availability, including features such as reliability, serviceability, and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time, and provide status reports on the experiment itself and the electronic instrumentation involved. Moreover, systems should ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events, decisions taken, acknowledgements, and implemented changes. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check system status by reading sensors, manage events, update inventory databases with the hardware components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency

  4. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering, but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum, with no relationship to the wider system. This is wrong, because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  5. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  6. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    to a particular hardware platform, is a risky investment. To make this problem worse, the scientists who have the required field expertise to write the algorithms are not formally trained programmers. This usually leads to scientists writing buggy, inefficient and hard-to-maintain programs. Occasionally..., a skilled programmer is hired, which increases the program quality but also the cost of the program. This extra link also introduces longer development iterations and may introduce other errors, as the programmer is not necessarily an expert in the field. And neither approach solves the issue...

  7. Parallel Processing with Digital Signal Processing Hardware and Software

    Science.gov (United States)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  8. System for processing an encrypted instruction stream in hardware

    Science.gov (United States)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    2016-04-12

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.

  9. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self-contained hardware unit which interfaces between a manipulator arm and its payload. The End Point Control Unit (EPCU) has been built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  10. Surface moisture measurement system hardware acceptance test procedure

    International Nuclear Information System (INIS)

    Ritter, G.A.

    1996-01-01

    The purpose of this acceptance test procedure is to verify that the mechanical and electrical features of the Surface Moisture Measurement System are operating as designed and that the unit is ready for field service. This procedure will be used in conjunction with a software acceptance test procedure, which addresses testing of software and electrical features not addressed in this document. Hardware testing will be performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. These systems were developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks

  11. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern recognition and track fitting, artificial-retina and Hough-transform methods have been introduced in the field, and these have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation of the retina algorithm based on a floating-point core. Detailed measurements of this algorithm are presented. Retina performance and the capabilities of the FPGA are discussed, along with perspectives for further optimization and applications.
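
    Both methods map detector hits into a parameter space and look for vote maxima. As a plain software illustration of the voting step for straight tracks (a sketch with arbitrary binning parameters; it stands in for, but is not, the note's firmware):

        import numpy as np

        def hough_vote(hits, n_theta=180, n_rho=100, rho_max=50.0):
            # Each hit (x, y) votes for every line (theta, rho) passing through
            # it; accumulator maxima are track candidates. The FPGA version
            # evaluates the bins in parallel rather than looping over hits.
            acc = np.zeros((n_theta, n_rho), dtype=np.int32)
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            for x, y in hits:
                rho = x * np.cos(thetas) + y * np.sin(thetas)
                idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
                ok = (idx >= 0) & (idx < n_rho)
                acc[np.nonzero(ok)[0], idx[ok]] += 1
            return acc, thetas

        # Peak extraction, e.g.: np.unravel_index(acc.argmax(), acc.shape)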

  12. Hardware and first results of TUNKA-HiSCORE

    International Nuclear Information System (INIS)

    Kunnas, M.; Brückner, M.; Budnev, N.; Büker, M.; Chvalaev, O.; Dyachok, A.; Einhaus, U.; Epimakhov, S.; Gress, O.; Hampf, D.; Horns, D.; Ivanova, A.; Konstantinov, E.; Korosteleva, E.; Kuzmichev, L.; Lubsandorzhiev, B.; Mirgazov, R.; Monkhoev, R.; Nachtigall, R.; Pakhorukov, A.

    2014-01-01

    As a non-imaging wide-angle Cherenkov air shower detector array with an area of up to 100 km², the HiSCORE (Hundred*i Square-km Cosmic ORigin Explorer) detector concept allows measurements of gamma rays and cosmic rays in an energy range from 10 TeV up to 1 EeV. In the framework of the Tunka-HiSCORE project we have started measurements with a small prototype array and plan to build an engineering array (1 km²) on the site of the Tunka experiment in Siberia. The first results and the most important hardware components are presented here.

  13. UAV payload and mission control hardware/software architecture

    OpenAIRE

    Pastor Llorens, Enric; López Rubio, Juan; Royo Chic, Pablo

    2007-01-01

    This paper presents an embedded hardware/software architecture specially designed to be applied to mini/micro Unmanned Aerial Vehicles (UAV). A UAV is a low-cost non-piloted airplane designed to operate in D-cube (Dangerous-Dirty-Dull) situations [8]. Many types of UAVs exist today; however, with the advent of civil UAV applications, the class of mini/micro UAVs is emerging as a valid option in a commercial scenario. This type of UAV shares limitations with most embedded computer systems: lim...

  14. Optimizing Investment Strategies with the Reconfigurable Hardware Platform RIVYERA

    Directory of Open Access Journals (Sweden)

    Christoph Starke

    2012-01-01

    Full Text Available The hardware structure of a processing element used to optimize an investment strategy for financial markets is presented. It is shown how this processing element can be implemented multiple times on the massively parallel FPGA machine RIVYERA. This leads to a speedup by a factor of about 17,000 in comparison to a single high-performance PC, while saving more than 99% of the consumed energy. Furthermore, it is shown for a specific security and different time periods that the optimized investment strategy delivers an outperformance of between 2 and 14 percent relative to a buy-and-hold strategy.

  15. Technology Corner: Dating of Electronic Hardware for Prior Art Investigations

    Directory of Open Access Journals (Sweden)

    Sellam Ismail

    2012-03-01

    Full Text Available In many legal matters, specifically patent litigation, determining and authenticating the date of computer hardware or other electronic products or components is often key to establishing the item as legitimate evidence of prior art. Such evidence can be used to buttress claims of technologies available, or of events transpiring, by or at a particular date. In 1945, the Electronics Industry Association published a standard, EIA 476-A, standardized in the reference Source and Date Code Marking (Electronic Industries Association, 1988).

  16. Hardware Architectures for the Orthogonal and Biorthogonal Wavelet Transform

    Directory of Open Access Journals (Sweden)

    G. Knowles

    2002-01-01

    Full Text Available In this note, optimal hardware architectures for the orthogonal and biorthogonal wavelet transforms are presented. The approach used here is not the standard lifting method, but instead takes advantage of the symmetries inherent in the coefficients of the transforms and in the decimation/interpolation operators. The design is based on a highly optimized datapath which seamlessly integrates both orthogonal and biorthogonal transforms, data extension at the edges, and the forward and inverse transforms. The datapath could be further optimized for speed or low power. It is controlled by a small, fast control unit which is hard-programmed according to the wavelet or wavelets required by the application.
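
    The coefficient symmetry mentioned above is what lets a datapath halve its multiplier count: mirrored filter taps share one coefficient, so the two samples can be added before a single multiplication. A minimal sketch of this folding, assuming edge extension has already been applied as in the design (function and variable names are ours):

        def folded_fir(x, c, n):
            # Symmetric filter h[k] = h[-k] = c[|k|]: fold the mirrored taps
            # into one addition so a single multiplier serves both of them.
            # Assumes x is already extended at the edges, as in the datapath.
            y = c[0] * x[n]
            for k in range(1, len(c)):
                y += c[k] * (x[n - k] + x[n + k])
            return y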

  17. J-2X Upper Stage Engine: Hardware and Testing 2009

    Science.gov (United States)

    Buzzell, James C.

    2009-01-01

    Mission: common upper stage engine for Ares I and Ares V. Challenge: use proven technology from Saturn, X-33 and RS-68 to develop the highest-Isp gas-generator (GG) cycle engine in history, for two missions, in record time. Key features: LOX/LH2 GG cycle, series turbines (2), HIP-bonded MCC, pneumatic ball-sector valves, on-board engine controller, tube-wall regeneratively cooled nozzle with large passively cooled nozzle extension, TEG boost/cooling. Development philosophy: proven hardware, aggressive schedule, early risk reduction, requirements-driven.

  18. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma, which is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require transporting large amounts of data (TB) at high transfer rates (Gb/s) and high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time and provide status reports on either the experiment itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken to acknowledge them, and changes implemented. Therefore, for everything to work in compliance with specifications, the instrumentation must include management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases of the hardware components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency scenarios
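
    As a toy illustration of the monitor-and-notify loop described above (a sketch only: the sensor names, thresholds and the read_sensor callable are invented, not the paper's ATCA/IPMI implementation):

        import time

        # Hypothetical limits; a real shelf manager would read these from the
        # sensor data records of the boards being managed.
        THRESHOLDS = {"board_temp_C": 70.0, "fan_rpm": 2000.0}

        def poll_sensors(read_sensor):
            # One monitoring pass: read each sensor and collect events on
            # threshold crossings for the operator to acknowledge.
            events = []
            for name, limit in THRESHOLDS.items():
                value = read_sensor(name)
                crossed = value > limit if name.endswith("_C") else value < limit
                if crossed:
                    events.append((time.time(), name, value))
            return events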

  19. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  20. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource saving, reduction of operational expenditures, safety enhancement, and provision of comfortable working and leisure conditions. Automation of the corresponding engineering systems for illumination, climate control and security, as well as of communication systems and networks, via contemporary technologies (e.g., IoT, the Internet of Things) poses a significant challenge related to the storage and processing of massive volumes of data, of which only a small fraction is currently put to use. Since a building's lifespan is long and exceeds that of the codes and standards covering safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of rational use of large data volumes at the information-modeling stage. Research objectives: increase the efficiency of managing the hardware subsystems of smart buildings on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to the class of man-machine systems, the cybernetic approach is taken as the basic method for the design and study of the information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and the architectural principles of organization of information management systems. Results: a flexible architecture for an information system managing smart house hardware subsystems has been synthesized. This architecture encompasses several levels: a client level, an application level and a data level, as well as three layers: a presentation layer, an actuating-device layer and an analytics layer. The problem of growing volumes of information processed by the real-time message controller is addressed by employing sensors and actuating mechanisms with configurable

  1. Simulation of heat exchanger network (HEN) and planning the optimum cleaning schedule

    International Nuclear Information System (INIS)

    Sanaye, Sepehr; Niroomand, Behzad

    2007-01-01

    Modeling and simulation of heat exchanger networks for estimating the amount of fouling, variations in the overall heat transfer coefficient, and variations in the outlet temperatures of hot and cold streams has a significant effect on production analysis. In this analysis, parameters such as the exchangers' types and arrangements, their heat transfer surface areas, the mass flow rates of the hot and cold streams, the heat transfer coefficients and the variation of fouling with time are required input data. The main goal is to find the variation of the outlet temperatures of the hot and cold streams with time, in order to plan the optimum cleaning schedule of the heat exchangers that provides the minimum operational cost or the maximum savings. In this paper, the simulation of heat exchanger networks is performed by choosing an asymptotic fouling function. The two main parameters of the asymptotic fouling formation model, i.e. the decay time of fouling formation (τ) and the asymptotic fouling resistance (R_f^∞), were obtained from empirical data as input parameters to the simulation relations. These data were extracted from the technical history sheets of the Khorasan Petrochemical Plant to guarantee consistency between the model outputs and real operating conditions. The output results of the software program developed, including the variation with time of the outlet temperatures of the hot and cold streams, the heat transfer coefficient and the heat transfer rate in the exchangers, are presented for two case studies. Then, an objective function (operational cost) was defined, and the optimal cleaning schedule of the HEN (heat exchanger network) in the urea and ammonia units was found by minimizing the objective function using a numerical search method. Based on this minimization procedure, the decision was made whether a heat exchanger should be cleaned or continue to operate. The final result was the most cost-effective plan for the HEN cleaning schedule. The corresponding savings by
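
    The two fitted parameters match the classic asymptotic (Kern-Seaton-type) fouling law; a plausible form consistent with the abstract, written out for reference (our reconstruction, not quoted from the paper):

        R_f(t) = R_f^{\infty}\left(1 - e^{-t/\tau}\right),
        \qquad
        \frac{1}{U(t)} = \frac{1}{U_{\mathrm{clean}}} + R_f(t)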

  2. The Mars Science Laboratory (MSL) Entry, Descent And Landing Instrumentation (MEDLI): Hardware Performance and Data Reconstruction

    Science.gov (United States)

    Little, Alan; Bose, Deepak; Karlgaard, Chris; Munk, Michelle; Kuhl, Chris; Schoenenberger, Mark; Antill, Chuck; Verhappen, Ron; Kutty, Prasad; White, Todd

    2013-01-01

    The Mars Science Laboratory (MSL) Entry, Descent and Landing Instrumentation (MEDLI) hardware was a first-of-its-kind sensor system that gathered temperature and pressure readings on the MSL heatshield during Mars entry on August 6, 2012. MEDLI began as a challenging instrumentation problem and has been a model of collaboration across multiple NASA organizations. After the culmination of almost 6 years of effort, the sensors performed extremely well, collecting data from before atmospheric interface through parachute deploy. This paper summarizes the history of the MEDLI project and hardware development, including key lessons learned that can apply to future instrumentation efforts. MEDLI returned an unprecedented amount of high-quality engineering data from a Mars entry vehicle. We present the performance of the 3 sensor types: pressure, temperature, and isotherm tracking, as well as the performance of the custom-built sensor support electronics. A key component throughout the MEDLI project has been the ground testing and analysis effort required to understand the returned flight data. Although data analysis is ongoing through 2013, this paper reveals some of the early findings on the aerothermodynamic environment that MSL encountered at Mars, the response of the heatshield material to that heating environment, and the aerodynamic performance of the entry vehicle. The MEDLI data promise to challenge our engineering assumptions and revolutionize the way we account for margins in entry vehicle design.

  3. Energy Harvesting-based Spectrum Access with Incremental Cooperation, Relay Selection and Hardware Noises

    Directory of Open Access Journals (Sweden)

    T. N. Nguyen

    2017-04-01

    Full Text Available In this paper, we propose an energy harvesting (EH)-based spectrum access model for a cognitive radio (CR) network. In the proposed scheme, one of the available secondary transmitters (STs) helps a primary transmitter (PT) forward primary signals to a primary receiver (PR). Via this cooperation, the selected ST finds opportunities to access the licensed bands to transmit secondary signals to its intended secondary receiver (SR). The secondary users are assumed to be mobile; hence, optimization of the energy consumption of these users is of interest. The EH STs have to harvest energy from the PT's radio-frequency (RF) signals to serve the PT-PR communication as well as to transmit their own signals. The proposed scheme employs an incremental relaying technique in which the PR only requires assistance from the STs when the direct transmission between PT and PR is not successful. Moreover, we also investigate the impact of hardware impairments on the performance of the primary and secondary networks. For performance evaluation, we derive exact and lower-bound expressions for the outage probability (OP) over Rayleigh fading channels. Monte Carlo simulations are performed to verify the theoretical results. The results show that the outage performance of both networks can be enhanced by increasing the number of ST-SR pairs. In addition, it is also shown that the fraction of time used for EH, the positions of the secondary users and the hardware-impairment level significantly affect the system performance.
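
    A minimal Monte Carlo check of this kind of outage probability, for a single Rayleigh-faded link under a common aggregate hardware-impairment model (the SINR form, kappa and all parameter values here are our assumptions, not the paper's exact system):

        import numpy as np

        rng = np.random.default_rng(0)

        def outage_prob(n_trials=1_000_000, snr_db=10.0, rate=1.0, kappa=0.1):
            # kappa is the aggregate impairment level: distortion power grows
            # with signal power, capping the effective SINR at 1/kappa**2.
            snr = 10 ** (snr_db / 10)
            g = rng.exponential(1.0, n_trials)           # |h|^2 under Rayleigh fading
            sinr = snr * g / (snr * g * kappa**2 + 1.0)  # impairment-limited SINR
            return np.mean(sinr < 2 ** rate - 1)         # outage: capacity < rate

        print(outage_prob())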

  4. FTK: the hardware Fast TracKer of the ATLAS experiment at CERN

    CERN Document Server

    Maznas, Ioannis; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up of the Large Hadron Collider environment, the trigger systems of the experiments have to be exceedingly sophisticated and fast at the same time, in order to select the relevant physics processes against the background processes. The Fast TracKer (FTK) is a track-finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). To accomplish this, FTK is a highly parallel system which is currently under installation in ATLAS. It will first provide the trigger system with tracks in the central region of the ATLAS detector, and next year it is expected to cover the whole detector. The system is based on pattern matching between hits coming from the silicon trackers of the ATLAS detector and 1 billion simulated patterns stored in specially designed ASIC chips (Associative Memory – AM06). In a firs...
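
    A toy software analogue of the associative-memory matching step (the layer words and patterns are invented for illustration; the AM chips compare all stored patterns in parallel rather than looping):

        def match_patterns(pattern_bank, hit_words):
            # A pattern 'fires' when every layer's coarse hit word is present;
            # fired patterns define roads passed on for full-resolution fitting.
            roads = []
            for pid, pattern in enumerate(pattern_bank):
                if all((layer, word) in hit_words for layer, word in enumerate(pattern)):
                    roads.append(pid)
            return roads

        bank = [(3, 7, 1, 9), (2, 7, 1, 9)]          # two illustrative 4-layer patterns
        hits = {(0, 3), (1, 7), (2, 1), (3, 9)}      # (layer, word) pairs in the event
        print(match_patterns(bank, hits))            # -> [0]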

  5. Replacement Technologies for Precision Cleaning of Aerospace Hardware for Propellant Service

    Science.gov (United States)

    Beeson, Harold; Kirsch, Mike; Hornung, Steven; Biesinger, Paul

    1997-01-01

    The NASA White Sands Test Facility (WSTF) is developing cleaning and verification processes to replace currently used chlorofluorocarbon-113- (CFC-113-) based processes. The processes being evaluated include both aqueous- and solvent-based techniques. Replacement technologies are being investigated for aerospace hardware and for gauges and instrumentation. This paper includes the findings of investigations of aqueous cleaning and verification of aerospace hardware using known contaminants, such as hydraulic fluid and commonly used oils. The results correlate nonvolatile residue with CFC-113. The studies also include enhancements to aqueous sampling for organic and particulate contamination. Although aqueous alternatives have been identified for several processes, a need still exists for nonaqueous solvent cleaning, such as the cleaning and cleanliness verification of gauges used for oxygen service. The cleaning effectiveness of tetrachloroethylene (PCE), trichloroethylene (TCE), ethanol, hydrochlorofluorocarbon 225 (HCFC 225), HCFC 141b, HFE 7100®, and Vertrel MCA® was evaluated using aerospace gauges and precision instruments and then compared to the cleaning effectiveness of CFC-113. Solvents considered for use in oxygen systems were also tested for oxygen compatibility using high-pressure oxygen autogenous ignition and liquid oxygen mechanical impact testing.

  6. Hardware-Assisted System for Program Execution Security of SOC

    Directory of Open Access Journals (Sweden)

    Wang Xiang

    2016-01-01

    Full Text Available With the rapid development of embedded systems, system security has become more and more important. Most embedded systems are at risk from a range of software attacks, such as buffer overflow attacks and Trojan viruses. In addition, with the rapid growth in the number of embedded systems and their wide application, hardware attacks on embedded systems are also increasing. This paper presents a new hardware-assisted security mechanism to protect a program's code and data and monitor its normal execution. The mechanism mainly monitors three types of information: the start/end addresses of the program's basic blocks; a lightweight hash value for each basic block; and the address of the next basic block. These parameters are extracted by additional tools running on a PC and stored in the security module. During normal program execution, the security module compares the real-time state of the program with the stored information. If an anomaly is detected, it triggers the appropriate security response, suspending the program and jumping to a specified location. The module has been tested and validated on an SOPC with the OR1200 processor. The experimental analysis shows that the proposed mechanism can defend against a wide range of common software and physical attacks with low performance penalties and minimal overhead.
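
    A software caricature of the per-block check the security module performs (the addresses, hash width and table contents are invented; the real module runs in hardware alongside the CPU):

        import hashlib

        def block_digest(code_bytes):
            # Stand-in for the lightweight per-basic-block hash.
            return hashlib.sha256(code_bytes).digest()[:4]

        # Reference table extracted offline: start address -> (hash, next start),
        # mirroring the three items stored per basic block.
        REFERENCE = {0x1000: (block_digest(b"\x90\x90\xc3"), 0x2000)}

        def check_block(start, code_bytes, next_start):
            # True when address, content hash and successor all match; a
            # mismatch would trigger the security response (suspend + jump).
            ref = REFERENCE.get(start)
            return ref is not None and ref == (block_digest(code_bytes), next_start)

        print(check_block(0x1000, b"\x90\x90\xc3", 0x2000))   # True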

  7. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1-accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  8. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, an observer can perceive the three-dimensional impression of a scene from autostereograms of this kind. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: the gaze remains focused on the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024×512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
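
    For reference, the per-scanline constraint logic any SIRDS renderer (CPU or GPU) must implement can be sketched as follows; this is a simplified version without hidden-surface handling, using classic textbook parameter values that are our assumption, not the paper's:

        import numpy as np

        def sirds_row(depth_row, e=80, mu=0.33, rng=None):
            # depth_row: floats in [0, 1], 1 = nearest. Pixels that must look
            # identical for the 3D effect are linked, then each chain of linked
            # pixels receives one random intensity.
            rng = rng or np.random.default_rng()
            n = len(depth_row)
            same = np.arange(n)                  # same[i]: pixel i copies pixel same[i]
            for x in range(n):
                z = depth_row[x]
                s = int(round(e * (1 - mu * z) / (2 - mu * z)))   # stereo separation
                left, right = x - s // 2, x - s // 2 + s
                if left >= 0 and right < n:
                    same[right] = left           # constrain the stereo pair
            row = np.empty(n, dtype=np.uint8)
            for x in range(n):                   # resolve constraints left to right
                row[x] = rng.integers(0, 2) * 255 if same[x] == x else row[same[x]]
            return row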

  9. Testing of hardware implementation of infrared image enhancing algorithm

    Science.gov (United States)

    Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

    The interpretation of IR images depends on the radiative properties of the observed objects and the surrounding scenery. The skill and experience of the observer are also of great importance. One way to improve the effectiveness of observation is to use an image-enhancement algorithm capable of improving image quality and, with it, the effectiveness of object detection. The paper presents results of testing a hardware implementation of an IR image-enhancement algorithm based on histogram processing. The main issue in hardware implementation of complex image-enhancement procedures is their high computational cost; as a result, implementing complex algorithms in software on general-purpose processors usually does not bring satisfactory results. Because of the high efficiency requirements and the need for parallel operation, Altera's EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs the image data processing operations which require considerable computing power. It also generates the control signals for array readout, performs non-uniformity correction (NUC) and bad-pixel mapping, generates the control signals for the display module and, finally, executes the complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.
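
    The plateau method clips each histogram bin before equalization so that a vast uniform background cannot consume the whole output range. A minimal software sketch for a 14-bit frame (the plateau value and bit depth are illustrative, not the tested hardware's parameters):

        import numpy as np

        def plateau_equalize(img14, plateau=500, out_levels=256):
            # Clip the histogram at the plateau, then equalize via the CDF.
            hist = np.bincount(img14.ravel(), minlength=2**14)
            hist = np.minimum(hist, plateau)           # plateau clipping
            cdf = np.cumsum(hist).astype(np.float64)
            cdf /= cdf[-1]                             # normalize to [0, 1]
            lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
            return lut[img14]

        frame = np.random.randint(0, 2**14, (240, 320))   # stand-in for sensor data
        enhanced = plateau_equalize(frame)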

  10. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1-accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  11. TileCal ROD Hardware and Software Requirements

    CERN Document Server

    Castelo, J; Cuenca, C; Ferrer, A; Fullana, E; Higón, E; Iglesias, C; Munar, A; Poveda, J; Ruiz-Martínez, A; Salvachúa, B; Solans, C; Valls, J A

    2005-01-01

    In this paper we present the specific hardware and firmware requirements and the modifications needed to operate the Liquid Argon Calorimeter (LiArg) ROD motherboard in the Hadronic Tile Calorimeter (TileCal) environment. Although the use of the board is similar for both calorimeters, there are still some differences in the operation of the front-ends associated with the two detectors which prevent the use of an identical board. We review the evolution of the ROD design from the early prototype stages (RODs based on commercial and Demonstrator boards) to the production phases (final ROD board based on the LiArg design), with emphasis on the different operation modes for the TileCal detector. We start with a short review of the functionality of the TileCal ROD system and then detail the ROD hardware requirements for the two options: the baseline (ROD Demo board) and the final (high-density ROD board). We also summarize the performance parameters of the ROD motherboard based on the final high-density option and s...

  12. Autonomous open-source hardware apparatus for quantum key distribution

    Directory of Open Access Journals (Sweden)

    Ignacio H. López Grande

    2016-01-01

    Full Text Available We describe an autonomous, fully functional implementation of the BB84 quantum key distribution protocol using open-source hardware microcontrollers for synchronization, communication, key sifting and real-time key generation diagnostics. The quantum bits are prepared in the polarization of weak optical pulses generated with light-emitting diodes, and detected using one single-photon counter and a temporally multiplexed scheme. The system generates a shared cryptographic key at a rate of 365 bps, with a raw quantum bit error rate of 2.7%. A detailed description of the peripheral electronics for control, driving and communication between stages is released as supplementary material. The device can be built using simple and reliable hardware and is presented as an alternative for a practical realization of sophisticated, yet accessible, quantum key distribution systems. Received: 11 November 2015, Accepted: 7 January 2016; Edited by: O. Martínez; DOI: http://dx.doi.org/10.4279/PIP.080002. Cite as: I H López Grande, C T Schmiegelow, M A Larotonda, Papers in Physics 8, 080002 (2016).
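
    The key-sifting step the microcontrollers perform is purely classical: both parties publicly compare basis choices and keep only the rounds where they agree. A minimal sketch with an ideal, error-free channel, so the simulated QBER is near zero rather than the device's 2.7%:

        import numpy as np

        rng = np.random.default_rng(7)

        def sift(alice_bits, alice_bases, bob_bases, bob_bits):
            # Keep only rounds where both chose the same basis
            # (0 = rectilinear, 1 = diagonal).
            keep = alice_bases == bob_bases
            return alice_bits[keep], bob_bits[keep]

        n = 1000
        a_bits = rng.integers(0, 2, n)
        a_bases = rng.integers(0, 2, n)
        b_bases = rng.integers(0, 2, n)
        # Ideal channel: a matching basis reproduces the bit, else a random one.
        b_bits = np.where(b_bases == a_bases, a_bits, rng.integers(0, 2, n))
        key_a, key_b = sift(a_bits, a_bases, b_bases, b_bits)
        qber = np.mean(key_a != key_b)   # ~0 here; 2.7% raw in the device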

  13. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improving productivity while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product (SYMV) on nVidia Fermi GPUs. Due to its inherently memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step in calculating the eigenpairs. Using a novel design that addresses the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library, in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.
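
    The memory-bound character comes from each stored matrix element being used twice. A small numpy sketch of SYMV computed from one stored triangle (illustrative of the data reuse only; the paper's kernel tiles this for the GPU memory hierarchy):

        import numpy as np

        def symv(A_lower, x):
            # A_lower holds the lower triangle (zeros above the diagonal).
            # Each stored a_ij (i > j) serves both y_i and y_j: one load,
            # two multiply-adds, so bandwidth, not flops, limits performance.
            y = A_lower @ x                  # lower-triangle contribution
            y += A_lower.T @ x               # mirrored upper-triangle contribution
            y -= np.diag(A_lower) * x        # diagonal was counted twice
            return y

        A = np.tril(np.random.rand(4, 4))
        x = np.random.rand(4)
        assert np.allclose(symv(A, x), (A + A.T - np.diag(np.diag(A))) @ x)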

  14. Health Maintenance System (HMS) Hardware Research, Design, and Collaboration

    Science.gov (United States)

    Gonzalez, Stefanie M.

    2010-01-01

    The Space Life Sciences Division (SLSD) concentrates on optimizing crew members' health. Developments are translated into innovative engineering solutions, research growth, and community awareness. This internship incorporates all those areas by targeting various projects. The main project focuses on integrating clinical and biomedical engineering principles to design, develop, and test new medical kits scheduled for launch in the spring of 2011. Additionally, items will be tagged with Radio Frequency Identification (RFID) devices to keep track of the inventory. The tags will then be tested to optimize the radio-frequency feed and feed placement. Research growth will occur with ground-based experiments designed to measure calcium-encrusted deposits on the International Space Station (ISS). The tests will assess urine calcium levels with Portable Clinical Blood Analyzer (PCBA) technology. If effective, a model for urine calcium will be developed and extended to microgravity environments. To support collaboration among the subdivisions of SLSD, the architecture of the Crew Healthcare Systems (CHeCS) SharePoint site has been redesigned for maximum efficiency. Community collaboration has also been established with the University of Southern California, Dept. of Aeronautical Engineering, and the Food and Drug Administration (FDA). Hardware disbursements will transpire within these communities to support planetary surface exploration and to serve as an educational tool demonstrating how ground-based medicine has influenced the technological development of space hardware.

  15. MRI - From basic knowledge to advanced strategies: Hardware

    International Nuclear Information System (INIS)

    Carpenter, T.A.; Williams, E.J.

    1999-01-01

    There have been remarkable advances in the hardware used for nuclear magnetic resonance imaging scanners. These advances have enabled an extraordinary range of sophisticated magnetic resonance (MR) sequences to be performed routinely. This paper focuses on the following aspects: (a) Magnet system. Advances in magnet technology have produced superconducting magnets which are low-maintenance and have excellent homogeneity and very small stray-field footprints. (b) Gradient system. Optimisation of gradient design has produced gradient coils which provide excellent fields for spatial encoding, have reduced diameter, and incorporate technology to minimise the effects of eddy currents. These coils can now routinely provide the strength and switching rate required by modern imaging methods. (c) Radio-frequency (RF) system. Advances in digital electronics can now provide RF electronics with low noise characteristics, high accuracy and improved stability, all of which are essential to the formation of excellent images. The use of surface coils has increased with the availability of phased-array systems, which are ideal for spinal work. (d) Computer system. The largest advance in technology has been in the supporting computer hardware, which is now affordable and reliable, with performance to match the processing requirements demanded by present imaging sequences. (orig.)

  16. Hardware Efficient Architecture with Variable Block Size for Motion Estimation

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2016-01-01

    Full Text Available Video coding standards such as MPEG-x and H.26x incorporate variable block size motion estimation (VBSME), which is highly time-consuming and, from a hardware implementation perspective, extremely complex due to the huge amount of computation involved. In this paper, we discuss basic aspects of video coding and study and compare existing architectures for VBSME. Architectures with different pixel-scanning patterns give a variety of performance results for motion vector (MV) generation, showing a tradeoff between macroblocks processed per second and the resources required for computation. The aim of this paper is to design a VBSME architecture which uses optimal resources to minimize chip area and offers an adequate frame-processing rate for real-time implementation. The speed of computation is improved by accessing the 16 pixels of a 4 × 4 base macroblock in a single clock cycle using a z scanning pattern. The sum of absolute differences (SAD), the cost function widely adopted for hardware implementation, is used in the VBSME architecture with a multiplexer-based absolute-difference calculator and partial summation term reduction (PSTR) based multi-operand adders. Device utilization of the proposed implementation is only 22k gates, and it can process 179 HD (1920 × 1080) resolution frames per second in the best case and 47 HD resolution frames per second in the worst case. Owing to this throughput, the design is well suited for real-time implementation.
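
    The SAD cost named above is simple enough to state exactly. A reference software version of full-search matching for one 4 × 4 base block (a sketch: the window size and frame handling are ours, and the hardware reuses these base-block SADs by summation to build costs for larger partitions):

        import numpy as np

        def sad(a, b):
            # Sum of absolute differences: the VBSME matching cost.
            return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

        def best_mv(cur, ref, bx, by, n=4, search=8):
            # Full search over a +/-search window in the reference frame for
            # the n x n block of the current frame at (bx, by).
            block = cur[by:by + n, bx:bx + n]
            best = (0, 0, np.inf)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and 0 <= x and y + n <= ref.shape[0] and x + n <= ref.shape[1]:
                        cost = sad(block, ref[y:y + n, x:x + n])
                        if cost < best[2]:
                            best = (dx, dy, cost)
            return best                      # (mv_x, mv_y, sad)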

  17. A Hardware Track Trigger (FTK) for the ATLAS Trigger

    CERN Document Server

    Zhang, J; The ATLAS collaboration

    2014-01-01

    The design of the ATLAS hardware Fast TracKer (FTK) and studies of its performance are presented. The existing trigger system of the ATLAS experiment is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to < 1 kHz for permanent storage, at the LHC design luminosity of 10^34 cm^-2 s^-1. The LHC has performed exceptionally well, routinely exceeding the design luminosity, and from 2015 is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger (HLT) system, both due to the need for more sophisticated algorithms to reject background and due to the larger data volumes that will need to be processed. The Fast TracKer is a custom electronics system that will operate at the full Level-1 accept rate of 100 kHz and provide high-quality tracks at the beginning of processing in the HLT. This will be performed by track reconstruction in massively parallel hardware using associative memories (AM) and FPGAs. The availability of the full...

  18. The FTK: A Hardware Track Finder for the ATLAS Trigger

    CERN Document Server

    Alison, J; Anderson, J; Andreani, A; Andreazza, A; Annovi, A; Antonelli, M; Atkinson, M; Auerbach, B; Baines, J; Barberio, E; Beccherle, R; Beretta, M; Biesuz, N V; Blair, R; Blazey, G; Bogdan, M; Boveia, A; Britzger, D; Bryant, P; Burghgrave, B; Calderini, G; Cavaliere, V; Cavasinni, V; Chakraborty, D; Chang, P; Cheng, Y; Cipriani, R; Citraro, S; Citterio, M; Crescioli, F; Dell'Orso, M; Donati, S; Dondero, P; Drake, G; Gadomski, S; Gatta, M; Gentsos, C; Giannetti, P; Giulini, M; Gkaitatzis, S; Howarth, J W; Iizawa, T; Kapliy, A; Kasten, M; Kim, Y K; Kimura, N; Klimkovich, T; Kordas, K; Korikawa, T; Krizka, K; Kubota, T; Lanza, A; Lasagni, F; Liberali, V; Li, H L; Love, J; Luciano, P; Luongo, C; Magalotti, D; Melachrinos, C; Meroni, C; Mitani, T; Negri, A; Neroutsos, P; Neubauer, M; Nikolaidis, S; Okumura, Y; Pandini, C; Penning, B; Petridou, C; Piendibene, M; Proudfoot, J; Rados, P; Roda, C; Rossi, E; Sakurai, Y; Sampsonidis, D; Sampsonidou, D; Schmitt, S; Schoening, A; Shochet, M; Shojaii, S; Soltveit, H; Sotiropoulou, C L; Stabile, A; Tang, F; Testa, M; Tompkins, L; Vercesi, V; Villa, M; Volpi, G; Webster, J; Wu, X; Yorita, K; Yurkewicz, A; Zeng, J C; Zhang, J

    2014-01-01

    The ATLAS experiment trigger system is designed to reduce the event rate, at the LHC design luminosity of 10^34 cm^-2 s^-1, from the nominal bunch crossing rate of 40 MHz to less than 1 kHz for permanent storage. During Run 1, the LHC performed exceptionally well, routinely exceeding the design luminosity. From 2015 the LHC is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger system, both due to the need for more sophisticated algorithms to reject background and due to the larger data volumes that will need to be processed. The Fast TracKer is a hardware upgrade for Run 2, consisting of a custom electronics system that will operate at the full rate of Level-1-accepted events of 100 kHz and provide high-quality tracks at the beginning of processing in the High Level Trigger. It will perform track reconstruction in massively parallel hardware using associative memories and FPGAs. The availability of the full tracking information will enable r...

  19. Advances in flexible optrode hardware for use in cybernetic insects

    Science.gov (United States)

    Register, Joseph; Callahan, Dennis M.; Segura, Carlos; LeBlanc, John; Lissandrello, Charles; Kumar, Parshant; Salthouse, Christopher; Wheeler, Jesse

    2017-08-01

    Optogenetic manipulation is widely used to selectively excite and silence neurons in laboratory experiments. Recent efforts to miniaturize the components of optogenetic systems have enabled experiments on freely moving animals, but further miniaturization is required for freely flying insects. In particular, miniaturized high-channel-count optical waveguides are needed for high-resolution interfaces, and thin flexible waveguide arrays are needed to bend light around tight turns to access small anatomical targets. We present the design of lightweight miniaturized optogenetic hardware and supporting electronics for the untethered steering of dragonfly flight. The system is designed to enable autonomous flight and includes processing, guidance sensors, solar power, and light stimulators. It will weigh less than 200 mg and be worn by the dragonfly as a backpack. The flexible implant has been designed to deliver stimuli to nerves through micron-scale apertures onto adjacent neural tissue without the use of heavy hardware. We address the challenges of lightweight optogenetics and the development of high-contrast polymer waveguides for this purpose.

  20. Software and Hardware Developments For a Mobile Manipulator Control

    Directory of Open Access Journals (Sweden)

    F. Abdessemed

    2008-12-01

    Full Text Available In this paper, we present the hardware and software architectures of an experimental real-time control system for a mobile manipulator that performs object-manipulation tasks in a large-space environment. The mechanical architecture is a manipulator arm mounted on a mobile platform. In this work we show how one can implement an embedded system comprising both the hardware and the software. The system uses a PC as the host, constituting the high-level layer. It is configured in such a way that it performs all the input-output interface operations, and it is composed of different modules that make up the software implementing the required operations, executed in a scheduled manner in order to meet the requirements of real-time control. In this paper, we also focus on the development of generalized trajectory generation for tasks where only one subsystem moves and for tasks where the whole system is in permanent movement, either in a free environment or in the presence of obstacles.