WorldWideScience

Sample records for biology initiative hardware

  1. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where, if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  2. SAMBA: hardware accelerator for biological sequence comparison.

    Science.gov (United States)

    Guerdoux-Jamet, P; Lavenier, D

    1997-12-01

    SAMBA (Systolic Accelerator for Molecular Biological Applications) is a 128-processor hardware accelerator for speeding up the sequence comparison process. The short-term objective is to provide a low-cost board to boost PC or workstation performance on this class of applications. This paper places SAMBA amongst other existing systems and highlights its original features. Real performance obtained from the prototype is demonstrated. For example, a sequence of 300 amino acids is scanned against SWISS-PROT-34 (21 210 389 residues) in 30 s using the Smith-Waterman algorithm. More time-consuming applications, like bank-to-bank comparison, are computed in a few hours instead of days on standard workstations. Technology allows the prototype to fit onto a single PCI board for plugging into any PC or workstation. SAMBA can be tested via the Web server at http://www.irisa.fr/SAMBA/.
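
    The comparison the SAMBA board accelerates is Smith-Waterman local alignment, a dynamic-programming recurrence. The sketch below is a minimal pure-Python reference implementation with illustrative scoring parameters (not the configuration used on the board); a systolic array such as SAMBA's 128 processors gains its speed by evaluating the cells of each anti-diagonal of the matrix in parallel.

```python
# Minimal Smith-Waterman local alignment score (illustrative scoring scheme,
# not the parameters used on the SAMBA board).
def smith_waterman(query, target, match=2, mismatch=-1, gap=-2):
    rows, cols = len(query) + 1, len(target) + 1
    H = [[0] * cols for _ in range(rows)]      # DP matrix, initialized to zero
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if query[i - 1] == target[j - 1] else mismatch)
            H[i][j] = max(0,                    # local alignment: never below zero
                          diag,                 # match/mismatch
                          H[i - 1][j] + gap,    # gap in target
                          H[i][j - 1] + gap)    # gap in query
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # toy amino-acid sequences
```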

  3. 2D neural hardware versus 3D biological ones

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper will present important limitations of hardware neural nets as opposed to biological neural nets (i.e. the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural nets. Going further, the focus will be on hardware constraints. The author will present recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, with area and delay related to the neurons' fan-in and the weights' precision. Based on all of these, it will be shown why hardware implementations cannot match their biological inspiration with respect to computational power: the mapping onto silicon lacks the third dimension of biological nets, which translates into reduced fan-in and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; or (2) investigate solutions that would allow use of the third dimension, e.g. optical interconnections.

  4. ATLAS level-1 calorimeter trigger hardware: initial timing and energy calibration

    International Nuclear Information System (INIS)

    Childers, J T

    2011-01-01

    The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of up to 2.5μs using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustments to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further optimization. The results from these calibration measurements are presented.
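
    Bunch-crossing identification amounts to assigning each digitized pulse to the single 25 ns crossing in which the energy was deposited. The sketch below assumes a simple three-sample local-maximum peak finder; the actual Preprocessor logic is more elaborate (pedestal handling, saturated-pulse treatment and calibrated energy look-up), so this is only an illustration of why the timing adjustments must place a sample on the pulse peak.

```python
# Toy bunch-crossing identification by peak finding on a digitized pulse.
# Assumes 25 ns sampling (40 MHz) and a simple local-maximum criterion;
# the real Preprocessor adds pedestal handling, saturated-pulse treatment
# and a calibrated energy look-up.
def find_peak_crossing(samples):
    """Return the index of the bunch crossing whose sample is a local maximum."""
    for n in range(1, len(samples) - 1):
        if samples[n - 1] < samples[n] >= samples[n + 1]:
            return n
    return None

pulse = [32, 33, 80, 190, 140, 70, 40]   # illustrative ADC counts around a pulse
print(find_peak_crossing(pulse))          # -> 3, the crossing assigned to the deposit
```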

  5. Space experiment "Cellular Responses to Radiation in Space (CELLRAD)": Hardware and biological system tests

    Science.gov (United States)

    Hellweg, Christine E.; Dilruba, Shahana; Adrian, Astrid; Feles, Sebastian; Schmitz, Claudia; Berger, Thomas; Przybyla, Bartos; Briganti, Luca; Franz, Markus; Segerer, Jürgen; Spitta, Luis F.; Henschenmacher, Bernd; Konda, Bikash; Diegeler, Sebastian; Baumstark-Khan, Christa; Panitz, Corinna; Reitz, Günther

    2015-11-01

    One factor contributing to the high uncertainty in radiation risk assessment for long-term space missions is the insufficient knowledge about possible interactions of radiation with other spaceflight environmental factors. Such factors, e.g. microgravity, have to be considered as possibly additive or even synergistic factors in carcinogenesis. Regarding the effects of microgravity on signal transduction, it cannot be excluded that microgravity alters the cellular response to cosmic radiation, which comprises a complex network of signaling pathways. The purpose of the experiment "Cellular Responses to Radiation in Space" (CELLRAD, formerly CERASP) is to study the effects of combined exposure to microgravity, radiation and general space flight conditions on mammalian cells, in particular Human Embryonic Kidney (HEK) cells that are stably transfected with different plasmids allowing monitoring of proliferation and the Nuclear Factor κB (NF-κB) pathway by means of fluorescent proteins. The cells will be seeded on ground in multiwell plate units (MPUs), transported to the ISS, and irradiated by an artificial radiation source after an adaptation period at 0 × g and 1 × g. After different incubation periods, the cells will be fixed by pumping a formaldehyde solution into the MPUs. Ground control samples will be treated in the same way. For implementation of CELLRAD in the Biolab on the International Space Station (ISS), tests of the hardware and the biological systems were performed. The sequence of different steps in MPU fabrication (cutting, drilling, cleaning, growth surface coating, and sterilization) was optimized in order to reach full biocompatibility. Different coatings of the foil used as growth surface revealed that coating with 0.1 mg/ml poly-D-lysine supports cell attachment better than collagen type I. The tests of prototype hardware (Science Model) proved its full functionality for automated medium change, irradiation and fixation of cells. Exposure of...

  6. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    Science.gov (United States)

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative comprises two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  7. Predicting Translation Initiation Rates for Designing Synthetic Biology

    Energy Technology Data Exchange (ETDEWEB)

    Reeve, Benjamin; Hargest, Thomas [Centre for Synthetic Biology and Innovation, Imperial College London, London (United Kingdom); Department of Bioengineering, Imperial College London, London (United Kingdom); Gilbert, Charlie [Centre for Synthetic Biology and Innovation, Imperial College London, London (United Kingdom); Ellis, Tom, E-mail: t.ellis@imperial.ac.uk [Centre for Synthetic Biology and Innovation, Imperial College London, London (United Kingdom); Department of Bioengineering, Imperial College London, London (United Kingdom)

    2014-01-20

    In synthetic biology, precise control over protein expression is required in order to construct functional biological systems. A core principle of the synthetic biology approach is model-guided design, and models of prokaryotic protein production based on biological understanding of the process have been described. Translation initiation is a rate-limiting step in protein production from mRNA, and its rate is dependent on the sequence of the 5′-untranslated region and the start of the coding sequence. Translation rate calculators are programs that estimate protein translation rates based on the sequence of these regions of an mRNA, and as protein expression is proportional to the rate of translation initiation, such calculators have been shown to give good approximations of protein expression levels. In this review, three currently available translation rate calculators developed for synthetic biology are considered, with limitations and possible future progress discussed.
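
    The calculators reviewed are typically built on a thermodynamic model in which the initiation rate falls off exponentially with the total free energy of ribosome binding. The sketch below illustrates that exp(-beta * dG_total) structure with invented free-energy terms and an assumed beta; it is not the parameterization of any specific published calculator.

```python
import math

# Toy translation-initiation-rate estimate of the exp(-beta * dG_total) form
# used by thermodynamic RBS calculators. All numbers here are illustrative.
BETA = 0.45  # assumed scaling constant (1/(kcal/mol)); real tools fit this to data

def relative_initiation_rate(dG_mRNA_rRNA, dG_spacing, dG_standby, dG_start, dG_mRNA):
    """Combine free-energy terms into a total dG and map it to a relative rate."""
    dG_total = dG_mRNA_rRNA + dG_spacing + dG_standby + dG_start - dG_mRNA
    return math.exp(-BETA * dG_total)

strong_rbs = relative_initiation_rate(-9.0, 0.5, 0.0, -1.2, -3.0)
weak_rbs = relative_initiation_rate(-2.0, 3.0, 1.5, -1.2, -3.0)
print(f"strong/weak expression ratio ~ {strong_rbs / weak_rbs:.0f}")
```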

  8. Predicting Translation Initiation Rates for Designing Synthetic Biology

    International Nuclear Information System (INIS)

    Reeve, Benjamin; Hargest, Thomas; Gilbert, Charlie; Ellis, Tom

    2014-01-01

    In synthetic biology, precise control over protein expression is required in order to construct functional biological systems. A core principle of the synthetic biology approach is model-guided design, and models of prokaryotic protein production based on biological understanding of the process have been described. Translation initiation is a rate-limiting step in protein production from mRNA, and its rate is dependent on the sequence of the 5′-untranslated region and the start of the coding sequence. Translation rate calculators are programs that estimate protein translation rates based on the sequence of these regions of an mRNA, and as protein expression is proportional to the rate of translation initiation, such calculators have been shown to give good approximations of protein expression levels. In this review, three currently available translation rate calculators developed for synthetic biology are considered, with limitations and possible future progress discussed.

  9. Rheumatoid Arthritis Patients after Initiation of a New Biologic Agent

    DEFF Research Database (Denmark)

    Courvoisier, D. S.; Alpizar-Rodriguez, D.; Gottenberg, Jacques-Eric

    2016-01-01

    BACKGROUND: Response to disease modifying antirheumatic drugs (DMARDs) in rheumatoid arthritis (RA) is often heterogeneous. We aimed to identify types of disease activity trajectories following the initiation of a new biologic DMARD (bDMARD). METHODS: Pooled analysis of nine national registries...

  10. Initial operation and current status of the Fermilab D0 VME-based hardware control and monitor system

    International Nuclear Information System (INIS)

    Goodwin, R.; Florian, R.; Johnson, M.; Jones, A.; Shea, M.

    1990-01-01

    D0 is a large colliding-beam detector at Fermilab. The control system for this detector includes 25 VMEbus-based 68020 computers interconnected using the IEEE-802.5 token-ring local-area network. In operation, the system will monitor about 15000 analogue channels and several thousand digital status bits, interfaced to the 68020 computers by the MIL/STD-1553B multiplexed data bus. In addition, the VME control system uses a memory-mapped multi-VMEbus interconnect to download parameters to more than 100 VME data crates in the experiment. Remote host computers can then read and set memory in the detector crates over the network by accessing memory in the control crates. This is an extremely useful feature during the construction phase, because low-level diagnostics and testing of all the detector electronics can be done over the token-ring network using either IBM-PC compatible computers or the laboratory-wide VAX system. The VME control-system hardware is now being installed in the D0 moveable counting house. Installation is expected to be complete later this year. (orig.)

  11. Initial operation and current status of the fermilab D0 VME-based hardware control and monitor system

    Science.gov (United States)

    Goodwin, Robert; Florian, Robert; Johnson, Marvin; Jones, Alan; Shea, Mike

    1990-08-01

    D0 is a large colliding-beam detector at Fermilab. The control system for this detector includes 25 VMEbus-based 68020 computers interconnected using the IEEE-802.5 token-ring local-area network. In operation, the system will monitor about 15000 analogue channels and several thousand digital status bits, interfaced to the 68020 computers by the MIL/STD-1553B multiplexed data bus. In addition, the VME control system uses a memory-mapped multi-VMEbus interconnect to download parameters to more than 100 VME data crates in the experiment. Remote host computers can then read and set memory in the detector crates over the network by accessing memory in the control crates. This is an extremely useful feature during the construction phase, because low-level diagnostics and testing of all the detector electronics can be done over the token-ring network using either IBM-PC compatible computers or the laboratory-wide VAX system. The VME control-system hardware is now being installed in the D0 moveable counting house. Installation is expected to be complete later this year.

  12. Comparing biologic persistence and healthcare costs in rheumatoid arthritis patients initiating subcutaneous biologics.

    Science.gov (United States)

    Nadkarni, Anagha; McMorrow, Donna; Fowler, Robert; Smith, David

    2017-11-01

    This study compared biologic persistence and healthcare costs between rheumatoid arthritis (RA) patients initiating first- or second-line subcutaneous abatacept, adalimumab, or etanercept. It was a retrospective, observational cohort study that included adults with RA who initiated one of the three treatments between 29 July 2011 and 1 July 2015. Total healthcare costs were measured during baseline and follow-up. Biologic persistence was compared using multivariable Cox proportional hazards regression. Subcutaneous abatacept-treated patients had the numerically lowest adjusted hazards of nonpersistence and increase from baseline in total healthcare costs. Sensitivity analyses measuring outcomes over an alternative follow-up definition produced consistent results. Abatacept-treated RA patients appeared to have the poorest health status, yet often had the lowest increase from baseline in healthcare costs and the longest duration of biologic persistence.

  13. Chemical, Biological, Radiological and Nuclear Regional Centres of Excellence Initiative

    International Nuclear Information System (INIS)

    Bril, L.V.

    2013-01-01

    This series of slides presents the initiative launched in May 2010 by the European Union to develop, at national and regional levels, the institutional capacity necessary to counter CBRN (Chemical, Biological, Radiological and Nuclear) risks. The origin of the risk can be criminal (proliferation, theft, sabotage and illicit trafficking), accidental (industrial catastrophes, transport accidents, etc.) or natural (mainly pandemics). The initiative consists of the creation of Centres of Excellence providing assistance and cooperation in the field of CBRN risk, and of the creation of expert networks for sharing best practices, reviewing laws and regulations, and developing technical capacities to mitigate the CBRN risk. The initiative is complementary to the instrument for nuclear safety cooperation. Regional Centres of Excellence are being set up in 6 regions: South East Europe, South East Asia, North Africa, West Africa, the Middle East, and Central Asia, covering nearly 40 countries. A global budget of 100 million euros will be dedicated to this initiative for the 2009-2013 period. (A.C.)

  14. Vibrational spectroscopy reveals the initial steps of biological hydrogen evolution.

    Science.gov (United States)

    Katz, S; Noth, J; Horch, M; Shafaat, H S; Happe, T; Hildebrandt, P; Zebger, I

    2016-11-01

    [FeFe] hydrogenases are biocatalytic model systems for the exploitation and investigation of catalytic hydrogen evolution. Here, we used vibrational spectroscopic techniques to characterize, in detail, redox transformations of the [FeFe] and [4Fe4S] sub-sites of the catalytic centre (H-cluster) in a monomeric [FeFe] hydrogenase. Through the application of low-temperature resonance Raman spectroscopy, we discovered a novel metastable intermediate that is characterized by an oxidized [Fe(I)Fe(II)] centre and a reduced [4Fe4S]1+ cluster. Based on this unusual configuration, this species is assigned to the first, deprotonated H-cluster intermediate of the [FeFe] hydrogenase catalytic cycle. Providing insights into the sequence of initial reaction steps, the identification of this species represents a key finding towards the mechanistic understanding of biological hydrogen evolution.

  15. Open hardware for open science

    CERN Document Server

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  16. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  17. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware component of a mobile device are described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  18. [Logico-semantic modeling of the structure of the hardware and software of medico-biological measurements].

    Science.gov (United States)

    Ostapiuk, S F; Grum-Grzhimaĭlo, Iu V; Ionov, B V

    1989-01-01

    Striking an optimal balance between development trends, the widespread use of reliable, field-proven technical innovations, and the creation and adoption of fundamentally new types of equipment and technology is of great value when developing a science and technology programme. This task is addressed by logico-semantic modelling. The promise of this approach lies in the possibility of automating goal-setting, which, in the development of medical scientific and technical programmes, reduces the cost of carrying out this function, raises the requirements for the structure and order of programme realisation, eliminates duplication of search operations and information transmission in the preparatory period, and thereby shortens development time and improves the quality of the scientific and technical programme, while providing an integrated approach to medical information problems. The capabilities of the logico-semantic modelling method are illustrated by the example of forming the structure of a branch scientific and technical programme aimed at developing hardware and software methods for the automation of medico-biological measurements.

  19. Brain Biology Machine Initiative (BBMI) at the University of Oregon

    Science.gov (United States)

    2008-09-01

    Neville, 2003). They conducted developmental studies of these different neurocognitive systems in children. The lab initiated a series of studies to... dyslexia by Drs. Sally and Ben Shaywitz. A large audience of faculty, students and K-12 teachers and administrators were present. The next morning... have begun to examine how different interventions may change performance in preschool children, particularly those who are eligible for Head Start

  20. Treatment patterns of rheumatoid arthritis in Japanese hospitals and predictors of the initiation of biologic agents.

    Science.gov (United States)

    Mahlich, Joerg; Sruamsiri, Rosarin

    2017-01-01

    To describe the usage of different biologic agents for rheumatoid arthritis (RA) in Japan over time and to identify factors that affect the decision to initiate treatment with biologic agents. Determinants of a switch to another biologic agent for patients who are already on biologic treatment were also analyzed. We utilized a hospital claims database containing 36,504 Japanese patients with a confirmed RA diagnosis. To analyze the determinants of treatment choices, we applied logistic regression analysis taking into account socio-demographic and medical factors. Analyses determined that 11.8% of diagnosed patients and 25.4% of treated patients in Japan receive a biologic agent. Significant factors associated with biologic treatment initiation include younger age, female sex, and a higher comorbidity index. The route of administration plays a major role when it comes to a switch between different biologic agents. The lower likelihood of elderly patients being initiated on biologic treatment might be explained by the risk aversion of Japanese physicians and patients, who are afraid of the potential side effects of biologics. This finding is also consistent with the notion of an age bias that impedes elderly patients from optimal access to biologic treatment. Because claims data do not contain clinical parameters such as disease activity, the results should be validated in a clinical context.

  1. Introduction to Hardware Security

    OpenAIRE

    Yier Jin

    2015-01-01

    Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain an...

  2. An Ethnographic Observational Study of the Biologic Initiation Conversation Between Rheumatologists and Biologic-naïve Rheumatoid Arthritis Patients.

    Science.gov (United States)

    Kottak, Nicholas; Tesser, John; Leibowitz, Evan; Rosenberg, Melissa; Parenti, Dennis; DeHoratius, Raphael

    2018-01-30

    This ethnographic market research study investigated the biologic initiation conversation between rheumatologists and biologic-naïve patients with rheumatoid arthritis to assess how therapy options, particularly mode of administration, were discussed. Consenting rheumatologists (n=16) and patients (n=48) were videotaped during medical visits and interviewed by a trained ethnographer. The content, structure, and timing of conversations regarding biologic initiation were analyzed. The mean duration of physician-patient visits was approximately 15 minutes; biologic therapies were discussed for a mean of 5.6 minutes. Subcutaneous (SC) and intravenous (IV) therapy options were mentioned in 45 and 35 visits, respectively, out of a total of 48 visits. All patients had some familiarity with SC administration, but nearly half of patients (22/48) were unfamiliar with IV therapy going into the visit. IV administration was not defined or described by rheumatologists in 77% (27/35) of visits mentioning IV therapy. Thus, 19 of 22 patients who were initially unfamiliar with IV therapy remained unfamiliar after the visit. Disparities in physician-patient perceptions were revealed, as all rheumatologists (16/16) believed IV therapy would be less convenient than SC therapy for patients, while 46% (22/48) of patients felt this way. In post-visit interviews, some patients seemed confused and overwhelmed, particularly when presented with many treatment choices in a visit. Some patients stated they would benefit from visual aids or summary sheets of key points. This study revealed significant educational opportunities to improve the biologic initiation conversation and indicated a disparity between patients' and rheumatologists' perception of IV therapy.

  3. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain and to help both academia and industry investigate countermeasures and solutions to solve hardware security problems, we will introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted through which the future trend will be elaborated, making this survey paper a good reference for the continuing research efforts in this area.

  4. The effect of initial density and parasitoid intergenerational survival rate on classical biological control

    International Nuclear Information System (INIS)

    Xiao Yanni; Tang Sanyi

    2008-01-01

    Models of biological control have a long history of theoretical development focused on the interaction of a parasitoid and its host. Studies of host-parasitoid systems have identified several important and general factors affecting the long-term dynamics of interacting populations. However, much less is known about how the initial densities of host-parasitoid populations affect biological control as well as the stability of host-parasitoid systems. To address this, the classical Nicholson-Bailey model with host self-regulation and a parasitoid intergenerational survival rate is used to uncover the effect of initial densities on successful biological control. The results indicate that even the simplest Nicholson-Bailey model exhibits various coexistence states over a wide range of parameters, including boundary attractors where the parasitoid population is absent and interior attractors where host and parasitoid coexist. The final stable states of the host-parasitoid populations depend on their initial densities as well as their ratios, and those results are confirmed by basins of attraction of initial densities. The results also indicate that the parasitoid intergenerational survival rate increases the stability of the host-parasitoid systems. Therefore, the present research can help us to further understand the dynamical behavior of host-parasitoid interactions, to improve classical biological control and to make management decisions.
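
    The role of initial densities can be explored numerically by iterating a Nicholson-Bailey map extended with host self-regulation and a parasitoid intergenerational survival term. The sketch below uses one common form of such a model (Ricker-type host growth, survival fraction s); the functional form and parameter values are assumptions for illustration, not those of the paper.

```python
import math

# One common Nicholson-Bailey variant with host self-regulation (Ricker growth)
# and a fraction s of parasitoids surviving between generations.
# The functional form and all parameter values are illustrative assumptions.
def simulate(H0, P0, r=2.0, K=50.0, a=0.05, c=1.0, s=0.2, generations=500):
    H, P = H0, P0
    for _ in range(generations):
        escape = math.exp(-a * P)                         # fraction of hosts not parasitized
        H_next = H * math.exp(r * (1 - H / K)) * escape   # self-regulated host growth
        P_next = c * H * (1 - escape) + s * P             # new parasitoids plus survivors
        H, P = H_next, P_next
    return H, P

# The long-run state reached can differ with the initial densities and their ratio.
print(simulate(H0=5.0, P0=1.0))
print(simulate(H0=5.0, P0=80.0))
```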

  5. The effect of initial density and parasitoid intergenerational survival rate on classical biological control

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Yanni [Department of Applied Mathematics, Xi'an Jiaotong University, Xi'an 710049 (China); Tang Sanyi [College of Mathematics and Information Science, Shaanxi Normal University, Xi'an 710062 (China); Warwick Systems Biology Centre, University of Warwick, Coventry CV4 7AL (United Kingdom)], E-mail: sanyitang219@hotmail.com

    2008-08-15

    Models of biological control have a long history of theoretical development focused on the interaction of a parasitoid and its host. Studies of host-parasitoid systems have identified several important and general factors affecting the long-term dynamics of interacting populations. However, much less is known about how the initial densities of host-parasitoid populations affect biological control as well as the stability of host-parasitoid systems. To address this, the classical Nicholson-Bailey model with host self-regulation and a parasitoid intergenerational survival rate is used to uncover the effect of initial densities on successful biological control. The results indicate that even the simplest Nicholson-Bailey model exhibits various coexistence states over a wide range of parameters, including boundary attractors where the parasitoid population is absent and interior attractors where host and parasitoid coexist. The final stable states of the host-parasitoid populations depend on their initial densities as well as their ratios, and those results are confirmed by basins of attraction of initial densities. The results also indicate that the parasitoid intergenerational survival rate increases the stability of the host-parasitoid systems. Therefore, the present research can help us to further understand the dynamical behavior of host-parasitoid interactions, to improve classical biological control and to make management decisions.

  6. DOE EPSCoR Initiative in Structural and computational Biology/Bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Susan S.

    2008-02-21

    The overall goal of the DOE EPSCoR Initiative in Structural and Computational Biology was to enhance the competitiveness of Vermont research in these scientific areas. To develop self-sustaining infrastructure, we increased the critical mass of faculty, developed shared resources that made junior researchers more competitive for federal research grants, implemented programs to train graduate and undergraduate students who participated in these research areas, and provided seed money for research projects. During the time period funded by this DOE initiative: (1) four new faculty were recruited to the University of Vermont using DOE resources, three in Computational Biology and one in Structural Biology; (2) technical support was provided for the Computational and Structural Biology facilities; (3) twenty-two graduate students were directly funded by fellowships; (4) fifteen undergraduate students were supported during the summer; and (5) twenty-eight pilot projects were supported. Taken together, these funds resulted in a plethora of published papers, many in high-profile journals in the fields, and directly impacted competitive extramural funding based on structural or computational biology, resulting in 49 million dollars awarded in grants (Appendix I), a 600% return on investment by DOE, the State and the University.

  7. How Modeling Standards, Software, and Initiatives Support Reproducibility in Systems Biology and Systems Medicine.

    Science.gov (United States)

    Waltemath, Dagmar; Wolkenhauer, Olaf

    2016-10-01

    Only reproducible results are of significance to science. The lack of suitable standards and appropriate support of standards in software tools has led to numerous publications with irreproducible results. Our objectives are to identify the key challenges of reproducible research and to highlight existing solutions. In this paper, we summarize problems concerning reproducibility in systems biology and systems medicine. We focus on initiatives, standards, and software tools that aim to improve the reproducibility of simulation studies. The long-term success of systems biology and systems medicine depends on trustworthy models and simulations. This requires openness to ensure reusability and transparency to enable reproducibility of results in these fields.

  8. Biologically-initiated rock crust on sandstone: Mechanical and hydraulic properties and resistance to erosion

    Czech Academy of Sciences Publication Activity Database

    Slavík, M.; Bruthans, J.; Filippi, Michal; Schweigstillová, Jana; Falteisek, L.; Řihošek, J.

    2017-01-01

    Roč. 278, FEB 1 (2017), s. 298-313 ISSN 0169-555X R&D Projects: GA ČR GA13-28040S; GA ČR(CZ) GA16-19459S Institutional support: RVO:67985831 ; RVO:67985891 Keywords : biofilm * biocrust * biologically-initiated rock crust * sandstone protection * case hardening Subject RIV: DB - Geology ; Mineralogy; DB - Geology ; Mineralogy (USMH-B) OBOR OECD: Geology; Geology (USMH-B) Impact factor: 2.958, year: 2016

  9. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans,” which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  10. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  11. Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites

    Directory of Open Access Journals (Sweden)

    Merkl Rainer

    2004-10-01

    Background: Kernel-based learning algorithms are among the most advanced machine learning methods and have been successfully applied to a variety of sequence classification tasks within the field of bioinformatics. Conventional kernels utilized so far do not provide an easy interpretation of the learnt representations in terms of positional and compositional variability of the underlying biological signals. Results: We propose a kernel-based approach to data mining on biological sequences. With our method it is possible to model and analyze the positional variability of oligomers of any length in a natural way. On the one hand, this is achieved by mapping the sequences to an intuitive but high-dimensional feature space well suited for interpretation of the learnt models. On the other hand, by means of the kernel trick we can provide a general learning algorithm for that high-dimensional representation, because all required statistics can be computed without performing an explicit feature space mapping of the sequences. By introducing a kernel parameter that controls the degree of position-dependency, our feature space representation can be tailored to the characteristics of the biological problem at hand. A regularized learning scheme enables application even to biological problems for which only small sets of example sequences are available. Our approach includes a visualization method for transparent representation of characteristic sequence features, whereby the importance of features can be measured in terms of discriminative strength with respect to classification of the underlying sequences. To demonstrate and validate our concept on a biochemically well-defined case, we analyze E. coli translation initiation sites in order to show that we can find biologically relevant signals. For that case, our results clearly show that the Shine-Dalgarno sequence is the most important signal upstream of a start codon. The variability in position and composition
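
    The idea of a tunable degree of position-dependency can be made concrete with a simplified oligo kernel: each k-mer occurrence contributes a Gaussian bump centred at its position, and the kernel sums the overlap of bumps for k-mers shared by two sequences. The toy re-implementation below captures that structure (it is not the authors' code), with sigma interpolating between strictly position-dependent matching and a plain k-mer count.

```python
import math
from collections import defaultdict

def kmer_positions(seq, k):
    """Map each k-mer to the list of positions where it occurs."""
    pos = defaultdict(list)
    for i in range(len(seq) - k + 1):
        pos[seq[i:i + k]].append(i)
    return pos

def oligo_kernel(x, y, k=3, sigma=1.0):
    """Simplified oligo kernel: each pair of occurrences (p in x, q in y) of a shared
    k-mer contributes exp(-(p-q)^2 / (4 sigma^2)). Small sigma makes the comparison
    strictly position-dependent; large sigma approaches a plain k-mer count."""
    px, py = kmer_positions(x, k), kmer_positions(y, k)
    total = 0.0
    for kmer, xs in px.items():
        for p in xs:
            for q in py.get(kmer, []):
                total += math.exp(-((p - q) ** 2) / (4.0 * sigma ** 2))
    return total

up1 = "TAAGGAGGTGATC"    # toy upstream sequences with a Shine-Dalgarno-like motif
up2 = "ACAAGGAGGTATGC"
print(oligo_kernel(up1, up2, k=3, sigma=1.0))    # position-sensitive comparison
print(oligo_kernel(up1, up2, k=3, sigma=20.0))   # nearly position-independent
```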

  12. Harmonization initiatives in the generation, reporting and application of biological variation data.

    Science.gov (United States)

    Aarsand, Aasne K; Røraas, Thomas; Bartlett, William A; Coşkun, Abdurrahman; Carobene, Anna; Fernandez-Calle, Pilar; Jonker, Niels; Díaz-Garzón, Jorge; Braga, Federica; Sandberg, Sverre

    2018-03-29

    Biological variation (BV) data have many applications in laboratory medicine. However, concern has been raised that some BV estimates in use today may be irrelevant or of unacceptable quality. A number of initiatives have been launched by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) and other parties to deliver a more harmonized practice in the generation, reporting and application of BV data. Resulting from a necessary focus upon the veracity of historical BV studies, critical appraisal and meta-analysis of published BV studies is possible through application of the Biological Variation Data Critical Appraisal Checklist (BIVAC), published in 2017. The BIVAC compliant large-scale European Biological Variation Study delivers updated high-quality BV data for a wide range of measurands. Other significant developments include the publication of a Medical Subject Heading term for BV and recommendations for common terminology for reporting of BV data. In the near future, global BV estimates derived from meta-analysis of BIVAC appraised publications will be accessible in a Biological Variation Database at the EFLM website. The availability of these high-quality data, which have many applications that impact on the quality and interpretation of clinical laboratory results, will afford improved patient care.

  13. A geometric initial guess for localized electronic orbitals in modular biological systems

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of Chicago, IL (United States); Fattebert, J. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lau, E. Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Osei-Kuffuor, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-11

    Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N³) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.

  14. NASA HUNCH Hardware

    Science.gov (United States)

    Hall, Nancy R.; Wagner, James; Phelps, Amanda

    2014-01-01

    What is NASA HUNCH? High School Students United with NASA to Create Hardware (HUNCH) is an instructional partnership between NASA and educational institutions. This partnership benefits both NASA and students: NASA receives cost-effective hardware and soft goods, while students receive real-world, hands-on experience. The 2014-2015 school year was the 12th year of the HUNCH Program. NASA Glenn Research Center joined the program, which already included NASA Johnson Space Center, Marshall Space Flight Center, Langley Research Center and Goddard Space Flight Center. The program included 76 schools in 24 states, and NASA Glenn worked with the following five schools in the HUNCH Build to Print Hardware Program: Medina Career Center, Medina, OH; Cattaraugus Allegheny-BOCES, Olean, NY; Orleans Niagara-BOCES, Medina, NY; Apollo Career Center, Lima, OH; Romeo Engineering and Tech Center, Washington, MI. The schools built various parts of an International Space Station (ISS) middeck stowage locker and learned about the manufacturing process and how best to build these components to NASA specifications. For the 2015-2016 school year the schools will be part of a larger group of schools building flight hardware consisting of 20 ISS middeck stowage lockers for the ISS Program. The HUNCH Program consists of: Build to Print Hardware; Build to Print Soft Goods; Design and Prototyping; Culinary Challenge; and Implementation: Web Page and Video Production.

  15. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
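
    The scheme described boils down to detecting a defective link in the first network and steering the affected traffic over the independent second network. The sketch below illustrates that idea with two small adjacency-list networks and breadth-first routing; the node numbering, topology, and routing choice are made up for illustration and do not reproduce the patented implementation.

```python
from collections import deque

# Toy model: two independent networks over the same compute nodes, each given as
# an adjacency-list dict. When a link in the first network is found defective,
# traffic between its endpoints is routed over the second network instead.
def bfs_route(network, src, dst):
    """Return a node path from src to dst in the given network, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # first network (simple chain)
net_b = {0: [2, 3], 2: [0], 3: [0, 1], 1: [3]}      # independent second network

defective_link = (1, 2)                              # fault identified in net_a
src, dst = defective_link
print("detour via second network:", bfs_route(net_b, src, dst))
```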

  16. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  17. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  18. Hon-yaku: a biology-driven Bayesian methodology for identifying translation initiation sites in prokaryotes

    Directory of Open Access Journals (Sweden)

    de Hoon Michiel JL

    2007-02-01

    Background: Computational prediction methods are currently used to identify genes in prokaryote genomes. However, identification of the correct translation initiation sites remains a difficult task. Accurate translation initiation sites (TISs) are important not only for the annotation of unknown proteins but also for the prediction of operons, promoters, and small non-coding RNA genes, as this typically makes use of the intergenic distance. A further problem is that most existing methods are optimized for Escherichia coli data sets; applying these methods to newly sequenced bacterial genomes may not result in an equivalent level of accuracy. Results: Based on a biological representation of the translation process, we applied Bayesian statistics to create a score function for predicting translation initiation sites. In contrast to existing programs, our combination of methods uses supervised learning to make optimal use of the set of known translation initiation sites. We combined the Ribosome Binding Site (RBS) sequence, the distance between the translation initiation site and the RBS sequence, the base composition of the start codon, the nucleotide composition (A-rich sequences) following start codons, and the expected distribution of the protein length in a Bayesian scoring function. To further increase the prediction accuracy, we also took into account the operon orientation. The outcome of the procedure achieved a prediction accuracy of 93.2% in 858 E. coli genes from the EcoGene data set and 92.7% accuracy in a data set of 1243 Bacillus subtilis 'non-y' genes. We confirmed the performance in the GC-rich Gamma-Proteobacteria Herminiimonas arsenicoxydans, Pseudomonas aeruginosa, and Burkholderia pseudomallei K96243. Conclusion: Hon-yaku, being based on a careful choice of elements important in translation, improved the prediction accuracy in B. subtilis data sets and other bacteria except for E. coli. We believe that most remaining
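
    Structurally, a Bayesian scoring function of this kind combines per-feature likelihoods into a single log-odds value for each candidate start codon, and the candidate with the highest score is predicted as the TIS. The toy sketch below mimics that structure in a naive-Bayes style; the features are simplified and every probability table is an invented placeholder rather than Hon-yaku's trained parameters.

```python
import math

# Toy naive-Bayes-style TIS score: a sum of log-likelihood ratios for a few features.
# Each table maps a feature value to (P(value | true TIS), P(value | false candidate));
# all numbers are invented placeholders, not Hon-yaku's trained parameters.
START_CODON = {"ATG": (0.80, 0.30), "GTG": (0.12, 0.30), "TTG": (0.08, 0.40)}
SPACING = {5: (0.10, 0.05), 6: (0.20, 0.06), 7: (0.30, 0.07), 8: (0.25, 0.07), 9: (0.15, 0.08)}

def log_odds(p_true, p_false):
    return math.log(p_true / p_false)

def tis_score(start_codon, rbs_spacing, downstream_a_fraction):
    score = log_odds(*START_CODON.get(start_codon, (0.01, 0.30)))
    score += log_odds(*SPACING.get(rbs_spacing, (0.02, 0.10)))
    # Reward A-rich composition immediately downstream of the candidate start codon.
    score += log_odds(max(downstream_a_fraction, 0.05), 0.25)
    return score

# Compare two candidate starts of the same gene; the higher score is predicted as the TIS.
print(tis_score("ATG", rbs_spacing=7, downstream_a_fraction=0.40))
print(tis_score("GTG", rbs_spacing=9, downstream_a_fraction=0.20))
```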

  19. Examining the Role of Leadership in an Undergraduate Biology Institutional Reform Initiative

    Science.gov (United States)

    Matz, Rebecca L.; Jardeleza, Sarah E.

    2016-01-01

    Undergraduate science, technology, engineering, and mathematics (STEM) education reform continues to be a national priority. We studied a reform process in undergraduate biology at a research-intensive university to explore what leadership issues arose in implementation of the initiative when characterized with a descriptive case study method. The data were drawn from transcripts of meetings that occurred over the first 2 years of the reform process. Two literature-based models of change were used as lenses through which to view the data. We find that easing the burden of an undergraduate education reform initiative on faculty through articulating clear outcomes, developing shared vision across stakeholders on how to achieve those outcomes, providing appropriate reward systems, and ensuring faculty have ample opportunity to influence the initiative all appear to increase the success of reform. The two literature-based models were assessed, and an extended model of change is presented that moves from change in STEM instructional strategies to STEM organizational change strategies. These lessons may be transferable to other institutions engaging in education reform. PMID:27856545

  20. [Septic arthritis in children with normal initial C-reactive protein: clinical and biological features].

    Science.gov (United States)

    Basmaci, R; Ilharreborde, B; Bonacorsi, S; Kahil, M; Mallet, C; Aupiais, C; Doit, C; Dugué, S; Lorrot, M

    2014-11-01

    Septic arthritis has to be suspected in children with joint effusion and fever so as to perform joint aspiration, which will confirm the diagnosis by bacteriological methods, and to perform surgical treatment by joint lavage. Since the development of current molecular methods, such as real-time PCR, Kingella kingae has become the first microbial agent of osteoarticular infections in young children, whereas Staphylococcus aureus is second. C-reactive protein (CRP) is an aid used to diagnose septic arthritis, but its elevation can be moderate. In a previous study conducted at our hospital, 10% of children hospitalized for S. aureus or K. kingae septic arthritis had a CRP level <10 mg/L. To assess whether the diagnosis of septic arthritis could be made by other parameters, we analyzed the clinical and biologic features of these patients and compared them to those of children hospitalized for septic arthritis with initial CRP ≥10 mg/L. Among the 89 children with septic arthritis, 10% (n=9) had initial CRP <10 mg/L; […] septic arthritis had no fever, CRP elevation, or fibrinogen elevation. In the CRP-negative group, three of four children with S. aureus arthritis and one of five with K. kingae arthritis had a high CRP level (34, 40, 61, and 13 mg/L, respectively) 3 days after surgery and antibiotic treatment. One child with K. kingae septic arthritis and initial CRP <10 mg/L […] arthritis. In the S. aureus arthritis group, none of the children with initial CRP <10 mg/L […]. Although CRP is usually >10 mg/L during septic arthritis in children, it could be negative in up to 20% of patients in different studies. However, a mild inflammatory syndrome or even a CRP <10 mg/L does not rule out septic arthritis. Therefore, a first episode of monoarthritis in children has to be considered as septic arthritis and treatment should not be delayed. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  2. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    This book covers the system board (memory, performance, the system timer, the system clock, and specifications); the coprocessor (programming interface and hardware interface); the power supply (input and output, protection of the DC outputs, and the Power Good signal); the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes and colors; and the communication and compatibility of the IBM personal computer from an application standpoint, including multitasking and code for distinguishing between systems.

  3. A Retrospective Analysis of Corticosteroid Utilization Before Initiation of Biologic DMARDs Among Patients with Rheumatoid Arthritis in the United States.

    Science.gov (United States)

    Spivey, Christina A; Griffith, Jenny; Kaplan, Cameron; Postlethwaite, Arnold; Ganguli, Arijit; Wang, Junling

    2017-12-04

    Understanding the effects of corticosteroid utilization prior to initiation of biologic disease-modifying antirheumatic drugs (DMARDs) can inform decision-makers on the appropriate use of these medications. This study examined treatment patterns and the associated burden of corticosteroid utilization before initiation of biologic DMARDs among rheumatoid arthritis (RA) patients. A retrospective analysis was conducted of adult RA patients in the US MarketScan Database (2011-2015). The following patterns of corticosteroid utilization were analyzed: whether corticosteroids were used; duration of use (short/long duration defined as […]); and dose. […] biologic DMARD initiation were examined using Cox proportional hazards models. Likelihood and number of adverse events were examined using logistic and negative binomial regression models. Generalized linear models were used to examine healthcare costs. Independent variables in all models included patient demographics and health characteristics. A total of 25,542 patients were included (40.84% used corticosteroids). Lower hazard of biologic DMARD initiation was associated with corticosteroid use (hazard ratio = 0.89, 95% confidence interval = 0.83-0.96), long duration and lower dose. Corticosteroid users compared to non-users had higher incidence rates of various adverse events, including cardiovascular events (P […]). […] biologic DMARDs and a higher burden of adverse events and healthcare utilization/costs before the initiation of biologic DMARDs. (Funding: AbbVie Inc.)

  4. Factors associated with initial or subsequent choice of biologic disease-modifying antirheumatic drugs for treatment of rheumatoid arthritis.

    Science.gov (United States)

    Jin, Yinzhu; Desai, Rishi J; Liu, Jun; Choi, Nam-Kyong; Kim, Seoyoung C

    2017-07-05

    Biologic disease-modifying antirheumatic drugs (DMARDs) are increasingly used for rheumatoid arthritis (RA) treatment. However, little is known based on contemporary data about the factors associated with DMARDs and patterns of use of biologic DMARDs for initial and subsequent RA treatment. We conducted an observational cohort study using claims data from a commercial health plan (2004-2013) and Medicaid (2000-2010) in three study groups: patients with early untreated RA who were naïve to any type of DMARD and patients with prevalent RA with or without prior exposure to one biologic DMARD. Multivariable logistic regression models were used to examine the effect of patient demographics, clinical characteristics and healthcare utilization factors on the initial and subsequent choice of biologic DMARDs for RA. We identified a total of 195,433 RA patients including 78,667 (40%) with early untreated RA and 93,534 (48%) and 23,232 (12%) with prevalent RA, without or with prior biologic DMARD treatment, respectively. Patients in the commercial insurance were 87% more likely to initiate a biologic DMARD versus patients in Medicaid (OR = 1.87, 95% CI = 1.70-2.05). In Medicaid, African-Americans had lower odds of initiating (OR = 0.59, 95% CI = 0.51-0.68 in early untreated RA; OR = 0.71, 95% CI = 0.61-0.74 in prevalent RA) and switching (OR = 0.71, 95% CI = 0.55-0.90) biologic DMARDs than non-Hispanic whites. Prior use of steroid and non-biologic DMARDs predicted both biologic DMARD initiation and subsequent switching. Etanercept, adalimumab, and infliximab were the most commonly used first-line and second-line biologic DMARDs; patients on anakinra and golimumab were most likely to be switched to other biologic DMARDs. Insurance type, race, and previous use of steroids and non-biologic DMARDs were strongly associated with initial or subsequent treatment with biologic DMARDs.

  5. Biological and hardware complications in implant dentistry

    NARCIS (Netherlands)

    Wismeijer, D.; Buser, D.; Chen, S.

    2015-01-01

    The ITI Treatment Guide series, a unique compendium of evidence-based treatment methods in implant dentistry in daily practice, written by renowned clinicians, provides a comprehensive overview of various therapeutic options. Using an illustrated step-by-step approach, the ITI Treatment Guide shows

  6. Ecological-biological Aspects of Stipa krylovii Roshev Adaptation at the Initial Stages of Ontogenesis

    Directory of Open Access Journals (Sweden)

    N.S. Chistyakova

    2016-08-01

    Full Text Available The xerophytic grass Stipa krylovii Roshev is of interest as a relic species with an extensive capacity to adapt to the severe climatic conditions of Eastern Zabaikal'ye, which allows it to occupy a vast range. The species under study is characterized by distinctive ecological-biological peculiarities, which are underpinned not only by its distribution but also by the historical establishment of the species. The primary goal of the research was to study the ecological-biological peculiarities of adaptation of the wild grass Stipa krylovii to its habitat in Eastern Zabaikal'ye. According to the observations, Stipa krylovii is characterized by a late development rate coinciding with the period of optimal heat and moisture availability. Seed embryos have a well-developed scutellum, distinct structures and a well-differentiated embryo axis. The studies identified no lateral or secondary roots in this grass. In nature, seeds of S. krylovii show a deep innate dormancy, which persists in the course of sprouting under optimal conditions. The dormancy of S. krylovii caryopses is likely due to the presence of germination inhibitors and is overcome during the moist autumn period. Seed viability was determined under various soil moisture levels up to complete water capacity, and the impact of moisture content on seed sprouting rate was studied. The results of the tests on caryopsis sprouting with various moisture contents demonstrated that at minimum moisture content (10%) S. krylovii forms epidermal hairs on the coleorhiza; 30% soil water content is enough for growth activation, viability and sprouting of this grass, which is consistent with its xerophytic nature. This morphological peculiarity is likely to ensure, in nature, the sprouting of this species in early spring, when the soil contains minimal water. The intensity of initial growth was determined by a number of parameters: the rate of change in linear growth of the shoot and root parts of the embryo, growth of dry substance of

  7. Space biology initiative program definition review. Trade study 6: Space Station Freedom/spacelab modules compatibility

    Science.gov (United States)

    Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Blacknall, Carolyn; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry

    1989-01-01

    The differences in rack requirements for Spacelab, the Shuttle Orbiter, and the United States (U.S.) laboratory module, European Space Agency (ESA) Columbus module, and the Japanese Experiment Module (JEM) of Space Station Freedom are identified. The feasibility of designing standardized mechanical, structural, electrical, data, video, thermal, and fluid interfaces to allow space flight hardware designed for use in the U.S. laboratory module to be used in other locations is assessed.

  8. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'. IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system. For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge: Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.); Training of personnel designated by Division Leade...

  9. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  10. Factors associated with the initiation of biologic disease-modifying antirheumatic drugs in Texas Medicaid patients with rheumatoid arthritis.

    Science.gov (United States)

    Kim, Gilwan; Barner, Jamie C; Rascati, Karen; Richards, Kristin

    2015-05-01

    Rheumatoid arthritis (RA) is a progressive autoimmune disorder of the joints that is associated with high health care costs, yet guidance is lacking on how early to initiate biologic disease-modifying antirheumatic drugs (DMARDs), a class of medications that is the major cost driver in RA management. Few studies have examined the factors associated with the transition from nonbiologic DMARDs, the first-line therapy for RA, to biologic DMARDs in RA patients. To examine patient sociodemographics, medication use patterns, and clinical characteristics associated with initiation of biologic DMARDs. This was a retrospective study using the Texas Medicaid prescription and medical claims database from July 1, 2003-December 31, 2010. Adults (aged 18-63 years) with an RA diagnosis (ICD-9-CM code 714.xx), no nonbiologic DMARD or biologic DMARD use during the 6-month pre-index period, and a minimum of 2 prescription claims for the same nonbiologic DMARD during the post-index period were included in the study. The index date was defined as the date of the first nonbiologic DMARD claim. Predictors of initiation of biologic DMARDs were age, gender, race, adherence (proportion of days covered), persistence to nonbiologic DMARDs, comorbidity (Charlson Comorbidity Index [CCI]), pain medication use, glucocorticoid use, and rheumatologist visit. Logistic regression was used to examine the factors associated with the initiation of biologic DMARDs. A total of 2,714 patients were included. After controlling for patient characteristics, logistic regression showed that, compared with methotrexate (MTX) users, sulfasalazine (SSZ) and hydroxychloroquine (HCQ) users were 69.0% (OR = 0.310, 95% CI = 0.221-0.434, P < 0.0001) and 79.9% (OR = 0.201, 95% CI = 0.152-0.265, P < 0.0001) less likely to initiate biologic DMARDs, respectively. Nonbiologic DMARD dual therapy users were 39.1% less likely to initiate biologic DMARDs compared
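
    The record above describes a logistic regression of biologic DMARD initiation on the index nonbiologic DMARD and other covariates. The sketch below shows one way such a model could be fit and its odds ratios extracted with statsmodels; the simulated data, variable names and coefficient values are invented for illustration and are not taken from the Texas Medicaid study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated cohort: initiation is made more likely with glucocorticoid use and
        # less likely with nonbiologic dual therapy, purely to give the model signal.
        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "age": rng.integers(18, 64, n),
            "glucocorticoid_use": rng.integers(0, 2, n),
            "dual_therapy": rng.integers(0, 2, n),
        })
        logit_p = -1.0 + 0.6 * df["glucocorticoid_use"] - 0.5 * df["dual_therapy"] - 0.01 * df["age"]
        df["initiated_biologic"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

        model = smf.logit(
            "initiated_biologic ~ age + glucocorticoid_use + dual_therapy", data=df
        ).fit(disp=False)
        print(np.exp(model.params))      # odds ratios
        print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios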

  11. Factors Associated With Initiation of Biologics in Patients With Axial Spondyloarthritis in an Urban Asian City: A PRESPOND Study.

    Science.gov (United States)

    Png, Wan Yu; Kwan, Yu Heng; Lee, Yi Xuan; Lim, Ka Keat; Chew, Eng Hui; Lui, Nai Lee; Tan, Chuen Seng; Thumboo, Julian; Østbye, Truls; Fong, Warren

    2018-04-05

    The aim of this study was to examine if patients' sociodemographic characteristics, clinical characteristics, and patient-reported outcomes were associated with biologics initiation in patients with axial spondyloarthritis in Singapore. Data from a dedicated registry from a tertiary referral center in Singapore from January 2011 to July 2016 were used. Initiation of first biologics was the main outcome of interest. Logistic regression analyses were used to explore the association of various factors with biologics initiation. Of 189 eligible patients (aged 37.7 ± 13.3 years; 76.2% were males), 30 (15.9%) were started on biologics during follow-up. In the multivariable analysis model, age (odds ratio [OR], 0.93; 95% confidence interval [CI], 0.89-0.98; P < 0.01), mental component summary score of the Short-Form 36 Health Survey (OR, 0.18; 95% CI, 0.03-0.89; P = 0.04), erythrocyte sedimentation rate (OR, 1.02; 95% CI, 1.00-1.04; P = 0.02), presence of peptic ulcer disease (OR, 10.4; 95% CI, 2.21-48.8; P < 0.01), and lack of good response to nonsteroidal anti-inflammatory drugs (OR, 4.44; 95% CI, 1.63-12.1; P < 0.01) were found to be associated with biologics initiation. Age, erythrocyte sedimentation rate, mental component summary score, comorbidity of peptic ulcer disease, and responsiveness to nonsteroidal anti-inflammatory drugs were associated with biologics initiation. It is essential that clinicians recognize these factors in order to optimize therapy.

  12. The Spanish biology/disease initiative within the human proteome project: Application to rheumatic diseases.

    Science.gov (United States)

    Ruiz-Romero, Cristina; Calamia, Valentina; Albar, Juan Pablo; Casal, José Ignacio; Corrales, Fernando J; Fernández-Puente, Patricia; Gil, Concha; Mateos, Jesús; Vivanco, Fernando; Blanco, Francisco J

    2015-09-08

    The Spanish Chromosome 16 consortium is integrated in the global Human Proteome Project, which aims to develop a complete map of the proteins encoded by the genome following a gene-centric strategy (C-HPP), in order to make progress in the understanding of human biology in health and disease (B/D-HPP). Chromosome 16 contains many genes encoding proteins involved in the development of a broad range of diseases, which have a significant impact on the health care system. The Spanish HPP consortium has developed a B/D platform with five programs focused on selected medical areas: cancer, obesity, cardiovascular, infectious and rheumatic diseases. Each of these areas has a clinical leader paired with a proteomics investigator responsible for obtaining a comprehensive understanding of the proteins encoded by Chromosome 16 genes. In this manuscript we describe how the Spanish HPP-16 consortium developed this B/D platform and how proteomics strategies have enabled great advances in the area of rheumatic diseases, particularly in osteoarthritis, with studies performed on joint cells, tissues and fluids. This article is part of a Special Issue entitled: HUPO 2014. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables...... worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java....

  14. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed over the past decade to protect virtual machines in an out-of-box way, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. What's more, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experiment results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss our future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  15. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events which have been selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the 1 MHz limitation. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated in the experiment control system. The architecture, ...

  16. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and to introduce the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  17. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structural analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from materials science and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be run in automation mode, one can easily forget about the hardware of NMR spectrometers; it is nevertheless worthwhile to understand the features and performance of these instruments. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  18. THE PATH OF TEACHING NATURAL SCIENCES THROUGH THE PEDAGOGY OF MEMORY. AN INITIAL CONTRIBUTION TO THINKING ABOUT BIOLOGY TEACHING

    Directory of Open Access Journals (Sweden)

    Laura Marcela Trujillo Castro

    2016-10-01

    Full Text Available Biology teaching is related to the pedagogy of memory, which makes it possible, first, to identify the events, information and facts that have enabled the construction and reconfiguration of the different structuring components of biology. Secondly (and developed at greater length in this paper), it shows how recognition of the historical reality of the country through memory, that is, the remembrance and comprehension of the events that have shaped the country's history, allows biology teaching to emerge from a contextual perspective, on the way to appropriation and the construction of identity. Recognizing these events and facts allows information to be drawn from memory and situated within a systemic vision for understanding the events of a discipline and the course of a country's history. In this sense, this paper offers an initial proposal, grounded in the pedagogy of memory and in the perspective of education as institutional practice and social action, which consists of beginning to think of biology education as an act of biological and contextual appropriation.

  19. Flight Avionics Hardware Roadmap

    Science.gov (United States)

    Hodson, Robert; McCabe, Mary; Paulick, Paul; Ruffner, Tim; Some, Rafi; Chen, Yuan; Vitalpur, Sharada; Hughes, Mark; Ling, Kuok; Redifer, Matt

    2013-01-01

    As part of NASA's Avionics Steering Committee's stated goal to advance the avionics discipline ahead of program and project needs, the committee initiated a multi-Center technology roadmapping activity to create a comprehensive avionics roadmap. The roadmap is intended to strategically guide avionics technology development to effectively meet future NASA mission needs. The scope of the roadmap aligns with the twelve avionics elements defined in the ASC charter, but is subdivided into the following five areas: Foundational Technology (including devices and components), Command and Data Handling, Spaceflight Instrumentation, Communication and Tracking, and Human Interfaces.

  20. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
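
    The readout path described above turns digitized heterodyne pulses into qubit state assignments. The following sketch shows, in NumPy rather than FPGA gateware, the basic demodulate-integrate-threshold chain such a receiver performs; the sample rate, intermediate frequency, rotation angle and threshold are made-up placeholders, not parameters of the BBN hardware.

        import numpy as np

        fs = 1.0e9        # digitizer sample rate (Hz) -- illustrative value
        f_if = 50.0e6     # heterodyne intermediate frequency (Hz) -- illustrative value
        n = 2048          # samples in one readout record

        t = np.arange(n) / fs
        # Fake readout record: a tone at the intermediate frequency plus noise.
        record = np.cos(2 * np.pi * f_if * t + 0.3) + 0.05 * np.random.randn(n)

        # Digital down-conversion: multiply by reference tones and integrate.
        i_val = np.mean(record * np.cos(2 * np.pi * f_if * t))
        q_val = np.mean(record * np.sin(2 * np.pi * f_if * t))

        # Project onto a calibrated axis in the IQ plane and compare to a threshold;
        # the angle and threshold would normally come from a calibration run.
        theta, threshold = 0.3, 0.0
        signal = i_val * np.cos(theta) + q_val * np.sin(theta)
        state = int(signal > threshold)  # qubit state assignment: 0 or 1
        print(i_val, q_val, state)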

  1. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's rocket testing facilities. A software-hardware translation layer is required so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be written. These drivers act more like plugins for the software: if the software is being used at E3, it should point to the E3 driver package; if it is being used at B2, it should point to the B2 driver package. The driver packages should also contain hardware drivers that are universal to the DAS system. For example, because A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
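
    One plausible way to structure the per-stand driver packages described above is a registry of packages behind a common interface, as sketched below. The class and function names (E3Drivers, B2Drivers, load_driver_package) are hypothetical illustrations, not actual NDAS code.

        from dataclasses import dataclass

        @dataclass
        class Preston8300AUDriver:
            """Driver for a signal conditioner shared by several test stands."""
            channel: int

            def read(self) -> float:
                return 0.0  # placeholder for the real hardware I/O

        class E3Drivers:
            """Driver package the software points to when running at E3."""
            def signal_conditioner(self, channel: int) -> Preston8300AUDriver:
                return Preston8300AUDriver(channel)

        class B2Drivers:
            """Driver package the software points to when running at B2."""
            def signal_conditioner(self, channel: int) -> Preston8300AUDriver:
                return Preston8300AUDriver(channel)

        DRIVER_PACKAGES = {"E3": E3Drivers, "B2": B2Drivers}

        def load_driver_package(stand: str):
            # The DAS software selects one driver package per test stand.
            return DRIVER_PACKAGES[stand]()

        drivers = load_driver_package("B2")
        print(drivers.signal_conditioner(channel=1).read())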

  2. Race and diversity in U.S. Biological Anthropology: A decade of AAPA initiatives.

    Science.gov (United States)

    Antón, Susan C; Malhi, Ripan S; Fuentes, Agustín

    2018-01-01

    Biological Anthropology studies the variation and evolution of living humans, non-human primates, and extinct ancestors and for this reason the field should be in an ideal position to attract scientists from a variety of backgrounds who have different views and experiences. However, the origin and history of the discipline, anecdotal observations, self-reports, and recent surveys suggest the field has significant barriers to attracting scholars of color. For a variety of reasons, including quantitative research that demonstrates that diverse groups do better science, the discipline should strive to achieve a more diverse composition. Here we discuss the background and underpinnings of the current and historical dearth of diversity in Biological Anthropology in the U.S. specifically as it relates to representation of minority and underrepresented minority (URM) (or racialized minority) scholars. We trace this lack of diversity to underlying issues of recruitment and retention in the STEM sciences generally, to the history of Anthropology particularly around questions of race-science, and to the absence of Anthropology at many minority-serving institutions, especially HBCUs, a situation that forestalls pathways to the discipline for many minority students. The AAPA Committee on Diversity (COD) was conceived as a means of assessing and improving diversity within the discipline, and we detail the history of the COD since its inception in 2006. Prior to the COD there were no systematic AAPA efforts to consider ethnoracial diversity in our ranks and no programming around questions of diversity and inclusion. Departmental survey data collected by the COD indicate that undergraduate majors in Biological Anthropology are remarkably diverse, but that the discipline loses these scholars between undergraduate and graduate school and systematically up rank. Our analysis of recent membership demographic survey data (2014 and 2017) shows Biological Anthropology to have less

  3. Genetic relation of adamantanes from extracts and semicoking tars of lignites with the initial biological material

    Energy Technology Data Exchange (ETDEWEB)

    Platonov, V.V.; Shvykin, A.Y.; Proskuryakov, V.A.; Podshibyakin, S.I. [Lev Tolstoi State Pedagogical University, Tula (Russian Federation)

    1999-11-01

    A genetic relation was revealed of adamantanes from extracts and semicoking tars of lignites with the relic terpenoid and steroid compounds. Probable pathways are suggested for transformation of the initial natural structures into adamantanes. The qualitative and quantitative compositions of adamantanes from crude oil and coal are compared.

  4. Fabrication of luminescent hydroxyapatite nanorods through surface-initiated RAFT polymerization: Characterization, biological imaging and drug delivery applications

    Energy Technology Data Exchange (ETDEWEB)

    Heng, Chunning [Shaanxi Key Laboratory of Degradable Biomedical Materials, Shaanxi R&D Center of Biomaterials and Fermentation Engineering, School of Chemical and Engineering, Northwest University, Xi’an, 710069 (China); Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Zheng, Xiaoyan [Shaanxi Key Laboratory of Degradable Biomedical Materials, Shaanxi R&D Center of Biomaterials and Fermentation Engineering, School of Chemical and Engineering, Northwest University, Xi’an, 710069 (China); Liu, Meiying; Xu, Dazhuang; Huang, Hongye; Deng, Fengjie [Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Hui, Junfeng, E-mail: huijunfeng@126.com [Shaanxi Key Laboratory of Degradable Biomedical Materials, Shaanxi R&D Center of Biomaterials and Fermentation Engineering, School of Chemical and Engineering, Northwest University, Xi’an, 710069 (China); Zhang, Xiaoyong, E-mail: xiaoyongzhang1980@gmail.com [Department of Chemistry, Nanchang University, 999 Xuefu Avenue, Nanchang 330031 (China); Wei, Yen, E-mail: weiyen@tsinghua.edu.cn [Department of Chemistry and the Tsinghua Center for Frontier Polymer Research, Tsinghua University, Beijing, 100084 (China)

    2016-11-15

    Highlights: • Hydrophobic hydroxyapatite nanorods were obtained from hydrothermal synthesis. • Surface-initiated RAFT polymerization was adopted for surface modification of the hydroxyapatite nanorods. • The modified hydroxyapatite nanorods showed high water dispersibility and biocompatibility. • The modified hydroxyapatite nanorods can be used for controlled drug delivery. - Abstract: Hydroxyapatite nanomaterials, an important class of nanomaterials, have been widely applied in different biomedical applications for their excellent biocompatibility, biodegradation potential and low cost. In this work, hydroxyapatite nanorods with uniform size and morphology were prepared through hydrothermal synthesis. The surfaces of these hydroxyapatite nanorods are covered with hydrophobic oleic acid, giving them poor dispersibility in aqueous solution and making them difficult to use in biomedical applications. To overcome this issue, a simple surface-initiated polymerization strategy has been developed via a combination of surface ligand exchange and reversible addition-fragmentation chain transfer (RAFT) polymerization. Hydroxyapatite nanorods were first modified with Riboflavin-5-phosphate sodium (RPSSD) via a ligand exchange reaction between the phosphate group of RPSSD and oleic acid. The hydroxyl group of nHAp-RPSSD was then used to immobilize the chain transfer agent, which served as the initiator for surface-initiated RAFT polymerization. The nHAp-RPSSD-poly(IA-PEGMA) nanocomposites were characterized in detail by ¹H nuclear magnetic resonance, Fourier transform infrared spectroscopy, fluorescence spectroscopy and thermal gravimetric analysis. The biocompatibility, biological imaging and drug delivery of nHAp-RPSSD-poly(IA-PEGMA) were also investigated. Results showed that nHAp-RPSSD-poly(IA-PEGMA) exhibited excellent water dispersibility, desirable optical properties, good biocompatibility and high drug loading capability, making them promising candidates for

  5. Radiation-initiated free-radical fragmentation of biologically active glycerides

    International Nuclear Information System (INIS)

    Akhrem, A.A.; Kisel', M.A.; Shadyro, O.I.; Yurkova, I.L.

    1993-01-01

    Oxidation reactions of the free-radical type play a decisive role in the initial processes of radiation damage. The most suitable substrates for such reactions are lipids. Lipids are a basic structural element of biomembranes and are involved in the barrier function and biocatalytic activity of such membranes. Free-radical degradation of membrane lipids can lead to serious damage and ultimately to destruction of the living cell. A well-studied type of free-radical conversion of lipids is oxidation of polyunsaturated fatty acid residues, so-called peroxide oxidation of lipids. In this paper, using as examples dimyristoylphosphatidyl glycerol (DMPG), monoglycerides, and glycerophosphate, the authors investigated the possibility of free-radical degradation in compounds of a lipid nature containing the α,β-bifunctional group

  6. Biological Production of Methane from Lunar Mission Solid Waste: An Initial Feasibility Assessment

    Science.gov (United States)

    Strayer, Richard; Garland, Jay; Janine, Captain

    A preliminary assessment was made of the potential for biological production of methane from solid waste generated during an early planetary base mission to the Moon. This analysis includes: 1) estimation of the amount of biodegradable solid waste generated, 2) background on the potential biodegradability of plastics, given their significance in solid wastes, and 3) calculation of potential methane production from the estimate of biodegradable waste. The completed analysis will also include the feasibility and costs associated with the biological processing of the solid waste. NASA workshops and Advanced Life Support documentation have estimated the projected amount of solid wastes generated for specific space missions. From one workshop, waste estimates were made for a 180 day transit mission to Mars. The amount of plastic packaging material was not specified, but our visual examination of trash returned from STS missions indicated a large percentage would be plastic film. This plastic, which is not biodegradable, would amount to 1.526 kg dry weight per crew member per day, or 6.10 kg dry weight per day for a crew of 4. Over a mission of 10 days this would amount to 61 kg dry weight of plastics, and for a 180 day lunar surface habitation it would be nearly 1100 kg dry weight. Approximately 24% of this waste estimate would be biodegradable (human fecal waste, food waste, and paper), but if plastic packaging were replaced with biodegradable plastic, then 91% would be biodegradable. Plastics are man-made long-chain polymeric molecules and can be divided into two main groups: thermoplastics and thermoset plastics. Thermoplastics comprise over 90% of total plastic use in the United States and are derived from polymerization of olefins via breakage of the double bond and subsequent formation of additional carbon-to-carbon bonds. The resulting polymers, with backbones consisting solely of carbon, are highly resistant to biodegradation and hydrolytic cleavage. Common thermoplastics include low

  7. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  8. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    Science.gov (United States)

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  9. Synthesis of 2-18F-fluoroisonicotinic acid hydrazide and initial biological evaluation

    International Nuclear Information System (INIS)

    Al Jammaz, I.; Abu Durrah, B.; Amartey, J.

    2002-01-01

    Isonicotinic acid hydrazide (isoniazid) is one of the most effective agents in tuberculosis therapy. This agent rapidly permeates the bacterial cell membrane via passive diffusion. Central nervous system tuberculosis is being observed in patients who are intravenous drug abusers and in those with AIDS and AIDS-related complex. Therefore, radiopharmaceuticals for the diagnosis of tuberculosis may become important. Very few attempts have been made to develop isonicotinic acid and its derivatives for this application. As part of an ongoing research effort to develop radiotracers for fluorination of proteins and peptides via a prosthetic group approach, we have synthesized ethyl 2-[18F]-fluoroisonicotinate and 2-[18F]-fluoroisonicotinic acid hydrazide. Treatment of the ethyl 2-(trimethylammonium)isonicotinate precursor with no-carrier-added radiofluoride, produced by the 18O(p,n)18F nuclear reaction on 18O-enriched (95%) water, using Kryptofix 222 as nucleophilic catalyst in anhydrous acetonitrile at 100 °C, gave ethyl 2-[18F]-fluoroisonicotinate in greater than 90% radiochemical yield (decay corrected) within a two-minute reaction time. The ether extract of the fluorinated ethyl ester was evaporated, and the residue was re-dissolved in ethanol and treated with hydrazine for 15 minutes in boiling water to obtain 2-[18F]-fluoroisonicotinic acid hydrazide in excellent radiochemical yield. The overall radiochemical yield was greater than 70%, with a total synthesis time of approximately one hour. This synthetic approach holds considerable promise as a rapid and simple method for producing fluorinated radiopharmaceuticals in high radiochemical yield. Biological evaluation was performed in normal mice. The data obtained show that the lungs retain some activity, suggesting that such a radiotracer may be useful in the detection of tuberculosis.

  10. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. The trends leading to the consideration of PCs for HEP are examined, and the status of the work being done at various HEP labs and universities is summarized.

  11. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The sources of hardware requirements are the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data are compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating hardware development; however, this is generally not the case, and the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  12. Biology

    Indian Academy of Sciences (India)

    I am particularly happy that the Academy is bringing out this document by Professor M S. Valiathan on Ayurvedic Biology. It is an effort to place before the scientific community, especially that of India, the unique scientific opportunities that arise out of viewing Ayurveda from the perspective of contemporary science, its tools ...

  13. Openness to and preference for attributes of biologic therapy prior to initiation among patients with rheumatoid arthritis: patient and rheumatologist perspectives and implications for decision making.

    Science.gov (United States)

    Bolge, Susan C; Goren, Amir; Brown, Duncan; Ginsberg, Seth; Allen, Isabel

    2016-01-01

    Despite American College of Rheumatology recommendations, appropriate and timely initiation of biologic therapies does not always occur. This study examined openness to and preference for attributes of biologic therapies among patients with rheumatoid arthritis (RA), differences in patients' and rheumatologists' perceptions, and discussions around biologic therapy initiation. A self-administered online survey was completed by 243 adult patients with RA in the US who were taking disease-modifying antirheumatic drugs (DMARDs) and had never taken, but had discussed biologic therapy with a rheumatologist. Patients were recruited from a consumer panel (n=142) and patient advocacy organization (n=101). A separate survey was completed by 103 rheumatologists who treated at least 25 patients with RA per month with biologic therapy. Descriptive and bivariate analyses were conducted separately for patients and rheumatologists. Attributes of biologic therapy included route of administration (intravenous infusion or subcutaneous injection), frequency of injections/infusions, and duration of infusion. Over half of patients (53.1%) were open to both intravenous infusion and subcutaneous injection, whereas rheumatologists reported 40.7% of patients would be open to both. Only 26.3% of patients strongly preferred subcutaneous injection, whereas rheumatologists reported 35.2%. Discrepancies were even more pronounced among specific patient types (eg, older vs younger patients and Medicare recipients). Among patients, 23% reported initiating discussion about biologics and 54% reported their rheumatologist initiated the discussion. A majority of rheumatologists reported discussing in detail several key aspects of biologics, whereas a minority of patients reported the same. Preferences differed among patients with RA from rheumatologists' perceptions of these preferences for biologic therapy, including greater openness to intravenous infusion among patients than assumed by

  14. Creating a pipeline of talent for informatics: STEM initiative for high school students in computer science, biology, and biomedical informatics.

    Science.gov (United States)

    Dutta-Moscato, Joyeeta; Gopalakrishnan, Vanathi; Lotze, Michael T; Becich, Michael J

    2014-01-01

    This editorial provides insights into how informatics can attract highly trained students by involving them in science, technology, engineering, and math (STEM) training at the high school level and continuing to provide mentorship and research opportunities through the formative years of their education. Our central premise is that the trajectory necessary to be expert in the emergent fields in front of them requires acceleration at an early time point. Both pathology (and biomedical) informatics are new disciplines which would benefit from involvement by students at an early stage of their education. In 2009, Michael T Lotze MD, Kirsten Livesey (then a medical student, now a medical resident at University of Pittsburgh Medical Center (UPMC)), Richard Hersheberger, PhD (Currently, Dean at Roswell Park), and Megan Seippel, MS (the administrator) launched the University of Pittsburgh Cancer Institute (UPCI) Summer Academy to bring high school students for an 8 week summer academy focused on Cancer Biology. Initially, pathology and biomedical informatics were involved only in the classroom component of the UPCI Summer Academy. In 2011, due to popular interest, an informatics track called Computer Science, Biology and Biomedical Informatics (CoSBBI) was launched. CoSBBI currently acts as a feeder program for the undergraduate degree program in bioinformatics at the University of Pittsburgh, which is a joint degree offered by the Departments of Biology and Computer Science. We believe training in bioinformatics is the best foundation for students interested in future careers in pathology informatics or biomedical informatics. We describe our approach to the recruitment, training and research mentoring of high school students to create a pipeline of exceptionally well-trained applicants for both the disciplines of pathology informatics and biomedical informatics. We emphasize here how mentoring of high school students in pathology informatics and biomedical informatics

  16. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  17. Hunting for hardware changes in data centres

    Science.gov (United States)

    Coelho dos Santos, M.; Steers, I.; Szebenyi, I.; Xafi, A.; Barring, O.; Bonfillou, E.

    2012-12-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.
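
    A minimal sketch of the kind of inventory-driven failure trending described above is shown below, using pandas. The table layout, model names and counts are invented for the example and are not the CERN inventory schema.

        import pandas as pd

        # One row per recorded failure, plus a table of how many parts of each
        # model are installed; both are invented examples.
        failures = pd.DataFrame({
            "month": ["2012-01", "2012-01", "2012-02", "2012-02", "2012-02", "2012-03"],
            "model": ["disk_A", "disk_B", "disk_A", "disk_A", "disk_B", "disk_B"],
        })
        installed = pd.DataFrame({
            "model": ["disk_A", "disk_B"],
            "count": [1000, 400],
        })

        monthly = (failures.groupby(["month", "model"]).size()
                           .rename("failures").reset_index()
                           .merge(installed, on="model"))
        monthly["failure_rate"] = monthly["failures"] / monthly["count"]
        print(monthly)  # failure rate per model per month, ready for trend plotting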

  18. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  19. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We mapped this quantum circuit compilation problem to a temporal planning problem and generated a test suite of compilation problems for QAOA circuits of various sizes targeting a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  20. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  1. Threats and Challenges in Reconfigurable Hardware Security

    OpenAIRE

    Kastner, Ryan; Huffmire, Ted

    2008-01-01

    Computing systems designed using reconfigurable hardware are now used in many sensitive applications, where security is of utmost importance. Unfortunately, a strong notion of security is not currently present in FPGA hardware and software design flows. In the following, we discuss the security implications of using reconfigurable hardware in sensitive applications, and outline problems, attacks, solutions and topics for future research.

  2. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high-level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended to be general enough to capture the characteristics of a wide range of communication protocols and yet sufficiently detailed to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process, where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...
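
    As a concrete, if simplified, illustration of the kind of estimator such a model provides, the sketch below computes bus cycles for a transfer as a function of bus width, burst length and data packing. The cycle-count formula and its parameters are assumptions made for the example, not the model from the paper.

        def transfer_cycles(n_words, word_bits, bus_bits, burst_len=1, setup_cycles=2):
            """Estimate bus cycles to move n_words of word_bits each over a bus_bits-wide bus."""
            beats_per_word = -(-word_bits // bus_bits)   # ceiling division: data packing
            total_beats = n_words * beats_per_word
            bursts = -(-total_beats // burst_len)        # number of burst transactions
            return bursts * setup_cycles + total_beats   # per-burst setup plus data beats

        # Trade-off exploration: 1024 32-bit samples over a 16-bit bus, varying burst length.
        for burst in (1, 4, 16):
            cycles = transfer_cycles(n_words=1024, word_bits=32, bus_bits=16, burst_len=burst)
            print(f"burst length {burst:2d}: {cycles} cycles")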

  3. Hardware complications in scoliosis surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bagchi, Kaushik; Mohaideen, Ahamed [Department of Orthopaedic Surgery and Musculoskeletal Services, Maimonides Medical Center, Brooklyn, NY (United States); Thomson, Jeffrey D. [Connecticut Children' s Medical Center, Department of Orthopaedics, Hartford, CT (United States); Foley, Christopher L. [Department of Radiology, Connecticut Children' s Medical Center, Hartford, Connecticut (United States)

    2002-07-01

    Background: Scoliosis surgery has undergone a dramatic evolution over the past 20 years with the advent of new surgical techniques and sophisticated instrumentation. Surgeons have realized scoliosis is a complex multiplanar deformity that requires thorough knowledge of spinal anatomy and pathophysiology in order to manage patients afflicted by it. Nonoperative modalities such as bracing and casting still play roles in the treatment of scoliosis; however, it is the operative treatment that has revolutionized the treatment of this deformity that affects millions worldwide. As part of the evolution of scoliosis surgery, newer implants have resulted in improved outcomes with respect to deformity correction, reliability of fixation, and paucity of complications. Each technique and implant has its own set of unique complications, and the surgeon must appreciate these when planning surgery. Materials and methods: Various surgical techniques and types of instrumentation typically used in scoliosis surgery are briefly discussed. Though scoliosis surgery is associated with a wide variety of complications, only those that directly involve the hardware are discussed. The current literature is reviewed and several illustrative cases of patients treated for scoliosis at the Connecticut Children's Medical Center and the Newington Children's Hospital in Connecticut are briefly presented. Conclusion: Spine surgeons and radiologists should be familiar with the different types of instrumentation in the treatment of scoliosis. Furthermore, they should recognize the clinical and roentgenographic signs of hardware failure as part of prompt and effective treatment of such complications. (orig.)

  4. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be speeded up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeating computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...
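
    The point-to-point space-charge routine mentioned above is an O(N²) pairwise computation, which is what makes it both memory-hungry and well suited to GPUs. The NumPy sketch below shows that pairwise pattern in simplified form; the constants, units and particle data are placeholders, and this is not the Travel implementation itself.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000
        pos = rng.normal(size=(n, 3))   # particle positions (arbitrary units)
        q = np.ones(n)                  # unit charges

        # Pairwise separations have shape (n, n, 3); memory and work grow as N^2,
        # which is why tiling or repeated computation is needed on memory-limited GPUs.
        diff = pos[:, None, :] - pos[None, :, :]
        dist3 = np.linalg.norm(diff, axis=-1) ** 3
        np.fill_diagonal(dist3, np.inf)  # exclude self-interaction

        force = np.sum(q[None, :, None] * diff / dist3[:, :, None], axis=1) * q[:, None]
        print(force.shape)               # (n, 3): net Coulomb-like force on each particle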

  5. Synthesis and biological activity of novel mono-indole and mono-benzofuran inhibitors of bacterial transcription initiation complex formation.

    Science.gov (United States)

    Mielczarek, Marcin; Thomas, Ruth V; Ma, Cong; Kandemir, Hakan; Yang, Xiao; Bhadbhade, Mohan; Black, David StC; Griffith, Renate; Lewis, Peter J; Kumar, Naresh

    2015-04-15

    Our ongoing research focused on targeting transcription initiation in bacteria has resulted in synthesis of several classes of mono-indole and mono-benzofuran inhibitors that targeted the essential protein-protein interaction between RNA polymerase core and σ(70)/σ(A) factors in bacteria. In this study, the reaction of indole-2-, indole-3-, indole-7- and benzofuran-2-glyoxyloyl chlorides with amines and hydrazines afforded a variety of glyoxyloylamides and glyoxyloylhydrazides. Similarly, condensation of 2- and 7-trichloroacetylindoles with amines and hydrazines delivered amides and hydrazides. The novel molecules were found to inhibit the RNA polymerase-σ(70)/σ(A) interaction as measured by ELISA, and also inhibited the growth of both Gram-positive and Gram-negative bacteria in culture. Structure-activity relationship (SAR) studies of the mono-indole and mono-benzofuran inhibitors suggested that the hydrophilic-hydrophobic balance is an important determinant of biological activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Protein Structure Initiative Material Repository: an open shared public resource of structural genomics plasmids for the biological community

    Science.gov (United States)

    Cormier, Catherine Y.; Mohr, Stephanie E.; Zuo, Dongmei; Hu, Yanhui; Rolfs, Andreas; Kramer, Jason; Taycher, Elena; Kelley, Fontina; Fiacco, Michael; Turnbull, Greggory; LaBaer, Joshua

    2010-01-01

    The Protein Structure Initiative Material Repository (PSI-MR; http://psimr.asu.edu) provides centralized storage and distribution for the protein expression plasmids created by PSI researchers. These plasmids are a resource that allows the research community to dissect the biological function of proteins whose structures have been identified by the PSI. The plasmid annotation, which includes the full length sequence, vector information and associated publications, is stored in a freely available, searchable database called DNASU (http://dnasu.asu.edu). Each PSI plasmid is also linked to a variety of additional resources, which facilitates cross-referencing of a particular plasmid to protein annotations and experimental data. Plasmid samples can be requested directly through the website. We have also developed a novel strategy to avoid the most common concern encountered when distributing plasmids namely, the complexity of material transfer agreement (MTA) processing and the resulting delays this causes. The Expedited Process MTA, in which we created a network of institutions that agree to the terms of transfer in advance of a material request, eliminates these delays. Our hope is that by creating a repository of expression-ready plasmids and expediting the process for receiving these plasmids, we will help accelerate the accessibility and pace of scientific discovery. PMID:19906724

  7. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    Science.gov (United States)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

    The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.
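
    The abstract names the three fused reasoning styles without detailing them. As a purely illustrative sketch, and not the ELIPS design, the toy Python function below fuses two normalized sensor confidences with two fuzzy rules, using triangular memberships, min for AND, max for OR and weighted-average defuzzification; the sensor names, membership breakpoints and rule outputs are invented for the example.

      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def fuse(radar, infrared):
          """Toy fuzzy fusion of two sensor confidences in [0, 1] into a threat score."""
          high_r, high_i = tri(radar, 0.4, 1.0, 1.6), tri(infrared, 0.4, 1.0, 1.6)
          low_r, low_i = tri(radar, -0.6, 0.0, 0.6), tri(infrared, -0.6, 0.0, 0.6)
          w1, out1 = min(high_r, high_i), 0.9   # IF radar HIGH AND infrared HIGH THEN threat high
          w2, out2 = max(low_r, low_i), 0.1     # IF radar LOW OR infrared LOW THEN threat low
          if w1 + w2 == 0.0:
              return 0.5
          return (w1 * out1 + w2 * out2) / (w1 + w2)   # weighted-average defuzzification

      print(fuse(0.9, 0.8))   # both sensors confident -> 0.9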

  8. Peculiarities of hardware implementation of generalized cellular tetra automaton

    OpenAIRE

    Аноприенко, Александр Яковлевич; Федоров, Евгений Евгениевич; Иваница, Сергей Васильевич; Альрабаба, Хамза

    2015-01-01

    Cellular automata are widely used in many fields of knowledge for the study of a variety of complex real processes: computer engineering and computer science, cryptography, mathematics, physics, chemistry, ecology, biology, medicine, epidemiology, geology, architecture, sociology, and the theory of neural networks. Thus, cellular automata (CA) and tetra automata are gaining relevance in light of available hardware and software solutions. There is also a marked trend towards an increase in the number of p...

  9. Validation of Alzheimer's disease CSF and plasma biological markers: the multicentre reliability study of the pilot European Alzheimer's Disease Neuroimaging Initiative (E-ADNI)

    DEFF Research Database (Denmark)

    Buerger, Katharina; Frisoni, Giovanni; Uspenskaya, Olga

    2009-01-01

    BACKGROUND: Alzheimer's Disease Neuroimaging Initiatives ("ADNI") aim to validate neuroimaging and biochemical markers of Alzheimer's disease (AD). Data of the pilot European-ADNI (E-ADNI) biological marker programme of cerebrospinal fluid (CSF) and plasma candidate biomarkers are reported. METHODS...

  10. Cost per patient-year in response using a claims-based algorithm for the 2 years following biologic initiation in patients with rheumatoid arthritis.

    Science.gov (United States)

    Bonafede, Machaon; Johnson, Barbara H; Princic, Nicole; Shah, Neel; Harrison, David J

    2015-05-01

    To estimate cost per patient-year in response during 2 years following biologic initiation among patients with rheumatoid arthritis (RA). Adults newly initiating biologics for RA (etanercept, abatacept, adalimumab, certolizumab, golimumab, or infliximab) between January 2009 and July 2011 were identified in the MarketScan Commercial Database. Eligible patients were continuously enrolled 6 months before (pre-index) and 24 months after (post-index) their first (index) biologic claim. Biologic effectiveness was assessed using six criteria during 2-year follow-up: treatment adherence ≥80%, no biologic dose escalation, no biologic switch, no new disease-modifying anti-rheumatic drug, no new/increased glucocorticoid dose, and limited intra-articular joint injections (≤2). After a 90-day period of non-response for a treatment failure, effectiveness or failure of subsequent treatment was assessed again for the index biologic or new biologic (after switching). Post-index RA-related medical, pharmacy, and drug administration costs were attributed to the index biologic. Cost per patient-year in response was calculated as RA-related costs divided by duration of response. Overall, 15.0% of patients (1229/8193) did not fail any criterion for 2 years and were effectively treated. Mean duration of response was highest for etanercept (538.3 days), followed by golimumab (537.0 days; p = 0.864), adalimumab (534.7 days; p = 0.301), certolizumab (524.0 days; p = 0.165), infliximab (480.0 days; p < 0.001), and abatacept (482.3 days; p < 0.001). Total disease-related cost per patient-year in response was lower for patients initiated on etanercept ($25,086) than for patients initiated on adalimumab ($25,960), certolizumab ($26,339), golimumab ($26,332), abatacept ($35,581), or infliximab ($36,107). This study was limited to employer-paid commercial insurance. Database analyses cannot determine reasons for failing criteria. The algorithm was not designed and
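
    The study's key metric is a simple ratio; the short Python sketch below shows the arithmetic with made-up numbers (the dollar amount and days in response are illustrative, not values from the study).

      def cost_per_patient_year_in_response(ra_related_cost, days_in_response):
          """Cost per patient-year in response = RA-related cost / years spent in response."""
          return ra_related_cost / (days_in_response / 365.25)

      # Illustrative values only: $37,000 of RA-related cost over 538 days in response
      print(round(cost_per_patient_year_in_response(37000, 538)))   # roughly $25,000 per patient-year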

  11. The rise of developmental genetics - a historical account of the fusion of embryology and cell biology with human genetics and the emergence of the Stem Cell Initiative.

    Science.gov (United States)

    Kidson, S H; Ballo, R; Greenberg, L J

    2016-05-25

    Genetics and cell biology are very prominent areas of biological research with rapid advances being driven by a flood of theoretical, technological and informational knowledge. Big biology and small biology continue to feed off each other. In this paper, we provide a brief overview of the productive interactions that have taken place between human geneticists and cell biologists at UCT, and credit is given to the enabling environment created under the leadership of Prof. Peter Beighton. The growth of new disciplines and disciplinary mergers that have swept away divisions of the past to make new exciting syntheses are discussed. We show how our joint research has benefitted from worldwide advances in developmental genetics, cloning and stem cell technologies, genomics, bioinformatics and imaging. We conclude by describing the role of the UCT Stem Cell Initiative and show how we are using induced pluripotent cells to carry out disease-in-the-dish studies on retinal degeneration and fibrosis.

  12. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  13. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  14. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan; Knudsen, Peter Voigt

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.
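
    The two records above describe the allocation inputs (data dependencies and profiling information) only in outline. The sketch below is a generic greedy heuristic, not the LYCOS algorithm: it gives dedicated hardware units to the operation types with the best profiled-cycles-per-area ratio until an area budget is exhausted, and every name and number in it is invented for the example.

      def allocate(op_profile, unit_area, area_budget):
          """Toy greedy hardware allocation (illustrative only, not the LYCOS technique).

          op_profile: {operation type: profiled cycles spent in that operation}
          unit_area:  {operation type: area cost of one dedicated hardware unit}
          """
          allocation, used = set(), 0
          ranked = sorted(op_profile, key=lambda op: op_profile[op] / unit_area[op], reverse=True)
          for op in ranked:
              if used + unit_area[op] <= area_budget:
                  allocation.add(op)
                  used += unit_area[op]
          return allocation

      profile = {"mul": 12000, "add": 8000, "div": 3000}
      area = {"mul": 40, "add": 10, "div": 70}
      print(allocate(profile, area, area_budget=60))   # {'add', 'mul'} fit; 'div' does not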

  15. Brain inspired hardware architectures - Can they be used for particle physics ?

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    After their inception in the 1940s and several decades of moderate success, artificial neural networks have recently demonstrated impressive achievements in analysing big data volumes. Wide and deep network architectures can now be trained using high performance computing systems, graphics card clusters in particular. Despite their successes these state-of-the-art approaches suffer from very long training times and huge energy consumption, in particular during the training phase. The biological brain can perform similar and superior classification tasks in the space and time domains, but at the same time exhibits very low power consumption, rapid unsupervised learning capabilities and fault tolerance. In the talk the differences between classical neural networks and neural circuits in the brain will be presented. Recent hardware implementations of neuromorphic computing systems and their applications will be shown. Finally, some initial ideas to use accelerated neural architectures as trigger processors i...

  16. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
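
    For readers unfamiliar with the classical plate theory invoked above, the standard textbook result for a simply supported rectangular thin plate (the actual sieve geometry and boundary conditions may differ) shows which material and geometric quantities the compared natural frequencies depend on:

      \omega_{mn} = \pi^2 \left( \frac{m^2}{a^2} + \frac{n^2}{b^2} \right) \sqrt{\frac{D}{\rho h}},
      \qquad D = \frac{E h^3}{12\,(1 - \nu^2)}

    Here a and b are the plate dimensions, h the thickness, ρ the density, E Young's modulus, ν Poisson's ratio, and (m, n) the mode indices; matching such frequencies and the associated mode shapes between the testbed and flight designs is the comparison the abstract describes.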

  17. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors share the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation. PMID:24189331
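
    For orientation, the two algorithms named above have compact textbook forms. The Python sketch below shows one generalized Hebbian (Sanger) update and one fuzzy C-means iteration on placeholder data; the learning rate, fuzzifier, data shapes and cluster count are assumptions, and none of the circuit-sharing or merged-update optimizations of the hardware design are reproduced.

      import numpy as np

      def gha_step(W, x, lr=0.01):
          """One generalized Hebbian (Sanger) update; rows of W converge to leading PCs."""
          y = W @ x
          # the lower-triangular term subtracts components already explained by earlier outputs
          W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
          return W

      def fcm_step(X, C, m=2.0):
          """One fuzzy C-means iteration: update memberships U, then centroids C."""
          d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12   # (N, k) distances
          U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
          C = (U.T ** m @ X) / np.sum(U.T ** m, axis=1, keepdims=True)
          return U, C

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 4))              # placeholder spike feature vectors
      W = rng.normal(scale=0.1, size=(2, 4))     # two principal components
      for x in X:
          W = gha_step(W, x)
      U, C = fcm_step(X @ W.T, rng.normal(size=(3, 2)))   # cluster the 2-D projections into 3 units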

  18. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors share the same circuit for lowering the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented by field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design for attaining high classification correct rate and high speed computation.

  19. On-Chip Reconfigurable Hardware Accelerators for Popcount Computations

    Directory of Open Access Journals (Sweden)

    Valery Sklyarov

    2016-01-01

    Full Text Available Popcount computations are widely used in such areas as combinatorial search, data processing, statistical analysis, and bio- and chemical informatics. In many practical problems the size of initial data is very large and increase in throughput is important. The paper suggests two types of hardware accelerators that are (1) designed in FPGAs and (2) implemented in Zynq-7000 all programmable systems-on-chip with partitioning of algorithms that use popcounts between software of the ARM Cortex-A9 processing system and advanced programmable logic. A three-level system architecture that includes a general-purpose computer, the problem-specific ARM, and reconfigurable hardware is then proposed. The results of experiments and comparisons with existing benchmarks demonstrate that although throughput of popcount computations is increased in FPGA-based designs interacting with general-purpose computers, communication overheads (in experiments with PCI express) are significant and actual advantages can be gained if not only popcount but also other types of relevant computations are implemented in hardware. The comparison of software/hardware designs for Zynq-7000 all programmable systems-on-chip with pure software implementations in the same Zynq-7000 devices demonstrates increase in performance by a factor ranging from 5 to 19 (taking into account all the involved communication overheads between the programmable logic and the processing systems).
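
    As a point of reference for what the accelerators compute, the Python sketch below contrasts a naive bit-by-bit popcount with the classic branch-free SWAR variant for a 32-bit word; this is a generic software baseline, not the paper's FPGA or Zynq-7000 design.

      def popcount_naive(x: int) -> int:
          """Count set bits one at a time."""
          n = 0
          while x:
              n += x & 1
              x >>= 1
          return n

      def popcount_swar32(x: int) -> int:
          """Classic branch-free SWAR popcount for a 32-bit word."""
          x = x - ((x >> 1) & 0x55555555)                  # 2-bit partial counts
          x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit partial counts
          x = (x + (x >> 4)) & 0x0F0F0F0F                  # 8-bit partial counts
          return ((x * 0x01010101) >> 24) & 0xFF           # sum the four byte counts

      assert popcount_naive(0xDEADBEEF) == popcount_swar32(0xDEADBEEF) == 24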

  20. Radiosensitivity of cancer-initiating cells and normal stem cells (or what the Heisenberg uncertainty principle has to do with biology).

    Science.gov (United States)

    Woodward, Wendy Ann; Bristow, Robert Glen

    2009-04-01

    Mounting evidence suggests that parallels between normal stem cell biology and cancer biology may provide new targets for cancer therapy. Prospective identification and isolation of cancer-initiating cells from solid tumors has promoted the descriptive and functional identification of these cells allowing for characterization of their response to contemporary cancer therapies, including chemotherapy and radiation. In clinical radiation therapy, the failure to clinically eradicate all tumor cells (eg, a lack of response, partial response, or nonpermanent complete response by imaging) is considered a treatment failure. As such, biologists have explored the characteristics of the small population of clonogenic cancer cells that can survive and are capable of repopulating the tumor after subcurative therapy. Herein, we discuss the convergence of these clonogenic studies with contemporary radiosensitivity studies that use cell surface markers to identify cancer-initiating cells. Implications for and uncertainties regarding incorporation of these concepts into the practice of modern radiation oncology are discussed.

  1. LWH & ACH Helmet Hardware Study

    Science.gov (United States)

    2015-11-30

    Initial attempts to perform impact tests using screws mounted in Kevlar composite panels resulted in little damage to the screws, but a lot of... ...stiffer and stronger than Kevlar panels, does not plastically deform... (Figure 11: typical ductile fracture surface resulting from a...)

  2. Genetic relationship of organic bases of the quinoline and isoquinoline series from lignite semicoking tars with the initial biological material

    Energy Technology Data Exchange (ETDEWEB)

    Platonov, V.V.; Proskuryakov, V.A.; Podshibyakin, S.I.; Domogatskii, V.V.; Shvykin, A.Y.; Shavyrina, O.A.; Chilachava, K.B. [Leo Tolstoy State Pedagog University, Tula (Russian Federation)

    2002-07-01

    The genetic relationship of quinoline and isoquinoline compounds present in semicoking tars of Kimovsk lignites (near-Moscow fields) with the initial vegetable material is discussed. Transformation pathways of the native compounds in the course of lignite formation are suggested.

  3. Biologically-Inspired Hardware for Land/Aerial Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — Future generations of NASA land/aerial robots will be required to operate in the harsh, unpredictable environments of extra-terrestrial bodies including asteroids,...

  4. Examining Time to Initiation of Biologic Disease-modifying Antirheumatic Drugs and Medication Adherence and Persistence Among Texas Medicaid Recipients With Rheumatoid Arthritis.

    Science.gov (United States)

    Kim, Gilwan; Barner, Jamie C; Rascati, Karen; Richards, Kristin

    2016-03-01

    Little is known about the transition from nonbiologic disease-modifying antirheumatic drugs (DMARDs) to biologic DMARDs or about individual nonbiologic DMARD use patterns among patients with rheumatoid arthritis (RA). This study examined time to initiation of biologic DMARDs and nonbiologic DMARD medication adherence and persistence among Texas Medicaid recipients with RA taking nonbiologic DMARDs. In this retrospective study (July 1, 2003-December 31, 2010) of the Texas Medicaid database, patients were aged 18 to 62 years at index, were diagnosed with RA (International Classification of Diseases, Ninth Revision, Clinical Modification, code 714.xx), had no claims for nonbiologic or biologic DMARDs in the preindex period, and had a minimum of 2 prescription claims for the same nonbiologic DMARD in the postindex period. Kaplan-Meier survival analysis and log-rank tests were used to compare time to initiation of biologic DMARDs according to nonbiologic DMARD type and therapy. Adherence and persistence were examined according to nonbiologic type and therapy by using ANOVA models and χ(2), Duncan, and t tests. On average, patients were 47.9 (± 10.4) years of age, mostly female (89.1%) and Hispanic (55.2%). Methotrexate (MTX) and leflunomide (LEF) users took the shortest time to initiate biologic DMARDs (207 [190] days and 188 [205] days, respectively). LEF users had the highest mean adherence of 37.5% (27.5%), which was similar to MTX users (35.7% [26.9%]), whereas dual-therapy users had the lowest mean adherence at 17.1% (14.4%). Sulfasalazine users (108 [121] days) had the lowest persistence, whereas LEF (227 [231] days) and MTX (211 [222] days) users had the longest persistence. Nonbiologic DMARD monotherapy users were more adherent than dual-therapy users (32.6% [25.8%] vs 17.1% [14.4%]). These results should be interpreted in light of some study limitations, such as using proportion of days covered as a proxy for adherence, not having clinical data to control for

  5. Sex biology contributions to vulnerability to Alzheimer's disease: A think tank convened by the Women's Alzheimer's Research Initiative.

    Science.gov (United States)

    Snyder, Heather M; Asthana, Sanjay; Bain, Lisa; Brinton, Roberta; Craft, Suzanne; Dubal, Dena B; Espeland, Mark A; Gatz, Margaret; Mielke, Michelle M; Raber, Jacob; Rapp, Peter R; Yaffe, Kristine; Carrillo, Maria C

    2016-11-01

    More than 5 million Americans are living with Alzheimer's disease (AD) today, and nearly two-thirds of Americans with AD are women. This sex difference may be due to the higher longevity women generally experience; however, increasing evidence suggests that longevity alone is not a sufficient explanation and there may be other factors at play. The Alzheimer's Association convened an expert think tank to focus on the state of the science and level of evidence around gender and biological sex differences for AD, including the knowledge gaps and areas of science that need to be more fully addressed. This article summarizes the think tank discussion, moving forward a research agenda and funding program to better understand the biological underpinnings of sex- and gender-related disparities of risk for AD. Copyright © 2016 The Alzheimer's Association. All rights reserved.

  6. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the-Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  7. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  8. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  9. LWH and ACH Helmet Hardware Study

    Science.gov (United States)

    2015-11-30

    Naval Research Laboratory, Washington, DC 20375-5320; NRL/MR/6355--15-9642; November 30, 2015; Ronald L. Holtz, Peter R... ...screws and nuts used with the Light Weight Helmet (LWH) and Advanced Combat Helmet (ACH). The testing included basic dimensional measurements, Rockwell...

  10. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process by which image data are received at LLNL, then processed and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  11. A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems.

    Science.gov (United States)

    Brüderle, Daniel; Petrovici, Mihai A; Vogginger, Bernhard; Ehrlich, Matthias; Pfeil, Thomas; Millner, Sebastian; Grübl, Andreas; Wendt, Karsten; Müller, Eric; Schwartz, Marc-Olivier; de Oliveira, Dan Husmann; Jeltsch, Sebastian; Fieres, Johannes; Schilling, Moritz; Müller, Paul; Breitwieser, Oliver; Petkov, Venelin; Muller, Lyle; Davison, Andrew P; Krishnamurthy, Pradeep; Kremkow, Jens; Lundqvist, Mikael; Muller, Eilif; Partzsch, Johannes; Scholze, Stefan; Zühl, Lukas; Mayr, Christian; Destexhe, Alain; Diesmann, Markus; Potjans, Tobias C; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2011-05-01

    In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
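
    The workflow above hinges on PyNN as the simulator-independent description language. A minimal PyNN-style script is sketched below with the NEST software backend standing in for the hardware backend (whose module name is not given here); the timestep, population sizes, cell parameters and weights are illustrative assumptions rather than values from the article.

      # The same network description can target a software simulator or, via the
      # mapping process described above, the neuromorphic hardware system.
      import pyNN.nest as sim   # stand-in backend; the hardware backend module differs

      sim.setup(timestep=0.1)                                    # ms
      stimulus = sim.Population(64, sim.SpikeSourcePoisson(rate=20.0))
      neurons = sim.Population(64, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
      sim.Projection(stimulus, neurons, sim.OneToOneConnector(),
                     synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
      neurons.record("spikes")
      sim.run(1000.0)                                            # ms of biological time
      spikes = neurons.get_data().segments[0].spiketrains
      sim.end()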

  12. Surface grafting of zwitterionic polymers onto dye doped AIE-active luminescent silica nanoparticles through surface-initiated ATRP for biological imaging applications

    Science.gov (United States)

    Mao, Liucheng; Liu, Xinhua; Liu, Meiying; Huang, Long; Xu, Dazhuang; Jiang, Ruming; Huang, Qiang; Wen, Yuanqing; Zhang, Xiaoyong; Wei, Yen

    2017-10-01

    Aggregation-induced emission (AIE) dyes have recently been intensively explored for biological imaging applications owing to their outstanding optical feature as compared with conventional organic dyes. The AIE-active luminescent silica nanoparticles (LSNPs) are expected to combine the advantages both of silica nanoparticles and AIE-active dyes. Although the AIE-active LSNPs have been prepared previously, surface modification of these AIE-active LSNPs with functional polymers has not been reported thus far. In this work, we reported a rather facile and general strategy for preparation of polymers functionalized AIE-active LSNPs through the surface-initiated atom transfer radical polymerization (ATRP). The AIE-active LSNPs were fabricated via direct encapsulation of AIE-active dye into silica nanoparticles through a non-covalent modified Stöber method. The ATRP initiator was subsequently immobilized onto these AIE-active LSNPs through amidation reaction between 3-aminopropyl-triethoxy-silane and 2-bromoisobutyryl bromide. Finally, the zwitterionic 2-(methacryloyloxy)ethyl phosphorylcholine (MPC) was selected as model monomer and grafted onto MSNs through ATRP. The characterization results suggested that LSNPs can be successfully modified with poly(MPC) through surface-initiated ATRP. The biological evaluation results demonstrated that the final SNPs-AIE-pMPC composites possess low cytotoxicity, desirable optical properties and great potential for biological imaging. Taken together, we demonstrated that AIE-active LSNPs can be fabricated and surface modified with functional polymers to endow novel functions and better performance for biomedical applications. More importantly, this strategy developed in this work could also be extended for fabrication of many other LSNPs polymer composites owing to the good monomer adoptability of ATRP.

  13. Biological significance of facilitated diffusion in protein-DNA interactions. Applications to T4 endonuclease V-initiated DNA repair

    International Nuclear Information System (INIS)

    Dowd, D.R.; Lloyd, R.S.

    1990-01-01

    Facilitated diffusion along nontarget DNA is employed by numerous DNA-interactive proteins to locate specific targets. Until now, the biological significance of DNA scanning has remained elusive. T4 endonuclease V is a DNA repair enzyme which scans nontarget DNA and processively incises DNA at the site of pyrimidine dimers which are produced by exposure to ultraviolet (UV) light. In this study we tested the hypothesis that there exists a direct correlation between the degree of processivity of wild type and mutant endonuclease V molecules and the degree of enhanced UV resistance which is conferred to repair-deficient Escherichia coli. This was accomplished by first creating a series of endonuclease V mutants whose in vitro catalytic activities were shown to be very similar to that of the wild type enzyme. However, when the mechanisms by which these enzymes search nontarget DNA for its substrate were analyzed in vitro and in vivo, the mutants displayed varying degrees of nontarget DNA scanning ranging from being nearly as processive as wild type to randomly incising dimers within the DNA population. The ability of these altered endonuclease V molecules to enhance UV survival in DNA repair-deficient E. coli was then assessed. The degree of enhanced UV survival was directly correlated with the level of facilitated diffusion. This is the first conclusive evidence directly relating a reduction of in vivo facilitated diffusion with a change in an observed phenotype. These results support the assertion that the mechanisms which DNA-interactive proteins employ in locating their target sites are of biological significance.

  14. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  15. Rupture hardware minimization in pressurized water reactor piping

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Ski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.F.; Quinones, D.F.; Server, W.L.

    1989-01-01

    For much of the high-energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but also improves the overall safety and integrity of the plant since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in. (152-mm) nominal pipe size that have passed screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in. (76-mm) diameter (outside containment) can qualify for pipe rupture hardware elimination.

  16. Pipe rupture hardware minimization in pressurized water reactor system

    International Nuclear Information System (INIS)

    Mukherjee, S.K.; Szyslowski, J.J.; Chexal, V.; Norris, D.M.; Goldstein, N.A.; Beaudoin, B.; Quinones, D.; Server, W.

    1987-01-01

    For much of the high energy piping in light water reactor systems, fracture mechanics calculations can be used to assure pipe failure resistance, thus allowing the elimination of excessive rupture restraint hardware both inside and outside containment. These calculations use the concept of leak-before-break (LBB) and include part-through-wall flaw fatigue crack propagation, through-wall flaw detectable leakage, and through-wall flaw stability analyses. Performing these analyses not only reduces initial construction, future maintenance, and radiation exposure costs, but the overall safety and integrity of the plant are improved since much more is known about the piping and its capabilities than would be the case had the analyses not been performed. This paper presents the LBB methodology applied at Beaver Valley Power Station - Unit 2 (BVPS-2); the application for two specific lines, one inside containment (stainless steel) and the other outside containment (ferritic steel), is shown in a generic sense using a simple parametric matrix. The overall results for BVPS-2 indicate that pipe rupture hardware is not necessary for stainless steel lines inside containment greater than or equal to 6-in (152 mm) nominal pipe size that have passed a screening criteria designed to eliminate potential problem systems (such as the feedwater system). Similarly, some ferritic steel lines as small as 3-in (76 mm) diameter (outside containment) can qualify for pipe rupture hardware elimination

  17. Understanding the biology of bone sarcoma from early initiating events through late events in metastasis and disease progression.

    Directory of Open Access Journals (Sweden)

    Limin eZhu

    2013-09-01

    Full Text Available The two most common primary bone malignancies, osteosarcoma and Ewing sarcoma, are both aggressive, highly metastatic cancers that most often strike teens, though both can be found in younger children and adults. Despite distinct origins and pathogenesis, both diseases share several mechanisms of progression and metastasis, including neovascularization, invasion, anoikis resistance, chemoresistance and evasion of the immune response. Some of these processes are well-studied in more common carcinoma models, and the observations from adult diseases may be readily applied to pediatric bone sarcomas. Neovascularization, which includes angiogenesis and vasculogenesis, is a clear example of a process that is likely to be similar between carcinomas and sarcomas, since the responding cells are the same in each case. Chemoresistance mechanisms also may be similar between other cancers and the bone sarcomas. Since osteosarcoma and Ewing sarcoma are mesenchymal in origin, the process of epithelial-to-mesenchymal transformation is largely absent in bone sarcomas, necessitating different approaches to study progression and metastasis in these diseases. One process that is less well-studied in bone sarcomas is dormancy, which allows micrometastatic disease to remain viable but not growing in distant sites – typically the lungs – for months or years before renewing growth to become overt metastatic disease. By understanding the basic biology of these processes, novel therapeutic strategies may be developed that could improve survival in children with osteosarcoma or Ewing sarcoma.

  18. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  19. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  20. Hardware Accelerators for Elliptic Curve Cryptography

    Directory of Open Access Journals (Sweden)

    C. Puttmann

    2008-05-01

    Full Text Available In this paper we explore different hardware accelerators for cryptography based on elliptic curves. Furthermore, we present a hierarchical multiprocessor system-on-chip (MPSoC) platform that can be used for fast integration and evaluation of novel hardware accelerators. With respect to two application scenarios, the hardware accelerators are coupled at different hierarchy levels of the MPSoC platform. The whole system is implemented in state-of-the-art 65 nm standard-cell technology. Moreover, an FPGA-based rapid prototyping system for fast system verification is presented. Finally, a metric to analyze the resource efficiency by means of chip area, execution time and energy consumption is introduced.
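
    The core operation such accelerators target is scalar multiplication on an elliptic curve. The Python sketch below runs textbook double-and-add over a deliberately tiny prime-field curve; the curve, point and scalar are toy values with no cryptographic meaning, and nothing here reflects the paper's hardware structure.

      # Toy elliptic curve y^2 = x^3 + a*x + b over GF(97); (3, 6) lies on it since 36 = 27 + 6 + 3.
      P_MOD, A = 97, 2
      INF = None   # point at infinity

      def ec_add(P, Q):
          if P is None: return Q
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % P_MOD == 0:
              return INF
          if P == Q:
              lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
          else:
              lam = (y2 - y1) * pow((x2 - x1) % P_MOD, -1, P_MOD) % P_MOD
          x3 = (lam * lam - x1 - x2) % P_MOD
          return (x3, (lam * (x1 - x3) - y1) % P_MOD)

      def ec_mul(k, P):
          """Left-to-right double-and-add, the loop an accelerator would speed up."""
          R = INF
          for bit in bin(k)[2:]:
              R = ec_add(R, R)           # double
              if bit == "1":
                  R = ec_add(R, P)       # add
          return R

      print(ec_mul(5, (3, 6)))   # 5 * (3, 6) on the toy curve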

  1. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  2. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  3. Parallel Processing with Digital Signal Processing Hardware and Software

    Science.gov (United States)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  4. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  5. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were made in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of the existence of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards implementation of advanced quantum algorithms, including quantum deep learning.

  6. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
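
    As a plain-software reference for the computation being accelerated, the Python sketch below performs global (Needleman-Wunsch) alignment with an explicit traceback; the scoring scheme (match, mismatch, linear gap) and the example sequences are assumptions, and the space-efficiency and streaming aspects of the hardware design are not reproduced.

      def global_align(a, b, match=2, mismatch=-1, gap=-2):
          """Needleman-Wunsch global alignment with traceback."""
          n, m = len(a), len(b)
          S = [[0] * (m + 1) for _ in range(n + 1)]     # scores
          T = [[""] * (m + 1) for _ in range(n + 1)]    # back-pointers: D(iag), U(p), L(eft)
          for i in range(1, n + 1):
              S[i][0], T[i][0] = i * gap, "U"
          for j in range(1, m + 1):
              S[0][j], T[0][j] = j * gap, "L"
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  up, left = S[i - 1][j] + gap, S[i][j - 1] + gap
                  S[i][j], T[i][j] = max((diag, "D"), (up, "U"), (left, "L"))
          i, j, out_a, out_b = n, m, [], []             # traceback from the corner
          while i > 0 or j > 0:
              move = T[i][j]
              if move == "D":
                  out_a.append(a[i - 1]); out_b.append(b[j - 1]); i, j = i - 1, j - 1
              elif move == "U":
                  out_a.append(a[i - 1]); out_b.append("-"); i -= 1
              else:
                  out_a.append("-"); out_b.append(b[j - 1]); j -= 1
          return S[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

      print(global_align("GATTACA", "GCATGCU"))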

  7. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  8. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  9. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs): fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  10. Femoral neck fracture following hardware removal.

    Science.gov (United States)

    Shaer, James A; Hileman, Barbara M; Newcomer, Jill E; Hanes, Marina C

    2012-01-16

    Femoral neck fractures occurring after proximal femoral hardware removal are uncommon; age, osteoporosis, and technical error are often noted as the causes of this type of fracture. However, excessive alcohol consumption and failure to comply with protected weight bearing for 6 weeks increase the risk of femoral neck fractures. This article describes a case of a 57-year-old man with a high-energy ipsilateral inter-trochanteric hip fracture, comminuted distal third femoral shaft fracture, and displaced lateral tibial plateau fracture. Cephalomedullary fixation was used to fix the ipsilateral femur fractures after medical stabilization and evaluation of the patient. The patient healed clinically and radiographically at 6 months. Despite conservative treatment for painful proximal hardware, elective hip screw removal was performed 22.5 months after injury. Seven weeks later, he sustained a nontraumatic femoral neck fracture. In this case, it is unlikely that the femoral neck fracture occurred as a result of hardware removal. We assumed that, in addition to the patient's alcohol abuse and tobacco use, stress fractures may have contributed to the femoral neck fracture. We recommend using a shorter hip screw to minimize hardware prominence or possibly off-label use of an injectable bone filler, such as calcium phosphate cement. Copyright 2012, SLACK Incorporated.

  11. QCE : A Simulator for Quantum Computer Hardware

    NARCIS (Netherlands)

    Michielsen, Kristel; Raedt, Hans De

    2003-01-01

    The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms.

  12. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system to be used for on-line filter and second-stage trigger applications is described. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular the modularity, processor communication and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  13. Microprocessor Design Using Hardware Description Language

    Science.gov (United States)

    Mita, Rosario; Palumbo, Gaetano

    2008-01-01

    The following paper has been conceived to deal with the contents of some lectures aimed at enhancing courses on digital electronic, microelectronic or VLSI systems. Those lectures show how to use a hardware description language (HDL), such as the VHDL, to specify, design and verify a custom microprocessor. The general goal of this work is to teach…

  14. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  15. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  16. Hardware Acceleration of Sparse Cognitive Algorithms

    Science.gov (United States)

    2016-05-01

    These emerging algorithms, which can support unsupervised or lightly supervised learning as well as incremental learning, map poorly onto conventional hardware. Subject terms: cortical algorithms; machine learning; hardware; VLSI; ASIC. Distribution unlimited.

  17. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerardus Johannes Maria; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in

  18. Growth in spaceflight hardware results in alterations to the transcriptome and proteome

    Science.gov (United States)

    Basu, Proma; Kruse, Colin P. S.; Luesse, Darron R.; Wyatt, Sarah E.

    2017-11-01

    The Biological Research in Canisters (BRIC) hardware has been used to house many biology experiments on both the Space Transport System (STS, commonly known as the space shuttle) and the International Space Station (ISS). However, microscopic examination of Arabidopsis seedlings by Johnson et al. (2015) indicated the hardware itself may affect cell morphology. The experiment herein was designed to assess the effects of the BRIC-Petri Dish Fixation Units (BRIC-PDFU) hardware on the transcriptome and proteome of Arabidopsis seedlings. To our knowledge, this is the first transcriptomic and proteomic comparison of Arabidopsis seedlings grown with and without hardware. Arabidopsis thaliana wild-type Columbia (Col-0) seeds were sterilized and bulk plated on forty-four 60 mm Petri plates, of which 22 were integrated into the BRIC-PDFU hardware and 22 were maintained in closed containers at Ohio University. Seedlings were grown for approximately 3 days, fixed with RNAlater® and stored at -80 °C prior to RNA and protein extraction, with proteins separated into membrane and soluble fractions prior to analysis. The RNAseq analysis identified 1651 differentially expressed genes; MS/MS analysis identified 598 soluble and 589 membrane proteins differentially abundant both at p stress responses. Some of these genes and proteins have been previously identified in spaceflight experiments, indicating that these genes and proteins may be perturbed by both conditions.

  19. We Are Not Alone: The iMOP Initiative and Its Roles in a Biology- and Disease-Driven Human Proteome Project.

    Science.gov (United States)

    Tholey, Andreas; Taylor, Nicolas L; Heazlewood, Joshua L; Bendixen, Emøke

    2017-12-01

    Mapping of the human proteome has advanced significantly in recent years and will provide a knowledge base to accelerate our understanding of how proteins and protein networks can affect human health and disease. However, providing solutions to human health challenges will likely fail if insights are exclusively based on studies of human samples and human proteomes. In recent years, it has become evident that human health depends on an integrated understanding of the many species that make human life possible. These include the commensal microorganisms that are essential to human life, pathogens, and food species as well as the classic model organisms that enable studies of biological mechanisms. The Human Proteome Organization (HUPO) initiative on multiorganism proteomes (iMOP) works to support proteome research undertaken on nonhuman species that remain widely under-studied compared with the progress in human proteome research. This perspective argues the need for further research on multiple species that impact human life. We also present an update on recent progress in model organisms, microbiota, and food species, address the emerging problem of antibiotics resistance, and outline how iMOP activities could lead to a more inclusive approach for the human proteome project (HPP) to better support proteome research aimed at improving human health and furthering knowledge on human biology.

  20. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  1. Decreased use of glucocorticoids in biological-experienced patients with rheumatoid arthritis who initiated intravenous abatacept: results from the 2-year ACTION study

    Science.gov (United States)

    Alten, Rieke; Nüßlein, Hubert; Galeazzi, Mauro; Lorenz, Hanns-Martin; Nurmohamed, Michael T; Bensen, William G; Burmester, Gerd R; Peter, Hans-Hartmut; Pavelka, Karel; Chartier, Mélanie; Poncet, Coralie; Rauch, Christiane; Elbez, Yedid; Le Bars, Manuela

    2016-01-01

    Introduction Prolonged glucocorticoid use may increase the risk of adverse safety outcomes, including cardiovascular events. The European League Against Rheumatism and the Canadian Rheumatology Association advise tapering glucocorticoid dose as rapidly as clinically feasible. There is a paucity of published data on RA that adequately describe concomitant treatment patterns. Methods ACTION (AbataCepT In rOutiNe clinical practice) is a non-interventional cohort study of patients from Europe and Canada that investigated the long-term retention of intravenous abatacept in clinical practice. We assessed concomitant glucocorticoids in patients with established RA who had participated in ACTION and received ≥1 biological agent prior to abatacept initiation. Results The analysis included 1009 patients. Glucocorticoids were prescribed at abatacept initiation in 734 (72.7%) patients at a median 7.5 mg/day dose (n=692). Of the patients who remained on abatacept at 24 months, 40.7% were able to decrease their dose of glucocorticoids, including 26.9% who decreased their dose from >5 mg/day to ≤5 mg/day. Conclusion Reduction and/or cessation of glucocorticoid therapy is possible with intravenous abatacept in clinical practice. PMID:26925253

  2. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  3. Hardware-Independent Proofs of Numerical Programs

    Science.gov (United States)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation regardless of the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.
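
    The environment dependence described above is easy to reproduce. The following Python snippet is an illustration added here, not taken from the cited work: it shows how the same sum rounds differently depending on evaluation order, which a compiler or hardware unit is free to change, and which a hardware-independent proof must therefore bound for every ordering.

    ```python
    # Floating-point addition is not associative, so the result depends on the
    # order in which a compiler or FPU evaluates it.
    a, b, c = 0.1, 0.2, 0.3

    left = (a + b) + c     # rounds a + b first
    right = a + (b + c)    # rounds b + c first

    print(left == right)       # False on IEEE-754 binary64
    print(abs(left - right))   # about 1.1e-16, a one-ulp discrepancy

    # The kind of environment-independent bound such analyses rely on:
    # every basic operation satisfies fl(x op y) = (x op y) * (1 + e)
    # with |e| <= 2**-53 for binary64, whatever the scheduling.
    print(2.0 ** -53)
    ```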

  4. Reconfigurable Hardware Adapts to Changing Mission Demands

    Science.gov (United States)

    2003-01-01

    A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.

  5. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  6. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Scientific technical courses are an important component in any student's education. These courses are usually characterised by the fact that the students execute experiments in special laboratories. This leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it doesn't seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place. This lab nevertheless transfers a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, which are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically. Judging and marking are also both performed electronically. Since 2003, the Mobile Hardware Lab has been offered in a completely web-based form.

  7. 4273π: Bioinformatics education on low cost ARM hardware

    Science.gov (United States)

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  8. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  9. eDNA: A Bio-Inspired Reconfigurable Hardware Cell Architecture Supporting Self-organisation and Self-healing

    DEFF Research Database (Denmark)

    Boesen, Michael Reibel; Madsen, Jan

    2009-01-01

    This paper presents the concept of a biologically inspired reconfigurable hardware cell architecture which supports self-organisation and self-healing. Two fundamental processes in biology, namely fertilization-to-birth and cell self-healing, have inspired the development of this cell architecture. ...

  10. A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, Dale; Farrington, Phillip

    2016-01-01

    Within the space flight community, reusability has taken center stage as the new buzzword. In order for reusable hardware to be competitive with its expendable counterpart, two major elements must be closely scrutinized. First, recovery and refurbishment costs must be lower than the development and acquisition costs. Additionally, the reliability for reused hardware must remain the same (or nearly the same) as "first use" hardware. Therefore, it is imperative that a systematic approach be established to enhance the development of reusable systems. However, before the decision can be made on whether it is more beneficial to reuse hardware or to replace it, the parameters that are needed to deem hardware worthy of reuse must be identified. For reusable hardware to be successful, the factors that must be considered are reliability (integrity, life, number of uses), operability (maintenance, accessibility), and cost (procurement, retrieval, refurbishment). These three factors are essential to the successful implementation of reusability while enabling the ability to meet performance goals. Past and present strategies and attempts at reuse within the space industry will be examined to identify important attributes of reusability that can be used to evaluate hardware when contemplating reusable versus expendable options. This paper will examine why reuse must be stated as an initial requirement rather than included as an afterthought in the final design. Late in the process, changes in the overall objective/purpose of components typically have adverse effects that potentially negate the benefits. A methodology for assessing the viability of reusing hardware will be presented by using the Space Shuttle Main Engine (SSME) to validate the approach. Because reliability, operability, and costs are key drivers in making this critical decision, they will be used to assess requirements for reuse as applied to components of the SSME.

  11. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  12. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
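
    As an illustration of the recurrence such hardware evaluates, here is a minimal software sketch of a GFSR generator in Python. The word width and tap positions (p, q) below are illustrative choices, not the parameters of the hardware described in the record.

    ```python
    import random

    class GFSR:
        """Generalized Feedback Shift Register: x[n] = x[n-p] XOR x[n-q]."""
        def __init__(self, seed_words, p=98, q=27, bits=32):
            assert len(seed_words) == p and 0 < q < p
            self.state = list(seed_words)  # the last p generated words
            self.p, self.q = p, q
            self.mask = (1 << bits) - 1
            self.i = 0                     # position of x[n-p] in the circular buffer

        def next(self):
            j = (self.i + self.p - self.q) % self.p       # position of x[n-q]
            word = (self.state[self.i] ^ self.state[j]) & self.mask
            self.state[self.i] = word                     # overwrite oldest word with x[n]
            self.i = (self.i + 1) % self.p
            return word

    # Seed from Python's own PRNG purely for demonstration.
    rng = GFSR([random.getrandbits(32) for _ in range(98)])
    print([hex(rng.next()) for _ in range(4)])
    ```

    The update is a single XOR of two stored words plus a buffer advance, which is why the GFSR family lends itself to fast dedicated hardware.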

  13. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars' worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  14. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
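
    The capacity relationship mentioned above multiplies out directly; the figures in this small Python example are illustrative and not taken from the article.

    ```python
    # capacity = sides x tracks/side x sectors/track x bytes/sector
    sides = 4
    tracks_per_side = 16_383
    sectors_per_track = 63
    bytes_per_sector = 512

    capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
    print(f"{capacity_bytes:,} bytes  (~{capacity_bytes / 1e9:.2f} GB)")
    ```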

  15. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  16. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served though REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  17. Methodology for Assessing Reusability of Spaceflight Hardware

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, L. Dale; Farrington, Phillip

    2017-01-01

    In 2011 the Space Shuttle, the only Reusable Launch Vehicle (RLV) in the world, returned to earth for the final time. Upon retirement of the Space Shuttle, the United States (U.S.) no longer possessed a reusable vehicle or the capability to send American astronauts to space. With the National Aeronautics and Space Administration (NASA) out of the RLV business and now only pursuing Expendable Launch Vehicles (ELV), not only did companies within the U.S. start to actively pursue the development of either RLVs or reusable components, but entities around the world began to venture into the reusable market. For example, SpaceX and Blue Origin are developing reusable vehicles and engines. The Indian Space Research Organization is developing a reusable space plane and Airbus is exploring the possibility of reusing its first stage engines and avionics housed in the flyback propulsion unit referred to as the Advanced Expendable Launcher with Innovative engine Economy (Adeline). Even United Launch Alliance (ULA) has announced plans for eventually replacing the Atlas and Delta expendable rockets with a family of RLVs called Vulcan. Reuse can be categorized as either fully reusable, the situation in which the entire vehicle is recovered, or partially reusable such as the National Space Transportation System (NSTS) where only the Space Shuttle, Space Shuttle Main Engines (SSME), and Solid Rocket Boosters (SRB) are reused. With this influx of renewed interest in reusability for space applications, it is imperative that a systematic approach be developed for assessing the reusability of spaceflight hardware. The partially reusable NSTS offered many opportunities to glean lessons learned; however, when it came to efficient operability for reuse the Space Shuttle and its associated hardware fell short primarily because of its two to four-month turnaround time. Although there have been several attempts at designing RLVs in the past with the X-33, Venture Star and Delta Clipper

  18. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  19. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  20. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  1. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed in order to control the light projection of the optical emitters, to switch the capacitance electrodes, and to synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  2. Hardware-efficient autonomous quantum memory protection.

    Science.gov (United States)

    Leghtas, Zaki; Kirchmair, Gerhard; Vlastakis, Brian; Schoelkopf, Robert J; Devoret, Michel H; Mirrahimi, Mazyar

    2013-09-20

    We propose to encode a quantum bit of information in a superposition of coherent states of an oscillator, with four different phases. Our encoding in a single cavity mode, together with a protection protocol, significantly reduces the error rate due to photon loss. This protection is ensured by an efficient quantum error correction scheme employing the nonlinearity provided by a single physical qubit coupled to the cavity. We describe in detail how to implement these operations in a circuit quantum electrodynamics system. This proposal directly addresses the task of building a hardware-efficient quantum memory and can lead to important shortcuts in quantum computing architectures.
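
    In standard cat-code notation, a sketch of the kind of encoding the abstract describes is given below; the exact logical basis and protocol details are those of the cited work and may differ from this simplified picture. The qubit is stored in even superpositions of coherent states whose phases differ by 90 degrees, and a photon loss flips the photon-number parity rather than destroying the encoded information.

    ```latex
    % Logical states built from coherent states with four phases
    % (normalization omitted); |\alpha|^2 sets the mean photon number.
    \[
      |0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle , \qquad
      |1_L\rangle \propto |i\alpha\rangle + |{-i\alpha}\rangle .
    \]
    % A single photon loss maps even-parity cats to odd-parity cats,
    \[
      \hat{a}\bigl(|\alpha\rangle + |{-\alpha}\rangle\bigr) \propto |\alpha\rangle - |{-\alpha}\rangle ,
    \]
    % so monitoring (or autonomously correcting) the parity protects the
    % stored qubit against the dominant cavity error channel.
    ```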

  3. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors,...

  4. Visual basic application in computer hardware control and data ...

    African Journals Online (AJOL)

    ... hardware device control and data acquisition is demonstrated using Visual Basic and the Speech Application Programming Interface (SAPI) Software Development Kit. To control hardware using Visual Basic, all hardware requests were designed to go through Windows via the printer parallel port, which is accessed and ...

  5. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    with a hardware area constraint and the problem of minimizing hardware area with a system execution time constraint. The target architecture consists of a single microprocessor and a single hardware chip (ASIC, FPGA, etc.) which are connected by a communication channel. The algorithm incorporates a realistic...
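
    Although the record is truncated, the problem class is the classic time/area trade-off. The following Python sketch is not the PACE algorithm itself; it is a minimal knapsack-style dynamic program that minimizes total execution time under a hardware area budget, assuming additive times and areas and ignoring the communication channel that the described target architecture includes.

    ```python
    def partition(tasks, area_budget):
        """tasks: list of (sw_time, hw_time, hw_area); returns (min_time, mapping)."""
        INF = float("inf")
        best = [0] * (area_budget + 1)   # best[a] = min total time using at most a area units
        choices = []                     # per-task decision tables for backtracking
        for sw_time, hw_time, hw_area in tasks:
            new = [INF] * (area_budget + 1)
            picked_hw = [False] * (area_budget + 1)
            for a in range(area_budget + 1):
                new[a] = best[a] + sw_time                      # keep the task in software
                if a >= hw_area and best[a - hw_area] + hw_time < new[a]:
                    new[a] = best[a - hw_area] + hw_time        # move it to hardware
                    picked_hw[a] = True
            best = new
            choices.append(picked_hw)
        # Backtrack to recover which tasks ended up in hardware.
        a, mapping = area_budget, []
        for (sw_time, hw_time, hw_area), picked_hw in zip(reversed(tasks), reversed(choices)):
            if picked_hw[a]:
                mapping.append("HW")
                a -= hw_area
            else:
                mapping.append("SW")
        return best[area_budget], list(reversed(mapping))

    # Three tasks (sw_time, hw_time, hw_area) and 10 area units available:
    print(partition([(8, 2, 6), (5, 1, 5), (4, 3, 4)], area_budget=10))  # (10, ['HW', 'SW', 'HW'])
    ```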

  6. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.

  7. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real time clock configuration, binary input to 7-segment display, creating ...

  8. Hardware system for man-machine interface

    International Nuclear Information System (INIS)

    Niki, Kiyoshi; Tai, Ichirou; Hiromoto, Hiroshi; Inubushi, Hiroyuki; Makino, Teruyuki.

    1988-01-01

    Keeping pace with the recent advance of electronic technology, systems that can present more information efficiently and in an orderly form to operators have rapidly been adopted in place of the conventional man-machine interface for power stations, which comprises indicators, switches and annunciators. With the introduction of new hardware and software, the layout of the central control rooms of power stations and the sharing of roles between man and machine have been reexamined. In this report, the way the man-machine interface in power stations should be and the requirements for the role of operators are summarized, and based on them, the role of man-machine equipment is considered; thereafter, the features and functions of typical new man-machine equipment that is used in power stations at present or can be applied there is described. Finally, an example of how this equipment is applied to power plants as an actual system is shown. The role of the man-machine system in power stations, recent operation monitoring and control, the sharing of roles between hardware and operators, the role of machines, recent typical man-machine interface hardware, and examples of the latest applications are reported. (K.I.)

  9. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010, and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  10. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range, including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  11. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results on computation time and the usefulness of median filtering in radiographic imaging are given.
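
    For reference, a plain NumPy version of the 2-D median filter discussed above is sketched below; the hardware implementation in the record performs the equivalent rank-order sort of the window pixels in parallel rather than sequentially.

    ```python
    import numpy as np

    def median_filter(image, size=3):
        """Sequential 2-D median filter; each output pixel is the middle value of its window."""
        pad = size // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.empty_like(image)
        rows, cols = image.shape
        for r in range(rows):
            for c in range(cols):
                window = padded[r:r + size, c:c + size]
                out[r, c] = np.median(window)   # the rank-order step done in parallel in hardware
        return out

    noisy = np.array([[10, 10, 200],
                      [10, 10,  10],
                      [10, 10,  10]], dtype=np.uint8)
    print(median_filter(noisy))   # the impulse value 200 is removed
    ```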

  12. Hardware development process for Human Research facility applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.

  13. Life sciences flight hardware development for the International Space Station

    Science.gov (United States)

    Kern, V. D.; Bhattacharya, S.; Bowman, R. N.; Donovan, F. M.; Elland, C.; Fahlen, T. F.; Girten, B.; Kirven-Brooks, M.; Lagel, K.; Meeker, G. B.; Santos, O.

    During the construction phase of the International Space Station (ISS), early flight opportunities have been identified (including designated Utilization Flights, UF) on which early science experiments may be performed. The focus of NASA's and other agencies' biological studies on the early flight opportunities is cell and molecular biology; with UF-1 scheduled to fly in fall 2001, followed by flights 8A and UF-3. Specific hardware is being developed to verify design concepts, e.g., the Avian Development Facility for incubation of small eggs and the Biomass Production System for plant cultivation. Other hardware concepts will utilize those early research opportunities onboard the ISS, e.g., an Incubator for sample cultivation, the European Modular Cultivation System for research with small plant systems, an Insect Habitat for support of insect species. Following the first Utilization Flights, additional equipment will be transported to the ISS to expand research opportunities and capabilities, e.g., a Cell Culture Unit, the Advanced Animal Habitat for rodents, an Aquatic Facility to support small fish and aquatic specimens, a Plant Research Unit for plant cultivation, and a specialized Egg Incubator for developmental biology studies. Host systems (Figure 1A, B), e.g., a 2.5 m Centrifuge Rotor (g-levels from 0.01-g to 2-g) for direct comparisons between μg and selectable g levels, the Life Sciences Glovebox for contained manipulations, and Habitat Holding Racks (Figure 1B) will provide electrical power, communication links, and cooling to the habitats. Habitats will provide food, water, light, air and waste management as well as humidity and temperature control for a variety of research organisms. Operators on Earth and the crew on the ISS will be able to send commands to the laboratory equipment to monitor and control the environmental and experimental parameters inside specific habitats. Common laboratory equipment such as microscopes, cryo freezers, radiation

  14. Veggie Hardware Validation Test Preliminary Results and Lessons Learned

    Science.gov (United States)

    Massa, Gioia D.; Dufour, Nicole F.; Smith, T. M.

    2014-01-01

    The Veggie hardware validation test, VEG-01, was conducted on the International Space Station during Expeditions 39 and 40 from May through June of 2014. The Veggie hardware and the VEG-01 experiment payload were launched to station aboard the SpaceX-3 resupply mission in April, 2014. Veggie was installed in an Expedite-the-Processing-of-Experiments-to-Space-Station (ExPRESS) rack in the Columbus module, and the VEG-01 validation test was initiated. Veggie installation was successful, and power was supplied to the unit. The hardware was programmed and the root mat reservoir and plant pillows were installed without issue. As expected, a small amount of growth media was observed in the sealed bags which enclosed the plant pillows when they were destowed. Astronaut Steve Swanson used the wet/dry vacuum to clean up the escaped particles. Water insertion or priming the first plant pillow was unsuccessful as an issue prevented water movement through the quick disconnect. All subsequent pillows were successfully primed, and the initial pillow was replaced with a backup pillow and successfully primed. Six pillows were primed, but only five pillows had plants which germinated. After about a week and a half it was observed that plants were not growing well and that pillow wicks were dry. This indicated that the reservoir was not supplying sufficient water to the pillows via wicking, and so the team reverted to an operational fix which added water directly to the plant pillows. Direct watering of the pillows led to a recovery in several of the stressed plants; a couple of which did not recover. An important lesson learned involved Veggie's bellows. The bellows tended to float and interfere with operations when opened, so Steve secured them to the baseplate during plant tending operations. Due to the perceived intensity of the LED lights, the crew found it challenging to both work under the lights and read crew procedures on their computer. Although the lights are not a safety

  15. Validating gravitational-wave detections: The Advanced LIGO hardware injection system

    Science.gov (United States)

    Biwer, C.; Barker, D.; Batch, J. C.; Betzwieser, J.; Fisher, R. P.; Goetz, E.; Kandhasamy, S.; Karki, S.; Kissel, J. S.; Lundgren, A. P.; Macleod, D. M.; Mullavey, A.; Riles, K.; Rollins, J. G.; Thorne, K. A.; Thrane, E.; Abbott, T. D.; Allen, B.; Brown, D. A.; Charlton, P.; Crowder, S. G.; Fritschel, P.; Kanner, J. B.; Landry, M.; Lazzaro, C.; Millhouse, M.; Pitkin, M.; Savage, R. L.; Shawhan, P.; Shoemaker, D. H.; Smith, J. R.; Sun, L.; Veitch, J.; Vitale, S.; Weinstein, A. J.; Cornish, N.; Essick, R. C.; Fays, M.; Katsavounidis, E.; Lange, J.; Littenberg, T. B.; Lynch, R.; Meyers, P. M.; Pannarale, F.; Prix, R.; O'Shaughnessy, R.; Sigg, D.

    2017-03-01

    Hardware injections are simulated gravitational-wave signals added to the Laser Interferometer Gravitational-wave Observatory (LIGO). The detectors' test masses are physically displaced by an actuator in order to simulate the effects of a gravitational wave. The simulated signal initiates a control-system response which mimics that of a true gravitational wave. This provides an end-to-end test of LIGO's ability to observe gravitational waves. The gravitational-wave analyses used to detect and characterize signals are exercised with hardware injections. By looking for discrepancies between the injected and recovered signals, we are able to characterize the performance of analyses and the coupling of instrumental subsystems to the detectors' output channels. This paper describes the hardware injection system and the recovery of injected signals representing binary black hole mergers, a stochastic gravitational wave background, spinning neutron stars, and sine-Gaussians.

  16. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists readers in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology from this handbook.

  17. Algorithms for Hardware-Based Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Müller Dietmar

    2004-01-01

    Nonlinear spatial transforms and fuzzy pattern classification with unimodal potential functions are established in signal processing. They have proved to be excellent tools in feature extraction and classification. In this paper, we will present a hardware-accelerated image processing and classification system which is implemented on one field-programmable gate array (FPGA). Nonlinear discrete circular transforms generate a feature vector. The features are analyzed by a fuzzy classifier. This principle can be used for feature extraction, pattern recognition, and classification tasks. Implementation in radix-2 structures is possible, allowing fast calculations. Furthermore, the pattern separability properties of these transforms are better than those achieved with the well-known method based on the power spectrum of the Fourier Transform, or on several other transforms. Using different signal flow structures, the transforms can be adapted to different image and signal processing applications.
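
    A minimal sketch of the classification stage is given below, assuming Gaussian-shaped unimodal potential functions; the cited system computes its feature vector with the circular transforms in hardware and may parameterise the potentials differently.

    ```python
    import numpy as np

    def memberships(feature_vec, class_centres, widths):
        """One unimodal potential per class; membership decays with distance from the class centre."""
        d2 = ((class_centres - feature_vec) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * widths ** 2))

    centres = np.array([[0.2, 0.8],    # illustrative class prototypes
                        [0.7, 0.3]])
    widths = np.array([0.25, 0.25])
    x = np.array([0.25, 0.75])          # feature vector from the transform stage
    mu = memberships(x, centres, widths)
    print(mu, "-> class", int(mu.argmax()))
    ```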

  18. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, outline some future directions.

  19. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is to give an overview of existing concepts in state-of-the-art systems and to enable the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS' complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding the measurement of needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  20. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008 -- SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(d) TTL output; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10--30V) input module; Allen-Bradley 1771-OBD DC (10--60V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  1. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge of the θ13 mixing angle using two similar detectors placed at ~280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an outlook on the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown, as well as first results from a prototype system.

  2. Compressive Sensing Image Sensors-Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Shahram Shirani

    2013-04-01

    Full Text Available The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.
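
    As a concrete illustration of the CS encoding stage that such imagers implement in the optical or analog domain, the sketch below projects one image block onto a small set of random patterns, y = Φx. The 8×8 block size, the ±1 Bernoulli patterns, and the NumPy host-side simulation are illustrative assumptions rather than details of any reviewed sensor.

        import numpy as np

        def cs_encode_block(block, m, rng):
            """Simulate CS encoding of one image block: y = Phi @ x."""
            x = block.reshape(-1).astype(float)            # vectorise the block
            # random +/-1 (Bernoulli) measurement patterns, one row per measurement
            phi = rng.choice([-1.0, 1.0], size=(m, x.size))
            return phi @ x                                 # m measurements instead of x.size pixels

        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(8, 8))          # stand-in for an 8x8 sensor block
        y = cs_encode_block(block, m=16, rng=rng)          # 16 measurements for 64 pixels (4x compression)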

  3. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    The most commonly used algorithm for non-cartesian MRI reconstruction is the gridding algorithm [1]. It consists of three steps: 1) convolution with a gridding kernel and resampling on a cartesian grid, 2) inverse FFT, and 3) deapodization. On the CPU the convolution step is by far the most time consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as a fast parallel processor provided that algorithms are reformulated in a parallel solution. The purpose of this work is to test the hypothesis that a non-cartesian reconstruction can be efficiently implemented on graphics hardware giving a significant speedup compared to CPU based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that have limited the speed of previous GPU gridding algorithms [2].
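
    The convolution/resampling step that dominates the run time can be sketched in host code as follows; a Gaussian kernel, a small fixed kernel width, and plain Python loops are assumptions made for clarity (practical gridders typically use a Kaiser-Bessel kernel, and the cited work parallelises exactly this loop nest on the GPU).

        import numpy as np

        def grid_samples(kx, ky, data, n, width=2, sigma=0.8):
            """Convolve non-cartesian k-space samples onto an n x n cartesian grid.

            kx, ky : sample coordinates already scaled to grid units (0..n)
            data   : complex sample values
            """
            grid = np.zeros((n, n), dtype=complex)
            for x, y, d in zip(kx, ky, data):
                x0, y0 = int(np.floor(x)), int(np.floor(y))
                for gx in range(x0 - width, x0 + width + 1):
                    for gy in range(y0 - width, y0 + width + 1):
                        if 0 <= gx < n and 0 <= gy < n:
                            # Gaussian kernel weight (stand-in for Kaiser-Bessel)
                            w = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
                            grid[gy, gx] += w * d
            return grid

        kx = np.array([10.2, 31.7, 47.9])
        ky = np.array([12.8, 30.1, 50.3])
        data = np.array([1 + 0j, 0.5 + 0.5j, -0.3j])
        grid = grid_samples(kx, ky, data, n=64)
        # remaining steps: image = deapodize(np.fft.ifft2(np.fft.ifftshift(grid)))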

  4. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  5. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for I and C Systems of SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications of KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with the other systems, and data communication requirements that are applicable to SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems focusing on the microprocessor and the communication interface, and repeated the analysis for analog systems focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS.

  6. Swarm behavioral sorting based on robotic hardware variation

    OpenAIRE

    Shang, Beining; Crowder, Richard; Zauner, Klaus-Peter

    2014-01-01

    Swarm robotic systems can offer advantages of robustness, flexibility and scalability, just like social insects. One of the issues that researchers are facing is the hardware variation when implementing real robotic swarms. Identical software cannot guarantee identical behaviors among all robots due to hardware differences between swarm members. We propose a novel approach for sorting swarm robots according to their hardware differences. This method is based on the large number of interaction...

  7. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest, and open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  8. Hardware/Software Co-design using Primitive Interface

    OpenAIRE

    Navin Chourasia; Puran Gaur

    2011-01-01

    Most engineering designs can be viewed as systems, i.e., as collections of several components whose combined operation provides useful services. Components can be heterogeneous in nature and their interaction may be regulated by some simple or complex means. The interface between hardware and software plays a very important role in the co-design of an embedded system. Hardware/software co-design means meeting system-level objectives by exploiting the synergism of hardware and software through their co...

  9. Large research infrastructure for Earth-Ocean Science: Challenges of multidisciplinary integration across hardware, software, and people networks

    Science.gov (United States)

    Best, M.; Barnes, C. R.; Johnson, F.; Pautet, L.; Pirenne, B.; Founding Scientists Of Neptune Canada

    2010-12-01

    NEPTUNE Canada is operating a regional cabled ocean observatory across the northern Juan de Fuca Plate in the northeastern Pacific. Installation of the first suite of instruments and connectivity equipment was completed in 2009, so this system now provides the continuous power and bandwidth to collect integrated data on physical, chemical, geological, and biological gradients at temporal resolutions relevant to the dynamics of the earth-ocean system. The building of this facility integrates hardware, software, and people networks. Hardware progress to date includes: installation of the 800 km powered fiber-optic backbone in the Fall of 2007; development of Nodes and Junction Boxes; acquisition/development and testing of Instruments; development of mobile instrument platforms such as a) a Vertical Profiler and b) a Crawler (University of Bremen); and integration of over a thousand components into an operating subsea sensor system. Nodes, extension cables, junction boxes, and instruments were installed at 4 out of 5 locations in 2009; the fifth Node is instrumented in September 2010. In parallel, software and hardware systems are acquiring, archiving, and delivering the continuous real-time data through the internet to the world - already many terabytes of data. A web environment (Oceans 2.0) to combine this data access with analysis and visualization, collaborative tools, interoperability, and instrument control is being released. Finally, a network of scientists and technicians are contributing to the process in every phase, and data users already number in the thousands. Initial experiments were planned through a series of workshops and international proposal competitions. At inshore Folger Passage, Barkley Sound, understanding controls on biological productivity helps evaluate the effects that marine processes have on fish and marine mammals. Experiments around Barkley Canyon allow quantification of changes in biological and chemical activity associated with

  10. {sup 18}F-FDG PET/CT evaluation of children and young adults with suspected spinal fusion hardware infection

    Energy Technology Data Exchange (ETDEWEB)

    Bagrosky, Brian M. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States); Hayes, Kari L.; Fenton, Laura Z. [University of Colorado School of Medicine, Department of Pediatric Radiology, Children's Hospital Colorado, 12123 E. 16th Ave., Box 125, Aurora, CO (United States); Koo, Phillip J. [University of Colorado School of Medicine, Department of Radiology, Division of Nuclear Medicine, Aurora, CO (United States)

    2013-08-15

    patients with suspected spinal hardware infection. Because pneumonia was diagnosed as often as spinal hardware infection, initial chest radiography should also be performed. (orig.)

  11. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  12. Advancing the education in molecular diagnostics: the IFCC-Initiative "Clinical Molecular Biology Curriculum" (C-CMBC); a ten-year experience.

    Science.gov (United States)

    Lianidou, Evi; Ahmad-Nejad, Parviz; Ferreira-Gonzalez, Andrea; Izuhara, Kenji; Cremonesi, Laura; Schroeder, Maria-Eugenia; Richter, Karin; Ferrari, Maurizio; Neumaier, Michael

    2014-09-25

    Molecular techniques are becoming commonplace in the diagnostic laboratory. Their applications influence all major phases of laboratory medicine including predisposition/genetic risk, primary diagnosis, therapy stratification and prognosis. Readily available laboratory hardware and wetware (i.e. consumables and reagents) foster rapid dissemination to countries that are just establishing molecular testing programs. Appropriate skill levels extending beyond the technical procedure are required for analytical and diagnostic proficiency that is mandatory in molecular genetic testing. An international committee (C-CMBC) of the International Federation for Clinical Chemistry (IFCC) was established to disseminate skills in molecular genetic testing in member countries embarking on the respective techniques. We report the ten-year experience with different teaching and workshop formats for beginners in molecular diagnostics. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    Full Text Available This work presents a new approach with details on the integrated platform and hardware architecture for nanorobot applications in epidemic control, which should enable real time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high precision pervasive biomedical monitoring with real time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that can bring nanorobot applications out of the laboratory as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to help protect a population against a targeted epidemic disease.

  14. Live HDR video streaming on commodity hardware

    Science.gov (United States)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.

  15. 8-Channel Broadband Laser Ranging Hardware Development

    Science.gov (United States)

    Bennett, Corey; La Lone, Brandon; Younk, Patrick; Daykin, Ed; Rhodes, Michelle; Perry, Daniel; Tran, Vu; Miller, Edward

    2017-06-01

    Broadband Laser Ranging (BLR) is a new diagnostic being developed to precisely measure the position vs. time of surfaces, shock break out, particle clouds, jets, and debris moving at kilometers per second speeds. The instrument uses interferometry to encode distance into a modulation in the spectrum of pulses from a mode-locked fiber laser and uses a dispersive Fourier transformation to map the spectral modulation into time. Range information is thereby recorded on a fast oscilloscope at the repetition rate of the laser, approximately every 50 ns. Current R&D is focused on developing a compact 8-channel system utilizing one laser and one high-speed oscilloscope. This talk will emphasize the hardware being developed for applications at the Contained Firing Facility at LLNL, but has a common architecture being developed in collaboration with NSTec and LANL for applications at multiple other facilities. Prepared by LLNL under Contract DE-AC52-07NA27344, by LANL under Contract DE-AC52-06NA25396, and by NSTec Contract DE-AC52-06NA25946.

  16. Hardware image assessment for wireless endoscopy capsules.

    Science.gov (United States)

    Khorsandi, M A; Karimi, N; Samavi, S; Hajabdollahi, M; Soroushmehr, S M R; Ward, K; Najarian, K

    2016-08-01

    Wireless capsule endoscopy is a new technology in the realm of telemedicine that has many advantages over the traditional endoscopy systems. Transmitted images should help diagnosis of diseases of the gastrointestinal tract. Two important technical challenges for the manufacturers of these capsules are power consumption and size of the circuitry. Also, the system must be fast enough for real-time processing of image or video data. To solve this problem, many hardware designs have been proposed for implementation of the image processing unit. In this paper we propose an architecture that could be used for the assessment of endoscopy images. The assessment allows avoidance of transmission of medically useless images. Hence, volume of data is reduced for more efficient transmission of images by the endoscopy capsule. This is done by color space conversion and moment calculation of images captured by the capsule. The inputs of the proposed architecture are RGB image frames and the outputs are images with converted colors and calculated image moments. Experimental results indicate that the proposed architecture has low complexity and is appropriate for a real-time application.
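
    The assessment stage described above (color space conversion followed by moment calculation) can be sketched as follows; the choice of YCbCr, the use of raw moments up to second order, and the NumPy implementation are assumptions made for illustration, since the abstract does not fix these details.

        import numpy as np

        def rgb_to_ycbcr(img):
            """Convert an HxWx3 uint8 RGB frame to YCbCr (float, BT.601 full range)."""
            img = img.astype(float)
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            y  =         0.299    * r + 0.587    * g + 0.114    * b
            cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
            cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
            return np.stack([y, cb, cr], axis=-1)

        def raw_moments(channel, max_order=2):
            """Raw image moments m_pq = sum_x sum_y x**p * y**q * I(y, x)."""
            h, w = channel.shape
            ys, xs = np.mgrid[0:h, 0:w]
            return {(p, q): float(np.sum((xs ** p) * (ys ** q) * channel))
                    for p in range(max_order + 1) for q in range(max_order + 1)}

        frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in capsule frame
        ycbcr = rgb_to_ycbcr(frame)
        moments = raw_moments(ycbcr[..., 0])   # moments of the luma channel feed the assessment step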

  17. A Hardware Track Finder for ATLAS Trigger

    CERN Document Server

    Volpi, G; The ATLAS collaboration; Andreazza, A; Citterio, M; Favareto, A; Liberali, V; Meroni, C; Riva, M; Sabatini, F; Stabile, A; Annovi, A; Beretta, M; Castegnaro, A; Bevacqua, V; Crescioli, F; Francesco, C; Dell'Orso, M; Giannetti, P; Magalotti, D; Piendibene, M; Roda, C; Sacco, I; Tripiccione, R; Fabbri, L; Franchini, M; Giorgi, F; Giannuzzi, F; Lasagni, F; Sbarra, C; Valentinetti, S; Villa, M; Zoccoli, A; Lanza, A; Negri, A; Vercesi, V; Bogdan, M; Boveia, A; Canelli, F; Cheng, Y; Dunford, M; Li, H L; Kapliy, A; Kim, Y K; Melachrinos, C; Shochet, M; Tang, F; Tang, J; Tuggle, J; Tompkins, L; Webster, J; Atkinson, M; Cavaliere, V; Chang, P; Kasten, M; McCarn, A; Neubauer, M; Hoff, J; Liu, T; Okumura, Y; Olsen, J; Penning, B; Todri, A; Wu, J; Drake, G; Proudfoot, J; Zhang, J; Blair, R; Anderson, J; Auerbach, B; Blazey, G; Kimura, N; Yorita, K; Sakurai, Y; Mitani, T; Iizawa, T

    2012-01-01

    The existing three level ATLAS trigger system is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to ~400 Hz for permanent storage at the LHC design luminosity of 10^34 cm^-2 s^-1. When the LHC reaches beyond the design luminosity, the load on the Level-2 trigger system will significantly increase due to both the need for more sophisticated algorithms to suppress background and the larger event sizes. The Fast TracKer (FTK) is a custom electronics system that will operate at the full Level-1 accepted rate of 100 kHz and provide high quality tracks at the beginning of processing in the Level-2 trigger, by performing track reconstruction in hardware with massive parallelism of associative memories and FPGAs. The performance in important physics areas including b-tagging, tau-tagging and lepton isolation will be demonstrated with the ATLAS MC simulation at different LHC luminosities. The system design will be overviewed. The latest R&D progress of individual components...

  18. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  19. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  20. Mechanics of Granular Materials labeled hardware

    Science.gov (United States)

    2000-01-01

    Mechanics of Granular Materials (MGM) flight hardware takes two twin double locker assemblies in the Space Shuttle middeck or the Spacehab module. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: NASA/MSFC).

  1. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  2. Building on the concept of marine biological valuation with respect to translating it to a practical protocol: View points derived from a joint ENCORA--MARBEF initiative

    Directory of Open Access Journals (Sweden)

    Tomasz Zarzycki

    2007-12-01

    Full Text Available Marine biological valuation provides a comprehensive concept for assessing the intrinsic value of subzones within a study area. This paper gives an update on the concept of marine biological valuation as described by Derous et al. (2007). This concept was based on a literature review of existing ecological valuation criteria and the consensus reached by a discussion group of experts during an international workshop in December 2004. The concept was discussed during an ENCORA-MARBEF workshop in December 2006, which resulted in the fine-tuning of the concept of marine biological valuation, especially with respect to its applicability to marine areas.

  3. The "Daphnia" Lynx Mark I Suborbital Flight Experiment: Hardware Qualification at the Drop Tower Bremen

    Science.gov (United States)

    Knie, Miriam; Schoppmann, Kathrin; Eck, Hendrik; Ribeiro, Bernard Wolfschoon; Laforsch, Christian

    2016-06-01

    The Drop Tower Bremen, a ground-based facility enabling research under real microgravity conditions, is an excellent platform for testing new types of experimental hardware to ensure full performance when deployed in costly and rare flight opportunities such as suborbital flights. Here we describe the "Daphnia" experiment which will fly on XCOR Aerospace Lynx Mark I and our experience from the hardware tests with the catapult system at the drop tower. The aim of the "Daphnia" experiment is to obtain data on the biological performance of daphnids and predator-prey interactions in microgravity, which are important for the development of aquatic bioregenerative life support systems (BLSS). The experiment consists of two subunits: The first unit is dedicated to predator-prey interactions, where behavioural analysis should reveal if microgravity interferes with prey (Daphnia) detection or feeding and therefore may interrupt the trophic cascade. The functioning of such an artificial food web is indispensable for a long-lasting BLSS suitable for long-duration manned space missions or Earth-based explorations to extreme habitats. The second unit is designed to investigate the impact of microgravity on gene expression and the cytoskeleton in Daphnia. Next to data collection, the real microgravity conditions at the drop tower have helped to identify the weak points of the "Daphnia" experimental hardware and led to further improvement. Hence, the drop tower is ideal for testing new experimental hardware, which is indispensable before the implementation in suborbital flights.

  4. Demonstrating Hybrid Learning in a Flexible Neuromorphic Hardware System.

    Science.gov (United States)

    Friedmann, Simon; Schemmel, Johannes; Grubl, Andreas; Hartel, Andreas; Hock, Matthias; Meier, Karlheinz

    2017-02-01

    We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in implementable learning mechanisms while keeping high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. This processor is operating in parallel with a fully parallel neuromorphic system consisting of an array of synapses connected to analog, continuous time neuron circuits. Novel analog correlation sensor circuits process spike events for each synapse in parallel and in real-time. The processor uses this pre-processing to compute new weights possibly using additional information following its program. Therefore, to a certain extent, learning rules can be defined in software giving a large degree of flexibility. Synapses realize correlation detection geared towards Spike-Timing Dependent Plasticity (STDP) as central computational primitive in the analog domain. Operating at a speed-up factor of 1000 compared to biological time-scale, we measure time-constants from tens to hundreds of micro-seconds. We analyze variability across multiple chips and demonstrate learning using a multiplicative STDP rule. We conclude that the presented approach will enable flexible and efficient learning as a platform for neuroscientific research and technological applications.
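
    The multiplicative STDP rule used for the learning demonstration can be illustrated with a sketch of the weight update; the time constants, learning rates, and weight bounds below are assumed values for illustration, not the ones used on the chip.

        import numpy as np

        def multiplicative_stdp(w, dt, a_plus=0.01, a_minus=0.012, tau=20e-3, w_max=1.0):
            """One illustrative multiplicative STDP update.

            w  : current synaptic weight in [0, w_max]
            dt : t_post - t_pre in seconds (positive = pre spike before post spike)
            Potentiation scales with the remaining headroom (w_max - w),
            depression scales with the current weight w.
            """
            if dt > 0:   # causal pairing -> potentiate
                return w + a_plus * (w_max - w) * np.exp(-dt / tau)
            else:        # anti-causal pairing -> depress
                return w - a_minus * w * np.exp(dt / tau)

        w = 0.5
        for dt in (5e-3, -8e-3, 2e-3):          # a few pre/post spike-time differences
            w = multiplicative_stdp(w, dt)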

  5. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    Full Text Available In this paper, a novel encryption scheme with a time based key technique on an FPGA is presented. The time based key technique ensures that the right key is entered at the right time and hence, vulnerability of encryption through brute force attack is eliminated. Presently available encryption systems suffer from brute force attack and in such a case, the time taken for breaking a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key so that the same system can defend against brute force attack more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key with their concatenated value representing the delay the system has to wait. This forms the time based key concept. Also the key based function selection from a pool of functions enhances the confusion and diffusion to defend against linear and differential attacks while the time factor inclusion makes the brute force attack nearly impossible. In the proposed scheme, the key scheduler is implemented on the FPGA and generates the right key at the right time intervals; it is connected to a NIOS-II processor (a soft-core microcontroller instantiated on the Altera FPGA) that communicates the keys to the personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
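
    A software sketch of the time based key idea (continuous key rotation, with four bits of the rotated key fixing the waiting time before the next subkey becomes valid) is given below; the rotation step, key width, and one-millisecond time unit are illustrative assumptions, not details taken from the FPGA design.

        import time

        def rotate_left(value, bits, width):
            """Rotate a width-bit integer left by `bits` positions."""
            mask = (1 << width) - 1
            return ((value << bits) | (value >> (width - bits))) & mask

        def time_based_subkeys(key, width=64, rounds=8):
            """Yield (subkey, delay) pairs: the key is rotated each round and the
            low four bits of the rotated key give the number of time units to
            wait before the subkey becomes valid."""
            k = key
            for _ in range(rounds):
                k = rotate_left(k, 1, width)
                delay_units = k & 0xF              # four bits drawn from the key
                yield k, delay_units

        for subkey, delay in time_based_subkeys(0xDEADBEEFCAFEBABE):
            time.sleep(delay * 1e-3)               # wait the key-derived interval (1 ms units assumed)
            # ... use `subkey` for this round of encryption only during its window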

  6. Neutron Imaging for Selective Laser Melting Inconel Hardware with Internal Passages

    Science.gov (United States)

    Tramel, Terri L.; Norwood, Joseph K.; Bilheux, Hassina

    2014-01-01

    Additive Manufacturing is showing great promise for the development of new innovative designs and large potential life cycle cost reduction for the Aerospace Industry. However, more development work is required to move this technology into space flight hardware production. With selective laser melting (SLM), hardware that once consisted of multiple, carefully machined and inspected pieces joined together can be made in one part. However, standard inspection techniques cannot be used to verify that the internal passages are within dimensional tolerances or surface finish requirements. NASA/MSFC traveled to Oak Ridge National Lab's (ORNL) Spallation Neutron Source to perform some non-destructive, proof of concept imaging measurements to assess the capability to verify internal dimensional tolerances and internal passage surface roughness. This presentation will describe 1) the goals of this proof of concept testing, 2) the lessons learned when designing and building these Inconel 718 test specimens to minimize beam time, 3) the neutron imaging test setup and test procedure to get the images, 4) the initial results in images, volume and a video, 5) the assessment of using this imaging technique to gather real data for designing internal flow passages in SLM manufacturing aerospace hardware, and lastly 6) how proper cleaning of the internal passages is critically important. In summary, the initial results are very promising and continued development of a technique to assist in SLM development for aerospace components is desired by both NASA and ORNL. A plan forward that benefits both ORNL and NASA will also be presented, based on the promising initial results. The initial images and volume reconstruction showed that clean, clear images of the internal passages geometry are obtainable. These clear images of the internal passages of simple geometries will be compared to the build model to determine any differences. One surprising result was that a new cleaning

  7. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
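
    A minimal software model of the token-counter idea is sketched below; the byte-window policy and the class interface are assumptions for illustration, since the abstract only states that a hardware token counter tracks the bytes put on the network by remote get operations.

        class PacketPacer:
            """Minimal model of token-based packet pacing: bytes in flight from
            remote-get replies are tracked in a counter, and new packets are
            injected only while the counter stays under a window."""

            def __init__(self, window_bytes):
                self.window = window_bytes
                self.in_flight = 0                 # plays the role of the hardware token counter

            def can_inject(self, packet_len):
                return self.in_flight + packet_len <= self.window

            def on_inject(self, packet_len):
                self.in_flight += packet_len       # bytes put on the network

            def on_ack(self, packet_len):
                self.in_flight -= packet_len       # data delivered, budget returned

        pacer = PacketPacer(window_bytes=4096)
        for pkt in (1024, 2048, 2048):
            if pacer.can_inject(pkt):
                pacer.on_inject(pkt)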

  8. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  9. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and software for acquisition. The hardware consists of an analog-to-digital conversion card, developed in wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. The acquisitions are made in list or frame mode. (C.G.C.)

  10. A Practical Introduction to HardwareSoftware Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  11. Hardware implementation of the ORNL fissile mass flow monitor

    International Nuclear Information System (INIS)

    McEvers, J.; Sumner, J.; Jones, R.; Ferrell, R.; Martin, C.; Uckan, T.; March-Leuba, J.

    1998-01-01

    This paper provides an overall description of the implementation of the Oak Ridge National Laboratory (ORNL) Fissile Mass Flow Monitor, which is part of a Blend Down Monitoring System (BDMS) developed by the US Department of Energy (DOE). The Fissile Mass Flow Monitor is designed to measure the mass flow of fissile material through a gaseous or liquid process stream. It consists of a source-modulator assembly, a detector assembly, and a cabinet that houses all control, data acquisition, and supporting electronics equipment. The development of this flow monitor was first funded by DOE/NE in September 1995, and an initial demonstration by ORNL was described in previous INMM meetings. This methodology was chosen by DOE/NE for implementation in November 1996, and the hardware/software development is complete. Successful installation and operation of the complete BDMS have been demonstrated in the Paducah Gaseous Diffusion Plant (PGDP), which is operated by Lockheed Martin Utility Services, Inc. for the US Enrichment Corporation and regulated by the Nuclear Regulatory Commission. Equipment for two BDMS units has been shipped to the Russian Federation.

  12. Secure management of biomedical data with cryptographic hardware.

    Science.gov (United States)

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2012-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicitly and potentially identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations.

  13. Logistics hardware and services control system

    Science.gov (United States)

    Koromilas, A.; Miller, K.; Lamb, T.

    1973-01-01

    Software system permits onsite direct control of logistics operations, which include spare parts, initial installation, tool control, and repairable parts status and control, through all facets of operations. System integrates logistics actions and controls receipts, issues, loans, repairs, fabrications, and modifications and assets in predicting and allocating logistics parts and services effectively.

  14. Review of the treatment of psoriatic arthritis with biological agents: choice of drug for initial therapy and switch therapy for non-responders

    Directory of Open Access Journals (Sweden)

    D'Angelo S

    2017-03-01

    Full Text Available Salvatore D’Angelo,1 Giuseppina Tramontano,1 Michele Gilio,1 Pietro Leccese,1 Ignazio Olivieri1,2 1Rheumatology Institute of Lucania (IRel) - Rheumatology Department of Lucania, San Carlo Hospital of Potenza and Madonna delle Grazie Hospital of Matera, Potenza and Matera, 2Basilicata Ricerca Biomedica (BRB) Foundation, Potenza, Italy Abstract: Psoriatic arthritis (PsA) is a heterogeneous chronic inflammatory disease with a broad clinical spectrum and variable course. It can involve musculoskeletal structures as well as skin, nails, eyes, and gut. The management of PsA has changed tremendously in the last decade, thanks to an earlier diagnosis, an advancement in pharmacological therapies, and a wider application of a multidisciplinary approach. The commercialization of tumor necrosis factor inhibitors (adalimumab, certolizumab pegol, etanercept, golimumab, and infliximab) as well as interleukin (IL)-12/23 (ustekinumab) and IL-17 (secukinumab) inhibitors is representative of a revolution in the treatment of PsA. No evidence-based strategies are currently available for guiding the rheumatologist to prescribe biological drugs. Several international and national recommendation sets are currently available with the aim to help rheumatologists in everyday clinical practice management of PsA patients treated with biological therapy. Since no specific biological agent has been demonstrated to be more effective than others, the drug choice should be made according to the available safety data, the presence of extra-articular manifestations, the patient’s preferences (e.g., administration route), and the drug price. However, future studies directly comparing different biological drugs and assessing the efficacy of treatment strategies specific for PsA are urgently needed. Keywords: psoriatic arthritis, treatment, biological drugs, TNF inhibitors, ustekinumab, secukinumab

  15. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
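
    One of the techniques named above, using the C preprocessor together with JIT compilation to factor in hardware characteristics, can be sketched as follows; the kernel, the #define names, and the device-capability dictionary are hypothetical, and the specialised source would then be handed to the platform's runtime compiler (clBuildProgram or an equivalent binding).

        KERNEL_TEMPLATE = """
        __kernel void scale(__global const float *src, __global float *dst) {
            int i = get_global_id(0);
            /* USE_VECTOR_WIDTH could select a vectorised path on wide-SIMD hardware */
            dst[i] = GAIN * src[i];
        }
        """

        def specialise_kernel(device_caps):
            """Prepend hardware-specific #defines so the OpenCL JIT compiler can
            eliminate branches and pick the right code path at build time."""
            defines = [
                "#define GAIN {}f".format(device_caps["gain"]),
                "#define USE_VECTOR_WIDTH {}".format(device_caps["preferred_vector_width"]),
            ]
            return "\n".join(defines) + "\n" + KERNEL_TEMPLATE

        src = specialise_kernel({"gain": 2.0, "preferred_vector_width": 4})
        # `src` is now plain OpenCL C text, specialised for one device, ready for runtime compilation.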

  16. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9×9-pixel window generator; b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
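
    A behavioural sketch of the bilateral subtraction operation is given below; Gaussian spatial and range weights are an assumption made for illustration (the article only states that the weights depend on pixel values and window size), and the nested Python loops stand in for the parallel pipelines of the FPGA design.

        import numpy as np

        def bilateral_subtract(img, radius=4, sigma_s=3.0, sigma_r=25.0):
            """Bilateral subtraction: subtract an edge-preserving smoothed image
            from the original to suppress low-frequency background.

            A (2*radius+1)^2 window is used; radius=4 gives the 9x9 window of the text.
            """
            h, w = img.shape
            img = img.astype(float)
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                    patch = img[y0:y1, x0:x1]
                    ys, xs = np.mgrid[y0:y1, x0:x1]
                    w_spatial = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma_s ** 2))
                    w_range = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
                    wgt = w_spatial * w_range
                    out[y, x] = img[y, x] - (wgt * patch).sum() / wgt.sum()
            return out

        filtered = bilateral_subtract(np.random.rand(32, 32) * 255)   # small stand-in image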

  17. FPGA Acceleration by Dynamically-Loaded Hardware Libraries

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Nannarelli, Alberto; Re, Marco

    Hardware acceleration is a viable solution to obtain energy efficiency in data intensive computation. In this work, we present a hardware framework to dynamically load hardware libraries, HLL, on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up and energy efficiency can be obtained by HLL acceleration on system-on-chips where reconfigurable fabric is placed next to the CPUs.

  18. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  19. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents a novel work on hardware realization of symmetric image encryption utilizing chaos based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.
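
    The chaos-driven keystream idea behind a stream cipher of this kind can be sketched as follows; the discrete logistic map, the byte quantisation, and the XOR combiner are stand-ins chosen for illustration, whereas the thesis itself uses continuous chaotic systems with a generalized post-processing stage.

        def chaotic_keystream(n_bytes, x0=0.4123, r=3.99):
            """Generate a keystream from the logistic map x -> r*x*(1-x).

            The logistic map is only a stand-in to illustrate chaos-driven
            keystream generation; it is not the system used in the thesis.
            """
            x = x0
            out = bytearray()
            for _ in range(n_bytes):
                x = r * x * (1.0 - x)
                out.append(int(x * 256) & 0xFF)    # quantise the chaotic state to one byte
            return bytes(out)

        def xor_cipher(data, key_seed):
            ks = chaotic_keystream(len(data), x0=key_seed)
            return bytes(d ^ k for d, k in zip(data, ks))   # same call encrypts and decrypts

        cipher = xor_cipher(b"image block", key_seed=0.731)
        plain = xor_cipher(cipher, key_seed=0.731)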

  20. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  1. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations on some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. It focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using linux software RAID and IDE cards and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  2. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  3. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a given software radiocommunication application.

  4. Towards hardware-intrinsic security foundations and practice

    CERN Document Server

    Sadeghi, Ahmad-Reza; Tuyls, Pim

    2010-01-01

    Hardware-intrinsic security is a young field dealing with secure secret key storage. This book features contributions from researchers and practitioners with backgrounds in physics, mathematics, cryptography, coding theory and processor theory.

  5. International Space Station (ISS) Addition of Hardware - Computer Generated Art

    Science.gov (United States)

    1995-01-01

    This computer generated scene of the International Space Station (ISS) represents the first addition of hardware following the completion of Phase II. The 8-A Phase shows the addition of the S-9 truss.

  6. Preventive Safety Measures: A Guide to Security Hardware.

    Science.gov (United States)

    Gottwalt, T. J.

    2003-01-01

    Emphasizes the importance of an annual security review of a school facility's door hardware and provides a description of the different types of locking devices typically used on schools and where they are best applied. (EV)

  7. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generate an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
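
    The binding step can be sketched in software as follows; combining the two PUF values by XOR and hashing the result, and using the binding value as keyed material for challenge responses, are assumptions made for illustration, since the abstract does not specify the binding logic or the cryptographic unit in detail.

        import hashlib

        def binding_puf_value(internal_puf: bytes, external_puf: bytes) -> bytes:
            """Combine the device's internal PUF value with the PUF value of the
            physical structure into a single binding value."""
            mixed = bytes(a ^ b for a, b in zip(internal_puf, external_puf))
            return hashlib.sha256(mixed).digest()

        def authenticate(binding_value: bytes, challenge: bytes) -> bytes:
            """Answer a challenger using the binding value as keyed material."""
            return hashlib.sha256(binding_value + challenge).digest()

        binding = binding_puf_value(b"\x12" * 16, b"\x5a" * 16)   # placeholder PUF readouts
        response = authenticate(binding, b"nonce-from-challenger")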

  8. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  9. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The computer graphics system performance is increasing faster than any other computing application. Algorithms for line clipping against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware development and significant increase of performance, clipping is still a bottleneck of any graphical system. So its implementation in hardware is essential for real time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, then formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
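
    The positional code generator suggests an outcode-based scheme in the style of Cohen-Sutherland; the sketch below assumes that interpretation and shows the code generation plus the trivial accept/reject decision that precedes the clipping unit.

        # Region codes for a clip window [xmin, xmax] x [ymin, ymax]
        INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

        def positional_code(x, y, xmin, ymin, xmax, ymax):
            """Positional (out)code of a point relative to the clip window."""
            code = INSIDE
            if x < xmin:
                code |= LEFT
            elif x > xmax:
                code |= RIGHT
            if y < ymin:
                code |= BOTTOM
            elif y > ymax:
                code |= TOP
            return code

        def trivial_decision(p0, p1, window):
            """The fast path a hardware clipper resolves from the two codes alone:
            'accept' (both endpoints inside), 'reject' (both outside the same side),
            or 'clip' (the segment must go on to the clipping unit)."""
            c0 = positional_code(*p0, *window)
            c1 = positional_code(*p1, *window)
            if c0 == 0 and c1 == 0:
                return "accept"
            if c0 & c1:
                return "reject"
            return "clip"

        print(trivial_decision((2, 3), (8, 9), (0, 0, 10, 10)))   # accept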

  10. Testing the LIGO inspiral analysis with hardware injections

    International Nuclear Information System (INIS)

    Brown, D A

    2004-01-01

    Injection of simulated binary inspiral signals into detector hardware provides an excellent test of the inspiral detection pipeline. By recovering the physical parameters of an injected signal, we test our understanding of both instrumental calibration and the data analysis pipeline. We describe an inspiral search code and results from hardware injection tests and demonstrate that injected signals can be recovered by the data analysis pipeline. The parameters of the recovered signals match those of the injected signals.

  11. Fifty Years of Observing Hardware and Human Behavior

    Science.gov (United States)

    McMann, Joe

    2011-01-01

    During this half-day workshop, Joe McMann presented the lessons learned during his 50 years of experience in both industry and government, which included all U.S. manned space programs, from Mercury to the ISS. He shared his thoughts about hardware and people and what he has learned from first-hand experience. Included were such topics as design, testing, design changes, development, failures, crew expectations, hardware, requirements, and meetings.

  12. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  13. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  14. A Survey on Hardware Implementations of Visual Object Trackers

    OpenAIRE

    El-Shafie, Al-Hussein A.; Habib, S. E. D.

    2017-01-01

    Visual object tracking is an active topic in the computer vision domain with applications extending over numerous fields. The main sub-tasks required to build an object tracker (e.g. object detection, feature extraction and object tracking) are computation-intensive. In addition, real-time operation of the tracker is indispensable for almost all of its applications. Therefore, complete hardware or hardware/software co-design approaches are pursued for better tracker implementations. This pape...

  15. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in such domains as surveillance or smart environments. This paper presents the development of a multiple-camera setup with jointed view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time spent traversing the TCP/IP stack, for both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to 100 times compared to the software ORB.

  16. Surgical Critical Care Initiative

    Data.gov (United States)

    Federal Laboratory Consortium — The Surgical Critical Care Initiative (SC2i) is a USU research program established in October 2013 to develop, translate, and validate biology-driven critical care....

  17. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    International Nuclear Information System (INIS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-01-01

    Forecasting the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many and various factors, such as mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably reduces the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. An increment in the values of diagnostic parameters is introduced and, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment's reliability and lifetime. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.
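
    The abstract states only that the density of the aging process is recovered from its characteristic function by an inverse transformation; the authors' partial differential equation is not reproduced there. As a general reminder of the relationship involved (not the paper's own derivation, and with the boundary value x_g of the diagnostic parameter introduced here purely for illustration):

```latex
\varphi_X(t) = \mathbb{E}\!\left[e^{itX}\right],
\qquad
f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itx}\,\varphi_X(t)\,dt,
\qquad
R(x_g) = \int_{-\infty}^{x_g} f_X(u)\,du .
```

    Here R(x_g) is the probability that the diagnostic parameter has not yet exceeded its admissible boundary value, i.e. the reliability at the corresponding service time.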

  18. MSAP Hardware Verification: Testing Multi-Mission System Architecture Platform Hardware Using Simulation and Bench Test Equipment

    Science.gov (United States)

    Crossin, Kent R.

    2005-01-01

    The Multi-Mission System Architecture Platform (MSAP) project aims to develop a system of hardware and software that will provide the core functionality necessary in many JPL missions and can be tailored to accommodate mission-specific requirements. The MSAP flight hardware is being developed in the Verilog hardware description language, allowing developers to simulate their design before releasing it to a field programmable gate array (FPGA). FPGAs can be updated in a matter of minutes, drastically reducing the time and expense required to produce traditional application-specific integrated circuits. Bench test equipment connected to the FPGAs can then probe and run Tcl scripts on the hardware. The Verilog and Tcl code can be reused or modified with each design. These steps are effective in confirming that the design operates according to specifications.

  19. Repeat LISS treatment for femoral shaft fractures due to hardware failure: a retrospective analysis of eleven cases.

    Science.gov (United States)

    Li, Xu; Xu, Xian; Liu, Lin; Shao, Qin; Wu, Wei

    2013-10-01

    To evaluate the effectiveness of a replating technique using a less-invasive stabilization system (LISS) for femoral shaft fractures due to LISS failure in adults. There were 11 patients with hardware failure of a LISS for femoral shaft fractures, occurring on average 50 days after the primary operation. The failed implants were removed, and the fractures were replated with a LISS following the rationale of biological osteosynthesis. Radiological fracture union and the incidence of postoperative complications were used to evaluate the effectiveness of this replating technique for femoral shaft fractures. Operative duration, including removal of the failed hardware and replating of the fractures, averaged 81.5 min, with an average blood loss of 330 ml. Patients had an average follow-up of 25.7 months. Radiological evaluation indicated that fracture union occurred in an average of 4.4 months in all patients. The length and alignment of the affected limb were satisfactory, and hardware failure did not recur. The replating technique with LISS for femoral shaft fractures due to hardware failure of LISS can obtain satisfactory results when the appropriate rationale of biological osteosynthesis and functional exercise is followed.

  20. NPOESS Interface Data Processing Segment (IDPS) Hardware

    Science.gov (United States)

    Sullivan, W. J.; Grant, K. D.; Bergeron, C.

    2008-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume several orders of magnitude larger than that of current systems, in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This poster will illustrate and describe the IDPS hardware architecture that is necessary to meet these challenging design requirements. In addition, it will illustrate the expandability features of the architecture in support of future data processing and data distribution needs.

  1. Biological effects-based tools for monitoring impacted surface waters in the Great Lakes: a multiagency program in support of the Great Lakes Restoration Initiative

    Science.gov (United States)

    Ekman, Drew R.; Ankley, Gerald T.; Blazer, Vicki; Collette, Timothy W.; Garcia-Reyero, Natàlia; Iwanowicz, Luke R.; Jorgensen, Zachary G.; Lee, Kathy E.; Mazik, Pat M.; Miller, David H.; Perkins, Edward J.; Smith, Edwin T.; Tietge, Joseph E.; Villeneuve, Daniel L.

    2013-01-01

    There is increasing demand for the implementation of effects-based monitoring and surveillance (EBMS) approaches in the Great Lakes Basin to complement traditional chemical monitoring. Herein, we describe an ongoing multiagency effort to develop and implement EBMS tools, particularly with regard to monitoring potentially toxic chemicals and assessing Areas of Concern (AOCs), as envisioned by the Great Lakes Restoration Initiative (GLRI). Our strategy includes use of both targeted and open-ended/discovery techniques, as appropriate to the amount of information available, to guide a priori end point and/or assay selection. Specifically, a combination of in vivo and in vitro tools is employed, using both wild and caged fish (in vivo) and a variety of receptor- and cell-based assays (in vitro). We employ a work flow that progressively emphasizes in vitro tools for long-term or high-intensity monitoring because of their greater practicality (e.g., lower cost, labor), while relying on in vivo assays for initial surveillance and verification. Our strategy takes advantage of the strengths of a diversity of tools, balancing the depth, breadth, and specificity of information they provide against their costs, transferability, and practicality. Finally, a series of illustrative scenarios is examined that align EBMS options with management goals to illustrate the adaptability and scaling of EBMS approaches and how they can be used in management decisions.

  2. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  3. OS friendly microprocessor architecture: Hardware level computer security

    Science.gov (United States)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance and secure microprocessor and OS system. We are interested in having cyber security, information technology (IT), and SCADA control professionals review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
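
    The abstract says the OSFA extends Unix file-permission bits to each cache memory bank and memory address, but gives no encoding. The sketch below is a hypothetical software model of such a check; the bank count, permission constants, and structure layout are invented for illustration and are not the patented design.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-cache-bank permission word, modeled on Unix rwx bits. */
#define PERM_R 0x4u
#define PERM_W 0x2u
#define PERM_X 0x1u

#define NUM_BANKS 16  /* illustrative bank count, not from the paper */

typedef struct {
    uint8_t perm[NUM_BANKS];  /* rwx bits per cache bank */
} bank_permissions_t;

/* The bank selection controller would perform a check like this in hardware
 * before granting the execution pipeline or the memory pipeline access. */
static bool access_allowed(const bank_permissions_t *p,
                           unsigned bank, uint8_t requested)
{
    if (bank >= NUM_BANKS)
        return false;
    return (p->perm[bank] & requested) == requested;
}
```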

  4. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
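
    The paper's wobbled splatting runs on the GPU via Cg and its code is not included in the abstract. The sketch below is a hypothetical CPU illustration of the basic idea, splatting each voxel onto the detector with a small random sub-pixel offset; the orthographic geometry, projection along z, and all names are assumptions, not the authors' implementation.

```c
#include <stdlib.h>
#include <stddef.h>

/* Minimal splat-based DRR accumulation (CPU sketch, orthographic geometry).
 * Each voxel's value is added ("splatted") to the detector pixel it projects
 * onto; the random sub-pixel offset is the "wobble" that reduces aliasing. */
static void splat_drr(const float *volume, int nx, int ny, int nz,
                      float *drr, int width, int height)
{
    for (int i = 0; i < width * height; i++)
        drr[i] = 0.0f;

    for (int z = 0; z < nz; z++)
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++) {
                float wx = (float)rand() / RAND_MAX - 0.5f;  /* wobble */
                float wy = (float)rand() / RAND_MAX - 0.5f;
                int u = (int)(x + wx + 0.5f);   /* project along z */
                int v = (int)(y + wy + 0.5f);
                if (u >= 0 && u < width && v >= 0 && v < height)
                    drr[v * width + u] +=
                        volume[((size_t)z * ny + y) * nx + x];
            }
}
```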

  5. GOSH! A roadmap for open-source science hardware

    CERN Document Server

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  6. Alternative, Green Processes for the Precision Cleaning of Aerospace Hardware

    Science.gov (United States)

    Maloney, Phillip R.; Grandelli, Heather Eilenfield; Devor, Robert; Hintze, Paul E.; Loftin, Kathleen B.; Tomlin, Douglas J.

    2014-01-01

    Precision cleaning is necessary to ensure the proper functioning of aerospace hardware, particularly those systems that come in contact with liquid oxygen or hypergolic fuels. Components that have not been cleaned to the appropriate levels may experience problems ranging from impaired performance to catastrophic failure. Traditionally, this has been achieved using various halogenated solvents. However, as information on the toxicological and/or environmental impacts of each came to light, they were subsequently regulated out of use. The solvent currently used in Kennedy Space Center (KSC) precision cleaning operations is Vertrel MCA. Environmental sampling at KSC indicates that continued use of this or similar solvents may lead to high remediation costs that must be borne by the Program for years to come. In response to this problem, the Green Solvents Project seeks to develop state-of-the-art, green technologies designed to meet KSC's precision cleaning needs. Initially, 23 solvents were identified as potential replacements for the current Vertrel MCA-based process. Highly halogenated solvents were deliberately omitted since historical precedents indicate that as the long-term consequences of these solvents become known, they will eventually be regulated out of practical use, often with significant financial burdens for the user. Three solvent-less cleaning processes (plasma, supercritical carbon dioxide, and carbon dioxide snow) were also chosen since they produce essentially no waste stream. Next, experimental and analytical procedures were developed to compare the relative effectiveness of these solvents and technologies to the current KSC standard of Vertrel MCA. Individually numbered Swagelok fittings were used to represent the hardware in the cleaning process. First, the fittings were cleaned using Vertrel MCA in order to determine their true cleaned mass. Next, the fittings were dipped into stock solutions of five commonly encountered contaminants and were

  7. A Principled Kernel Testbed for Hardware/Software Co-Design Research

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Alex; Williams, Samuel; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David; Demmel, James; Strohmaier, Erich

    2010-04-01

    Recently, advances in processor architecture have become the driving force for new programming models in the computing industry, as ever newer multicore processor designs with increasing number of cores are introduced on schedules regimented by marketing demands. As a result, collaborative parallel (rather than simply concurrent) implementations of important applications, programming languages, models, and even algorithms have been forced to adapt to these architectures to exploit the available raw performance. We believe that this optimization regime is flawed. In this paper, we present an alternate approach that, rather than starting with an existing hardware/software solution laced with hidden assumptions, defines the computational problems of interest and invites architects, researchers and programmers to implement novel hardware/software co-designed solutions. Our work builds on the previous ideas of computational dwarfs, motifs, and parallel patterns by selecting a representative set of essential problems for which we provide: An algorithmic description; scalable problem definition; illustrative reference implementations; verification schemes. This testbed will enable comparative research in areas such as parallel programming models, languages, auto-tuning, and hardware/software codesign. For simplicity, we focus initially on the computational problems of interest to the scientific computing community but proclaim the methodology (and perhaps a subset of the problems) as applicable to other communities. We intend to broaden the coverage of this problem space through stronger community involvement.

  8. A Principled Kernel Testbed for Hardware/Software Co-Design Research

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ibrahim, Khaled [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bailey, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Demmel, James [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Strohmaier, Erich [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2010-07-16

    Recently, advances in processor architecture have become the driving force for new programming models in the computing industry, as ever newer multicore processor designs with increasing number of cores are introduced on schedules regimented by marketing demands. As a result, collaborative parallel (rather than simply concurrent) implementations of important applications, programming languages, models, and even algorithms have been forced to adapt to these architectures to exploit the available raw performance. We believe that this optimization regime is flawed. In this paper, we present an alternate approach that, rather than starting with an existing hardware/software solution laced with hidden assumptions, defines the computational problems of interest and invites architects, researchers and programmers to implement novel hardware/software co-designed solutions. Our work builds on the previous ideas of computational dwarfs, motifs, and parallel patterns by selecting a representative set of essential problems for which we provide: An algorithmic description; scalable problem definition; illustrative reference implementations; verification schemes. This testbed will enable comparative research in areas such as parallel programming models, languages, auto-tuning, and hardware/software codesign. For simplicity, we focus initially on the computational problems of interest to the scientific computing community but proclaim the methodology (and perhaps a subset of the problems) as applicable to other communities. We intend to broaden the coverage of this problem space through stronger community involvement.

  9. Amateur Radio on the International Space Station - Phase 2 Hardware System

    Science.gov (United States)

    Bauer, F.; McFadin, L.; Bruninga, B.; Watarikawa, H.

    2003-01-01

    The International Space Station (ISS) ham radio system has been on orbit for over 3 years. Since its first use in November 2000, the first seven expedition crews and three Soyuz taxi crews have utilized the amateur radio station in the Functional Cargo Block (also referred to as the FGB or Zarya module) to talk to thousands of students in schools, to their families on Earth, and to amateur radio operators around the world. Early on, the Amateur Radio on the International Space Station (ARISS) international team devised a multi-phased hardware development approach for the ISS ham radio station. Three internal development phases (Initial Phase 1, Mobile Radio Phase 2, and Permanently Mounted Phase 3), plus an externally mounted system, were proposed and agreed to by the ARISS team. The Phase 1 system, whose hardware development started in 1996, has since been delivered to the ISS. It is currently operational on 2 meters. The 70 cm system is expected to be installed and operated later this year. Since 2001, the ARISS international team has worked to bring the second generation ham system, called Phase 2, to flight qualification status. At this time, major portions of the Phase 2 hardware system have been delivered to ISS and will soon be installed and checked out. This paper intends to provide an overview of the Phase 1 system for background and then describe the capabilities of the Phase 2 radio system. It will also describe the current plans to finalize the Phase 1 and Phase 2 testing in Russia and outline the plans to bring the Phase 2 hardware system to full operation.

  10. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  11. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  12. DAQ Hardware and software development for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    In 2014, the Pixel Detector of the ATLAS experiment was extended by about 12 million pixels with the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented by employing newly designed read-out hardware, which supports the full detector bandwidth even for calibration. The hardware is supported by an embedded software stack running on the read-out boards. The same boards will be used to upgrade the read-out bandwidth for the two outermost layers of the ATLAS Pixel Barrel (54 million pixels). We present the IBL read-out hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  13. High-performance free-space optical modem hardware

    Science.gov (United States)

    Sluz, Joseph E.; Juarez, Juan C.; Bair, Chun-Huei; Oberc, Rachel L.; Venkat, Radha A.; Rollend, Derek; Young, David W.

    2012-06-01

    This paper describes key aspects of modem hardware designed to operate in free space optical (FSO) links of up to 200 km. The hardware serves as a bridge between 10 gigabit Ethernet client data systems and FSO terminals. The modem hardware alters the client data rate and format for optimal transmission and reception over the FSO link by applying forward error correction (FEC) processing and differential phase shift keying (DPSK) modulation. Optical automatic gain control (OAGC) is also used. Together these features provide sensitivities approaching -48 dBm, with 60 dB of error-free dynamic range in the presence of turbulent optical conditions, to deal with large optical power fades.

  14. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware specifications of common sensors reveals, however, that other equally important culprits exist, such as the reception and processing energy. Hence, there is a need for a more complete hardware abstraction of a sensor node to reduce effectively the total energy consumption of the network by designing energy-efficient protocols that use such an abstraction, as well as mechanisms to optimize a communication protocol in terms of energy consumption. The problem is modeled for different feedback-based techniques, where sensors are connected to a base station, either directly or through relays. We show that for four example...
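
    The abstract's point is that transmitted bits alone are a poor proxy for energy, since reception and processing energy matter as well; the authors' full model sits behind the truncation. A minimal per-node energy abstraction along those lines might look like the following, with all coefficients hypothetical:

```c
/* Hypothetical per-node energy abstraction: transmission is no longer the
 * only term; reception and processing energy are modeled explicitly. */
typedef struct {
    double e_tx_bit;    /* joules per transmitted bit                     */
    double e_rx_bit;    /* joules per received bit                        */
    double e_proc_bit;  /* joules per processed bit (e.g. network coding) */
} node_energy_model_t;

static double node_energy(const node_energy_model_t *m,
                          unsigned long tx_bits, unsigned long rx_bits,
                          unsigned long proc_bits)
{
    return m->e_tx_bit * tx_bits
         + m->e_rx_bit * rx_bits
         + m->e_proc_bit * proc_bits;
}
```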

  15. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate performance. Moreover, the outage probability performance of the receive diversity systems is analyzed, yielding closed-form expressions for both PGS- and IGS-based transmission schemes. HWD systems that employ IGS are proven to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation caused by non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.

  16. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses more on the hardware and the circuit aspect of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.

  17. XOR-FREE Implementation of Convolutional Encoder for Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Gaurav Purohit

    2016-01-01

    Full Text Available This paper presents a novel XOR-FREE algorithm to implement a convolutional encoder using reconfigurable hardware. The approach completely removes the XOR processing of a chosen nonsystematic, feedforward generator polynomial of larger constraint length. The hardware (HW) implementation of the new architecture uses a Lookup Table (LUT) for storing the parity bits. The design implements architectural reconfigurability by modifying the generator polynomial of the same constraint length and code rate to reduce the design complexity. The proposed architecture reduces the dynamic power by up to 30% and improves the hardware cost and propagation delay by up to 20% and 32%, respectively. The performance of the proposed architecture is validated in MATLAB Simulink and tested on a Zynq-7 series FPGA.
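
    The abstract removes runtime XOR processing by storing precomputed parity bits in a lookup table, but does not show the table organization. The sketch below precomputes, for every shift-register state, the output bit of one generator polynomial, so that encoding becomes pure table lookup; the constraint length 7 and the octal-133 polynomial are illustrative choices, not necessarily the paper's.

```c
#include <stdint.h>

#define K      7            /* illustrative constraint length                */
#define STATES (1u << K)    /* all shift-register contents                   */
#define G0     0x5Bu        /* illustrative generator polynomial (133 octal) */

static uint8_t lut[STATES];

/* Offline step: precompute the parity of (state & polynomial) for every
 * state.  All XOR work happens here; the encoder datapath is XOR-free. */
static void build_lut(void)
{
    for (uint32_t s = 0; s < STATES; s++) {
        uint32_t v = s & G0;
        uint8_t parity = 0;
        while (v) { parity ^= 1u; v &= v - 1u; }
        lut[s] = parity;
    }
}

/* Runtime encoding: shift the input bit into the state and look up the
 * output bit -- no XOR gates in the datapath. */
static uint8_t encode_bit(uint32_t *state, uint8_t in_bit)
{
    *state = ((*state << 1) | (in_bit & 1u)) & (STATES - 1u);
    return lut[*state];
}
```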

  18. Design and control of compliant tensegrity robots through simulation and hardware validation

    Science.gov (United States)

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292

  19. Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation

    Science.gov (United States)

    Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools in tackling the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.

  20. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-01-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  1. Carbonate fuel cell endurance: Hardware corrosion and electrolyte management status

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Farooque, M.; Maru, H.

    1993-05-01

    Endurance tests of carbonate fuel cell stacks (up to 10,000 hours) have shown that hardware corrosion and electrolyte losses can be reasonably controlled by proper material selection and cell design. Corrosion of stainless steel current collector hardware, nickel clad bipolar plate and aluminized wet seal show rates within acceptable limits. Electrolyte loss rate to current collector surface has been minimized by reducing exposed current collector surface area. Electrolyte evaporation loss appears tolerable. Electrolyte redistribution has been restrained by proper design of manifold seals.

  2. Mini-O, simple Omega receiver hardware for user education

    Science.gov (United States)

    Burhans, R. W.

    1976-01-01

    A problem with the Omega system is a lack of suitable low-cost hardware for the small user community. A collection of do-it-yourself circuit modules is under development, intended for use by educational institutions, small boat owners, aviation enthusiasts, and others who have some skill in fabricating their own electronic equipment. Applications of the hardware to time and frequency standards measurements, signal propagation monitoring, and navigation experiments are presented. A family of Mini-O systems has been constructed, varying from the simplest RF preamplifier and narrowband filter front-ends to sophisticated microcomputer interface adapters.

  3. Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack

    Science.gov (United States)

    Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn

    2009-01-01

    HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.

  4. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.

  5. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  6. Automated power distribution system hardware. [for space station power supplies

    Science.gov (United States)

    Anderson, Paul M.; Martin, James A.; Thomason, Cindy

    1989-01-01

    An automated power distribution system testbed for the space station common modules has been developed. It incorporates automated control and monitoring of a utility-type power system. Automated power system switchgear, control and sensor hardware requirements, hardware design, test results, and potential applications are discussed. The system is designed so that the automated control and monitoring of the power system is compatible with both a 208-V, 20-kHz single-phase AC system and a high-voltage (120 to 150 V) DC system.

  7. Integrated circuit authentication hardware Trojans and counterfeit detection

    CERN Document Server

    Tehranipoor, Mohammad; Zhang, Xuehui

    2013-01-01

    This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions. 

  8. Computer organization and design the hardware/software interface

    CERN Document Server

    Hennessy, John L

    1994-01-01

    Computer Organization and Design: The Hardware/Software Interface presents the interaction between hardware and software at a variety of levels, which offers a framework for understanding the fundamentals of computing. This book focuses on the concepts that are the basis for computers.Organized into nine chapters, this book begins with an overview of the computer revolution. This text then explains the concepts and algorithms used in modern computer arithmetic. Other chapters consider the abstractions and concepts in memory hierarchies by starting with the simplest possible cache. This book di

  9. Hardware Design for a Smart Lock System for Home Automation

    OpenAIRE

    Javierre, Sergio

    2016-01-01

    Developing a system that can be controlled by a portable device and easily implemented on any door is the main goal of the Smart Lock System. Its purpose is to avoid the usage of a hardware key; the new key will be an Android app in the mobile device which provides security to the user and to the specific area due to the fact that only restricted personnel is permitted access in this area. The design of the embedded system and its implementation, focusing on the system hardware part, are t...

  10. Electrical, electronics, and digital hardware essentials for scientists and engineers

    CERN Document Server

    Lipiansky, Ed

    2012-01-01

    A practical guide for solving real-world circuit board problems Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers arms engineers with the tools they need to test, evaluate, and solve circuit board problems. It explores a wide range of circuit analysis topics, supplementing the material with detailed circuit examples and extensive illustrations. The pros and cons of various methods of analysis, fundamental applications of electronic hardware, and issues in logic design are also thoroughly examined. The author draws on more than tw

  11. Enhancing Technology Transfer of Computer Hardware and Software Architectures Using Human Factors in Initial Design.

    Science.gov (United States)

    1981-09-01

    here. For example, one question asked "To what extent are you familiar with artificial intelligence applications?" and obviously most showed an increased familiarity, since they were introduced to some artificial intelligence applications during the experiment. One similar type of detailed question...

  12. Implementation of a Hardware-in-the-Loop System Using Scale Model Hardware for Hybrid Electric Vehicle Development

    OpenAIRE

    Janczak, John

    2007-01-01

    Hardware-in-the-loop (HIL) testing and simulation for components and control strategies can reduce both the time and cost of development. HIL testing focuses on one component or control system rather than the entire vehicle. The rest of the system is simulated by computers which use real-time data acquisition systems to read outputs and respond as the systems in the actual vehicle would. The hardware for the system is on a scaled-down level to save both time and money during tes...

  13. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In an MC the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), which entail a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard" positions of control pulses during the switching sequence. Recently, hardware implementations have become popular since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially their concurrency). In this paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of the logic implementation oriented on Xilinx FPGAs (in particular, a low-cost device from the Artix-7 family was used) are also presented.
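
    The authors' Verilog CORDIC is not reproduced in the abstract. Below is a generic fixed-point CORDIC rotation sketch in C that computes sine and cosine with shifts and adds only; the Q2.14 format, 16 iterations, and constants are illustrative and are not taken from the paper's design.

```c
#include <stdint.h>

#define CORDIC_ITER 16
#define CORDIC_GAIN 9949   /* ~0.60725 * 2^14: pre-scaled inverse CORDIC gain */

/* atan(2^-i) table in Q2.14 radians. */
static const int32_t atan_tab[CORDIC_ITER] = {
    12868, 7596, 4014, 2037, 1023, 512, 256, 128,
    64, 32, 16, 8, 4, 2, 1, 0
};

/* Rotation mode: rotate the vector (CORDIC_GAIN, 0) by 'angle' (Q2.14
 * radians, |angle| < ~1.74 rad) so that on exit *c ~ cos(angle) and
 * *s ~ sin(angle), both in Q2.14. */
static void cordic_sin_cos(int32_t angle, int32_t *s, int32_t *c)
{
    int32_t x = CORDIC_GAIN, y = 0, z = angle;
    for (int i = 0; i < CORDIC_ITER; i++) {
        int32_t dx = y >> i, dy = x >> i;
        if (z >= 0) { x -= dx; y += dy; z -= atan_tab[i]; }
        else        { x += dx; y -= dy; z += atan_tab[i]; }
    }
    *c = x;
    *s = y;
}
```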

  14. Improving Reliability, Security, and Efficiency of Reconfigurable Hardware Systems (Habilitation)

    NARCIS (Netherlands)

    Ziener, Daniel

    2017-01-01

    In this treatise,  my research on methods to improve efficiency, reliability, and security of reconfigurable hardware systems, i.e., FPGAs, through partial dynamic reconfiguration is outlined. The efficiency of reconfigurable systems can be improved by loading optimized data paths on-the-fly on an

  15. A Hardware Framework for on-Chip FPGA Acceleration

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Cardarilli, Gian Carlo; Nannarelli, Alberto

    2016-01-01

    In this work, we present a new framework to dynamically load hardware accelerators on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA...

  16. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
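
    TreeBASIS matches binary-quantized feature regions by Hamming distance; the descriptor layout itself is not given in the abstract. The kernel such a matcher relies on is sketched below with an illustrative 256-bit descriptor width; it uses only integer operations, which is the property that makes a pure hardware implementation straightforward.

```c
#include <stdint.h>
#include <stddef.h>

#define DESC_WORDS 4   /* illustrative: 4 x 64 = 256-bit binary descriptor */

/* Hamming distance between two binary descriptors: XOR then population
 * count.  No floating-point operations are required. */
static unsigned hamming_distance(const uint64_t a[DESC_WORDS],
                                 const uint64_t b[DESC_WORDS])
{
    unsigned dist = 0;
    for (size_t i = 0; i < DESC_WORDS; i++) {
        uint64_t v = a[i] ^ b[i];
        while (v) {        /* Kernighan popcount; a hardware popcount
                              unit or compiler builtin would do the same */
            v &= v - 1;
            dist++;
        }
    }
    return dist;
}
```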

  17. Chip-Multiprocessor Hardware Locks for Safety-Critical Java

    DEFF Research Database (Denmark)

    Strøm, Torur Biskopstø; Puffitsch, Wolfgang; Schoeberl, Martin

    2013-01-01

    and may void a task set's schedulability. In this paper we present a hardware locking mechanism to reduce the synchronization overhead. The solution is implemented for the chip-multiprocessor version of the Java Optimized Processor in the context of safety-critical Java. The implementation is compared...

  18. Hardware Algorithms For Tile-Based Real-Time Rendering

    NARCIS (Netherlands)

    Crisu, D.

    2012-01-01

    In this dissertation, we present the GRAphics AcceLerator (GRAAL) framework for developing embedded tile-based rasterization hardware for mobile devices, meant to accelerate real-time 3-D graphics (OpenGL compliant) applications. The goal of the framework is a low-cost, low-power, high-performance

  19. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor is proposed in this work. The method is based on an HVM timing attack. If an HVM is present in the system, the number of distinct execution-time values of instruction sequences increases. We used this property as an indicator in our detection.
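
    The abstract reduces detection to counting how many distinct execution-time values a fixed instruction sequence produces, without giving the sequence or thresholds. Below is a hypothetical user-space sketch of that measurement; the probed loop, the sample count, and the use of clock_gettime are all assumptions standing in for whatever the authors actually time.

```c
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SAMPLES 1024

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    uint64_t samples[SAMPLES];
    volatile uint64_t sink = 0;

    for (int i = 0; i < SAMPLES; i++) {
        uint64_t t0 = now_ns();
        for (int k = 0; k < 100; k++)   /* placeholder instruction sequence */
            sink += (uint64_t)k;
        samples[i] = now_ns() - t0;
    }

    /* Count distinct timing values (O(n^2) is fine for a sketch).  A larger
     * spread of values is the indicator the paper associates with a nested
     * hardware virtual machine monitor being present. */
    int distinct = 0;
    for (int i = 0; i < SAMPLES; i++) {
        int seen = 0;
        for (int j = 0; j < i; j++)
            if (samples[j] == samples[i]) { seen = 1; break; }
        if (!seen) distinct++;
    }
    printf("distinct timing values: %d of %d samples\n", distinct, SAMPLES);
    return 0;
}
```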

  20. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  1. Hardware support for the tumult real-time scheduler

    NARCIS (Netherlands)

    van der Bij, H.C.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1989-01-01

    This article describes the hardware which is designed for speeding up and supporting the schedule routines of the TUMULT multi-tasking operating system. TUMULT uses a “priority running up” schedule algorithm which automatically increases the priority of a process when (part of) it must be finished

  2. Hardware Descriptive Languages: An Efficient Approach to Device ...

    African Journals Online (AJOL)

    Contemporarily, owing to astronomical advancements in the very large scale integration (VLSI) market segments, hardware engineers are now focusing on how to develop their new digital system designs in programmable languages like very high speed integrated circuit hardware description language (VHDL) and Verilog ...

  3. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.

  4. Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots

    DEFF Research Database (Denmark)

    Schou, Casper; Madsen, Ole

    2016-01-01

    In this paper we propose a roadmap for hardware reconfiguration of industrial collaborative robots. As a flexible resource, the collaborative robot will often need transitioning to a new task. Our goal is, that this transitioning should be done by the shop floor operators, not highly specialized ...

  5. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
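
    The survey's central observation is that restricting matching to edge pixels shrinks the search space. As a point of reference only, a minimal SAD disparity search that skips non-edge pixels is sketched below; the window size, disparity range, and edge-mask source are illustrative and do not correspond to any particular architecture in the survey.

```c
#include <stdlib.h>
#include <limits.h>

/* Edge-directed SAD stereo sketch: a disparity is computed only where the
 * edge mask is set, which is the search-space reduction described above. */
static void edge_sad_disparity(const unsigned char *left,
                               const unsigned char *right,
                               const unsigned char *edge_mask,
                               int width, int height,
                               int win, int max_disp,
                               unsigned char *disparity)
{
    int r = win / 2;
    for (int y = r; y < height - r; y++)
        for (int x = r; x < width - r; x++) {
            disparity[y * width + x] = 0;
            if (!edge_mask[y * width + x])
                continue;                      /* non-edge pixel: skipped */
            int best_d = 0, best_cost = INT_MAX;
            for (int d = 0; d <= max_disp && x - d >= r; d++) {
                int cost = 0;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++)
                        cost += abs(left[(y + dy) * width + (x + dx)] -
                                    right[(y + dy) * width + (x - d + dx)]);
                if (cost < best_cost) { best_cost = cost; best_d = d; }
            }
            disparity[y * width + x] = (unsigned char)best_d;
        }
}
```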

  6. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: The timing factor is very important for medical imaging systems, which can nowadays be synchronized by vital human signals, like heartbeats or breath. The use of hardware implemented devices in such a system has advantages considering the high speed of information treatment combined with arbitrary low cost on the market. This article refers to a hardware system which is based on electronic programmable logic called FPGA, model Cyclone II from ALTERA Corporation. The hardware was implemented on the UP3 ALTERA Kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points in each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0 to 65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test shows that it is possible in principle to build CT image reconstruction systems for any reasonable amount of input data by arranging the parallel work of the hardware units as we have tested. However, further studies are necessary for better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)

  7. Hardware prototype with component specification and usage description

    NARCIS (Netherlands)

    Azam, Tre; Aswat, Soyeb; Klemke, Roland; Sharma, Puneet; Wild, Fridolin

    2017-01-01

    Following on from D3.1 and the final selection of sensors, in this D3.2 report we present the first version of the experience capturing hardware prototype design and API architecture taking into account the current limitations of the Hololens not being available until early next month in time for

  8. Hardware Synchronization for Embedded Multi-Core Processors

    DEFF Research Database (Denmark)

    Stoif, Christian; Schoeberl, Martin; Liccardi, Benito

    2011-01-01

    -core systems, using an FPGA-development board with two hard PowerPC processor cores. Best- and worst-case results, together with intensive benchmarking of all synchronization primitives implemented, show the expected superiority of the hardware solutions. It is also shown that dual-ported memory outperforms...

  9. Evaluation of In-House versus Contract Computer Hardware Maintenance

    International Nuclear Information System (INIS)

    Wright, H.P.

    1981-09-01

    The issue of In-House versus Contract Computer Hardware Maintenance is one that every organization using computers must resolve. This report discusses the advantages and disadvantages of both approaches to computer maintenance, the costs involved (based on the current AGNS computer inventory), and the AGNS maintenance experience to date. A recommendation on an appropriate approach for AGNS is made.

  10. Use of Heritage Hardware on MPCV Exploration Flight Test One

    Science.gov (United States)

    Rains, George Edward; Cross, Cynthia D.

    2011-01-01

    Due to an aggressive schedule for the first orbital test flight of an unmanned Orion capsule, known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so the inventory of many flight-certified Shuttle and MPLM components is available for other purposes. Two of these items are the Shuttle Ground Support Equipment Heat Exchanger (GSE Hx) and the MPLM cabin Positive Pressure Relief Assembly (PPRA). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be significantly more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE Hx had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the certification of the use of heritage hardware for EFT1.

  11. Choropleth Mapping on Personal Computers: Software Sources and Hardware Requirements.

    Science.gov (United States)

    Lewis, Lawrence T.

    1986-01-01

    Describes the hardware and some of the choropleth mapping software available for the IBM-PC, PC compatible and Apple II microcomputers. Reviewed are: Micromap II, Documap, Desktop Information Display System (DIDS) , Multimap, Execuvision, Iris Gis, Mapmaker, PC Map, Statmap, and Atlas Map. Vendors' addresses are provided. (JDH)

  12. Hardware methods in cosmetology. Programs of face care

    OpenAIRE

    Chuhraev, N.; Zukow, W.; Samosiuk, N.; Chuhraeva, E.; Tereshchenko, A.; Gunko, M.; Unichenko, A.; Paramonova, A.

    2016-01-01

    Medical Innovative Technologies, Kiev, Ukraine; Radom University in Radom, Poland. Hardware Methods in Cosmetology: Programs of Face Care. Edited by N. Chuhraev, W. Zukow, N. Samosiuk, E. Chuhraeva, A. Tereshchenko, M. Gunko, A. Unichenko, A. Paramonova.

  13. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  14. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  15. Tomographic image reconstruction and rendering with texture-mapping hardware

    International Nuclear Information System (INIS)

    Azevedo, S.G.; Cabral, B.K.; Foran, J.

    1994-07-01

    The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially-designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics Reality Engine (TM), shows around 600 times speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in this case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. The techniques can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalties.
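
    The "warping with summing" view of filtered backprojection described above can be written down compactly in software; the NumPy sketch below is a plain CPU reference with a simple ramp filter, given only for orientation, and is not the texture-mapping implementation discussed in the record.

        # Filtered backprojection viewed as "warp and sum": each ramp-filtered
        # projection is smeared back across the image grid along its view angle
        # and the results are accumulated. Names and the basic ramp filter are
        # illustrative; the paper maps the warping step onto texture hardware.
        import numpy as np

        def filtered_backprojection(sinogram, angles_deg):
            """sinogram: (num_angles, num_detectors); returns a square image."""
            n_ang, n_det = sinogram.shape
            # Ramp filter applied per projection in the Fourier domain.
            ramp = np.abs(np.fft.fftfreq(n_det))
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

            # Backproject: for every pixel, look up the filtered projection value
            # at the detector coordinate t = x*cos(theta) + y*sin(theta) and sum.
            coords = np.arange(n_det) - n_det / 2
            xx, yy = np.meshgrid(coords, coords)
            image = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                t = xx * np.cos(theta) + yy * np.sin(theta)
                image += np.interp(t, coords, proj)
            return image * np.pi / (2 * n_ang)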

  16. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  17. Smart Home Hardware-in-the-Loop Testing

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Annabelle

    2017-07-12

    This presentation provides a high-level overview of NREL's smart home hardware-in-the-loop testing. It was presented at the Fourth International Workshop on Grid Simulator Testing of Energy Systems and Wind Turbine Powertrains, held April 25-26, 2017, hosted by NREL and Clemson University at the Energy Systems Integration Facility in Golden, Colorado.

  18. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analyses methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. - Abstract: Hardware availability calculations have been done individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements and find new paths to improve availability performances. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and allows finding out the improvements that the final accelerator could have. Because of the R and D behavior of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.
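
    For readers unfamiliar with the kind of calculation behind such analyses, the short sketch below shows how steady-state availability A = MTBF / (MTBF + MTTR) combines for systems in series and for redundant units in parallel; the MTBF/MTTR figures are invented placeholders, not IFMIF or LIPAc data.

        # Illustrative only: steady-state availability A = MTBF / (MTBF + MTTR),
        # combined in series (all systems must work) or in parallel (redundant
        # units). All numbers below are made-up examples.
        def availability(mtbf_h, mttr_h):
            return mtbf_h / (mtbf_h + mttr_h)

        injector = availability(mtbf_h=500.0, mttr_h=8.0)
        rfq      = availability(mtbf_h=800.0, mttr_h=24.0)
        rf_amp   = availability(mtbf_h=300.0, mttr_h=12.0)

        # Series combination: the accelerator is up only if every system is up.
        series = injector * rfq * rf_amp

        # Parallel combination of two identical, redundant RF amplifiers.
        redundant_rf = 1.0 - (1.0 - rf_amp) ** 2

        print(f"series availability: {series:.3f}, redundant RF: {redundant_rf:.4f}")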

  19. Detection of hardware backdoor through microcontroller read time ...

    African Journals Online (AJOL)

    The objective of this work, christened “HABA” (Hardware Backdoor Aware), is to collect data samples of series of read times of microcontrollers embedded in military-grade equipment and correlate them with previously stored expected-behaviour read-time samples so as to detect abnormality or otherwise. I was motivated by the ...

  20. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then wrap with conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify its chassis design, so that it would have a better chance of meeting the new project's radiated emissions requirements.

  1. SpecCert: Specifying and Verifying Hardware-based Security Enforcement

    OpenAIRE

    Letan , Thomas; Chifflier , Pierre; Hiet , Guillaume; Néron , Pierre; Morin , Benjamin

    2016-01-01

    Over time, hardware designs have constantly grown in complexity and modern platforms involve multiple interconnected hardware components. During the last decade, several vulnerability disclosures have proven that trust in hardware can be misplaced. In this article, we give a formal definition of Hardware-based Security Enforcement (HSE) mechanisms, a class of security enforcement mechanisms such that a software component relies on the underlying hardware platform to enforce a security policy....

  2. Getting expert systems off the ground: Lessons learned from integrating model-based diagnostics with prototype flight hardware

    Science.gov (United States)

    Stephan, Amy; Erikson, Carol A.

    1991-11-01

    As an initial attempt to introduce expert system technology into an onboard environment, a model based diagnostic system using the TRW MARPLE software tool was integrated with prototype flight hardware and its corresponding control software. Because this experiment was designed primarily to test the effectiveness of the model based reasoning technique used, the expert system ran on a separate hardware platform, and interactions between the control software and the model based diagnostics were limited. While this project met its objective of showing that model based reasoning can effectively isolate failures in flight hardware, it also identified the need for an integrated development path for expert system and control software for onboard applications. In developing expert systems that are ready for flight, expert system developers must evaluate artificial intelligence techniques to determine whether they offer a real advantage onboard, identify which diagnostic functions should be performed by the expert systems and which are better left to the procedural software, and work closely with both the hardware and the software developers from the beginning of a project to produce a well designed and thoroughly integrated application.

  3. A hardware approach for histological and histopathological digital image stain normalization.

    Science.gov (United States)

    Şerbănescu, Mircea Sebastian; Pleşea, Iancu Emil

    2015-01-01

    Advances in technology made the migration of pathological diagnosis to digital slides possible. As the need for objectivity and automation emerged, new computer software algorithms were proposed. Computer algorithms demand accurate color and intensity values in order to provide reliable results. The tissue samples undergo several processing steps from histological preparation to digitalization, which cannot be completely standardized. Thus, non-standardized input data generates unreliable output data. In this article, we discuss a new computational normalization algorithm for histopathological stained slides that uses a hardware color marker. The marker is added to the glass slide together with the tissue section, exposed to all the processing steps and altered in the same manner as the biological material of interest, thus becoming a solid color marker for image normalization. The results of the proposed method are numerically and perceptually tested in order to prove the advantages of the method. We conclude that our combined hardware-software technique for staining normalization of digital slides is superior to the existing methods based on only software normalization, and that its implementation will tackle not only the acquisition errors but also the technical errors that may occur during the staining process.
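
    A much-simplified software sketch of the underlying idea (measure the co-processed color marker, derive per-channel correction factors, apply them to the whole image) is given below; the plain per-channel gain is an assumed stand-in for, not a reproduction of, the authors' normalization algorithm.

        # Assumed simplification: normalize a digitized slide against a color
        # marker of known reference value by measuring the marker region and
        # applying per-channel correction factors to the whole image.
        import numpy as np

        def normalize_with_marker(image, marker_mask, marker_reference_rgb):
            """image: HxWx3 float array in [0,1]; marker_mask: HxW boolean array."""
            measured = image[marker_mask].mean(axis=0)            # observed marker color
            gain = np.asarray(marker_reference_rgb) / np.maximum(measured, 1e-6)
            return np.clip(image * gain, 0.0, 1.0)                # per-channel correction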

  4. Summary of multi-core hardware and programming model investigations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2008-05-01

    This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

  5. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2013-01-01

    The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Optimization techniques are featured throughout the text. It covers parallelism in depth with...

  6. Hardware emulation of Memristor based Ternary Content Addressable Memory

    KAUST Repository

    Bahloul, Mohamed A.

    2017-12-13

    MTCAM (Memristor Ternary Content Addressable Memory) is a special purpose storage medium in which data could be retrieved based on the stored content. Using Memristors as the main storage element provides the potential of achieving higher density and more efficient solutions than conventional methods. A key missing item in the validation of such approaches is the wide spread availability of hardware emulation platforms that can provide reliable and repeatable performance statistics. In this paper, we present a hardware MTCAM emulation based on 2-Transistors-2Memristors (2T2M) bit-cell. It builds on a bipolar memristor model with storing and fetching capabilities based on the actual current-voltage behaviour. The proposed design offers a flexible verification environment with quick design revisions, high execution speeds and powerful debugging techniques. The proposed design is modeled using VHDL and prototyped on Xilinx Virtex® FPGA.
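
    The matching rule that a ternary CAM implements can be modeled in a few lines of software; the Python sketch below is a functional stand-in for illustration only and says nothing about the 2T2M memristor bit-cell or the VHDL emulator itself.

        # Tiny software model of ternary content-addressable matching: each
        # stored entry has a value and a care mask, and a search key matches
        # when it agrees with the value on all "care" bits.
        def tcam_lookup(entries, key):
            """entries: list of (value, care_mask) ints; returns indices of matches."""
            return [i for i, (value, care) in enumerate(entries)
                    if (key & care) == (value & care)]

        # Example: 8-bit entries; bit positions with care=0 are "don't care" (X).
        table = [
            (0b1010_0000, 0b1111_0000),   # matches any key starting with 1010
            (0b0000_0111, 0b0000_0111),   # matches any key ending with 111
        ]
        print(tcam_lookup(table, 0b1010_0111))   # -> [0, 1]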

  7. Advances in neuromorphic hardware exploiting emerging nanoscale devices

    CERN Document Server

    2017-01-01

    This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-Nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as: spike-time-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM) and early adoptions of restricted Boltzmann machines (RBM) to name a few. The contributions discuss system-level power/energy/parasitic trade-offs, and complex real-world applications. The book is suited for both advanced researchers and students interested in the field.

  8. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for nonmating applications such as sensors and instruments evaluations requiring proximity or short range motion operations. The motion base is a hydraulic powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for the NASA Docking System testing and has been also used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  9. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for energy efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hard- and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black-boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows testing of the quality of the neuron model implementation, and explains significant deviations from the expected reference output.
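
    The reference computation of such a binary (Willshaw-style) associative memory is compact enough to show directly; the NumPy sketch below stores patterns by OR-ing outer products into a binary weight matrix and recalls them by thresholding, which is the non-spiking reference behaviour rather than any particular neuromorphic-hardware implementation.

        # Binary associative memory: clipped Hebbian storage and threshold recall.
        import numpy as np

        def store(pairs, n_in, n_out):
            W = np.zeros((n_in, n_out), dtype=bool)
            for x, y in pairs:                       # x, y: binary vectors
                W |= np.outer(x, y).astype(bool)     # clipped Hebbian learning
            return W

        def recall(W, x):
            sums = x.astype(int) @ W.astype(int)     # dendritic potentials
            return (sums >= x.sum()).astype(int)     # threshold = number of active input bits

        x = np.array([1, 0, 1, 0, 1, 0, 0, 0]); y = np.array([0, 1, 1, 0, 0, 0])
        W = store([(x, y)], n_in=8, n_out=6)
        print(recall(W, x))                          # -> [0 1 1 0 0 0]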

  10. Fast and Reliable Mouse Picking Using Graphics Hardware

    Directory of Open Access Journals (Sweden)

    Hanli Zhao

    2009-01-01

    Full Text Available Mouse picking is the most commonly used intuitive operation to interact with 3D scenes in a variety of 3D graphics applications. High performance for such an operation is necessary in order to provide users with fast responses. This paper proposes a fast and reliable mouse picking algorithm using graphics hardware for 3D triangular scenes. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space based ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying the hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates the picking efficiency. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
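
    For context, one standard object-space ray/triangle test of the kind referred to above is the Moller-Trumbore algorithm; the Python version below is a CPU reference for illustration and is not claimed to be the shader code used in the paper.

        # Moller-Trumbore ray/triangle intersection (CPU reference sketch).
        import numpy as np

        def ray_triangle_intersect(orig, dirn, v0, v1, v2, eps=1e-8):
            """Return distance t along the ray, or None if there is no hit."""
            e1, e2 = v1 - v0, v2 - v0
            pvec = np.cross(dirn, e2)
            det = np.dot(e1, pvec)
            if abs(det) < eps:                 # ray parallel to triangle plane
                return None
            inv_det = 1.0 / det
            tvec = orig - v0
            u = np.dot(tvec, pvec) * inv_det
            if u < 0.0 or u > 1.0:
                return None
            qvec = np.cross(tvec, e1)
            v = np.dot(dirn, qvec) * inv_det
            if v < 0.0 or u + v > 1.0:
                return None
            t = np.dot(e2, qvec) * inv_det
            return t if t > eps else None      # hit in front of the ray origin

        v0, v1, v2 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
        print(ray_triangle_intersect(np.array([0.2, 0.2, 1.0]),
                                     np.array([0., 0, -1.0]), v0, v1, v2))   # -> 1.0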

  11. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs. This software (APRON is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  12. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to further performance increase for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSPs) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific for our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...

  13. Parallel random number generator for inexpensive configurable hardware cells

    Science.gov (United States)

    Ackermann, J.; Tangen, U.; Bödekker, B.; Breyer, J.; Stoll, E.; McCaskill, J. S.

    2001-11-01

    A new random number generator (RNG) adapted to parallel processors has been created. This RNG can be implemented with inexpensive hardware cells. The correlation between neighboring cells is suppressed with smart connections. With such connection structures, sequences of pseudo-random numbers are produced. Numerical tests including a self-avoiding random walk test and the simulation of the order parameter and energy of the 2D Ising model give no evidence for correlation in the pseudo-random sequences. Because the new random number generator has suppressed the correlation between neighboring cells which is usually observed in cellular automaton implementations, it is applicable for extended time simulations. It gives an immense speed-up factor if implemented directly in configurable hardware, and has recently been used for long time simulations of spatially resolved molecular evolution.
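
    A loose software illustration of the general idea (an array of simple cells updated from deliberately non-adjacent cells, one pseudo-random bit per cell per step) is sketched below; the update rule, connection distance and sizes are invented for the example and do not reproduce the generator described in the record.

        # Cellular-automaton-style bit generator with long-range "smart" links
        # instead of nearest-neighbour connections (illustration only).
        import numpy as np

        def ca_rng(n_cells=64, steps=1000, skip=17, seed=1):
            state = np.zeros(n_cells, dtype=np.uint8)
            state[seed % n_cells] = 1                      # single-bit seed
            out = np.empty((steps, n_cells), dtype=np.uint8)
            for t in range(steps):
                left  = np.roll(state, skip)               # non-adjacent neighbours
                right = np.roll(state, -skip)
                state = left ^ (state | right)             # rule-30-style update
                out[t] = state
            return out

        bits = ca_rng()
        print(bits.mean())    # roughly 0.5 for a usable bit stream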

  14. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicated that this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, dropping to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.

  15. Outline of a fast hardware implementation of Winograd's DFT algorithm

    Science.gov (United States)

    Zohar, S.

    1980-01-01

    The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are being operated on simultaneously, each batch undergoing one of 5 processing phases.

  16. Implementation of a Hardware Ray Tracer for digital design education

    OpenAIRE

    Eggen, Jonas Agentoft

    2017-01-01

    Digital design is a large and complex field of electronic engineering, and learning digital design requires maturing over time. The learning process can be facilitated by making use of a single learning platform throughout a whole course. A learning platform built around a hardware ray tracer can be used in illustrating many important aspects of digital design. A unified learning platform allows students to delve into intricate details of digital design while still seeing the bigger pictur...

  17. IDEAS and App Development Internship in Hardware and Software Design

    Science.gov (United States)

    Alrayes, Rabab D.

    2016-01-01

    In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.

  18. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propo...

  19. BCI meeting 2005--workshop on technology: hardware and software.

    Science.gov (United States)

    Cincotti, Febo; Bianchi, Luigi; Birch, Gary; Guger, Christoph; Mellinger, Jürgen; Scherer, Reinhold; Schmidt, Robert N; Yáñez Suárez, Oscar; Schalk, Gerwin

    2006-06-01

    This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop to review and evaluate the current state of BCI-related hardware and software. Technical requirements and current technologies, standardization procedures and future trends are covered. The main conclusion was recognition of the need to focus technical requirements on the users' needs and the need for consistent standards in BCI research.

  20. 10161 Executive Summary -- Decision Procedures in Software, Hardware and Bioware

    OpenAIRE

    Bjorner, Nikolaj; Nieuwenhuis, Robert; Veith, Helmut; Voronkov, Andrei

    2010-01-01

    The main goal of the seminar Decision Procedures in Soft, Hard and Bio-ware was to bring together renowned as well as young aspiring researchers from two groups. The first group was formed by researchers who develop both theory and efficient implementations of decision procedures. The second group comprised researchers from application areas such as program analysis and testing, crypto-analysis, hardware verification, industrial planning and scheduling, and bio-inform...

  1. Toward Composable Hardware Agnostic Communications Blocks Lessons Learned

    Science.gov (United States)

    2016-11-01

    • Processing through a common threading, scheduling, IPC, and memory management approach • Hardware-specific optimization abstraction • Flow-based block composition: each block may receive multiple inputs and generate multiple outputs to different blocks, enabling flow-based usage • ...with a high-level block complexity analysis; assumptions such as infinite memory / all accesses in L1 cache, hand assembly (no function call overhead/stack

  2. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested against known cryptanalysis attacks and for different block sizes. When implemented on Virtex-IV, system performance showed high throughput and small area utilization. Passing all tests successfully, our system proved to be secure with all block sizes. © 2011 IEEE.
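
    As a toy illustration of the general chaos-based approach (a chaotic map driving a keystream that is combined with each block), consider the sketch below; it is not the cipher realized in the paper and offers no real security.

        # Toy chaos-based encryption: a logistic-map keystream XOR-ed with each
        # plaintext block. Illustration of the concept only; not secure and not
        # the system from the paper.
        def logistic_keystream(n_bytes, x0=0.41, r=3.99):
            x, out = x0, bytearray()
            for _ in range(n_bytes):
                x = r * x * (1.0 - x)               # logistic map iteration
                out.append(int(x * 256) & 0xFF)     # quantize state to one byte
            return bytes(out)

        def encrypt_block(block, x0):
            ks = logistic_keystream(len(block), x0)
            return bytes(p ^ k for p, k in zip(block, ks))

        cipher = encrypt_block(b"16-byte image blk", x0=0.2718281828)
        plain  = encrypt_block(cipher, x0=0.2718281828)   # XOR is its own inverse
        print(plain)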

  3. Introduction to hardware for nuclear medicine data systems

    International Nuclear Information System (INIS)

    Erickson, J.J.

    1976-01-01

    Hardware included in a computer-based data system for nuclear medicine imaging studies is discussed. The report is written for the newcomer to computer collection and analysis. Emphasis is placed on the effect of the various portions of the system on the final application in the nuclear medicine clinic. While an attempt is made to familiarize the user with some of the terms he will encounter, no attempt is made to make him a computer expert. 1 figure, 2 tables

  4. Hardware of automation systems of isotope mass spectrometers

    International Nuclear Information System (INIS)

    Manojlov, V.V.; Meleshkin, A.S.; Novikov, L.V.; Kornil'ev, S.O.; Voronin, B.M.

    1997-01-01

    The modernized hardware of isotope mass spectrometers is described. The modern control systems for the mass spectrometers are implemented on the basis of IBM PC/AT computers. Versions of mass spectrometer control subsystems operating through a standard bus and through a digital-to-analog converter are considered. The characteristics of an electrometric amplifier and interface cards developed for the modernized automation systems of the isotope mass spectrometers are presented.

  5. Low extractable wipers for cleaning space flight hardware

    Science.gov (United States)

    Tijerina, Veronica; Gross, Frederick C.

    1986-01-01

    There is a need for low extractable wipers for solvent cleaning of space flight hardware. Soxhlet extraction is the method utilized today by most NASA subcontractors, but there may be alternate methods to achieve the same results. The need for low non-volatile residue materials, the history of Soxhlet extraction, and proposed alternate methods are discussed, as well as different types of wipers, test methods, and current standards.

  6. Corrosion Testing of Stainless Steel Fuel Cell Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, M.S.; Zawodzinski, C.; Gottesfeld, S.

    1998-11-01

    Metal hardware is gaining increasing interest in polymer electrolyte fuel cell (PEFC) development as a possible alternative to machined graphite hardware because of its potential for low-cost manufacturing combined with its intrinsic high conductivity, minimal permeability and advantageous mechanical properties. A major barrier to more widespread use of metal hardware has been the susceptibility of various metals to corrosion. Few pure metals can withstand the relatively aggressive environment of a fuel cell and thus the choices for hardware are quite limited. Precious metals such as platinum or gold are prohibitively expensive and so tend to be utilized as coatings on inexpensive substrates such as aluminum or stainless steel. The main challenge with coatings has been to achieve pin-hole free surfaces that will remain so after years of use. Titanium has been used to some extent and though it is very corrosion-resistant, it is also relatively expensive and often still requires some manner of surface coating to prevent the formation of a poorly conducting oxide layer. In contrast, metal alloys may hold promise as potentially low-cost, corrosion-resistant materials for bipolar plates. The dozens of commercially available stainless steel and nickel based alloys have been specifically formulated to offer a particular advantage depending upon their application. In the case of austenitic stainless steels, for example, 316 SS contains molybdenum and a higher chromium content than its more common counterpart, 304 SS, that makes it more noble and increases its corrosion resistance. Likewise, 316L SS contains less carbon than 316 SS to make it easier to weld. A number of promising corrosion-resistant, highly noble alloys such as Hastelloy™ or Duplex™ (a stainless steel developed for seawater service) are available commercially, but are expensive and difficult to obtain in various forms (i.e. wire screen, foil, etc.) or in small amounts for R and D

  7. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  8. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    International Nuclear Information System (INIS)

    Church, Jennifer A.; Kashgarian, Michaele; Wooddy, Todd; Haslett, Bob; Torretto, Phil

    2016-01-01

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  9. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    Energy Technology Data Exchange (ETDEWEB)

    Church, Jennifer A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wooddy, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslett, Bob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Torretto, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-15

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  10. Treatment alternatives for non-fuel-bearing hardware

    Energy Technology Data Exchange (ETDEWEB)

    Ross, W.A.; Clark, L.L.; Oma, K.H.

    1987-01-01

    This evaluation compared four alternatives for the treatment or processing of non-fuel bearing hardware (NFBH) to reduce its volume and prepare it for disposal. These treatment alternatives are: shredding; shredding and low pressure compaction; shredding and supercompaction; and melting. These alternatives are compared on the basis of system costs, waste form characteristics, and process considerations. The study recommends that melting and supercompaction alternatives be further considered and that additional testing be conducted for these two alternatives.

  11. Hardware Trojans - Prevention, Detection, Countermeasures (A Literature Review)

    Science.gov (United States)

    2011-07-01

    manufacturing process in-house is infeasible for all but the smallest Application Specific Integrated Circuit (ASIC) designs. Our reliance on the globalisation of the electronics industry is critical for developing both our commercial and ... on the detection mechanism used, a Hardware Trojan may be either definitively identified, or a statistical measure may be provided indicating the

  12. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    Full Text Available This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication systems, including orthogonal frequency-division multiplexing (OFDM), single-carrier cyclic-prefix (SCCP) system, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after block despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively, therefore hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.
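
    The cyclic-prefix mechanism common to all of the listed systems can be demonstrated in a few lines of NumPy, as sketched below; the symbol size, CP length and channel taps are made-up example values, and the sketch covers only the one-tap frequency-domain equalization step, not the reconfigurable transceiver itself.

        # Cyclic-prefix transmission and one-tap equalization, end to end:
        # prepend a CP, convolve with a short channel, strip the CP, FFT, and
        # divide by the channel frequency response. All parameters are examples.
        import numpy as np

        N, CP = 64, 16
        h = np.array([1.0, 0.5, 0.2])                        # example channel impulse response

        tx_freq = np.random.choice([-1.0, 1.0], size=N)      # BPSK symbols on N subcarriers
        tx_time = np.fft.ifft(tx_freq)
        tx_cp   = np.concatenate([tx_time[-CP:], tx_time])   # prepend cyclic prefix

        rx_cp   = np.convolve(tx_cp, h)[:N + CP]             # linear channel convolution
        rx_time = rx_cp[CP:CP + N]                           # strip the cyclic prefix
        rx_freq = np.fft.fft(rx_time)

        H = np.fft.fft(h, N)                                 # channel frequency response
        equalized = rx_freq / H                              # one-tap per-subcarrier equalizer
        print(np.allclose(np.sign(equalized.real), tx_freq)) # True: symbols recovered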

  13. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  14. Hardware Design Improvements to the Major Constituent Analyzer

    Science.gov (United States)

    Combs, Scott; Schwietert, Daniel; Anaya, Marcial; DeWolf, Shannon; Merrill, Dave; Gardner, Ben D.; Thoresen, Souzan; Granahan, John; Belcher, Paul; Matty, Chris

    2011-01-01

    The Major Constituent Analyzer (MCA) onboard the International Space Station (ISS) is designed to monitor the major constituents of the ISS's internal atmosphere. This mass spectrometer based system is an integral part of the Environmental Control and Life Support System (ECLSS) and is a primary tool for the management of ISS atmosphere composition. As a part of NASA Change Request CR10773A, several alterations to the hardware have been made to accommodate improved MCA logistics. First, the ORU 08 verification gas assembly has been modified to allow the verification gas cylinder to be installed on orbit. The verification gas is an essential MCA consumable that requires periodic replenishment. Designing the cylinder for subassembly transport reduces the size and weight of the maintained item for launch. The redesign of the ORU 08 assembly includes a redesigned housing, cylinder mounting apparatus, and pneumatic connection. The second hardware change is a redesigned wiring harness for the ORU 02 analyzer. The ORU 02 electrical connector interface was damaged in a previous on-orbit installation, and this necessitated the development of a temporary fix while a more permanent solution was developed. The new wiring harness design includes flexible cable as well as indexing fasteners and guide-pins, and provides better accessibility during the on-orbit maintenance operation. This presentation will describe the hardware improvements being implemented for MCA as well as the expected improvement to logistics and maintenance.

  15. Automation Hardware & Software for the STELLA Robotic Telescope

    Science.gov (United States)

    Weber, M.; Granzer, Th.; Strassmeier, K. G.

    The STELLA telescope (a joint project of the AIP, Hamburger Sternwarte and the IAC) is to operate in fully robotic mode, with no human interaction necessary for regular operation. Thus, the hardware must be kept as simple as possible to avoid unnecessary failures, and the environmental conditions must be monitored accurately to protect the telescope in case of bad weather. All computers are standard PCs running Linux, and communication with specialized hardware is done via a RS232/RS485 bus system. The high-level (Java-based) control software consists of independent modules to ease bug-tracking and to allow the system to be extended without changing existing modules. Any command cycle consists of three messages, the actual command sent from the central node to the operating device, an immediate acknowledge, and a final done message, both sent back from the receiving device to the central node. This reply-splitting allows a direct distinction between communication problems (no acknowledge message) and hardware problems (no or a delayed done message). To avoid bug-prone packing of all the sensor-analyzing software into a single package, each sensor-reading and interaction with other sensors is done within a self-contained thread. Weather-decision making is therefore totally decoupled from the core control software to avoid dead-locks in the core module.
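
    The three-message command cycle described above can be sketched schematically as follows; the Python code uses invented names and timeout values and merely illustrates how a missing acknowledge is classified as a communication problem while a missing or late done message is classified as a hardware problem (the actual STELLA software is Java-based).

        # Schematic command / acknowledge / done cycle with split failure modes.
        import queue

        ACK_TIMEOUT, DONE_TIMEOUT = 1.0, 30.0     # seconds; illustrative values

        def send_command(device_queue, reply_queue, command):
            device_queue.put(command)
            try:
                reply = reply_queue.get(timeout=ACK_TIMEOUT)
            except queue.Empty:
                raise RuntimeError("no acknowledge: communication problem")
            if reply != ("ack", command):
                raise RuntimeError(f"unexpected reply {reply!r}")
            try:
                reply = reply_queue.get(timeout=DONE_TIMEOUT)
            except queue.Empty:
                raise RuntimeError("no done message: hardware problem")
            if reply != ("done", command):
                raise RuntimeError(f"command failed: {reply!r}")
            return True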

  16. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

    Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using the combination of TPC-H queries. These stress-testing scenarios serve two purposes. They provide the boundary resource threshold verification to the first control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which changes the utilization threshold. Secondly, they provide a platform for response-time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.

  17. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories (SNL's) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  18. Hardware implementation of on-chip learning using reconfigurable FPGAs

    International Nuclear Information System (INIS)

    Kelash, H.M.; Sorour, H.S; Mahmoud, I.I.; Zaki, M; Haggag, S.S.

    2009-01-01

    The multilayer perceptron (MLP) is a neural network model that is widely applied to solving diverse problems. A supervised training is necessary before the use of the neural network. A highly popular learning algorithm called back-propagation is used to train this neural network model. Once trained, the MLP can be used to solve classification problems. An interesting method to increase the performance of the model is to use hardware implementations, since hardware can perform the arithmetical operations much faster than software. In this paper, a design and implementation of the sequential mode (stochastic mode) of the back-propagation algorithm with on-chip learning using field programmable gate arrays (FPGAs) is presented, and a pipelined adaptation of the on-line back-propagation (BP) algorithm is shown. The hardware implementation of the forward stage, backward stage and weight-update stage of the back-propagation algorithm is also presented. This implementation is based on a SIMD parallel architecture of the forward propagation. The diagnosis of accidents at the multi-purpose research reactor of Egypt is used to test the proposed system
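    For reference, the three stages pipelined by such a design (forward, backward, weight update) can be written as a minimal NumPy sketch of one stochastic-mode training cycle. This is only an illustration of the algorithm, not the FPGA implementation; the layer sizes and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 3))   # hidden-layer weights (assumed sizes)
W2 = rng.normal(scale=0.5, size=(3, 2))   # output-layer weights
eta = 0.1                                 # learning rate (assumed)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_sample(x, target):
    """One stochastic-mode cycle: forward stage, backward stage, weight update."""
    global W1, W2
    # forward stage
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    # backward stage (error back-propagation)
    delta_out = (y - target) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # weight-update stage
    W2 -= eta * np.outer(h, delta_out)
    W1 -= eta * np.outer(x, delta_hid)
    return 0.5 * np.sum((y - target) ** 2)   # squared error for this sample

# one training sample (e.g., a sensor pattern and its class label)
print(train_sample(np.array([0.2, 0.7, 0.1, 0.4]), np.array([1.0, 0.0])))
```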

  19. Detection of Low-order Curves in Images using Biologically-plausible Hardware

    Science.gov (United States)

    2012-09-29

    (The record's abstract text is not available; the harvested excerpt is a fragment of C source code which, for each of a configurable number of orientations, precomputes the angle theta = f·π/(numorientations·df) together with its cosine and sine, and then iterates over the rows and columns of an image buffer, consistent with the oriented low-order-curve detection described by the title.)

  20. Contamination Control and Hardware Processing Solutions at Marshall Space Flight Center

    Science.gov (United States)

    Burns, DeWitt H.; Hampton, Tammy; Huey, LaQuieta; Mitchell, Mark; Norwood, Joey; Lowrey, Nikki

    2012-01-01

    The Contamination Control Team of Marshall Space Flight Center's Materials and Processes Laboratory supports many Programs/Projects that design, manufacture, and test a wide range of hardware types that are sensitive to contamination and foreign object damage (FOD). Examples where contamination/FOD concerns arise include sensitive structural bondline failure, critical orifice blockage, seal leakage, and reactive fluid compatibility (liquid oxygen, hydrazine) as well as performance degradation of sensitive instruments or spacecraft surfaces such as optical elements and thermal control systems. During the design phase, determination of the sensitivity of a hardware system to different types or levels of contamination/FOD is essential. A contamination control and FOD control plan must then be developed and implemented through all phases of ground processing and, sometimes, on-orbit use, recovery, and refurbishment. Implementation of proper controls prevents cost and schedule impacts due to hardware damage or rework and helps assure mission success. Current capabilities are being used to support recent and on-going activities for multiple Mission Directorates/Programs such as the International Space Station (ISS), James Webb Space Telescope (JWST), Space Launch System (SLS) elements (tanks, engines, booster), etc. The team also advances Green Technology initiatives and addresses materials obsolescence issues for NASA and external customers, most notably in the area of solvent replacement (e.g., aqueous cleaners containing hexavalent chrome, ozone-depleting chemicals (CFCs and HCFCs), suspect carcinogens). The team evaluates new surface cleanliness inspection and cleaning technologies (e.g., plasma cleaning), and maintains databases for processing support materials as well as outgassing and optical compatibility test results for spaceflight environments.

  1. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    Science.gov (United States)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired by the neuro/cognitive sciences. Consequently, in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology (CMOL), and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm² (TPM) obtained for CMOL-based architectures is 32 to 40 times better than the TPM for a CMOS-based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC

  2. Next generation hyper-scale software and hardware systems for big data analytics

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Building on foundational technologies such as many-core systems, non-volatile memories and photonic interconnects, we describe some current technologies and future research to create real-time, big data analytics IT infrastructure. We will also briefly describe some of our biologically-inspired software and hardware architectures for creating radically new hyper-scale cognitive computing systems. About the speaker: Rich Friedrich is the director of Strategic Innovation and Research Services (SIRS) at HP Labs. In this strategic role, he is responsible for research investments in nano-technology, exascale computing, cyber security, information management, cloud computing, immersive interaction, sustainability, social computing and commercial digital printing. Rich's philosophy is to fuse strategy and inspiration to create compelling capabilities for next generation information devices, systems and services. Using essential insights gained from the metaphysics of innovation, he effectively leads ...

  3. Biological radioprotector

    International Nuclear Information System (INIS)

    Stefanescu, Ioan; Titescu, Gheorghe; Tamaian, Radu; Haulica, Ion; Bild, Walther

    2002-01-01

    According to the patent description, the biological radioprotector is deuterium-depleted water (DDW), produced by vacuum distillation with an isotopic content lower than the natural value. It is available as such or in a mixture with natural water and carbon dioxide. It can be used for preventing and reducing the effects of ionizing radiation upon human or animal organisms exposed therapeutically, professionally or accidentally to radiation. The most significant advantage of using DDW as a biological radioprotector results from its way of administration. Indeed, none of the radioprotectors currently in use can be administered orally, which reduces patients' compliance with prophylactic administration. The biological radioprotector is an innocuous product obtained from natural water, which can be administered as a food additive instead of drinking water. The dose modification factor is, according to initial estimates, around 1.9, which is remarkable when one takes into account that the product is toxicity-free and side-effect-free and can be administered prophylactically as a food additive. A net radioprotective action of the deuterium depletion was evidenced experimentally in laboratory animals (rats) hydrated with DDW of 30 ppm D/(D+H) concentration as compared with normally hydrated control animals. Knowing the effects of irradiation and the mechanisms of acute radiation disease, as well as the effects of administration of radiomimetic chemicals upon cell lines with fast cell division, it appears that the effects of administering DDW result from stimulation of the immune system. In conclusion, the biological radioprotector DDW presents the following advantages: - it is obtained from natural products without toxicity; - it is easily administered as a food additive, replacing drinking water; - besides radioprotective effects, the product also has immunostimulative and antitumoral effects

  4. Development of a hardware-based AC microgrid for AC stability assessment

    Science.gov (United States)

    Swanson, Robert R.

    As more power-electronic-based devices enable the development of high-bandwidth AC microgrids, the topic of microgrid power distribution stability has become of increased interest. Recently, researchers have proposed a relatively straightforward method to assess the stability of AC systems based upon the time constants of sources, the net bus capacitance, and the rate limits of sources. In this research, the focus has been to develop a hardware test system to evaluate AC system stability. As a first step, a time-domain model of a two-converter microgrid was established, in which a three-phase inverter acts as a power source and an active rectifier serves as an adjustable constant-power AC load. The constant-power load can be utilized to create rapid power-flow transients to the generating system. As a second step, the inverter and active rectifier were designed using a Smart Power Module IGBT for switching and an embedded microcontroller as a processor for algorithm implementation. The inverter and active rectifier were designed to operate simultaneously using a synchronization signal to ensure each respective local controller operates in a common reference frame. Finally, the physical system was created and initial testing performed to validate the hardware functionality as a variable-amplitude and variable-frequency AC system.

  5. Analysis of Systems Hardware Flown on LDEF-Results of the Systems Special Investigation Group

    National Research Council Canada - National Science Library

    Dursch, H

    1992-01-01

    .... The Systems Special Investigation Group (Systems SIG) was formed to investigate the effects of the long term exposure to LEO on systems related hardware and to coordinate and collate all systems analysis of LDEF hardware...

  6. NCERA-101 STATION REPORT - KENNEDY SPACE CENTER: Large Plant Growth Hardware for the International Space Station

    Science.gov (United States)

    Massa, Gioia D.

    2013-01-01

    This is the station report for the national controlled environments meeting. Topics to be discussed will include the Veggie and Advanced Plant Habitat ISS hardware. The goal is to introduce this hardware to a potential user community.

  7. Parameter Validation for Evaluation of Spaceflight Hardware Reusability

    Science.gov (United States)

    Childress-Thompson, Rhonda; Dale, Thomas L.; Farrington, Phillip

    2017-01-01

    Within recent years, there has been an influx of companies around the world pursuing reusable systems for space flight. Much like NASA, many of these new entrants are learning that reusable systems are complex and difficult to achieve. For instance, in its first attempts to retrieve spaceflight hardware for future reuse, SpaceX unsuccessfully tried to land on a barge at sea, resulting in a crash-landing. As this new generation of launch developers continues to develop concepts for reusable systems, having a systematic approach for determining the most effective systems for reuse is paramount. Three factors that influence the effective implementation of reusability are cost, operability and reliability. Therefore, a method that integrates these factors into the decision-making process must be utilized to adequately determine whether hardware used in space flight should be reused or discarded. Previous research has identified seven features that contribute to the successful implementation of reusability for space flight applications, defined reusability for space flight applications, highlighted the importance of reusability, and presented areas that hinder successful implementation of reusability. The next step is to ensure that the list of reusability parameters previously identified is comprehensive, and that any duplication is either removed or consolidated. The characteristics that qualify the seven features as good indicators of successful reuse are identified and then assessed using multi-attribute decision making. Next, discriminators in the form of metrics or descriptors are assigned to each parameter. This paper explains the approach used to evaluate these parameters, define the Measures of Effectiveness (MOE) for reusability, and quantify these parameters. Using the MOEs, each parameter is assessed for its contribution to the reusability of the hardware. Potential data sources needed to validate the approach will be identified.
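    One common multi-attribute decision-making approach consistent with the above is a weighted-sum Measure of Effectiveness over normalized parameter scores. The following sketch is purely illustrative: the parameter names, weights and scores are assumptions, not values from the paper.

```python
# Hypothetical reusability parameters scored 0-1, with weights summing to 1.
weights = {"cost": 0.4, "operability": 0.3, "reliability": 0.3}

def measure_of_effectiveness(scores, weights):
    """Simple additive weighting: weighted-sum MOE over normalized parameter scores."""
    return sum(weights[name] * scores[name] for name in weights)

booster_stage = {"cost": 0.7, "operability": 0.5, "reliability": 0.9}
print(round(measure_of_effectiveness(booster_stage, weights), 3))  # 0.70
```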

  8. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1996-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. We must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff and with vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases; all six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. (author)

  9. Use of Hardware Battery Drill in Orthopedic Surgery.

    Science.gov (United States)

    Satish, Bhava R J; Shahdi, Masood; Ramarao, Duddupudi; Ranganadham, Atmakuri V; Kalamegam, Sundaresan

    2017-03-01

    Among the power drills (electrical/pneumatic/battery) used in orthopedic surgery, the battery drill has several advantages. Surgeons in low-resource settings could not routinely use orthopedic battery drills (OBD) due to the prohibitive cost of good drills or the poor quality of other drills. A "hardware" or engineering battery drill (HBD) is a viable alternative to the OBD. The HBD is easy to procure, rugged in nature, easy to maintain, durable, easily serviceable and 70 to 75 times cheaper than a standard high-end OBD. We consider the HBD one of the most cost-effective pieces of equipment in orthopedic operation theatres.

  10. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma that is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout the fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications that require transporting large amounts of data (TB) at high transfer rates (Gb/s) and to ensure high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store it for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or from the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken and changes implemented. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and under maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency

  11. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering, but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computer systems. Books on software engineering typically portray software as if it exists in a vacuum, with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  12. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    to a particular hardware platform, is a risky investment. To make this problem worse, the scientists that have the required field expertise to write the algorithms are not formally trained programmers. This usually leads to scientists writing buggy, inefficient and hard-to-maintain programs. Occasionally, a skilled programmer is hired, which increases the program quality, but also increases the cost of the program. This extra link also introduces longer development iterations and may introduce other errors, as the programmer is not necessarily an expert in the field. And neither approach solves the issue

  13. System for processing an encrypted instruction stream in hardware

    Science.gov (United States)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    2016-04-12

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
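    A toy software model of the fetch path described above may help: instruction fetches pass through a decryptor inside the CPU, while data fetches bypass it. The XOR keystream, class names and byte values are placeholders for illustration only and do not reflect the patented design or any real cipher.

```python
import itertools

KEY = bytes([0x5A, 0xC3, 0x3C, 0xA5])      # illustrative key, not a real cipher

def xor_transform(block, key=KEY):
    """XOR keystream stand-in for encryption/decryption (it is its own inverse)."""
    return bytes(b ^ k for b, k in zip(block, itertools.cycle(key)))

class MainMemory:
    def __init__(self, encrypted_code, plain_data):
        self.code = encrypted_code          # encrypted instruction stream
        self.data = plain_data              # unencrypted data

class CPU:
    def __init__(self, memory):
        self.memory = memory
    def fetch(self, address, length, is_instruction):
        if is_instruction:                  # instruction fetch signal -> decryptor engaged
            return xor_transform(self.memory.code[address:address + length])
        return self.memory.data[address:address + length]   # data passes through untouched

plain_code = b"\x90\x90\xc3"                       # pretend machine code
mem = MainMemory(xor_transform(plain_code), b"hello")
cpu = CPU(mem)
assert cpu.fetch(0, 3, is_instruction=True) == plain_code
assert cpu.fetch(0, 5, is_instruction=False) == b"hello"
```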

  14. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self-contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) has been built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  15. Surface moisture measurement system hardware acceptance test procedure

    International Nuclear Information System (INIS)

    Ritter, G.A.

    1996-01-01

    The purpose of this acceptance test procedure is to verify that the mechanical and electrical features of the Surface Moisture Measurement System are operating as designed and that the unit is ready for field service. This procedure will be used in conjunction with a software acceptance test procedure, which addresses testing of software and electrical features not addressed in this document. Hardware testing will be performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. These systems were developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks

  16. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern recognition and track fitting, artificial retina and Hough transform methods have been introduced in the field, which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation of the retina algorithm based on a floating-point core. Detailed measurements with this algorithm are investigated. Retina performance and the capabilities of the FPGA are discussed, along with perspectives for further optimization and applications.
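    To illustrate the underlying pattern-recognition idea (not the FPGA design in the note), the following sketch accumulates Hough-transform votes in (theta, rho) space for 2D hits lying on straight lines; binning choices and the toy hits are assumptions.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=400, rho_max=10.0):
    """Accumulate votes in (theta, rho) space for 2D hits lying on straight lines."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        # rho = x*cos(theta) + y*sin(theta) for every candidate angle
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        rho_bin = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (rho_bin >= 0) & (rho_bin < n_rho)
        acc[np.arange(n_theta)[ok], rho_bin[ok]] += 1
    return acc, thetas

# three collinear hits on the line y = x + 1 vote for the same (theta, rho) cell
hits = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
acc, thetas = hough_lines(hits)
t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), np.degrees(thetas[t_idx]))   # 3 votes, near theta = 135 degrees
```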

  17. Hardware and first results of TUNKA-HiSCORE

    International Nuclear Information System (INIS)

    Kunnas, M.; Brückner, M.; Budnev, N.; Büker, M.; Chvalaev, O.; Dyachok, A.; Einhaus, U.; Epimakhov, S.; Gress, O.; Hampf, D.; Horns, D.; Ivanova, A.; Konstantinov, E.; Korosteleva, E.; Kuzmichev, L.; Lubsandorzhiev, B.; Mirgazov, R.; Monkhoev, R.; Nachtigall, R.; Pakhorukov, A.

    2014-01-01

    As a non-imaging wide-angle Cherenkov air shower detector array with an area of up to 100 km², the HiSCORE (Hundred*i Square km Cosmic ORigin Explorer) detector concept allows measurements of gamma rays and cosmic rays in an energy range of 10 TeV up to 1 EeV. In the framework of the Tunka-HiSCORE project we have started measurements with a small prototype array and plan to build an engineering array (1 km²) on the site of the Tunka experiment in Siberia. The first results and the most important hardware components are presented here.

  18. UAV payload and mission control hardware/software architecture

    OpenAIRE

    Pastor Llorens, Enric; López Rubio, Juan; Royo Chic, Pablo

    2007-01-01

    This paper presents an embedded hardware/software architecture specially designed to be applied to mini/micro Unmanned Aerial Vehicles (UAVs). A UAV is a low-cost, non-piloted airplane designed to operate in D-cube (Dangerous-Dirty-Dull) situations [8]. Many types of UAVs exist today; however, with the advent of civil UAV applications, the class of mini/micro UAVs is emerging as a valid option in a commercial scenario. This type of UAV shares limitations with most computer embedded systems: lim...

  19. Optimizing Investment Strategies with the Reconfigurable Hardware Platform RIVYERA

    Directory of Open Access Journals (Sweden)

    Christoph Starke

    2012-01-01

    Full Text Available The hardware structure of a processing element used for the optimization of an investment strategy for financial markets is presented. It is shown how this processing element can be implemented multiple times on the massively parallel FPGA machine RIVYERA. This leads to a speedup by a factor of about 17,000 in comparison to a single high-performance PC, while saving more than 99% of the consumed energy. Furthermore, it is shown for a specific security and different time periods that the optimized investment strategy delivers an outperformance of between 2 and 14 percent relative to a buy-and-hold strategy.

  20. Technology Corner: Dating of Electronic Hardware for Prior Art Investigations

    Directory of Open Access Journals (Sweden)

    Sellam Ismail

    2012-03-01

    Full Text Available In many legal matters, specifically patent litigation, determining and authenticating the date of computer hardware or other electronic products or components is often key to establishing the item as legitimate evidence of prior art. Such evidence can be used to buttress claims of technologies available or of events transpiring by or at a particular date. In 1945, the Electronics Industry Association published a standard, EIA 476-A, standardized in the reference Source and Date Code Marking (Electronic Industries Association, 1988).

  1. Hardware Architectures for the Orthogonal and Biorthogonal Wavelet Transform

    Directory of Open Access Journals (Sweden)

    G. Knowles

    2002-01-01

    Full Text Available In this note, optimal hardware architectures for the orthogonal and biorthogonal wavelet transforms are presented. The approach used here is not the standard lifting method, but takes advantage of the symmetries inherent in the coefficients of the transforms and the decimation/interpolation operators. The design is based on a highly optimized datapath, which seamlessly integrates both orthogonal and biorthogonal transforms, data extension at the edges and the forward and inverse transforms. The datapath design could be further optimized for speed or low power. The datapath is controlled by a small fast control unit which is hard programmed according to the wavelet or wavelets required by the application.
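    To make concrete what a decimating wavelet filter-bank datapath computes, here is a one-level orthogonal (Haar) decomposition and its perfect reconstruction. This is only a minimal illustration; the architectures in the note target longer orthogonal and biorthogonal filters, edge extension and the inverse transform in hardware.

```python
import numpy as np

def haar_forward(signal):
    """One level of the orthogonal Haar transform: lowpass/highpass then decimate by 2."""
    s = np.asarray(signal, dtype=np.float64)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # scaling (lowpass) coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # wavelet (highpass) coefficients
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from one decomposition level."""
    s = np.empty(2 * approx.size)
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_forward(x)
assert np.allclose(haar_inverse(a, d), x)   # orthogonal transform reconstructs exactly
print(a, d)
```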

  2. J-2X Upper Stage Engine: Hardware and Testing 2009

    Science.gov (United States)

    Buzzell, James C.

    2009-01-01

    Mission: Common upper stage engine for Ares I and Ares V. Challenge: Use proven technology from Saturn, X-33, and RS-68 to develop the highest-Isp gas generator (GG) cycle engine in history for two missions in record time. Key Features: LOX/LH2 GG cycle, series turbines (2), HIP-bonded MCC, pneumatic ball-sector valves, on-board engine controller, tube-wall regen nozzle/large passively-cooled nozzle extension, TEG boost/cooling. Development Philosophy: proven hardware, aggressive schedule, early risk reduction, requirements-driven.

  3. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, António P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, António J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of the plasma that is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout the fusion experiments. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications that require transporting large amounts of data (TB) at high transfer rates (Gb/s) and to ensure high availability, including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store it for later analysis, make critical decisions in real time and provide status reports either from the experiment itself or from the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken and changes implemented. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and under maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios
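    The sensor-reading and alarm-handling duties described above can be pictured as a simple polling loop: read each sensor, compare it against a threshold, and log an alarm or an OK status. This is only an illustrative sketch; the sensor names, thresholds and polling interval are assumptions and do not represent the AdvancedTCA/shelf-manager implementation.

```python
import random
import time

# Illustrative sensor thresholds (not actual ATCA shelf values)
THRESHOLDS = {"board_temp_C": 70.0, "fan_rpm": 2000.0, "supply_V": 11.0}

def read_sensor(name):
    """Stand-in for a shelf-manager sensor read (returns a noisy nominal value)."""
    nominal = {"board_temp_C": 55.0, "fan_rpm": 6000.0, "supply_V": 12.0}[name]
    return nominal + random.uniform(-1.0, 1.0)

def check_once(status_log):
    """One monitoring pass: read every sensor, raise alarms on threshold violations."""
    for name, limit in THRESHOLDS.items():
        value = read_sensor(name)
        violated = value > limit if name == "board_temp_C" else value < limit
        status_log.append(("ALARM" if violated else "OK", name, round(value, 2)))

log = []
for _ in range(3):          # periodic polling loop (interval assumed)
    check_once(log)
    time.sleep(0.1)
print(log[-3:])
```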

  4. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  5. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource saving, reduction of operational expenditures, safety enhancement, and provision of comfortable working and leisure conditions. Automation of the corresponding engineering systems for illumination, climate control and security, as well as of communication systems and networks, via contemporary technologies (e.g., IoT, the Internet of Things) poses a significant challenge related to storage and processing of the overwhelmingly massive volume of data, whose degree of utilization is extremely low nowadays. Since a building's lifespan is long and exceeds the lifespan of the codes and standards that take into account the requirements of safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of rational use of large data volumes at the stage of information modeling. Research objectives: increase the efficiency of managing smart building hardware subsystems on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to the class of man-machine systems, the cybernetic approach is considered the basic method for the design and study of the information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and architectural principles of the organization of information management systems. Results: a flexible architecture for an information system for the management of smart house hardware subsystems has been synthesized. This architecture encompasses several levels: a client level, an application level and a data level, as well as three layers: a presentation layer, an actuating-device layer and an analytics layer. The problem of growing volumes of information processed by the real-time message controller is addressed by the employment of sensors and actuating mechanisms with configurable

  6. Searching for Organics, Fossils, and Biology on Mars

    Science.gov (United States)

    McKay, Christopher P.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    One of the goals of Astrobiology is to understand life on a fundamental level. All life on Earth is constructed from the same basic biochemical building blocks, consisting of 20 amino acids with left-handed symmetry, five nucleotides, a few sugars of right-handed symmetry and some lipids. Using the metaphor of computers, this is equivalent to saying that all life shares the same hardware. Beyond hardware similarity, it is now known that all life has fundamentally the same software. The genetic code of life is common to all organisms. Some have argued that the "hammer of evolution is heavy" and life anywhere is likely to be composed of identical biochemical and genetic patterns. However, in a system as complex as biochemistry it is likely that there are numerous local optima, and the details of the optimum found by evolutionary selection on another world would likely depend on the initial conditions and random developments in the early biological history on that world. To address these fundamental questions in Astrobiology we need a second example of life: a second genesis.

  7. Hardware-Assisted System for Program Execution Security of SOC

    Directory of Open Access Journals (Sweden)

    Wang Xiang

    2016-01-01

    Full Text Available With the rapid development of embedded systems, system security has become more and more important. Most embedded systems are at risk from a series of software attacks, such as buffer overflow attacks and Trojan viruses. In addition, with the rapid growth in the number of embedded systems and their wide application, hardware attacks on embedded systems are also increasing. This paper presents a new hardware-assisted security mechanism to protect a program's code and data and monitor its normal execution. The mechanism mainly monitors three types of information: the start/end addresses of the program's basic blocks; a lightweight hash value of each basic block; and the address of the next basic block. These parameters are extracted by additional tools running on a PC. The information is stored in the security module. During normal program execution, the security module compares the real-time state of the program with the stored information. If an abnormality is detected, it triggers the appropriate security response, suspends the program and jumps to a specified location. The module has been tested and validated on an SOPC with an OR1200 processor. The experimental analysis shows that the proposed mechanism can defend against a wide range of common software and physical attacks with low performance penalties and minimal overheads.
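    A software model of the monitoring idea: a table built offline holds, per basic block, its end address, a lightweight hash of its code and the legal next-block addresses; at run time each executed block is checked against the table. The hash function, addresses and byte strings below are illustrative assumptions, not the actual hardware module.

```python
def light_hash(code_bytes):
    """Very lightweight rolling hash, standing in for the hardware hash unit."""
    h = 0
    for b in code_bytes:
        h = ((h << 5) ^ b) & 0xFFFF
    return h

# Offline-extracted reference table: start -> (end, code hash, allowed next starts)
BLOCKS = {
    0x1000: {"end": 0x100C, "hash": light_hash(b"\x01\x02\x03"), "next": {0x100D, 0x1020}},
    0x100D: {"end": 0x101F, "hash": light_hash(b"\x04\x05"),     "next": {0x1000}},
}

def check_block(start, code_bytes, next_start):
    """Return True if the executed block matches the stored reference."""
    ref = BLOCKS.get(start)
    if ref is None:
        return False                      # unknown block: abnormal
    if light_hash(code_bytes) != ref["hash"]:
        return False                      # code modified (e.g., injected payload)
    return next_start in ref["next"]      # otherwise an illegal control transfer

# normal execution passes; a tampered block would trigger the security response
assert check_block(0x1000, b"\x01\x02\x03", 0x1020)
assert not check_block(0x1000, b"\x01\x02\xFF", 0x1020)
```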

  8. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  9. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
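    A simplified single-scanline sketch of the core parallax idea behind SIRDS: a per-pixel separation derived from depth links pixel pairs, which are then filled with random dots. The paper's texture-based, hardware-accelerated method is different; the eye separation, depth scaling and toy depth row below are assumptions.

```python
import numpy as np

def sirds_row(depth_row, eye_sep=80, mu=0.33, rng=np.random.default_rng(1)):
    """Generate one autostereogram scanline from a row of depths in [0, 1]."""
    width = depth_row.size
    pixels = np.zeros(width, dtype=np.uint8)
    for x in range(width):
        # separation between the two eyes' image points; smaller for nearer points
        z = depth_row[x]
        sep = int(round(eye_sep * (1.0 - mu * z) / (2.0 - mu * z)))
        left = x - sep
        if left >= 0:
            pixels[x] = pixels[left]        # constrain the linked pixel pair
        else:
            pixels[x] = rng.integers(0, 2)  # unconstrained pixel: random dot
    return pixels

# flat background with a raised rectangle in the middle
depth = np.concatenate([np.zeros(100), 0.8 * np.ones(56), np.zeros(100)])
print(sirds_row(depth)[:40])
```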

  10. Testing of hardware implementation of infrared image enhancing algorithm

    Science.gov (United States)

    Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

    The interpretation of IR images depends on the radiative properties of the observed objects and the surrounding scenery. The skills and experience of the observer are also of great importance. One way to improve the effectiveness of observation is to apply an image-enhancing algorithm capable of improving the image quality and thus the effectiveness of object detection. The paper presents the results of testing a hardware implementation of an IR image enhancement algorithm based on histogram processing. The main issue in the hardware implementation of complex image enhancement procedures is their high computational cost. As a result, implementations of complex algorithms using general-purpose processors and software usually do not bring satisfactory results. Because of the high efficiency requirements and the need for parallel operation, ALTERA's EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs the image data processing operations which require considerable computing power. It also generates the control signals for array readout, performs NUC correction and bad-pixel mapping, generates the control signals for the display module and finally executes the complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.
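    The basic operation of plateau histogram equalization can be sketched as follows: the histogram is clipped at a plateau value before the cumulative mapping is built, which prevents a large uniform background from dominating the contrast stretch. The adaptive plateau selection of the hardware algorithm is not reproduced here; a fixed plateau, 14-bit input and the synthetic frame are assumptions.

```python
import numpy as np

def plateau_equalize(image14, plateau=500, out_levels=256):
    """Map a 14-bit IR frame to 8 bits using a plateau-clipped histogram."""
    hist = np.bincount(image14.ravel(), minlength=2 ** 14)
    clipped = np.minimum(hist, plateau)          # limit any single bin's influence
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[image14]

# synthetic 14-bit frame: cold background with a small warm object
frame = np.full((240, 320), 6000, dtype=np.int64)
frame[100:120, 150:170] = 9000
print(np.unique(plateau_equalize(frame)))        # background and object well separated
```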

  11. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  12. TileCal ROD Hardware and Software Requirements

    CERN Document Server

    Castelo, J; Cuenca, C; Ferrer, A; Fullana, E; Higón, E; Iglesias, C; Munar, A; Poveda, J; Ruiz-Martínez, A; Salvachúa, B; Solans, C; Valls, J A

    2005-01-01

    In this paper we present the specific hardware and firmware requirements and modifications needed to operate the Liquid Argon Calorimeter (LiArg) ROD motherboard in the Hadronic Tile Calorimeter (TileCal) environment. Although the use of the board is similar for both calorimeters, there are still some differences in the operation of the front-end electronics associated with the two detectors which make the use of an identical board incompatible. We review the evolution of the design of the ROD from the early prototype stages (ROD based on commercial and Demonstrator boards) to the production phases (ROD final board based on the LiArg design), with emphasis on the different operation modes for the TileCal detector. We start with a short review of the TileCal ROD system functionality and then detail the different ROD hardware requirements for the two options: the baseline (ROD Demo board) and the final (ROD final high-density board). We also summarize the performance parameters of the ROD motherboard based on the final high-density option and s...

  13. Autonomous open-source hardware apparatus for quantum key distribution

    Directory of Open Access Journals (Sweden)

    Ignacio H. López Grande

    2016-01-01

    Full Text Available We describe an autonomous, fully functional implementation of the BB84 quantum key distribution protocol using open-source hardware microcontrollers for the synchronization, communication, key sifting and real-time key generation diagnostics. The quantum bits are prepared in the polarization of weak optical pulses generated with light emitting diodes, and detected using only one single-photon counter and a temporally multiplexed scheme. The system generates a shared cryptographic key at a rate of 365 bps, with a raw quantum bit error rate of 2.7%. A detailed description of the peripheral electronics for control, driving and communication between stages is released as supplementary material. The device can be built using simple and reliable hardware and is presented as an alternative for a practical realization of sophisticated, yet accessible, quantum key distribution systems. Received: 11 November 2015, Accepted: 7 January 2016; Edited by: O. Martínez; DOI: http://dx.doi.org/10.4279/PIP.080002 Cite as: I H López Grande, C T Schmiegelow, M A Larotonda, Papers in Physics 8, 080002 (2016)
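    The key-sifting step that such microcontrollers perform after the quantum exchange can be sketched as follows: both sides keep only the positions where the preparation and measurement bases agree, and estimate the error rate on a sacrificed random subset. This is a protocol illustration under an ideal-channel assumption, not the device firmware.

```python
import random

def bb84_sift(alice_bits, alice_bases, bob_bases, bob_bits):
    """Keep only positions where Alice's and Bob's bases coincide."""
    keep = [i for i, (a, b) in enumerate(zip(alice_bases, bob_bases)) if a == b]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

def qber(alice_key, bob_key, sample_fraction=0.1):
    """Estimate the quantum bit error rate on a sacrificed random sample."""
    n = max(1, int(len(alice_key) * sample_fraction))
    idx = random.sample(range(len(alice_key)), n)
    return sum(alice_key[i] != bob_key[i] for i in idx) / n

random.seed(0)
n = 1000
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("ZX") for _ in range(n)]
bob_bases = [random.choice("ZX") for _ in range(n)]
# ideal channel: Bob's result equals Alice's bit when bases match, random otherwise
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
a_key, b_key = bb84_sift(alice_bits, alice_bases, bob_bases, bob_bits)
print(len(a_key), qber(a_key, b_key))   # ~500 sifted bits, QBER ~0 on a noiseless channel
```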

  14. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount for improving productivity while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step in calculating the eigenpairs. Using a novel design that addresses the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.

  15. Health Maintenance System (HMS) Hardware Research, Design, and Collaboration

    Science.gov (United States)

    Gonzalez, Stefanie M.

    2010-01-01

    The Space Life Sciences Division (SLSD) concentrates on optimizing a crew member's health. Developments are translated into innovative engineering solutions, research growth, and community awareness. This internship incorporates all those areas by targeting various projects. The main project focuses on integrating clinical and biomedical engineering principles to design, develop, and test new medical kits scheduled for launch in the spring of 2011. Additionally, items will be tagged with Radio Frequency Identification (RFID) devices to keep track of the inventory. The tags will then be tested to optimize the radio frequency feed and feed placement. Research growth will occur with ground-based experiments designed to measure calcium-encrusted deposits in the International Space Station (ISS). The tests will assess urine calcium levels with Portable Clinical Blood Analyzer (PCBA) technology. If effective, a model for urine calcium will be developed and expanded to microgravity environments. To support collaboration among the subdivisions of SLSD, the architecture of the Crew Healthcare Systems (CHeCS) SharePoint site has been redesigned for maximum efficiency. Community collaboration has also been established with the University of Southern California, Dept. of Aeronautical Engineering, and the Food and Drug Administration (FDA). Hardware disbursements will transpire within these communities to support planetary surface exploration and to serve as an educational tool demonstrating how ground-based medicine influenced the technological development of space hardware.

  16. MRI - From basic knowledge to advanced strategies: Hardware

    International Nuclear Information System (INIS)

    Carpenter, T.A.; Williams, E.J.

    1999-01-01

    There have been remarkable advances in the hardware used for nuclear magnetic resonance imaging scanners. These advances have enabled an extraordinary range of sophisticated magnetic resonance MR sequences to be performed routinely. This paper focuses on the following particular aspects: (a) Magnet system. Advances in magnet technology have allowed superconducting magnets which are low maintenance and have excellent homogeneity and very small stray field footprints. (b) Gradient system. Optimisation of gradient design has allowed gradient coils which provide excellent field for spatial encoding, have reduced diameter and have technology to minimise the effects of eddy currents. These coils can now routinely provide the strength and switching rate required by modern imaging methods. (c) Radio-frequency (RF) system. The advances in digital electronics can now provide RF electronics which have low noise characteristics, high accuracy and improved stability, which are all essential to the formation of excellent images. The use of surface coils has increased with the availability of phased-array systems, which are ideal for spinal work. (d) Computer system. The largest advance in technology has been in the supporting computer hardware which is now affordable, reliable and with performance to match the processing requirements demanded by present imaging sequences. (orig.)

  17. Hardware Efficient Architecture with Variable Block Size for Motion Estimation

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2016-01-01

    Full Text Available Video coding standards such as MPEG-x and H.26x incorporate variable block size motion estimation (VBSME), which is highly time consuming and extremely complex from a hardware implementation perspective due to the huge amount of computation. In this paper, we discuss basic aspects of video coding and study and compare existing architectures for VBSME. Various architectures with different pixel scanning patterns give a variety of performance results for motion vector (MV) generation, showing a tradeoff between macroblocks processed per second and the resources required for computation. The aim of this paper is to design a VBSME architecture which utilizes optimal resources to minimize chip area and offers an adequate frame processing rate for real-time implementation. The speed of computation can be improved by accessing the 16 pixels of a base macroblock of size 4 × 4 in a single clock cycle using a Z scanning pattern. The widely adopted cost function for hardware implementation known as the sum of absolute differences (SAD) is used for the VBSME architecture, with a multiplexer-based absolute difference calculator and partial summation term reduction (PSTR) based multi-operand adders. The device utilization of the proposed implementation is only 22k gates, and it can process 179 HD (1920 × 1080) resolution frames per second in the best case and 47 HD resolution frames per second in the worst case. Due to this high throughput, the design is well suited for real-time implementation.
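    The SAD cost that the absolute-difference units and multi-operand adders compute in hardware can be written, for a 4 × 4 base block and a small full-search window, as the following sketch. The frame sizes, block position and search range are assumptions for illustration; real VBSME also merges 4 × 4 SADs into larger block sizes.

```python
import numpy as np

def sad_4x4(current, reference, bx, by, dx, dy):
    """Sum of absolute differences between a 4x4 block and a displaced candidate."""
    cur = current[by:by + 4, bx:bx + 4].astype(np.int32)
    ref = reference[by + dy:by + dy + 4, bx + dx:bx + dx + 4].astype(np.int32)
    return int(np.abs(cur - ref).sum())

def best_motion_vector(current, reference, bx, by, search=4):
    """Full search over a small window, returning the motion vector with minimal SAD."""
    costs = {(dx, dy): sad_4x4(current, reference, bx, by, dx, dy)
             for dx in range(-search, search + 1)
             for dy in range(-search, search + 1)}
    return min(costs, key=costs.get)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(-1, -2), axis=(0, 1))     # content moved by dx=2, dy=1
print(best_motion_vector(cur, ref, bx=12, by=12))   # -> (2, 1)
```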

  18. A Hardware Track Trigger (FTK) for the ATLAS Trigger

    CERN Document Server

    Zhang, J; The ATLAS collaboration

    2014-01-01

    The design and studies of the performance of the ATLAS hardware Fast TracKer (FTK) are presented. The existing trigger system of the ATLAS experiment is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to < 1 kHz for permanent storage at the LHC design luminosity of 10^34 cm^-2 s^-1. The LHC has performed exceptionally well and routinely exceeds the design luminosity, and from 2015 is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger (HLT) system, both due to the need for more sophisticated algorithms to reject background, and from the larger data volumes that will need to be processed. The Fast TracKer is a custom electronics system that will operate at the full Level-1 accepted rate of 100 kHz and provide high quality tracks at the beginning of processing in the HLT. This will be performed by track reconstruction in hardware with massive parallelism, using associative memories (AM) and FPGAs. The availability of the full...

  19. The FTK: A Hardware Track Finder for the ATLAS Trigger

    CERN Document Server

    Alison, J; Anderson, J; Andreani, A; Andreazza, A; Annovi, A; Antonelli, M; Atkinson, M; Auerbach, B; Baines, J; Barberio, E; Beccherle, R; Beretta, M; Biesuz, N V; Blair, R; Blazey, G; Bogdan, M; Boveia, A; Britzger, D; Bryant, P; Burghgrave, B; Calderini, G; Cavaliere, V; Cavasinni, V; Chakraborty, D; Chang, P; Cheng, Y; Cipriani, R; Citraro, S; Citterio, M; Crescioli, F; Dell'Orso, M; Donati, S; Dondero, P; Drake, G; Gadomski, S; Gatta, M; Gentsos, C; Giannetti, P; Giulini, M; Gkaitatzis, S; Howarth, J W; Iizawa, T; Kapliy, A; Kasten, M; Kim, Y K; Kimura, N; Klimkovich, T; Kordas, K; Korikawa, T; Krizka, K; Kubota, T; Lanza, A; Lasagni, F; Liberali, V; Li, H L; Love, J; Luciano, P; Luongo, C; Magalotti, D; Melachrinos, C; Meroni, C; Mitani, T; Negri, A; Neroutsos, P; Neubauer, M; Nikolaidis, S; Okumura, Y; Pandini, C; Penning, B; Petridou, C; Piendibene, M; Proudfoot, J; Rados, P; Roda, C; Rossi, E; Sakurai, Y; Sampsonidis, D; Sampsonidou, D; Schmitt, S; Schoening, A; Shochet, M; Shojaii, S; Soltveit, H; Sotiropoulou, C L; Stabile, A; Tang, F; Testa, M; Tompkins, L; Vercesi, V; Villa, M; Volpi, G; Webster, J; Wu, X; Yorita, K; Yurkewicz, A; Zeng, J C; Zhang, J

    2014-01-01

    The ATLAS experiment trigger system is designed to reduce the event rate, at the LHC design luminosity of 10^34 cm^-2 s^-1, from the nominal bunch crossing rate of 40 MHz to less than 1 kHz for permanent storage. During Run 1, the LHC performed exceptionally well, routinely exceeding the design luminosity. From 2015 the LHC is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger system, both due to the need for more sophisticated algorithms to reject background, and from the larger data volumes that will need to be processed. The Fast TracKer is a hardware upgrade for Run 2, consisting of a custom electronics system that will operate at the full rate for Level-1 accepted events of 100 kHz and provide high quality tracks at the beginning of processing in the High Level Trigger. It will perform track reconstruction in hardware with massive parallelism, using associative memories and FPGAs. The availability of the full tracking information will enable r...

  20. Advances in flexible optrode hardware for use in cybernetic insects

    Science.gov (United States)

    Register, Joseph; Callahan, Dennis M.; Segura, Carlos; LeBlanc, John; Lissandrello, Charles; Kumar, Parshant; Salthouse, Christopher; Wheeler, Jesse

    2017-08-01

    Optogenetic manipulation is widely used to selectively excite and silence neurons in laboratory experiments. Recent efforts to miniaturize the components of optogenetic systems have enabled experiments on freely moving animals, but further miniaturization is required for freely flying insects. In particular, miniaturization of high channel-count optical waveguides is needed for high-resolution interfaces. Thin flexible waveguide arrays are needed to bend light around tight turns to access small anatomical targets. We present the design of lightweight miniaturized optogenetic hardware and supporting electronics for the untethered steering of dragonfly flight. The system is designed to enable autonomous flight and includes processing, guidance sensors, solar power, and light stimulators. The system will weigh less than 200 mg and be worn by the dragonfly as a backpack. The flexible implant has been designed to provide stimuli around nerves through micron-scale apertures to adjacent neural tissue without the use of heavy hardware. We address the challenges of lightweight optogenetics and the development of high-contrast polymer waveguides for this purpose.

  1. Software and Hardware Developments For a Mobile Manipulator Control

    Directory of Open Access Journals (Sweden)

    F. Abdessemed

    2008-12-01

    Full Text Available In this paper, we present the hardware and software architectures of an experimental real-time control system for a mobile manipulator that performs object-manipulation tasks in a large workspace. The mechanical architecture is a manipulator arm mounted on a mobile platform. In this work we show how one can implement such an embedded system, covering both the hardware and the software. The system uses a PC as the host, which constitutes the high-level layer. It is configured to perform all the input-output interface operations and is composed of different software modules implementing the operations that must be executed, in a scheduled manner, to meet the requirements of real-time control. In this paper, we also focus on the development of generalized trajectory generation, both for tasks where only one subsystem moves and for tasks where the whole system is in continuous motion, either in a free environment or in the presence of obstacles.

  2. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah

    2018-04-26

    Error probability study of hardware impaired (HWI) systems depends strongly on the adopted model. Recent models have shown that the aggregate noise is equivalent to an improper Gaussian signal. Therefore, considering the distinct noise nature and the self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver sub-optimal, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions, as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  3. A novel hardware implementation for detecting respiration rate using photoplethysmography.

    Science.gov (United States)

    Prinable, Joseph; Jones, Peter; Thamrin, Cindy; McEwan, Alistair

    2017-07-01

    Asthma is a serious public health problem. Continuous monitoring of breathing may offer an alternative way to assess disease status. In this paper we present a novel hardware implementation for the capture and storage of a photoplethysmography (PPG) signal. The LED duty cycle was altered to determine the effect on respiratory rate accuracy. The oximeter was mounted to the left index finger of ten healthy volunteers. The breathing rate derived from the oximeter was validated against a nasal airflow sensor. The duty cycle of a pulse oximeter was changed between 5%, 10% and 25% at a sample rate of 500 Hz. A PPG signal and reference signal were captured for each duty cycle. The PPG signals were post-processed in Matlab to derive a respiration rate using an existing Matlab toolbox. At a 25% duty cycle the RMSE was <2 breaths per minute for the top performing algorithm. The RMSE increased to over 5 breaths per minute when the duty cycle was reduced to 5%. The power consumed by the hardware for a 5%, 10% and 25% duty cycle was 5.4 mW, 7.8 mW, and 15 mW respectively. For clinical assessment of respiratory rate, an RMSE of <2 breaths per minute is recommended. Further work is required to determine utility in asthma management. However, for non-clinical applications such as fitness tracking, lower accuracy may be sufficient to allow a reduced duty cycle setting.
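
    The respiration-rate extraction step itself can be illustrated with a generic signal-processing sketch: band-pass the PPG around the respiratory band and take the dominant spectral peak. The following Python snippet uses a synthetic signal and is only a didactic stand-in for the Matlab toolbox used by the authors.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 500.0                                  # Hz, matching the paper's sample rate
        t = np.arange(0, 60, 1 / fs)                # one minute of synthetic data
        resp_hz, cardiac_hz = 0.25, 1.2             # 15 breaths/min, 72 beats/min
        rng = np.random.default_rng(0)
        # cardiac pulse amplitude-modulated by respiration, plus a baseline respiratory drift and noise
        ppg = (1 + 0.3 * np.sin(2 * np.pi * resp_hz * t)) * np.sin(2 * np.pi * cardiac_hz * t) \
              + 0.5 * np.sin(2 * np.pi * resp_hz * t) + 0.05 * rng.normal(size=t.size)

        # isolate the respiratory band (0.1-0.5 Hz, i.e. 6-30 breaths per minute)
        sos = butter(2, [0.1, 0.5], btype="bandpass", fs=fs, output="sos")
        resp = sosfiltfilt(sos, ppg)

        # respiration rate from the dominant spectral peak of the filtered signal
        spectrum = np.abs(np.fft.rfft(resp))
        freqs = np.fft.rfftfreq(resp.size, d=1 / fs)
        rate_bpm = 60 * freqs[np.argmax(spectrum)]
        print(f"estimated respiration rate: {rate_bpm:.1f} breaths per minute")   # ~15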

  4. PCI hardware support in LIA-2 control system

    International Nuclear Information System (INIS)

    Bolkhovityanov, D.; Cheblakov, P.

    2012-01-01

    The control system of the LIA-2 accelerator is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics is connected via CAN-bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) are implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined. Finally, a user-space driver approach was chosen. These drivers communicate with hardware via a small kernel module, which provides access to PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability (because only a tiny and thoroughly-debugged piece of code runs in the kernel). The LIA-2 accelerator was successfully commissioned, and the solution chosen has proven adequate and very easy to use. Besides, USPCI turned out to be a handy tool for examination and debugging of PCI devices directly from the command line. In this paper, available approaches to working with PCI control hardware in Linux are considered, and the USPCI architecture is described. (authors)
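
    As an illustration of the user-space driver idea (though not of USPCI itself, whose kernel module is custom), a PCI BAR exported by Linux through sysfs can be memory-mapped and accessed directly from user space. The device address and register offsets below are hypothetical.

        # Minimal sketch of user-space access to a PCI device's BAR0 via the standard Linux
        # sysfs interface; the paper's USPCI module is a custom alternative to this route.
        import mmap
        import os
        import struct

        BAR0 = "/sys/bus/pci/devices/0000:03:00.0/resource0"   # hypothetical device address

        fd = os.open(BAR0, os.O_RDWR | os.O_SYNC)
        size = os.fstat(fd).st_size
        regs = mmap.mmap(fd, size, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)

        def read32(offset):
            """Read a 32-bit little-endian register at the given byte offset."""
            return struct.unpack_from("<I", regs, offset)[0]

        def write32(offset, value):
            """Write a 32-bit little-endian register at the given byte offset."""
            struct.pack_into("<I", regs, offset, value)

        print(hex(read32(0x00)))   # e.g. a status/ID register (offset is hypothetical)
        write32(0x04, 0x1)         # e.g. start an ADC acquisition (hypothetical)

        regs.close()
        os.close(fd)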

  5. Manufacturing Initiative

    Data.gov (United States)

    National Aeronautics and Space Administration — The Advanced Manufacturing Technologies (AMT) Project supports multiple activities within the Administration's National Manufacturing Initiative. A key component of...

  6. Overview of Additive Manufacturing Initiatives at NASA Marshall Space Flight Center

    Science.gov (United States)

    Clinton, R. G., Jr.

    2018-01-01

    NASA's In Space Manufacturing Initiative (ISM) includes: The case for ISM - why; ISM path to exploration - results from the 3D Printing In Zero-G Technology Demonstration - ISM challenges; In-space Robotic Manufacturing and Assembly (IRMA); Additive construction. Additive Manufacturing (AM) development for liquid rocket engine space flight hardware. MSFC standard and specification for additively manufactured space flight hardware. Summary.

  7. Solar cooling in the hardware-in-the-loop test; Solare Kuehlung im Hardware-in-the-Loop-Test

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Sandra; Radosavljevic, Rada; Goebel, Johannes; Gottschald, Jonas; Adam, Mario [Fachhochschule Duesseldorf (Germany). Erneuerbare Energien und Energieeffizienz E2

    2012-07-01

    The first part of the BMBF-funded research project 'Solar cooling in the hardware-in-the-loop test' (SoCool HIL) deals with the simulation of a solar refrigeration system using the simulation environment Matlab/Simulink with the toolboxes Stateflow and Carnot. Dynamic annual simulations and DoE-supported parameter variations were used to select meaningful system configurations, control strategies and the dimensioning of components. The second part of the project deals with hardware-in-the-loop tests using the 17.5 kW absorption chiller of the company Yazaki Europe Limited (Hertfordshire, United Kingdom). For this, the chiller is operated on a test bench that emulates the behavior of the other system components (solar circuit with heat storage, recooling, building and cooling distribution/transfer). The chiller is controlled by a simulation of the system using MATLAB/Simulink/Carnot. Based on the knowledge of the real dynamic performance of the chiller, its simulation model can then be validated. Further tests are used to optimize the control of the chiller for the current cooling load. In addition, some changes in system configuration (for example a cold backup) are tested with the real machine. The results of these tests and the findings on the dynamic performance of the chiller are presented.

  8. Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching

    Energy Technology Data Exchange (ETDEWEB)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    2012-12-28

    DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphics Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search for and on the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code, respectively, in an MPI-based and in a pthreads-based load balancer to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
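
    For readers unfamiliar with the algorithm, the sketch below is a plain single-threaded Python reference implementation of Aho-Corasick (trie construction, failure links, streaming search); it illustrates the data structure that the paper maps onto GPUs, not the GPU implementation itself.

        # Reference Aho-Corasick multi-pattern matcher (CPU only, didactic).
        from collections import deque

        def build_automaton(patterns):
            goto = [{}]        # trie transitions per state
            fail = [0]         # failure links
            out = [set()]      # patterns ending at each state
            for pat in patterns:
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                out[state].add(pat)
            # breadth-first construction of failure links
            queue = deque(goto[0].values())
            while queue:
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]
            return goto, fail, out

        def search(text, automaton):
            goto, fail, out = automaton
            state, matches = 0, []
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in out[state]:
                    matches.append((i - len(pat) + 1, pat))
            return matches

        ac = build_automaton(["GATTACA", "TACA", "CAT"])
        print(search("ACGATTACATT", ac))   # GATTACA at 2, TACA at 5, CAT at 7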

  9. From Newton to Einstein - N-body dynamics in galactic nuclei and SPH using new special hardware and astrogrid-D

    International Nuclear Information System (INIS)

    Spurzem, R; Berczik, P; Berentzen, I; Merritt, D; Nakasato, N; Adorf, H M; Bruesemeister, T; Schwekendiek, P; Steinacker, J; Wambsganss, J; Martinez, G Marcus; Lienhart, G; Kugel, A; Maenner, R; Burkert, A; Naab, T; Vasquez, H; Wetzstein, M

    2007-01-01

    The dynamics of galactic nuclei containing multiple supermassive black holes is modelled including relativistic dynamics. It is shown that for certain initial conditions there is no stalling problem for the relativistic coalescence of supermassive black hole binaries. This astrophysical application and another one using a smoothed particle hydrodynamics code are our first use cases on a new computer architecture using GRAPE and new MPRACE accelerator cards based on reconfigurable chips, developed in the GRACE project. We briefly discuss our science applications and first benchmarks obtained with the new hardware. Our present architecture still relies on the GRAPE special purpose hardware (not reconfigurable), but next generations will focus on new architectural approaches including custom network and computing architectures. The new hardware is embedded into national and international grid infrastructures

  10. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Full Text Available Abstract Background Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective

  11. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other
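
    The core MDR step can be sketched on a CPU in a few lines: for each pair of SNPs, each two-locus genotype cell is labelled high- or low-risk by its case/control ratio, and the resulting binary classifier is scored. The toy Python code below uses random data and omits cross-validation; it only illustrates the computation that the GPU implementation accelerates exhaustively over all pairs.

        # Toy, CPU-only sketch of one Multifactor Dimensionality Reduction (MDR) evaluation.
        import itertools
        import numpy as np

        def mdr_accuracy(genotypes, status, snp_pair):
            """genotypes: (n_samples, n_snps) with values 0/1/2; status: 0=control, 1=case."""
            i, j = snp_pair
            overall_ratio = status.sum() / max(1, (status == 0).sum())
            high_risk = set()
            for gi, gj in itertools.product((0, 1, 2), repeat=2):
                cell = (genotypes[:, i] == gi) & (genotypes[:, j] == gj)
                if not cell.any():
                    continue
                cases = status[cell].sum()
                controls = (status[cell] == 0).sum()
                if controls == 0 or cases / controls > overall_ratio:
                    high_risk.add((gi, gj))
            predicted = np.array([(a, b) in high_risk
                                  for a, b in zip(genotypes[:, i], genotypes[:, j])])
            # balanced accuracy over cases and controls
            sens = (predicted & (status == 1)).sum() / max(1, (status == 1).sum())
            spec = (~predicted & (status == 0)).sum() / max(1, (status == 0).sum())
            return (sens + spec) / 2

        rng = np.random.default_rng(0)
        geno = rng.integers(0, 3, size=(200, 10))
        stat = rng.integers(0, 2, size=200)
        # exhaustive scan of all SNP pairs, keeping the best-scoring two-locus model
        best = max(itertools.combinations(range(10), 2),
                   key=lambda p: mdr_accuracy(geno, stat, p))
        print(best)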

  12. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small, requiring an amplifier noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms, therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This

  13. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small, requiring an amplifier noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms, therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and
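
    The 1/√N scaling of hardware averaging is easy to reproduce numerically: averaging the outputs of N amplifiers that each add independent noise to the same input reduces the added RMS noise by roughly the square root of N (the idealized case; the source-resistance effects discussed in the paper are ignored). A short Python illustration with made-up numbers:

        # Numerical illustration of the hardware-averaging idea (synthetic signal and noise).
        import numpy as np

        rng = np.random.default_rng(1)
        n_samples = 200_000
        signal = 2.0 * np.sin(2 * np.pi * np.linspace(0, 20, n_samples))   # test signal, in uV

        for n_amps in (1, 2, 4, 8):
            # each amplifier sees the same input but adds its own 2 uVrms noise
            noisy = signal + rng.normal(0.0, 2.0, size=(n_amps, n_samples))
            averaged = noisy.mean(axis=0)
            residual_rms = np.std(averaged - signal)
            print(f"N={n_amps}: residual noise {residual_rms:.2f} uVrms "
                  f"(ideal {2.0 / np.sqrt(n_amps):.2f})")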

  14. Hardware and Software Interfacing at New Mexico Geochronology Research Laboratory: Distributed Control Using Pychron and RemoteControlServer.cs

    Science.gov (United States)

    McIntosh, W. C.; Ross, J. I.

    2012-12-01

    We developed a system for interfacing existing hardware and software to two new Thermo Scientific Argus VI mass spectrometers and three Photon Machines Fusions laser systems at New Mexico Geochronology Research Laboratory. NMGRL's upgrade to the new analytical equipment required the design and implementation of a software ecosystem that allows seamless communication between various software and hardware components. Based on past experience and initial testing we chose to pursue a "Fully Distributed Control" model. In this model, hardware is compartmentalized and controlled by customized software running on individual computers. Each computer is connected to a Local Area Network (LAN) facilitating inter-process communication using TCP or UDP Internet Protocols. Two other options for interfacing are 1) Single Control, in which all hardware is controlled by a single application on a single computer, and 2) Partial Distributed Control, in which the mass spectrometer is controlled directly by Thermo Scientific's Qtegra and all other hardware is controlled by a separate application. The "Fully Distributed Control" model offers the most efficient use of software resources, leveraging our in-house laboratory software with proprietary third-party applications, such as Qtegra and Mass Spec. Two software products resulted from our efforts. 1) Pychron, a configurable and extensible package for hardware control, data acquisition and preprocessing, and 2) RemoteControlServer.cs, a C# script for Thermo's Qtegra software that implements a TCP/UDP command server. Pychron is written in Python and uses standard well-established libraries such as Numpy, Scipy, and Enthought ETS. Pychron is flexible and extensible, encouraging experimentation and rapid development of new features. A project page for Pychron is located at http://code.google.com/p/arlab, featuring an issue tracker and a Version Control System (Mercurial). RemoteControlServer.cs is a simple socket server that listens
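
    The distributed-control pattern described here boils down to small text-command servers listening on the laboratory LAN. The sketch below is a Python analogue of that idea, not the actual RemoteControlServer.cs script; the command names and port are hypothetical.

        # Minimal text-command TCP server of the kind used for inter-process control on the LAN.
        import socketserver

        class CommandHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # hypothetical one-line protocol: "<command> [args]\n"
                line = self.rfile.readline().decode().strip()
                if line == "GetTemperature":
                    self.wfile.write(b"25.3\n")          # would query the real instrument
                elif line.startswith("SetLaserPower"):
                    self.wfile.write(b"OK\n")            # would forward to the laser controller
                else:
                    self.wfile.write(b"ERROR unknown command\n")

        if __name__ == "__main__":
            with socketserver.TCPServer(("0.0.0.0", 8000), CommandHandler) as server:
                server.serve_forever()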

  15. Information Management within the LHC Hardware Commissioning Project

    CERN Document Server

    Bellesia, B; Koratzinos, M; Marqueta Barbero, A; Pojer, M; Saban, R; Schmidt, R; Solfaroli Camilloci, M; Szkutnik, J; Vergara Fernández, A; Wenninger, J; Zerlauth, M

    2010-01-01

    The core task of the commissioning of the LHC technical systems was the individual test of the 1572 superconducting circuits of the collider, the powering tests. The two objectives of these tests were the validation of the different sub-systems making each superconducting circuit as well as the validation of the superconducting elements of the circuits in their final configuration in the tunnel. A wide set of software applications were developed by the team in charge of coordinating the powering activities (Hardware Commissioning Coordination) in order to manage the amount of information required for the preparation, execution and traceability of the tests. In all the cases special care was taken in order to keep the tools consistent with the LHC quality assurance policy, avoid redundancies between applications, ensure integrity and coherence of the test results and optimise their usability within an accelerator operation environment. This paper describes the main characteristics of these tools; it details th...

  16. An Overview of Reconfigurable Hardware in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Garcia Philip

    2006-01-01

    Full Text Available Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.

  17. Simple Approach For Induction Motor Control Using Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    József VÁSÁRHELYI

    2002-12-01

    Full Text Available The paper deals with rotor-field-oriented vector control structures for induction motor drives fed by the so-called tandem frequency converter. It is composed of two different types of DC-link converters connected in a parallel arrangement. The larger-power one has current-source character and operates synchronized in time and amplitude with the stator currents. The other one has voltage-source character and is the actuator of the motor control system. The drive is also able to run with a partially failed tandem converter, provided the control strategy corresponds to the actual operating mode. Reconfigurable hardware implemented in configurable logic cells handles the switching between vector-control structures. The proposed control schemes were tested by simulation based on a Matlab-Simulink model.
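
    At the heart of any rotor-field-oriented control structure are the Clarke and Park transforms, which map the three-phase stator currents into a rotating d-q frame where flux and torque can be regulated independently. The generic textbook formulation is sketched below; it is not the paper's reconfigurable-hardware implementation.

        # Clarke and Park transforms used in rotor-field-oriented vector control (textbook form).
        import math

        def clarke(ia, ib, ic):
            """Amplitude-invariant Clarke transform: abc -> alpha/beta."""
            i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
            i_beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (ib - ic)
            return i_alpha, i_beta

        def park(i_alpha, i_beta, theta):
            """Park transform: stationary alpha/beta -> rotating d-q frame at flux angle theta."""
            i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
            i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
            return i_d, i_q

        # balanced three-phase currents of 10 A amplitude, sampled at one instant
        theta = 0.7
        ia = 10 * math.cos(theta)
        ib = 10 * math.cos(theta - 2 * math.pi / 3)
        ic = 10 * math.cos(theta + 2 * math.pi / 3)
        print(park(*clarke(ia, ib, ic), theta))   # ~ (10.0, 0.0): pure d-axis (flux) current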

  18. Multistage switching hardware and software implementations for student experiment purpose

    Science.gov (United States)

    Sani, A.; Suherman

    2018-02-01

    Current communication and internet networks are underpinned by the switching technologies that interconnect one network to the others. Students' understanding of networks relies on how well they grasp these theories. However, understanding theory without touching real equipment may leave gaps in the overall knowledge. This paper reports the progress of a multistage switching design and implementation for student laboratory activities. The hardware and software designs are based on a three-stage Clos switching architecture with modular 2x2 switches, controlled by an Arduino microcontroller. The designed modules can also be extended to Batcher and banyan switches, and can work with both circuit- and packet-switching systems. The circuit analysis and simulation show that the blocking probability for each switch combination can be obtained by generating random or patterned traffic. The mathematical model and simulation analysis show a 16.4% difference in blocking probability when the generated traffic is uniform. The circuit design components and interfacing solutions have been identified to allow the next implementation step.
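
    To make the notion of blocking probability in a three-stage Clos network concrete, Lee's classical graph approximation can be evaluated in a few lines; this is standard textbook material rather than the authors' model, and the example dimensions are arbitrary (much larger than the 2x2 student modules).

        # Lee's approximation for the blocking probability of a three-stage Clos network.
        def clos_blocking_lee(n, m, a):
            """n inputs per first-stage switch, m middle switches, external link occupancy a.
            P_block = (1 - (1 - p)**2)**m with internal-link load p = a*n/m."""
            p = a * n / m               # occupancy of each inter-stage link
            return (1.0 - (1.0 - p) ** 2) ** m

        # example: 16-input first-stage switches, varying number of middle switches, 70% load
        for m in (16, 24, 31):
            print(m, f"{clos_blocking_lee(n=16, m=m, a=0.7):.2e}")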

  19. Hardware Implementation of Maximum Power Point Tracking for Thermoelectric Generators

    Science.gov (United States)

    Maganga, Othman; Phillip, Navneesh; Burnham, Keith J.; Montecucco, Andrea; Siviter, Jonathan; Knox, Andrew; Simpson, Kevin

    2014-06-01

    This work describes the practical implementation of two maximum power point tracking (MPPT) algorithms, namely perturb and observe, and extremum seeking control. The proprietary dSPACE system is used to perform hardware-in-the-loop (HIL) simulation, whereby the two control algorithms are implemented using the MATLAB/Simulink (Mathworks, Natick, MA) software environment in order to control a synchronous buck-boost converter connected to two commercial thermoelectric modules. The process of performing HIL simulation using dSPACE is discussed, and a comparison between experimental and simulated results is highlighted. The experimental results demonstrate the validity of the two MPPT algorithms, and in conclusion the benefits and limitations of real-time implementation of MPPT controllers using dSPACE are discussed.
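
    The perturb-and-observe loop mentioned above is simple enough to sketch directly: perturb the operating point, keep the direction if the output power rose, reverse it otherwise. The Python sketch below uses a made-up Thevenin model of a thermoelectric module, not the commercial modules or the dSPACE setup of the paper.

        # Perturb-and-observe MPPT on a Thevenin-equivalent TEG model (all values hypothetical).
        V_OC, R_INT = 8.0, 2.0          # open-circuit voltage [V] and internal resistance [ohm]

        def teg_power(v_load):
            """Power delivered by the Thevenin-equivalent TEG at a given load voltage."""
            i = (V_OC - v_load) / R_INT
            return max(0.0, v_load * i)

        def perturb_and_observe(v0=1.0, step=0.05, iterations=200):
            v, p_prev, direction = v0, teg_power(v0), +1
            for _ in range(iterations):
                v += direction * step
                p = teg_power(v)
                if p < p_prev:          # power dropped: reverse the perturbation direction
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe()
        print(f"tracked MPP at ~{v_mpp:.2f} V, {p_mpp:.2f} W "
              f"(ideal: {V_OC / 2:.2f} V, {V_OC ** 2 / (4 * R_INT):.2f} W)")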

  20. Design-to-fabricate: maker hardware requires maker software.

    Science.gov (United States)

    Schmidt, Ryan; Ratto, Matt

    2013-01-01

    As a result of consumer-level 3D printers' increasing availability and affordability, the audience for 3D-design tools has grown considerably. However, current tools are ill-suited for these users. They have steep learning curves and don't take into account that the end goal is a physical object, not a digital model. A new class of "maker"-level design tools is needed to accompany this new commodity hardware. However, recent examples of such tools achieve accessibility primarily by constraining functionality. In contrast, the meshmixer project is building tools that provide accessibility and expressive power by leveraging recent computer graphics research in geometry processing. The project members have had positive experiences with several 3D-design-to-print workshops and are exploring several design-to-fabricate problems. This article is part of a special issue on 3D printing.

  1. Graph based communication analysis for hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1999-01-01

    In this paper we present a coarse grain CDFG (Control/Data Flow Graph) model suitable for hardware/software partitioning of single processes and demonstrate how it is necessary to perform various transformations on the graph structure before partitioning in order to achieve a structure that allows...... for accurate estimation of communication overhead between nodes mapped to different processors. In particular, we demonstrate how various transformations of control structures can lead to a more accurate communication analysis and more efficient implementations. The purpose of the transformations is to obtain...... a CDFG structure that is sufficiently fine grained as to support a correct communication analysis but not more fine grained than necessary as this will increase partitioning and analysis time....

  2. Hardware-in-the-loop grid simulator system and method

    Science.gov (United States)

    Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos

    2017-05-16

    A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.

  3. CT and MRI techniques for imaging around orthopedic hardware

    International Nuclear Information System (INIS)

    Do, Thuy Duong; Skornitzke, Stephan; Weber, Marc-Andre; Sutter, Reto

    2018-01-01

    Orthopedic hardware impairs image quality in cross-sectional imaging. With an increasing number of orthopedic implants in an aging population, the need to mitigate metal artifacts in computed tomography and magnetic resonance imaging is becoming increasingly relevant. This review provides an overview of the major artifacts in CT and MRI and state-of-the-art solutions to improve image quality. All steps of image acquisition from device selection, scan preparations and parameters to image post-processing influence the magnitude of metal artifacts. Technological advances like dual-energy CT with the possibility of virtual monochromatic imaging (VMI) and new materials offer opportunities to further reduce artifacts in CT and MRI. Dedicated metal artifact reduction sequences contain algorithms to reduce artifacts and improve imaging of surrounding tissue and are essential tools in orthopedic imaging to detect postoperative complications in early stages.

  4. New hardware and software design for electrical impedance tomography

    Science.gov (United States)

    Goharian, Mehran

    Electrical impedance tomography (EIT) is an imaging technique that reconstructs the internal electrical properties of an object from boundary voltage measurements. In this technique, a series of electrodes is attached to the surface of an object, alternating current is passed via these electrodes, and the resulting voltages are measured. Reconstruction of internal conductivity images requires the solution of an ill-conditioned nonlinear inverse problem from the noisy boundary voltage measurements. Such unreliable boundary measurements make the solutions unstable. To obtain stable and meaningful solutions, regularization is used. This thesis deals with the EIT problem from the perspective of both image reconstruction and hardware design, and consists of two main parts. The first part covers the development of 3D image reconstruction algorithms for single- and multi-frequency EIT. The second part relates to the design of novel multi-frequency hardware and performance testing of the hardware using the designed phantom. Three different approaches to EIT image reconstruction are presented: (1) The dog-leg algorithm is introduced as an alternative to Levenberg-Marquardt for solving the EIT inverse problem. It was found that the dog-leg technique requires less computation time to converge to the same result as Levenberg-Marquardt. (2) We propose a novel approach to building a subspace for regularization using a spectral and spatial multi-frequency analysis. The approach is based on the construction of a subspace for the expected conductivity distributions using principal component analysis (PCA). The advantage of this technique is that the a priori information for the regularization matrix is determined from the statistical nature of the multi-frequency data. (3) We present a quadratically constrained least-squares approach to the EIT problem. The proposed approach is based on the trust region subproblem (TRS), which uses L-curve maximum curvature criteria
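
    Both Levenberg-Marquardt and the dog-leg strategy discussed in the first part are built around regularized Gauss-Newton steps of the form delta_sigma = (J^T J + lambda L^T L)^-1 J^T (v_meas - v_sim). A numerical sketch of one such update is given below; the Jacobian, regularization operator and measurements are random stand-ins, not a real EIT forward model.

        # One Tikhonov-regularized Gauss-Newton update for an EIT-style inverse problem.
        import numpy as np

        rng = np.random.default_rng(0)
        n_meas, n_elems = 208, 500              # e.g. 16-electrode protocol, 500 mesh elements
        J = rng.normal(size=(n_meas, n_elems))  # Jacobian of boundary voltages w.r.t. conductivity
        L = np.eye(n_elems)                     # regularization operator (identity = 0th-order Tikhonov)
        v_meas = rng.normal(size=n_meas)        # measured boundary voltages
        v_sim = rng.normal(size=n_meas)         # voltages simulated at the current conductivity estimate
        lam = 1e-2                              # regularization parameter (e.g. chosen via the L-curve)

        residual = v_meas - v_sim
        delta_sigma = np.linalg.solve(J.T @ J + lam * (L.T @ L), J.T @ residual)
        print(delta_sigma.shape)                # (500,) conductivity update for this iteration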

  5. Programming languages and compiler design for realistic quantum hardware

    Science.gov (United States)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  6. HARDWARE IMPLEMENTATION OF SECURE AODV FOR WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    S. Sharmila

    2010-12-01

    Full Text Available Wireless Sensor Networks are extremely vulnerable to all kinds of routing attacks due to several factors such as wireless transmission and resource-constrained nodes. In this respect, securing the packets is of great importance when designing the infrastructure and protocols of sensor networks. This paper describes the hardware architecture of secure routing for wireless sensor networks. The routing path is selected using the Ad-hoc On-demand Distance Vector routing protocol (AODV). The data packets are converted into digests using hash functions. The functionality of the proposed method is modeled using Verilog HDL in the MODELSIM simulator and the performance is compared across various target devices. The results show that the data packets are secured and defended against routing attacks with minimum energy consumption.

  7. Programming languages and compiler design for realistic quantum hardware.

    Science.gov (United States)

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  8. Molecular Dynamics Simulations of Clathrate Hydrates on Specialised Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Christian R. Trott

    2012-09-01

    Full Text Available Classical equilibrium molecular dynamics (MD) simulations have been performed to investigate the computational performance of the Simple Point Charge (SPC) and TIP4P water models applied to simulation of methane hydrates, and also of liquid water, on a variety of specialised hardware platforms, in addition to estimation of various equilibrium properties of clathrate hydrates. The FPGA-based accelerator MD-GRAPE 3 was used to accelerate substantially the computation of non-bonded forces, while GPU-based platforms were also used in conjunction with CUDA-enabled versions of the LAMMPS MD software packages to reduce computational time dramatically. The dependence of molecular system size and scaling with number of processors was also investigated. Considering performance relative to power consumption, it is seen that GPU-based computing is quite attractive.

  9. On the Achievable Rate of Hardware-Impaired Transceiver Systems

    KAUST Repository

    Javed, Sidrah

    2018-01-15

    In this paper, we accurately model the transceiver hardware impairments (HWIs) of multiple-input multiple-output (MIMO) systems considering different HWI stages at the transmitter and receiver. The proposed novel statistical model shows that transceiver HWIs transform the transmitted symmetric signal into an asymmetric one. Moreover, it shows that the aggregate self-interference has asymmetric characteristics. Therefore, we propose improper Gaussian signaling (IGS) for transmission in order to improve the achievable rate performance. IGS is a general signaling scheme that includes proper Gaussian signaling (PGS) as a special case. Thus, IGS has additional design parameters which enable it to mitigate the HWI self-interference. As a case study, we analyze the achievable rate performance of single-input multiple-output systems with linear and selection combiners. Furthermore, we optimize the IGS statistical characteristics for interference alignment. This improves the achievable rate performance compared to PGS, which is validated through numerical results.

  10. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah

    2016-12-18

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signals, which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt an improper Gaussian signaling (IGS) scheme for the transmitted signal, which is more general than the conventional proper Gaussian signaling (PGS) scheme. First, we express the achievable rate of HWI systems for both the PGS and IGS schemes when the aggregate effect of HWI is modeled as an improper Gaussian signal. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed-form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting the IGS scheme compared with the PGS scheme.

  11. Using EPICS enabled industrial hardware for upgrading control systems

    International Nuclear Information System (INIS)

    Bjorkland, Eric A.; Veeramani, Arun; Debelle, Thierry

    2009-01-01

    Los Alamos National Laboratory has been working with National Instruments (NI) and Cosylab to implement EPICS Input Output Controller (IOC) software that runs directly on the NI CompactRIO Real-Time Controller (RTC) and communicates with NI LabVIEW through a shared memory interface. In this presentation, we will discuss our current progress in upgrading the control system at the Los Alamos Neutron Science Center (LANSCE) and what we have learned about integrating CompactRIO into large experimental physics facilities. We will also discuss the implications of using Channel Access Server for LabVIEW, which will enable more commercial hardware platforms to be used in upgrading existing facilities or in commissioning new ones.

  12. An Overview of Reconfigurable Hardware in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Wenyin Fu

    2006-09-01

    Full Text Available Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.

  13. Acquisition of reliable vacuum hardware for large accelerator systems

    International Nuclear Information System (INIS)

    Welch, K.M.

    1995-01-01

    Credible and effective communications prove to be the major challenge in the acquisition of reliable vacuum hardware. Technical competence is necessary but not sufficient. The authors must effectively communicate with management, sponsoring agencies, project organizations, service groups, staff, and vendors. Most of Deming's 14 quality assurance tenets relate to creating an enlightened environment of good communications. All projects progress along six distinct, closely coupled, dynamic phases. All six phases are in a state of perpetual change. These phases and their elements are discussed, with emphasis given to the acquisition phase and its related vocabulary. Large projects require great clarity and rigor, as poor communications can be costly. For rigor to be cost effective, it can't be pedantic. Clarity thrives best in a low-risk, team environment.

  14. Software and Hardware Infrastructure for Research in Electrophysiology

    Directory of Open Access Journals (Sweden)

    Roman Mouček

    2014-03-01

    Full Text Available As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and execution of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized, and the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software.

  15. Hardware and software for physical assessment work and health students

    Directory of Open Access Journals (Sweden)

    Олександр Юрійович Азархов

    2016-11-01

    Full Text Available The article describes the hardware and software used to assess students' health by means of information technology, organized as a PEAC (physical efficiency assessment channel). A list of the diseases that students most often suffer from was prepared, and for it a minimum set of informative primary biosignals was selected. The structural scheme of the PEAC was drawn up, and the ways to form and calculate the secondary parameters for evaluating students' health were shown. The resulting criteria, indices, indicators and parameters, grouped in a separate table for ease of use, are also presented in the article. The given list necessitates the choice of vital-activity parameters that are then used as the criteria for primary express-diagnostics of the health state, based on such indicators as the electrocardiogram, photoplethysmogram, spirogram, blood pressure, body mass and length, and dynamometry. These qualitative indicators should be supplemented with measurement methods that provide a quantitative component for each indicator. This method makes it possible to obtain assessments of students' health with the desired properties. The physical assessment channel, together with a channel for comprehensive evaluation of activity and a decision-support subsystem, ensures assessment of the student's health across all aspects of activity and professional training, thereby forming an adequate behavioral algorithm that promotes health, longevity and professional activity. The basic requirements for the hardware have been formulated: a minimum number of information-measuring channels; high noise immunity of those channels; comfort that does not disturb the student's normal activity; small dimensions, weight and power consumption; and simplicity, with service authorization where needed.

  16. Hierarchical Simulation to Assess Hardware and Software Dependability

    Science.gov (United States)

    Ries, Gregory Lawrence

    1997-01-01

    This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that, 87.5% of the time, the simulation method predicts the same fault impact as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.

  17. An integrable low-cost hardware random number generator

    Science.gov (United States)

    Ranasinghe, Damith C.; Lim, Daihyun; Devadas, Srinivas; Jamali, Behnam; Zhu, Zheng; Cole, Peter H.

    2005-02-01

    A hardware random number generator is different from a pseudo-random number generator; a pseudo-random number generator approximates the assumed behavior of a real hardware random number generator. Simple pseudo-random number generators suffice for most applications, but demanding situations, such as the generation of cryptographic keys, require an efficient and cost-effective source of random numbers. Arbiter-based Physical Unclonable Functions (PUFs), proposed for physical authentication of ICs, exploit the statistical delay variation of wires and transistors across integrated circuits, resulting from process variations, to build a secret key unique to each IC. Experimental results and theoretical studies show that a sufficient amount of variation exists across ICs. This variation enables each IC to be identified securely. It is possible to exploit the unreliability of these PUF responses to build a physical random number generator. There is measurement noise, which comes from the instability of an arbiter when it is in a racing condition, and there exist challenges whose responses are unpredictable; without environmental variations, the responses to these challenges are random in repeated measurements. Compared to other physical random number generators, PUF-based random number generators can be a compact and low-power solution, since the generator need only be turned on when required. A 64-stage PUF circuit costs less than 1000 gates and can be implemented using standard IC manufacturing processes. In this paper we present a fast and efficient random number generator and analyse the quality of the random numbers produced using an array of tests used by the National Institute of Standards and Technology to evaluate the randomness of random number generators designed for cryptographic applications.
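
    The simplest of the NIST randomness checks referred to above, the monobit (frequency) test, is easy to sketch; the bit source below is os.urandom, standing in for the PUF-based generator.

        # Monobit (frequency) test in the style of NIST SP 800-22, applied to a stand-in bit source.
        import math
        import os

        def monobit_p_value(bits):
            """Large |#ones - #zeros| relative to sqrt(n) indicates bias."""
            n = len(bits)
            s = sum(1 if b else -1 for b in bits)       # map bits to +/-1 and sum
            s_obs = abs(s) / math.sqrt(n)
            return math.erfc(s_obs / math.sqrt(2))      # p-value; >= 0.01 passes at the usual threshold

        raw = os.urandom(4096)                          # 32768 bits from the stand-in source
        bits = [(byte >> i) & 1 for byte in raw for i in range(8)]
        p = monobit_p_value(bits)
        print(f"monobit p-value = {p:.3f} -> {'pass' if p >= 0.01 else 'fail'}")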

  18. Systems Biology

    Indian Academy of Sciences (India)

    IAS Admin

    Systems biology seeks to study biological systems as a whole, contrary to the reductionist approach that has dominated biology. Such a view of biological systems emanating from strong foundations of molecular level understanding of the individual components in terms of their form, function and interactions is promising to ...

  19. Standard biological parts knowledgebase.

    Science.gov (United States)

    Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M; Gennari, John H

    2011-02-24

    We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publically accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web based data access to perform searches for parts that are not currently possible.

  20. Standard biological parts knowledgebase.

    Directory of Open Access Journals (Sweden)

    Michal Galdzicki

    2011-02-01

    Full Text Available We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publically accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web based data access to perform searches for parts that are not currently possible.

  1. Human wound photogrammetry with low-cost hardware based on automatic calibration of geometry and color

    Science.gov (United States)

    Jose, Abin; Haak, Daniel; Jonas, Stephan; Brandenburg, Vincent; Deserno, Thomas M.

    2015-03-01

    Photographic documentation and image-based wound assessment are frequently performed in medical diagnostics, patient care, and clinical research. To support quantitative assessment, photographic imaging usually relies on expensive, high-quality hardware and still needs appropriate registration and calibration. With inexpensive consumer hardware such as smartphone-integrated cameras, calibration of geometry, color, and contrast is challenging. Some methods involve color calibration using a reference pattern such as a standard color card, which is located manually in the photographs. In this paper, we adopt the lattice detection algorithm by Park et al. from real-world scenes to medicine. First, the algorithm extracts and clusters feature points according to their local intensity patterns. Groups of similar points are fed into a selection process, which tests their suitability as a lattice grid. The group with the highest probability of describing the meshes of a lattice is selected, and from it a template for an initial lattice cell is extracted. Then, a Markov random field is modeled. Using mean-shift belief propagation, the detection of the 2D lattice is solved iteratively as a spatial tracking problem. Least-squares geometric calibration of projective distortions and non-linear color calibration in RGB space are supported by 35 corner points and 24 color patches, respectively. The method is tested on 37 photographs taken from the German Calciphylaxis registry, where non-standardized photographic documentation is collected nationwide from all contributing trial sites. In all images, the reference card location is correctly identified. At least 28 out of 35 lattice points were detected, outperforming the SIFT-based approach previously applied. Based on these coordinates, robust geometry and color registration is performed, making the photographs comparable for quantitative analysis.
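
    The least-squares color-calibration step can be illustrated with a linear stand-in: fit an affine map from the RGB values measured on the reference card's patches to their known target values, then apply it to the photograph. The paper's calibration is non-linear, so the sketch below (with synthetic values) only conveys the idea.

        # Affine least-squares color correction fitted on 24 reference patches (synthetic data).
        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.uniform(0, 1, size=(24, 3))                  # known RGB of the 24 card patches
        true_M, true_b = np.diag([0.9, 1.1, 0.8]), np.array([0.05, -0.02, 0.03])
        measured = target @ true_M.T + true_b + rng.normal(0, 0.01, size=(24, 3))  # camera's view

        # solve measured * A ~= target in the least-squares sense, A being a 4x3 affine matrix
        X = np.hstack([measured, np.ones((24, 1))])               # append 1 for the offset term
        A, *_ = np.linalg.lstsq(X, target, rcond=None)

        def correct(image_rgb):
            """Apply the fitted affine color correction to an (n, 3) array of RGB values."""
            return np.hstack([image_rgb, np.ones((len(image_rgb), 1))]) @ A

        print(np.abs(correct(measured) - target).max())           # small residual after calibration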

  2. Desenvolvimento de hardware reconfigurável de criptografia assimétrica

    Directory of Open Access Journals (Sweden)

    Otávio Souza Martins Gomes

    2015-01-01

    Full Text Available This article presents partial results of the development of a reconfigurable hardware interface for asymmetric cryptography that allows the secure exchange of data. Reconfigurable hardware makes it possible to develop this kind of device with security and flexibility, and to change design characteristics quickly and at low cost. Keywords: Cryptography. Hardware. ElGamal. FPGA. Security. Development of an asymmetric cryptography reconfigurable hardware ABSTRACT This paper presents some conclusions and choices about the development of an asymmetric cryptography reconfigurable hardware interface to allow safe data communication. Reconfigurable hardware allows the development of this kind of device with safety and flexibility, and offers the possibility to change some features at low cost and in a fast way. Keywords: Cryptography. Hardware. ElGamal. FPGAs. Security.

  3. Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator

    Science.gov (United States)

    Asaad, Sameth W.; Kapur, Mohit

    2016-01-05

    A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.
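
    A rough software analogy of the idea, deriving two clock-enable streams from a common fast clock at an illustrative 3:2 frequency ratio (the counter scheme is an assumption, not taken from the disclosure):

        # Derive two clock-enable streams at a fixed 3:2 frequency ratio from a common
        # fast clock, so that every 6 fast cycles contain exactly 3 ticks of clock A
        # and 2 ticks of clock B (illustrative counter scheme).
        def clock_enables(fast_cycles, div_a=2, div_b=3):
            ticks = []
            for cycle in range(fast_cycles):
                tick_a = (cycle % div_a) == 0     # fires every 2nd fast cycle
                tick_b = (cycle % div_b) == 0     # fires every 3rd fast cycle
                ticks.append((cycle, tick_a, tick_b))
            return ticks

        for cycle, a, b in clock_enables(12):
            print(cycle, "A" if a else "-", "B" if b else "-")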

  4. Removal of symptomatic craniofacial titanium hardware following craniotomy: Case series and review

    Directory of Open Access Journals (Sweden)

    Sheri K. Palejwala

    2015-06-01

    Full Text Available Titanium craniofacial hardware has become commonplace for reconstruction and bone flap fixation following craniotomy. Complications of titanium hardware include palpability, visibility, infection, exposure, pain, and hardware malfunction, which can necessitate hardware removal. We describe three patients who underwent craniofacial reconstruction following craniotomies for trauma, with post-operative courses complicated by medically intractable facial pain. All three patients subsequently underwent removal of the symptomatic craniofacial titanium hardware and experienced rapid resolution of their painful paresthesias. Symptomatic plates were found in the region of the frontozygomatic suture or MacCarty keyhole, or in close proximity to the supraorbital nerve. Titanium plates, though relatively safe and low profile, can cause local nerve irritation or neuropathy. Surgeons should be cognizant of the potential complications of titanium craniofacial hardware and of locations that are at higher risk of becoming symptomatic, necessitating a second surgery for removal.

  5. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system

    Directory of Open Access Journals (Sweden)

    Daniel Brüderle

    2009-06-01

    Full Text Available Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
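
    The concept resembles the PyNN approach of simulator-independent network descriptions; a rough sketch in that style, where the backend module name and exact call signatures are assumptions that vary between PyNN versions and hardware backends:

        # PyNN-style, simulator-independent network description (sketch only; the
        # backend module and exact signatures are assumptions and differ by version).
        import pyNN.nest as sim   # swapping this import for a hardware backend reuses the script

        sim.setup(timestep=0.1)

        excitatory = sim.Population(100, sim.IF_cond_exp())
        stimulus = sim.Population(20, sim.SpikeSourcePoisson(rate=10.0))

        sim.Projection(stimulus, excitatory,
                       sim.FixedProbabilityConnector(0.1),
                       sim.StaticSynapse(weight=0.01, delay=1.0))

        excitatory.record("spikes")
        sim.run(1000.0)            # milliseconds of biological time
        data = excitatory.get_data()
        sim.end()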

  6. Low-complexity hardware design methodology for reliable and automated removal of ocular and muscular artifact from EEG.

    Science.gov (United States)

    Acharyya, Amit; Jadhav, Pranit N; Bono, Valentina; Maharatna, Koushik; Naik, Ganesh R

    2018-05-01

    EEG is a non-invasive tool for neuro-developmental disorder diagnosis and treatment. However, the EEG signal is mixed with other biological signals, including ocular and muscular artifacts, making it difficult to extract the diagnostic features. Therefore, the contaminated EEG channels are often discarded by medical practitioners, which may result in a less accurate diagnosis. Many existing methods require reference electrodes, which create discomfort for the patient or child and hinder diagnosis of neuro-developmental disorders and Brain Computer Interface use in a pervasive environment. It would therefore be ideal if these artifacts could be removed in real time on a hardware platform in an automated fashion, so that the denoised EEG could be used for online diagnosis in a pervasive, personalized healthcare environment without the need for any reference electrode. In this paper we propose a reliable, robust and automated methodology to solve this problem. The proposed methodology is based on Haar wavelet decomposition with simple threshold-based wavelet-domain denoising and artifact removal schemes. Hardware implementation results are also presented. 100 EEG recordings from the Physionet, Klinik für Epileptologie (Universität Bonn, Germany) and Caltech EEG databases, and 7 EEG recordings from 3 subjects from the University of Southampton, UK, have been studied, and nine exhaustive case studies comprising real and simulated data have been formulated and tested. The proposed methodology is prototyped and validated on an FPGA platform. As in the existing literature, the performance of the proposed methodology is measured in terms of correlation, regression and R-square statistics; the respective values lie above 80%, 79% and 65%, with a gain in hardware complexity of 64.28% and an improvement in hardware delay of 53.58% compared to state-of-the-art approaches. Hardware design based on the proposed methodology consumes 75 micro
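
    A sketch of the kind of threshold-based wavelet-domain denoising described, applied to a single EEG channel with PyWavelets (the threshold rule and decomposition level are assumptions, not the paper's exact scheme):

        # Threshold-based wavelet-domain artifact suppression on one EEG channel,
        # using a Haar decomposition (illustrative threshold rule and level).
        import numpy as np
        import pywt

        def denoise_channel(eeg, level=5, wavelet="haar"):
            coeffs = pywt.wavedec(eeg, wavelet, level=level)
            # Estimate the noise scale from the finest detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            threshold = sigma * np.sqrt(2 * np.log(len(eeg)))
            # Soft-threshold all detail bands; keep the approximation band untouched.
            coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(eeg)]

        cleaned = denoise_channel(np.random.randn(1024))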

  7. 15 MW HArdware-in-the-loop Grid Simulation Project

    Energy Technology Data Exchange (ETDEWEB)

    Rigas, Nikolaos [Clemson Univ., SC (United States); Fox, John Curtiss [Clemson Univ., SC (United States); Collins, Randy [Clemson Univ., SC (United States); Tuten, James [Clemson Univ., SC (United States); Salem, Thomas [Clemson Univ., SC (United States); McKinney, Mark [Clemson Univ., SC (United States); Hadidi, Ramtin [Clemson Univ., SC (United States); Gislason, Benjamin [Clemson Univ., SC (United States); Boessneck, Eric [Clemson Univ., SC (United States); Leonard, Jesse [Clemson Univ., SC (United States)

    2014-10-31

    The goal of the 15 MW Hardware-in-the-Loop (HIL) Grid Simulator project was to (1) design, (2) construct and (3) commission a state-of-the-art grid integration testing facility for testing multi-megawatt devices through a 'shared facility' model open to all innovators, to promote the rapid introduction of new technology in the energy market and lower the cost of delivered energy. The 15 MW HIL Grid Simulator project now serves as the cornerstone of the Duke Energy Electric Grid Research, Innovation and Development (eGRID) Center. This project leveraged the 24 kV utility interconnection and electrical infrastructure of the US DOE EERE funded WTDTF project at the Clemson University Restoration Institute in North Charleston, SC. Additionally, the project has spurred interest from other technology sectors, including large PV inverter and energy storage testing, and several leading-edge research proposals dealing with smart grid technologies, grid modernization and grid cyber security. The key components of the project are the power amplifier units capable of providing up to 20 MW of defined power to the research grid. The project has also developed a one-of-a-kind solution for performing fault ride-through testing by combining a reactive divider network and a large power converter into a hybrid method. This unique hybrid method of performing fault ride-through analysis will allow the research team at the eGRID Center to investigate the complex differences between the alternative methods of performing fault ride-through evaluations and will ultimately further the science behind this testing. With the final goal of being able to perform HIL experiments and demonstration projects, the eGRID team undertook a significant challenge in developing a control system capable of communicating in real time with several different pieces of equipment that use different communication protocols. The eGRID team developed a custom fiber optical network that is based upon FPGA

  8. Hardware replacements and software tools for digital control computers

    International Nuclear Information System (INIS)

    Walker, R.A.P.; Wang, B-C.; Fung, J.

    1996-01-01

    Technological obsolescence is an on-going challenge for all computer use. By design, and to some extent good fortune, AECL has had a good track record with respect to the march of obsolescence in CANDU digital control computer technology. Recognizing obsolescence as a fact of life, AECL has undertaken a program of supporting the digital control technology of existing CANDU plants. Other AECL groups are developing complete replacement systems for the digital control computers, and more advanced systems for the digital control computers of future CANDU reactors. This paper presents the results of the efforts of AECL's DCC service support group to replace obsolete digital control computer and related components and to provide friendlier software technology related to the maintenance and use of digital control computers in CANDU. These efforts are expected to extend the current lifespan of existing digital control computers through their mandated life. This group applied two simple rules: the product, whether new or a replacement, should have a generic basis, and the products should be applicable to both existing CANDU plants and to 'repeat' plant designs built using current design guidelines. While some exceptions do apply, the rules have been met. The generic requirement dictates that the product should not be dependent on any brand of technology, and should back-fit to and interface with any such technology which remains in the control design. The application requirement dictates that the product should have universal use and be user friendly to the greatest extent possible. Furthermore, both requirements were designed to anticipate user involvement, modifications and alternate user-defined applications. The replacements for hardware components such as the paper tape reader/punch, moving arm disk, contact scanner and Ramtek are discussed. The development of these hardware replacements coincides with the development of a gateway system for selected CANDU digital control

  9. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
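
    A software sketch of the underlying idea, running the same computation as two independent instances and verifying that their outputs agree (a simplified stand-in for the hardware redundant multi-threading described):

        # Run the computation as two redundant instances on the same input and verify
        # the outputs against each other before trusting the result.
        from concurrent.futures import ThreadPoolExecutor

        def computation(data):
            return sum(x * x for x in data)

        def verified_run(data):
            with ThreadPoolExecutor(max_workers=2) as pool:
                results = list(pool.map(computation, [data, data]))
            if results[0] != results[1]:
                raise RuntimeError("redundant instances disagree; output not trusted")
            return results[0]

        print(verified_run(range(1000)))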

  10. Metrics for Analyzing Quantifiable Differentiation of Designs with Varying Integrity for Hardware Assurance

    Science.gov (United States)

    2017-03-01

    manufacturing flow for error insertion by adversarial or dishonest agents inside a supplier. A hardware error is defined as any construct that causes... into the design with malicious intent to compromise a design's functionality and reliability. Other aims of hardware Trojans could be for... supplier level of abstraction [2] [3]. By quantifying the integrity of questionable hardware at the design level, one gains the granularity to

  11. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Table of contents (excerpt): Introduction and Biological Background; Biological Computation; The Influence of Biology on Mathematics - Historical Examples; Biological Introduction; Models and Simulations; Cellular Automata Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code; Evolutionary Computation; Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet

  12. A Hybrid Hardware and Software Component Architecture for Embedded System Design

    Science.gov (United States)

    Marcondes, Hugo; Fröhlich, Antônio Augusto

    Embedded systems are increasing in complexity, while several metrics such as time-to-market, reliability, safety and performance should be considered during the design of such systems. A component-based design which enables the migration of its components between hardware and software can help achieve such metrics. To enable this, we define hybrid hardware and software components as a development artifact that can be deployed by different combinations of hardware and software elements. In this paper, we present an architecture for developing such components in order to construct a repository of components that can migrate between the hardware and software domains to meet the system design requirements.

  13. Small Satellite Proximity Operations Hardware-in-the-Loop Test Bed Development

    Data.gov (United States)

    National Aeronautics and Space Administration — With the proliferation of small satellites resulting from CubeSat standardization of flight hardware elements, new mission architectures involving automated small...

  14. Studies on rumen magnet usage to prevent hardware disease in buffaloes

    Directory of Open Access Journals (Sweden)

    O. S. Al-Abbadi

    2014-06-01

    Full Text Available Aim: To evaluate the rumen magnet, given once in a lifetime, as a prophylaxis against hardware disease in buffaloes. Materials and Methods: In the present study, 3100 buffaloes were divided into two groups. In group I, 1200 buffaloes with hardware disease were surgically treated with rumenotomy, given reticular magnets and followed up to 7 years for possible recurrence of hardware disease. In group II, 1900 clinically normal buffalo heifers were given rumen magnets orally and then followed up to seven years for possible occurrence of hardware disease. All buffaloes that showed signs of hardware disease were treated by rumenotomy. Data were statistically analyzed using the chi-square test. Results: Hardware disease was recorded in 110 animals (10.8%) and 155 animals (8.9%) in groups I and II, respectively. The incidence of developing hardware disease during the first 4 years after the use of the magnet was 0% in both groups. Starting from the 5th year, a time-dependent increase in the proportion of buffaloes developing hardware disease was noticed in both groups (P 0.05). Conclusion: Administration of a rumen magnet is an effective prophylaxis for hardware disease, and reapplication of a second new magnet four years later is recommended in buffaloes at high risk.

  15. W-026 acceptance test plan plant control system hardware (submittal #216)

    Energy Technology Data Exchange (ETDEWEB)

    Watson, T.L., Fluor Daniel Hanford

    1997-02-14

    Acceptance Testing of the WRAP 1 Plant Control System Hardware will be conducted throughout the construction of WRAP 1, with the final testing on the Process Area hardware being completed in November 1996. The hardware tests will be broken out by the following functional areas: Local Control Units, Operator Control Stations in the WRAP Control Room, DMS Server, PCS Server, Operator Interface Units, printers, DNS terminals, WRAP Local Area Network/Communications, and bar code equipment. This document will contain completed copies of each of the hardware tests along with the applicable test logs and completed test exception reports.

  16. An integrated hardware/software configuration for the evaluation of nuclear data

    International Nuclear Information System (INIS)

    Zvenigorodskij, A.G.; Agureev, V.A.; Dunaev, I.B.; Dunaeva, S.A.; Lomtev, G.A.; Matvej, V.N.; Shapovalov, A.F.

    1984-01-01

    The article reviews a hardware/software configuration designed along modular lines for tasks involving graphically presented information, especially suitable for organizing collections of nuclear data

  17. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 per second. The triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the hig...

  18. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 per second. The triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the highest instantane...

  19. A Hardware Fast Tracker for the ATLAS trigger

    CERN Document Server

    Asbah, Nedaa; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 10^{34} cm^{-2}s^{-1}. After a successful period of data taking from 2010 to early 2013, the LHC restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project; it is a hardware processor that will provide, for every Level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in precise detection of the primary and secondar...

  20. What Scientific Applications can Benefit from Hardware Transactional Memory?

    Energy Technology Data Exchange (ETDEWEB)

    Schindewolf, M; Bihari, B; Gyllenhaal, J; Schulz, M; Wang, A; Karl, W

    2012-06-04

    Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march towards extreme scale computing, will be even more challenging when the resulting application is supposed to utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence the real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype system of BG/Q and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases an optimized TM version, executed with 64 threads on one node, outperforms a simple TM implementation. MCB with optimized TM yields a speedup of 27.45 over baseline.

  1. Commodity hardware and open source solutions in FTU data management

    International Nuclear Information System (INIS)

    Centioli, C.; Bracco, G.; Eccher, S.; Iannone, F.; Maslennikov, A.; Panella, M.; Vitale, V.

    2004-01-01

    The Frascati Tokamak Upgrade (FTU) data management system underwent several developments in the last year, mainly due to the availability of huge amounts of open source software and cheap commodity hardware. First of all, we replaced the four old and expensive SUN/SOLARIS servers running the AFS (Andrew File System) fusione.it cell with three SuperServer Supermicro SC-742 machines. Secondly, the Linux 2.4 OS was installed on our new cell servers and the OpenAFS 1.2.8 open source distributed file system replaced the commercial IBM/Transarc AFS. A pioneering solution - SGI's XFS file system for Linux - has been adopted to format one terabyte of FTU storage on which the AFS volumes are based. Benchmark tests have shown the good performance of XFS compared to the classic ext3 Linux file system. Third, the data access software has been ported to Linux, together with the interfaces to Matlab and IDL, as well as the locally developed data display utility, SHOX. Finally, a new Object-Oriented Data Model (OODM) has been developed for FTU shot data to build and maintain an FTU data warehouse (DW). The FTU OODM has been developed using ROOT, an object-oriented data analysis framework well known in high energy physics. Since large volumes of data are involved, a parallel data extraction process, developed in the ROOT framework, has been implemented, taking advantage of the AFS distributed environment of the FTU computing system

  2. Hardware-in-the-Loop emulator for a hydrokinetic turbine

    Science.gov (United States)

    Rat, C. L.; Prostean, O.; Filip, I.

    2018-01-01

    Hydroelectric power has proven to be an efficient and reliable form of renewable energy, but its impact on the environment has long been a source of concern. Hydrokinetic turbines are an emerging class of renewable energy technology designed for deployment in small rivers and streams with minimal environmental impact on the local ecosystem. Hydrokinetic technology represents a truly clean source of energy, having the potential to become a highly efficient method of harvesting renewable energy. However, in order to achieve this goal, extensive research is necessary. This paper presents a Hardware-in-the-Loop emulator for a run-of-the-river hydrokinetic turbine. The HIL system uses an ABB ACS800 drive to control an induction machine that replicates the behavior of the real turbine. The induction machine is coupled to a permanent magnet synchronous generator and the corresponding load. The ACS800 drive is controlled through the software system, which comprises a real-time simulation of the hydrokinetic turbine, implemented as a mathematical model in the LabVIEW programming environment running on an NI CompactRIO (cRIO) platform. The advantage of this method is that it provides a means of testing many control configurations without requiring the presence of the real turbine. This paper presents the basic principles of a hydrokinetic turbine, particularly run-of-the-river configurations, along with the experimental results obtained from the HIL system.
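
    The kind of rotor model such an emulator reproduces in real time can be sketched as follows, assuming the standard kinetic power relation and an illustrative power coefficient curve (not the authors' LabVIEW model):

        # Simple hydrokinetic rotor model: mechanical power and torque from water
        # speed and rotor speed (the Cp curve below is illustrative only).
        import math

        RHO = 1000.0          # water density, kg/m^3
        RADIUS = 0.5          # rotor radius, m
        AREA = math.pi * RADIUS ** 2

        def power_coefficient(tip_speed_ratio):
            # Illustrative bell-shaped Cp curve peaking near a tip-speed ratio of 5.
            return max(0.0, 0.45 * math.exp(-((tip_speed_ratio - 5.0) ** 2) / 8.0))

        def rotor_torque(water_speed, rotor_speed_rad_s):
            tsr = rotor_speed_rad_s * RADIUS / water_speed
            p_mech = 0.5 * RHO * AREA * power_coefficient(tsr) * water_speed ** 3
            return p_mech / max(rotor_speed_rad_s, 1e-6)

        print(rotor_torque(water_speed=2.0, rotor_speed_rad_s=20.0))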

  3. Towards Batched Linear Solvers on Accelerated Hardware Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haidar, Azzam [University of Tennessee (UT); Dong, Tingzing Tim [University of Tennessee (UT); Tomov, Stanimire [University of Tennessee (UT); Dongarra, Jack J [ORNL

    2015-01-01

    As hardware evolves, an increasingly effective approach to developing energy-efficient, high-performance solvers is to design them to work on many small and independent problems. Indeed, many applications already need this functionality, especially for GPUs, which are known to be currently about four to five times more energy efficient than multicore CPUs for every floating-point operation. In this paper, we describe the development of the main one-sided factorizations (LU, QR, and Cholesky) needed for sets of small dense matrices to be processed in parallel. We refer to such algorithms as batched factorizations. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-contained execution. Note that this is similar in functionality to the LAPACK and the hybrid MAGMA algorithms for large-matrix factorizations. But it is different from a straightforward approach, whereby each of the GPU's symmetric multiprocessors factorizes a single problem at a time. We illustrate how our performance analysis, together with the profiling and tracing tools, guided the development of batched factorizations to achieve up to 2-fold speedup and 3-fold better energy efficiency compared to our highly optimized batched CPU implementations based on the MKL library on a two-socket Intel Sandy Bridge server. Compared to a batched LU factorization featured in NVIDIA's CUBLAS library for GPUs, we achieve up to a 2.5-fold speedup on the K40 GPU.
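
    The batched idea can be illustrated on the CPU by stacking many small matrices and factorizing them in a single call (NumPy here stands in for the GPU batched BLAS routines described in the paper):

        # Factor many small matrices in one call by stacking them, instead of looping
        # over one factorization at a time.
        import numpy as np

        batch, n = 1000, 32
        A = np.random.rand(batch, n, n)
        A = A @ A.transpose(0, 2, 1) + n * np.eye(n)   # make each matrix SPD

        # One call performs the Cholesky factorization of all 1000 matrices.
        L = np.linalg.cholesky(A)

        # Batched solve against per-matrix right-hand sides.
        b = np.random.rand(batch, n, 1)
        x = np.linalg.solve(A, b)
        print(L.shape, x.shape)   # (1000, 32, 32) (1000, 32, 1)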

  4. Physics of Colloids in Space (PCS) Flight Hardware Developed

    Science.gov (United States)

    Koudelka, John M.

    2001-01-01

    The Physics of Colloids in Space (PCS) investigation will be located in an Expedite the Process of Experiments to Space Station (EXPRESS) Rack. The investigation will be conducted in the International Space Station U.S. laboratory, Destiny, over a period of approximately 10 months during the station assembly period from flight 6A through flight UF-2. This experiment will gather data on the basic physical properties of colloids by studying three different colloid systems, with the objective of understanding how they grow and what structures they form. A colloidal suspension consists of fine particles (micrometer to submicrometer) suspended in a fluid, for example paints, milk, salad dressings, and aerosols. The long-term goal of this investigation is to learn how to steer the growth of colloidal suspensions to create new materials and new structures. This experiment is part of a two-stage investigation conceived by Professor David Weitz of Harvard University along with Professor Peter Pusey of the University of Edinburgh. The experiment hardware was developed by the NASA Glenn Research Center through contracts with Dynacs, Inc., and ZIN Technologies.

  5. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track-finding at 100 kHz and based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGA) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  6. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data taking run the LHC is expected to run starting in 2015 with much higher instantaneous luminosities and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful, Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  7. The Hardware Topological Trigger of ATLAS: Commissioning and Operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226165; The ATLAS collaboration

    2018-01-01

    The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, a muon trigger and a central trigger processor. To improve the physics reach of ATLAS, during the LHC shutdown after Run 1 the Level-1 trigger system was upgraded at the hardware, firmware and software levels. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. On each blade, real-time information from the calorimeter and muon Level-1 trigger systems is processed by four individual state-of-the-art FPGAs. It needs to deal with a large input bandwidth of up to 6 Tb/s, optical connectivity and low processing latency on the real-time data path. The L1Topo firmware applies measurements of angles between jets and/or leptons and several...

  8. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  9. Hardware breakage in spine surgery (a retrospective clinical study)

    Directory of Open Access Journals (Sweden)

    "Sadat MM

    2001-11-01

    Full Text Available This was a retrospective review of a consecutive series of patients with spinal disease in the year 2000 who underwent posterior fusion and instrumentation with Harrington distraction and the Cotrel-Dubousset system, to evaluate causes of hardware failure. Many cases of clinical failure have been observed in spinal instrumentation used for spinal disorders such as spondylolisthesis, fractures, deformities, … . Thirty-six cases operated on for spinal disorders such as spondylolisthesis, fractures, deformities, …, were included in this study. Seventeen of these cases had breakage of the device. Factors such as age at surgery, type of instrumentation, angles before and after surgery, and …, were compared between the two groups of patients. The most common instrument breakage was pedicle screw breakage. Pseudoarthrosis was the main factor present in the failure group (P value < 0.001). Other important causes were age of the patient at surgery (P value = 0.04) and pedicle screw placement off center in the sagittal or coronal plane of the pedicle (P value = 0.04). Instrumentation loads increased significantly as a direct result of variations in surgical technique that produce pseudoarthrosis, pedicle screw placement off center in the sagittal plane of the pedicle, or use of screws less than 6 mm in diameter. These factors can be prevented with meticulous surgical technique and the use of proper devices.

  10. Space station common module network topology and hardware development

    Science.gov (United States)

    Anderson, P.; Braunagel, L.; Chwirka, S.; Fishman, M.; Freeman, K.; Eason, D.; Landis, D.; Lech, L.; Martin, J.; Mccorkle, J.

    1990-01-01

    Conceptual space station common module power management and distribution (SSM/PMAD) network layouts and detailed network evaluations were developed. Individual pieces of hardware to be developed for the SSM/PMAD test bed were identified. A technology assessment was developed to identify pieces of equipment requiring development effort. Equipment lists were developed from the previously selected network schematics. Additionally, functional requirements for the network equipment, as well as other requirements which affected the suitability of specific items for use on the Space Station Program, were identified. Assembly requirements were derived based on the SSM/PMAD developed requirements and on the selected SSM/PMAD network concepts. Basic requirements and simplified design block diagrams are included. DC remote power controllers were successfully integrated into the DC Marshall Space Flight Center breadboard. Two DC remote power controller (RPC) boards experienced mechanical failure of UES 706 stud-mounted diodes during mechanical installation of the boards into the system. These broken diodes caused input-to-output shorting of the RPCs. The UES 706 diodes were replaced on these RPCs, which eliminated the problem. The DC RPCs as they exist in the present breadboard configuration do not provide ground fault protection because the RPC was designed to switch only the hot-side current. If ground fault protection were to be implemented, it would be necessary to design the system so that the RPC switched both the hot and the return sides of power.

  11. Hardware Implementation of Artificial Neural Network for Data Ciphering

    Directory of Open Access Journals (Sweden)

    Sahar L. Kadoory

    2016-10-01

    Full Text Available This paper introduces the design and realization of multiple block ciphering techniques on FPGAs (Field Programmable Gate Arrays). Back-propagation neural networks have been built for substitution, permutation and XOR block ciphering using the Neural Network Toolbox in MATLAB. They are trained to encrypt the data after obtaining the suitable weights, biases, activation function and layout. Afterward, they are described using VHDL and implemented on a Xilinx Spartan-3E FPGA using two approaches: serial and parallel versions. The simulation results were obtained with Xilinx ISE 9.2i software. The numerical precision is chosen carefully when implementing the neural network on the FPGA. Results obtained from the hardware designs show accurate numeric values for ciphering the data. As expected, the synthesis results indicate that the serial version requires fewer area resources than the parallel version, while the data throughput of the parallel version is higher than that of the serial version by a factor of between 1.13 and 1.5. Also, a slight difference can be observed in the maximum frequency.
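
    A sketch of training a tiny back-propagation network on the XOR block used in such a ciphering scheme (a NumPy stand-in for the MATLAB Neural Network Toolbox flow described above; layer sizes and learning rate are assumptions):

        # Train a small MLP with plain back-propagation to reproduce the XOR truth table.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(20000):
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)              # back-propagate squared error
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ d_out
            b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * X.T @ d_h
            b1 -= 0.5 * d_h.sum(axis=0)

        # Usually converges to [0, 1, 1, 0]; a different seed may be needed occasionally.
        print(np.round(out).ravel())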

  12. Hardware in the Loop Testing of an Iodine-Fed Hall Thruster

    Science.gov (United States)

    Polzin, Kurt A.; Peeples, Steven R.; Cecil, Jim; Lewis, Brandon L.; Molina Fraticelli, Jose C.; Clark, James P.

    2015-01-01

    initiated from an operator's workstation outside the vacuum chamber and passed through the Cortex 160 to exercise portions of the flight avionics. Two custom-designed pieces of electronics hardware have been designed to operate the propellant feed system. One piece of hardware is an auxiliary board that controls a latch valve, proportional flow control valves (PFCVs) and valve heaters as well as measuring pressures, temperatures and PFCV feedback voltage. An onboard FPGA provides a serial link for issuing commands and manages all lower level input-output functions. The other piece of hardware is a power distribution board, which accepts a standard bus voltage input and converts this voltage into all the different current-voltage types required to operate the auxiliary board. These electronics boards are located in the vacuum chamber near the thruster, exposing this hardware to both the vacuum and plasma environments they would encounter during a mission, with these components communicating to the flight computer through an RS-422 interface. The auxiliary board FPGA provides a 28V MOSFET switch circuit with a 20ms pulse to open or close the iodine propellant feed system latch valve. The FPGA provides a pulse width modulation (PWM) signal to a DC/DC boost converter to produce the 12-120V needed for control of the proportional flow control valve. There are eight MOSFET-switched heating circuits in the system. Heaters are 28V and located in the latch valve, PFCV, propellant tank and propellant feed lines. Both the latch valve and PFCV have thermistors built into them for temperature monitoring. There are also seven resistance temperature device (RTD) circuits on the auxiliary board that can be used to measure the propellant tank and feedline temperatures. The signals are conditioned and sent to an analog to digital converter (ADC), which is directly commanded and controlled by the FPGA.

  13. Inexpensive, Low Power, Open-Source Data Logging hardware development

    Science.gov (United States)

    Sandell, C. T.; Schulz, B.; Wickert, A. D.

    2017-12-01

    Over the past six years, we have developed a suite of open-source, low-cost, and lightweight data loggers for scientific research. These loggers employ the popular and easy-to-use Arduino programming environment, but consist of custom hardware optimized for field research. They may be connected to a broad and expanding range of off-the-shelf sensors, with software support built directly into the "ALog" library. Three main models exist: The ALog (for Autonomous or Arduino Logger) is the extreme low-power model for years-long deployments with only primary AA or D batteries. The ALog shield is a stripped-down ALog that nests with a standard Arduino board for prototyping or education. The TLog (for Telemetering Logger) contains an embedded radio with 500 m range and a GPS for communications and precision timekeeping. This enables meshed networks of loggers that can send their data back to an internet-connected "home base" logger for near-real-time field data retrieval. All boards feature a high-precision clock, a full-size SD card slot for high-volume data storage, large screw terminals to connect sensors, interrupts, SPI and I2C communication capability, and 3.3 V/5 V power outputs. The ALog and TLog have fourteen 16-bit analog inputs with a precision voltage reference for precise analog measurements. Their components are rated -40 to +85 degrees C, and they have been tested in harsh field conditions. These low-cost and open-source data loggers have enabled our research group to collect field data across North and South America on a limited budget, support student projects, and build toward better future scientific data systems.

  14. Analog-to-Digital Cognitive Radio: Sampling, Detection, and Hardware

    Science.gov (United States)

    Cohen, Deborah; Tsiper, Shahar; Eldar, Yonina C.

    2018-01-01

    The proliferation of wireless communications has recently created a bottleneck in terms of spectrum availability. Motivated by the observation that the root of the spectrum scarcity is not a lack of resources but inefficient management that can be solved, dynamic opportunistic exploitation of spectral bands has been considered, under the name of Cognitive Radio (CR). This technology allows secondary users to access currently idle spectral bands by detecting and tracking the spectrum occupancy. The CR application revisits this traditional task with specific and severe requirements in terms of spectrum sensing and detection performance, real-time processing, robustness to noise and more. Unfortunately, conventional methods do not satisfy these demands for typical signals, which often have very high Nyquist rates. Recently, several sampling methods have been proposed that exploit signals' a priori known structure to sample them below the Nyquist rate. Here, we review some of these techniques and tie them to the task of spectrum sensing in the context of CR. We then show how issues related to spectrum sensing can be tackled in the sub-Nyquist regime. First, to cope with low signal-to-noise ratios, we propose to recover second-order statistics from the low-rate samples, rather than the signal itself. In particular, we consider cyclostationary-based detection, and investigate CR networks that perform collaborative spectrum sensing to overcome channel effects. To enhance the efficiency of detecting the available spectral bands, we present joint spectrum sensing and direction-of-arrival estimation methods. Throughout this work, we highlight the relation between theoretical algorithms and their practical implementation. We show hardware simulations performed on a prototype we built, demonstrating the feasibility of sub-Nyquist spectrum sensing in the context of CR.
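
    For comparison, a conventional Nyquist-rate energy detector for the same sensing task can be sketched as follows (this baseline is not the sub-Nyquist cyclostationary approach reviewed in the paper; signal parameters are illustrative):

        # Nyquist-rate energy detection of spectrum occupancy from a Welch periodogram.
        import numpy as np
        from scipy.signal import welch

        fs = 1e6                                   # sample rate, Hz
        t = np.arange(200_000) / fs
        signal = np.cos(2 * np.pi * 150e3 * t)     # one active narrowband transmission
        noise = 0.5 * np.random.randn(t.size)
        x = signal + noise

        freqs, psd = welch(x, fs=fs, nperseg=4096)
        threshold = 5 * np.median(psd)             # crude noise-floor based threshold
        occupied = freqs[psd > threshold]
        print("occupied band (Hz):", occupied.min(), "-", occupied.max())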

  15. Hardware implementation of antenna array system for maximum SLL reduction

    Directory of Open Access Journals (Sweden)

    Amr H. Hussein

    2017-06-01

    Full Text Available Side lobe level (SLL) reduction is of great importance in modern communication systems. It is considered one of the most important applications of digital beamforming, since it reduces the effect of interference arriving outside the main lobe. This interference reduction increases the capacity of communication systems. In this paper, the hardware implementation of an antenna array system for SLL reduction is introduced using microstrip technology. The proposed antenna array system consists of two main parts: the antenna array and its feeding network. Power dividers play a vital role in various radio frequency and communication applications. A power divider can be utilized as the feeding network of an antenna array. For the synthesis of a radiation pattern, an unequal-split power divider is required. A new design for a four-port unequal circular sector power divider and its application to antenna array SLL reduction is introduced. The amplitude and phase of the signals emerging from each power divider branch are adjusted using stub and inset matching techniques. These matching techniques are used to adjust the branch impedances according to the desired power ratio. The antenna array and the power divider are designed using the software package CST MICROWAVE STUDIO. The power divider is realized on a Rogers RO3010 substrate with dielectric constant εr=10.2, loss tangent of 0.0035, and height h=1.28 mm. In addition, a design for an ultra-wideband (UWB) antenna element and array is introduced. The antenna elements and the array are realized on an FR4 (lossy) substrate with dielectric constant εr=4.5, loss tangent of 0.025, and height h=1.5 mm. The fabrication is done using thin-film technology and a photolithographic technique. The experimental measurements are done using the vector network analyzer (VNA) HP8719ES. Good agreement is found between the measurements and the simulation results.
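
    To illustrate the effect the feeding network is designed to achieve, here is a sketch comparing the peak side-lobe level of a uniform and an amplitude-tapered linear array (element count, spacing and taper are illustrative, not the paper's divider ratios):

        # Compare peak side-lobe level of an 8-element array with uniform weights
        # versus an amplitude taper (Hamming weights as an example taper).
        import numpy as np

        n, d = 8, 0.5                              # elements, spacing in wavelengths
        theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
        phase = np.outer(np.arange(n), 2 * np.pi * d * np.sin(theta))

        def sll_db(weights):
            af = np.abs(weights @ np.exp(1j * phase))
            af_db = 20 * np.log10(af / af.max() + 1e-12)
            # Local maxima of the pattern: the largest is the main lobe,
            # the next largest is the peak side lobe.
            peaks = [af_db[i] for i in range(1, af_db.size - 1)
                     if af_db[i] > af_db[i - 1] and af_db[i] > af_db[i + 1]]
            peaks.sort(reverse=True)
            return peaks[1] if len(peaks) > 1 else float("-inf")

        print("uniform taper SLL (dB):", round(sll_db(np.ones(n)), 1))     # roughly -13 dB
        print("Hamming taper SLL (dB):", round(sll_db(np.hamming(n)), 1))  # noticeably lower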

  16. Industrial hardware and software verification with ACL2.

    Science.gov (United States)

    Hunt, Warren A; Kaufmann, Matt; Moore, J Strother; Slobodova, Anna

    2017-10-13

    The ACL2 theorem prover has seen sustained industrial use since the mid-1990s. Companies that have used ACL2 regularly include AMD, Centaur Technology, IBM, Intel, Kestrel Institute, Motorola/Freescale, Oracle and Rockwell Collins. This paper introduces ACL2 and focuses on how and why ACL2 is used in industry. ACL2 is well-suited to its industrial application to numerous software and hardware systems, because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language ACL2 permits the coding of efficient and robust programs; as a prover ACL2 can be fully automatic but provides many features permitting domain-specific human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. Crucially, ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanized verification routine when appropriate and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much). This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).

  17. RESEARCH PROGRESS AND HARDWARE SYSTEMS AT DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    PETERSEN,P.I; THE DIII-D TEAM

    2003-10-01

    During the last two years significant progress has been made in the scientific understanding of DIII-D plasmas. Much of this progress has been enabled by the addition of new hardware systems. The electron cyclotron (EC) system has been upgraded from 3 MW to 6 MW by adding three 1 MW gyrotrons with diamond windows and three steerable launchers (PPPL). The new gyrotrons have been tested to 1.0 MW for 5 s. The system has been used to control the 3/2 and 2/1 neoclassical tearing modes and to locally heat the plasma and thereby indirectly control the current density. Electron cyclotron current drive (ECCD) has been used to directly affect the current density. A Li-beam diagnostic has been brought on-line for measuring the edge current density using Zeeman splitting. A set of 12 coils (I-coils), consisting of six picture-frame coils each above and below the midplane, with a capability of 7 kA for 10 s, has been installed inside the DIII-D vessel. These coils, along with the existing six C-coils, are used to apply non-axisymmetric fields to the plasma for both exciting and controlling plasma instabilities. The DIII-D digital plasma control system is now used to control not just the shape and location of the plasma but also the electron temperature, density, the NTMs, RWMs, plasma beta and disruption mitigation. Plasma disruption experiments have been extended to mitigation of disruptions detected in real time on DIII-D.

  18. Implementing the lattice Boltzmann model on commodity graphics hardware

    International Nuclear Information System (INIS)

    Kaufman, Arie; Fan, Zhe; Petkov, Kaloian

    2009-01-01

    Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
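
    The per-site, data-parallel character of the method can be seen in a minimal NumPy sketch of one D2Q9 collision-and-streaming step (a generic textbook formulation, not the authors' GPU or Zippy code):

        # Minimal D2Q9 lattice Boltzmann BGK collision and periodic streaming step.
        import numpy as np

        nx, ny, tau = 64, 64, 0.6
        # D2Q9 lattice velocities and weights.
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        f = np.ones((9, nx, ny)) * w[:, None, None]      # start from rest-state equilibrium

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
            usq = ux**2 + uy**2
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        for _ in range(10):
            rho = f.sum(axis=0)
            ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
            uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
            for i in range(9):                                 # periodic streaming
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

        print("mass conserved:", np.isclose(f.sum(), nx * ny))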

  19. A Unified Component Modeling Approach for Performance Estimation in Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan

    1998-01-01

    This paper presents an approach for abstract modeling of hardware/software architectures using Hierarchical Colored Petri Nets. The approach is able to capture complex behavioral characteristics often seen in software and hardware architectures, and thus it is suitable for high-level codesign issues ... system [12]. Details and the basic characteristics of the approach can be found in [8].

  20. Hardware detection and parameter tuning method for speed control system of PMSM

    Science.gov (United States)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    In this paper, the development of a permanent magnet synchronous motor (PMSM) AC speed control system is taken as an example. The aim is to explain the principle and parameter setting method of the system hardware, and to put forward a software- or hardware-based method to eliminate the problem.

  1. Total knee arthroplasty using patient-specific blocks after prior femoral fracture without hardware removal

    Directory of Open Access Journals (Sweden)

    Raju Vaishya

    2018-01-01

    Full Text Available Background: The options for performing total knee arthroplasty (TKA) with retained hardware in the femur are mainly removal of the hardware, use of an extramedullary guide, or computer-assisted surgery. Patient-specific blocks (PSBs) have been introduced with many potential advantages, but their use with retained hardware has not been adequately explored. The purpose of the present study was to outline and assess the usefulness of PSBs in performing TKA in patients with retained femoral hardware. Materials and Methods: Nine patients with retained femoral hardware underwent TKA using PSBs. All the surgeries were performed by the same surgeon using the same implants. Nine cases (7 males and 2 females) out of a total of 120 primary TKAs had retained hardware. The average age of the patients was 60.55 years. The retained hardware comprised nails in 6 patients, plates in 2, and screws in one patient. Out of the nine cases, only one patient needed removal of a screw which was hindering placement of a pin for the PSB. Results: All the patients had significant improvement in their Knee Society Score (KSS), which improved from 47.0 preoperatively to 86.77 postoperatively (P < 0.00. The mechanical axis was significantly improved (P < 0.03) after surgery. No patient required blood transfusion, and the average tourniquet time was 41 min. Conclusion: TKA using PSBs is useful and can be performed in patients with retained hardware with good functional and radiological outcomes.

  2. Secure Hardware Implementation of Nonlinear Functions in the Presence of Glitches

    NARCIS (Netherlands)

    Paar, C.; Nikova, S.I.; Rijmen, Vincent; Quisquater, J.J.; Sunar, B.; Schläffer, Martin

    2011-01-01

    Hardware implementations of cryptographic algorithms are vulnerable to side-channel attacks. Side-channel attacks that are based on multiple measurements of the same operation can be countered by employing masking techniques. Many protection measures depart from an idealized hardware model that is

  3. Hardware Reuse Improvement through the Domain Specific Language dHDL.

    OpenAIRE

    Sánchez Marcos, Miguel Ángel; López Vallejo, Marisa; Iglesias Fernandez, Carlos Angel

    2012-01-01

    The dHDL language has been defined to improve hardware design productivity. This is achieved through the definition of a better reuse interface (including parameters, attributes and macroports) and the creation of control structures that help the designer in the hardware generation process.

  4. Design of embedded hardware platform in intelligent γ-spectrometry instrument based on ARM9

    International Nuclear Information System (INIS)

    Hong Tianqi; Fang Fang

    2008-01-01

    This paper describes the design of an embedded hardware platform based on the ARM9 S3C2410A, with emphasis on the design of the memory, LCD and keyboard port circuits. It presents a new hardware platform solution for an intelligent portable instrument for γ measurement. (authors)

  5. Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.

    Science.gov (United States)

    Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.

    2002-01-01

    Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.…

  6. 75 FR 34169 - Hewlett-Packard Company, Inkjet Consumer Solutions, HP Consumer Hardware Inkjet Lab, Including...

    Science.gov (United States)

    2010-06-16

    ... Hardware Inkjet Lab, Including Leased Workers From Hightower Technology Capital, Inc., Syncro Design, VMC, PDG Oncore, K Force, Supply Source, Sigma Design, Novo Engineering, Act, Stilwell Baker, and... Company, Inkjet Consumer Solutions, HP Consumer Hardware Inkjet Lab, Vancouver, Washington. The notice was...

  7. 34 CFR 464.42 - What limit applies to purchasing computer hardware and software?

    Science.gov (United States)

    2010-07-01

    ... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...

  8. Rapid Non-Cartesian Parallel Imaging Reconstruction on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Atkinson, David; Boubertakh, Redha

    2008-01-01

    This presentation describes an implementation of non-Cartesian SENSE and kt-SENSE accelerated on commodity graphics hardware. This inexpensive hardware platform is now fully programmable and very suited for solving reconstruction problems. We show that for both SENSE and kt-SENSE the reconstruction...

  9. Initial Study

    DEFF Research Database (Denmark)

    Torp, Kristian

    2009-01-01

    Congestion is a major problem in most cities and the problem is growing (Quiroga, 2000) (Faghri & Hamad, 2002). When the congestion level increases, drivers notice this as delays in traffic (Taylor, Woolley, & Zito, 2000), i.e., the travel time for the individual driver is simply increased. In the initial study presented here, the time it takes to pass an intersection is studied in detail. Two major signal-controlled four-way intersections in the center of the city of Aalborg are studied in detail to estimate the congestion levels in these intersections, based on the time it takes

  10. A Millimetre-Sized Robot Realized by a Piezoelectric Impact-Type Rotary Actuator and a Hardware Neuron Model

    Directory of Open Access Journals (Sweden)

    Minami Takato

    2014-07-01

    Full Text Available Micro-robotic systems are increasingly used in medicine and other fields requiring precision engineering. This paper proposes a piezoelectric impact-type rotary actuator and applies it to a millimetre-size robot controlled by a hardware neuron model. The rotary actuator and robot are fabricated by micro-electro-mechanical systems (MEMS technology. The actuator is composed of multilayer piezoelectric elements. The rotational motion of the rotor is generated by the impact head attached to the piezoelectric element. The millimetre-size robot is fitted with six legs, three on either side of the developed actuator, and can walk on uneven surfaces like an insect. The three leg parts on each side are connected by a linking mechanism. The control system is a hardware neuron model constructed from analogue electronic circuits that mimic the behaviour of biological neurons. The output signal ports of the controller are connected to the multilayer piezoelectric element. This robot system requires no specialized software programs or A/D converters. The rotation speed of the rotary actuator reaches 60 rpm at an applied neuron frequency of 25 kHz during the walking motion. The width, length and height of the robot are 4.0, 4.6 and 3.6 mm, respectively. The motion speed is 180 mm/min.
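
    As an illustration of how such a pulse-type controller behaves, the sketch below implements a leaky integrate-and-fire oscillator in Python. It is a minimal software stand-in for the analogue hardware neuron described in the record, not the authors' circuit; the drive, leak and threshold values are arbitrary and only chosen so that the pulse rate lands in the tens-of-kilohertz range quoted above.

        # Minimal leaky integrate-and-fire oscillator standing in for the
        # analogue pulse-type hardware neuron; every emitted pulse would step
        # the piezoelectric actuator once.  Parameter values are illustrative.
        def pulse_train(drive=30000.0, threshold=1.0, leak=0.05,
                        dt=1e-6, duration=4e-4):
            """Return the pulse times (in seconds) produced over 'duration'."""
            v, t, pulses = 0.0, 0.0, []
            while t < duration:
                v += (drive - leak * v) * dt    # membrane-like integration
                if v >= threshold:              # fire: emit a pulse, reset state
                    pulses.append(t)
                    v = 0.0
                t += dt
            return pulses

        spikes = pulse_train()
        print(f"pulse rate ~ {len(spikes) / 4e-4 / 1000:.0f} kHz")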

  11. Power Hardware-in-the-Loop-Based Anti-Islanding Evaluation and Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Schoder, Karl [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Langston, James [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Hauer, John [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Bogdan, Ferenc [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Steurer, Michael [Florida State Univ., Tallahassee, FL (United States). Center for Advanced Power Systems (CAPS); Mather, Barry [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-10-01

    The National Renewable Energy Laboratory (NREL) teamed with Southern California Edison (SCE), Clean Power Research (CPR), Quanta Technology (QT), and Electrical Distribution Design (EDD) to conduct a U.S. Department of Energy (DOE) and California Public Utility Commission (CPUC) California Solar Initiative (CSI)-funded research project investigating the impacts of integrating high-penetration levels of photovoltaics (PV) onto the California distribution grid. One topic researched in the context of high-penetration PV integration onto the distribution system is the ability of PV inverters to (1) detect islanding conditions (i.e., when the distribution system to which the PV inverter is connected becomes disconnected from the utility power connection) and (2) disconnect from the islanded system within the time specified in the performance specifications outlined in IEEE Standard 1547. This condition may cause damage to other connected equipment due to insufficient power quality (e.g., over- and under-voltages) and may also be a safety hazard to personnel who may be working on feeder sections to restore service. NREL teamed with the Florida State University (FSU) Center for Advanced Power Systems (CAPS) to investigate a new way of testing PV inverters against the IEEE Standard 1547 unintentional-islanding performance specifications using power hardware-in-the-loop (PHIL) laboratory testing techniques.
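
    To make the requirement concrete, the following sketch shows one passive islanding check of the kind an inverter controller might run: trip when voltage or frequency leaves a normal operating window and stays outside it longer than a clearing time. The window limits and the two-second clearing time are illustrative placeholders rather than values quoted from IEEE Standard 1547, and the check is far simpler than the PHIL test setup described above.

        # Passive over/under-voltage and over/under-frequency islanding check.
        # Thresholds and clearing time are placeholders, not standard values.
        def should_trip(samples, v_limits=(0.88, 1.10), f_limits=(59.3, 60.5),
                        clearing_time=2.0):
            """samples: iterable of (t_seconds, v_pu, f_hz); returns trip time or None."""
            out_since = None
            for t, v_pu, f_hz in samples:
                in_window = (v_limits[0] <= v_pu <= v_limits[1]
                             and f_limits[0] <= f_hz <= f_limits[1])
                if in_window:
                    out_since = None                  # back in range, reset timer
                else:
                    out_since = t if out_since is None else out_since
                    if t - out_since >= clearing_time:
                        return t                      # cease to energize
            return None

        # Synthetic trace: an island forms at t = 1.0 s and the voltage sags.
        trace = [(0.1 * k, 1.0 if 0.1 * k < 1.0 else 0.80, 60.0) for k in range(60)]
        print(should_trip(trace))   # trips around t = 3.0 s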

  12. Tumor Biology and Microenvironment Research

    Science.gov (United States)

    Part of NCI's Division of Cancer Biology's research portfolio, research in this area seeks to understand the role of tumor cells and the tumor microenvironment (TME) in driving cancer initiation, progression, maintenance and recurrence.

  13. LIDAR TS for ITER core plasma. Part I: layout & hardware

    Science.gov (United States)

    Salzmann, H.; Gowers, C.; Nielsen, P.

    2017-12-01

    The original time-of-flight design of the Thomson scattering diagnostic for the ITER core plasma has been given up by ITER. This decision was justified by insufficiencies of some of the components. In this paper we show that, with available present-day technology, a LIDAR TS system is feasible which meets all the ITER specifications. As opposed to the conventional TS system, the LIDAR TS also measures the high-field side of the plasma. The optical layout of the front end has been changed only slightly compared with the latest one considered by ITER. The main change is that it offers optical collection without any vignetting over the low-field side. The throughput of the system is defined only by the size and the angle of acceptance of the detectors. This, in combination with the fact that the LIDAR system uses only one set of spectral channels for the whole line of sight, means that no absolute calibration using Raman or Rayleigh scattering from a non-hydrogen isotope gas fill of the vessel is needed. Alignment of the system is easy since the collection optics view the footprint of the laser on the inner wall. In the described design we use, simultaneously, two different wavelength pulses from a Nd:YAG laser system. Its fundamental wavelength ensures measurements from 2 keV up to more than 40 keV, whereas the injection of the second harmonic enables measurements of low temperatures. As it is the purpose of this paper to show the technological feasibility of the LIDAR system, the hardware is considered in Part I of the paper. In Part II we demonstrate by numerical simulations that the accuracy of the measurements as required by ITER is maintained throughout the given plasma parameter range. The effect of enhanced background radiation in the wavelength range 400 nm-500 nm is considered. In Part III the recovery of calibration in case of changing spectral transmission of the front end is treated. We also investigate how to improve the spatial resolution at the
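
    A back-of-the-envelope relation helps explain why the laser pulse duration and the detector response govern the spatial resolution of such a time-of-flight system (the figures below are illustrative, not taken from the paper): since the scattered light travels back along the laser path, a timing spread maps onto position roughly as

        \Delta z \approx \frac{c}{2}\,\bigl(\tau_{\mathrm{laser}} + \tau_{\mathrm{det}}\bigr)

    so, for example, a 300 ps laser pulse combined with a 300 ps detection response gives Δz ≈ 1.5e8 m/s × 600 ps ≈ 9 cm along the line of sight.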

  14. Standardization in synthetic biology.

    Science.gov (United States)

    Müller, Kristian M; Arndt, Katja M

    2012-01-01

    Synthetic Biology is founded on the idea that complex biological systems are built most effectively when the task is divided in abstracted layers and all required components are readily available and well-described. This requires interdisciplinary collaboration at several levels and a common understanding of the functioning of each component. Standardization of the physical composition and the description of each part is required as well as a controlled vocabulary to aid design and ensure interoperability. Here, we describe standardization initiatives from several disciplines, which can contribute to Synthetic Biology. We provide examples of the concerted standardization efforts of the BioBricks Foundation comprising the request for comments (RFC) and the Registry of Standardized Biological parts as well as the international Genetically Engineered Machine (iGEM) competition.

  15. Initial impacts and field validation of host range for Boreioglycaspis melaleucae Moore (Hemiptera: Psyllidae),a biological control agent of the invasive tree Melaleuca quinquenervia (Cav.) Blake (Myrtales: Myrtaceae: Leptosp

    Science.gov (United States)

    Invasion of south Florida wetlands by the Australian paperbark tree, Melaleuca quinquenervia (Cav.) S.T. Blake ("melaleuca"), has caused adverse economic and environmental impacts. The tree's biological attributes, along with favorable ambient biophysical conditions, combine to complicate ...

  16. Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, H S.

    2006-06-01

    The biology revolution over the last 50 years has been driven by the ascendancy of molecular biology. This was enthusiastically embraced by most biologists because it took us into increasingly familiar territory. It took mysterious processes, such as the replication of genetic material, and assigned them parts that could be readily understood by the human mind. When we think of "molecular machines" as being the underlying basis of life, we are using a paradigm derived from everyday experience. However, the price that we paid was a relentless drive towards reductionism and the attendant balkanization of biology. Now along comes "systems biology", which promises us a solution to the problem of "knowing more and more about less and less". Unlike molecular biology, systems biology appears to be taking us into unfamiliar intellectual territory, such as statistics, mathematics and computer modeling. Not surprisingly, systems biology has met with widespread skepticism and resistance. Why do we need systems biology anyway, and how does this new area of research promise to change the face of biology in the next couple of decades?

  17. Biological therapeutics

    National Research Council Canada - National Science Library

    Greenstein, Ben; Brook, Daniel A

    2011-01-01

    This introductory textbook covers all the main categories of biological medicines, including vaccines, hormonal preparations, drugs for rheumatoid arthritis and other connective tissue diseases, drugs...

  18. Structural Biology: Practical NMR Applications

    CERN Document Server

    Teng, Quincy

    2005-01-01

    This textbook begins with an overview of NMR development and applications in biological systems. It describes recent developments in instrument hardware and methodology. Chapters highlight the scope and limitation of NMR methods. While detailed math and quantum mechanics dealing with NMR theory have been addressed in several well-known NMR volumes, chapter two of this volume illustrates the fundamental principles and concepts of NMR spectroscopy in a more descriptive manner. Topics such as instrument setup, data acquisition, and data processing using a variety of offline software are discussed. Chapters further discuss several routine strategies for preparing samples, especially for macromolecules and complexes. The target market for such a volume includes researchers in the field of biochemistry, chemistry, structural biology and biophysics.

  19. Is synthetic biology mechanical biology?

    Science.gov (United States)

    Holm, Sune

    2015-12-01

    A widespread and influential characterization of synthetic biology emphasizes that synthetic biology is the application of engineering principles to living systems. Furthermore, there is a strong tendency to express the engineering approach to organisms in terms of what seems to be an ontological claim: organisms are machines. In the paper I investigate the ontological and heuristic significance of the machine analogy in synthetic biology. I argue that the use of the machine analogy and the aim of producing rationally designed organisms does not necessarily imply a commitment to mechanical biology. The ideal of applying engineering principles to biology is best understood as expressing recognition of the machine-unlikeness of natural organisms and the limits of human cognition. The paper suggests an interpretation of the identification of organisms with machines in synthetic biology according to which it expresses a strategy for representing, understanding, and constructing living systems that are more machine-like than natural organisms.

  20. Software-Controlled Dynamically Swappable Hardware Design in Partially Reconfigurable Systems

    Directory of Open Access Journals (Sweden)

    Huang Chun-Hsian

    2008-01-01

    Full Text Available Abstract We propose two basic wrapper designs and an enhanced wrapper design for arbitrary digital hardware circuit designs such that they can be enhanced with the capability for dynamic swapping controlled by software. A hardware design with either of the proposed wrappers can thus be swapped out of the partially reconfigurable logic at runtime in some intermediate state of computation and then swapped in when required to continue from that state. The context data is saved to a buffer in the wrapper at interruptible states, and then the wrapper takes care of saving the hardware context to communication memory through a peripheral bus, and later restoring the hardware context after the design is swapped in. The overheads of the hardware standardization and the wrapper in terms of additional reconfigurable logic resources and the time for context switching are small and generally acceptable. With the capability for dynamic swapping, high priority hardware tasks can interrupt low-priority tasks in real-time embedded systems so that the utilization of hardware space per unit time is increased.
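
    The software-visible side of such a swap can be pictured as a small save/restore protocol. The sketch below is a schematic illustration only: the register offsets, the FakeBus class, and the context-buffer size are invented, and the real wrapper saves its context to communication memory over a peripheral bus rather than through Python calls.

        # Schematic software-side view of the swap-out / swap-in protocol.
        # All register offsets and the bus model are hypothetical.
        class FakeBus:
            """In-memory stand-in for the peripheral bus (illustration only)."""
            def __init__(self):
                self.mem = {}
            def write(self, addr, value):
                self.mem[addr] = value
            def read(self, addr):
                return self.mem.get(addr, 0)
            def write_block(self, addr, data):
                for i, word in enumerate(data):
                    self.mem[addr + i] = word
            def read_block(self, addr, length):
                return [self.mem.get(addr + i, 0) for i in range(length)]

        class SwappableTask:
            HALT_REQ, CTX_VALID, RESUME, CTX_BUF = 0x0, 0x4, 0x8, 0x100

            def __init__(self, bus, base):
                self.bus, self.base = bus, base

            def swap_out(self):
                self.bus.write(self.base + self.HALT_REQ, 1)   # halt at interruptible state
                while not self.bus.read(self.base + self.CTX_VALID):
                    pass                                       # wait for saved context
                return self.bus.read_block(self.base + self.CTX_BUF, 64)

            def swap_in(self, context):
                self.bus.write_block(self.base + self.CTX_BUF, context)  # restore context
                self.bus.write(self.base + self.RESUME, 1)               # continue from it

        bus = FakeBus()
        bus.write(0x1000 + 0x4, 1)          # emulate the wrapper having saved its state
        task = SwappableTask(bus, 0x1000)
        saved = task.swap_out()             # high-priority task could now use the region
        task.swap_in(saved)                 # later: the low-priority task resumes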

  1. No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent

    Science.gov (United States)

    Zaghloul, A. R. M.; Zaghloul, Y. A.

    2014-06-01

    We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream reconfigurable, where a single hardware architecture functions as any selected device out of all possible ones with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows the attack to take place, rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point link, a point-to-multipoint link, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
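
    The selection mechanism can be illustrated with a tiny lookup-table model: one output bit per input combination, so a 4-bit bitstream picks one of the 16 possible two-input functions without changing the structure itself. The sketch below is only an illustration of that idea; the bit ordering is an assumption and it says nothing about the optical implementation.

        # One two-input "device" whose behaviour is chosen by a 4-bit bitstream
        # (one output bit per input combination); same structure, 16 functions.
        def make_device(bitstream):
            """bitstream: 4 chars of '0'/'1', outputs for inputs (0,0),(0,1),(1,0),(1,1)."""
            table = {(a, b): int(bitstream[(a << 1) | b]) for a in (0, 1) for b in (0, 1)}
            return lambda a, b: table[(a, b)]

        and_gate = make_device("0001")   # configured as AND
        xor_gate = make_device("0110")   # same structure, reconfigured as XOR
        print(and_gate(1, 1), xor_gate(1, 0))   # -> 1 1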

  2. Mesoscopic biology

    Indian Academy of Sciences (India)

    Abstract. In this paper we present a qualitative outlook of mesoscopic biology where the typical length scale is of the order of nanometers and the energy scales comparable to thermal energy. ... National Center for Biological Sciences, Tata Institute of Fundamental Research, UAS-GKVK Campus, Bangalore 560 065, India ...

  3. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  4. Mesoscopic biology

    Indian Academy of Sciences (India)

    In this paper we present a qualitative outlook of mesoscopic biology where the typical length scale is of the order of nanometers and the energy scales comparable to thermal energy. Novel biomolecular machines, governed by coded information at the level of DNA and proteins, operate at these length scales in biological ...

  5. Signal Analysis Van Hardware Operation General Description. Volume 1.

    Science.gov (United States)

    1981-12-01

    for operation with interactive graphics software. Hard copy printouts of the 4014 display are produced using the Versatec line printer in a hard-copy ... initiated during an output operation to the printer, the hard copy process will begin only after the current computer direction transmission is ... Linearity: 25 vpm/°C max. (40 C/LSB); warmup time: 5 min; control: by programmed instructions, clock counter overflow, or external input; output ...

  6. Real time hardware vision processing for a bionic eye

    OpenAIRE

    Josh, Horace Edmund

    2017-01-01

    A recent objective in medical bionics research is to develop visual prostheses - devices that could potentially restore the sight of blind individuals. The Monash Vision Group is currently working towards implementing a fully autonomous direct-to-brain vision implant called the Gennaris. Although research in this field is progressing quickly, initial implementations of these devices will be quite naive, offering very basic levels of vision. The vision is anticipated to be binary - that is wit...

  7. 3D Printed Fluidic Hardware for DNA Assembly

    Science.gov (United States)

    2015-04-10

    initiatives such as the FabLab Foundation. Access to digital fabrication tools and open electronics, such as Arduino and Raspberry Pi, enables access to ... mill (Roland DG Corporation, Hamamatsu, Japan) running Arduino firmware. A user interface (Supplementary Figure 9) enabled the user to control the ... A3909 stepper motor driver, were soldered onto the milled circuit board (Supplementary Figure 8). Custom Arduino-based firmware was written to take

  8. Openness initiative

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, S.S. [Los Alamos National Lab., NM (United States)

    1995-12-31

    Although antinuclear campaigns seem to be effective, public communication and education efforts on low-level radioactive waste have mixed results. Attempts at public information programs on low-level radioactive waste still focus on influencing public opinion. A question then is: "Is it preferable to have a program focus on public education that will empower individuals to make informed decisions rather than trying to influence them in their decisions?" To address this question, a case study with both quantitative and qualitative data will be used. The Ohio Low-Level Radioactive Waste Education Program has a goal to provide people with information they want/need to make their own decisions. The program initiated its efforts by conducting a statewide survey to determine the information needed by people and where they turned for that information. This presentation reports data from the survey and then explores the program development process in which programs were designed and presented using the information. Pre- and post-program data reveal attitude and knowledge shifts.

  9. Feasibility study of a XML-based software environment to manage data acquisition hardware devices

    International Nuclear Information System (INIS)

    Arcidiacono, R.; Brigljevic, V.; Bruno, G.; Cano, E.; Cittolin, S.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Gulmini, M.; Gutleber, J.; Jacobs, C.; Kreuzer, P.; Lo Presti, G.; Magrans, I.; Marinelli, N.; Maron, G.; Meijers, F.; Meschi, E.; Murray, S.; Nafria, M.; Oh, A.; Orsini, L.; Pieri, M.; Pollet, L.; Racz, A.; Rosinsky, P.; Schwick, C.; Sphicas, P.; Varela, J.

    2005-01-01

    A software environment to describe configuration, control, and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and the associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface
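
    The flavour of such an approach can be sketched as follows: a hardware module's settings are written as XML and applied through a generic software layer. The schema, register names and values below are invented for illustration; they are not the format used by the CMS software described in the record.

        # Hypothetical XML description of one hardware module, applied through
        # a uniform interface; the schema and names are illustrative only.
        import xml.etree.ElementTree as ET

        CONFIG = """
        <module name="adc_board_0" driver="vme">
          <register name="threshold" value="0x3F"/>
          <register name="clock_divider" value="8"/>
        </module>
        """

        def apply_config(xml_text, write_register):
            root = ET.fromstring(xml_text)
            for reg in root.findall("register"):
                write_register(root.get("name"), reg.get("name"), int(reg.get("value"), 0))

        # Stand-in for the web-service / bus access layer.
        apply_config(CONFIG, lambda mod, reg, val: print(f"{mod}.{reg} <- {val}"))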

  10. Beyond Open Source Software: Solving Common Library Problems Using the Open Source Hardware Arduino Platform

    Directory of Open Access Journals (Sweden)

    Jonathan Younker

    2013-06-01

    Full Text Available Using open source hardware platforms like the Arduino, libraries have the ability to quickly and inexpensively prototype custom hardware solutions to common library problems. The authors present the Arduino environment, what it is, what it does, and how it was used at the James A. Gibson Library at Brock University to create a production portable barcode-scanning utility for in-house use statistics collection as well as a prototype for a service desk statistics tabulation program’s hardware interface.

  11. Hardware design document for the Infrasound Prototype for a CTBT IMS station

    Energy Technology Data Exchange (ETDEWEB)

    Breding, D.R.; Kromer, R.P. [Sandia National Labs., Albuquerque, NM (United States); Whitaker, R.W.; Sandoval, T. [Los Alamos National Lab., NM (United States)

    1997-11-01

    The Hardware Design Document (HDD) describes the various hardware components used in the Comprehensive Test Ban Treaty (CTBT) Infrasound Prototype and their interrelationships. It divides the infrasound prototype into hardware configuration items (HWCIs). The HDD uses techniques such as block diagrams and parts lists to present this information. The level of detail provided in the following sections should be sufficient to allow potential users to procure and install the infrasound system. Infrasonic monitoring is a low cost, robust, and effective technology for detecting atmospheric explosions. Low frequencies from explosion signals propagate to long ranges (few thousand kilometers) where they can be detected with an array of sensors.

  12. Management of a CFD organization in support of space hardware development

    Science.gov (United States)

    Schutzenhofer, L. A.; Mcconnaughey, P. K.; Mcconnaughey, H. V.; Wang, T. S.

    1991-01-01

    The management strategy of NASA-Marshall's CFD branch in support of space hardware development and code validation implements various elements of total quality management. The strategy encompasses (1) a teaming strategy which focuses on the most pertinent problem, (2) quick-turnaround analysis, (3) the evaluation of retrofittable design options through sensitivity analysis, and (4) coordination between the chief engineer and the hardware contractors. Advanced-technology concepts are being addressed via the definition of technology-development projects whose products are transferable to hardware programs and the integration of research activities with industry, government agencies, and universities, on the basis of the 'consortium' concept.

  13. Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

    OpenAIRE

    Ojani, Amin; Caglar, Ahmet

    2008-01-01

    A major bottleneck, for performance as well as power consumption, for graphics hardware in mobile devices is the amount of data that needs to be transferred to and from memory. In hardware-accelerated 3D graphics, for example, a large part of the memory accesses are due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed using an RGB color format. For both 3D graphics rasterization and image composition, several pixels need to be read ...
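
    As a minimal, generic example of why color-buffer bandwidth matters (not one of the algorithms evaluated in this work), packing 24-bit RGB into 16-bit RGB565 already cuts the bytes transferred per pixel by a third, at the cost of some color precision:

        # Pack 8-bit R, G, B into one 16-bit RGB565 word and expand it back.
        def pack_rgb565(r, g, b):
            return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

        def unpack_rgb565(word):
            r, g, b = (word >> 11) & 0x1F, (word >> 5) & 0x3F, word & 0x1F
            # replicate high bits into the low bits to approximate the original
            return (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)

        packed = pack_rgb565(255, 128, 0)
        print(hex(packed), unpack_rgb565(packed))   # 0xfc00 (255, 130, 0)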

  14. An interactive audio-visual installation using ubiquitous hardware and web-based software deployment

    Directory of Open Access Journals (Sweden)

    Tiago Fernandes Tavares

    2015-05-01

    Full Text Available This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware pieces that are built into most modern personal computers. This scenario implies specific technical restrictions, which lead to solutions combining both the technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.

  15. The technology of electromagnetic radiation danger estimation using the hardware-software module

    Directory of Open Access Journals (Sweden)

    Titov Eugene

    2017-01-01

    Full Text Available The article describes the principles of operation of a hardware-software module whose purpose is to estimate the danger level of combined electromagnetic field exposure to the human organism. The module consists of a hardware part and a software part. The hardware part is an array of electromagnetic parameter detectors; the software part is an electromagnetic field modelling program based on OpenEMS. The module creates so-called images of electromagnetic environment danger. The results show the practical applicability of the module for the stated purpose.
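
    One simple way to picture the kind of evaluation such a module could perform is shown below: each detector reading is normalised by a reference limit for its frequency band and the normalised values are combined into a single index. The limits, bands, and aggregation rule are arbitrary placeholders, not values from the article or from any exposure standard.

        # Combine per-band field measurements into one danger index.
        # Reference limits and the aggregation rule are placeholders.
        REFERENCE_LIMITS = {      # hypothetical limits per band, V/m
            "50 Hz": 5000.0,
            "900 MHz": 41.0,
            "2.4 GHz": 61.0,
        }

        def danger_index(readings):
            """readings: dict of band -> measured field strength in V/m."""
            ratios = [readings[band] / REFERENCE_LIMITS[band] for band in readings]
            return sum(r * r for r in ratios) ** 0.5   # > 1 would flag the location

        print(danger_index({"50 Hz": 120.0, "900 MHz": 12.0, "2.4 GHz": 3.0}))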

  16. Design and implementation of static Huffman encoding hardware using a parallel shifting algorithm

    CERN Document Server

    Tae Yeon Lee

    2004-01-01

    This paper discusses the implementation of static Huffman encoding hardware for real-time lossless compression for the electromagnetic calorimeter in the CMS experiment. The construction of the Huffman encoding hardware illustrates how the implementation optimizes the logic size. The number of logic gates in the parallel shift operation required for the hardware was examined. An experiment with a simulated environment and an FPGA shows that the real-time constraint is fulfilled and that the chosen buffer length is appropriate. (16 refs).
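
    For reference, the sketch below shows plain static Huffman coding in software (building the code table from symbol frequencies and emitting a bit string). It illustrates the scheme being cast into hardware, but it does not model the paper's contribution, the parallel shifting logic that assembles the variable-length codes in real time.

        # Static Huffman coding in software: build a prefix-code table from
        # symbol frequencies, then encode the data bit-serially.
        import heapq
        from collections import Counter

        def huffman_codes(symbol_counts):
            heap = [(count, i, {sym: ""})
                    for i, (sym, count) in enumerate(symbol_counts.items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                c1, _, t1 = heapq.heappop(heap)
                c2, _, t2 = heapq.heappop(heap)
                merged = {s: "0" + code for s, code in t1.items()}
                merged.update({s: "1" + code for s, code in t2.items()})
                heapq.heappush(heap, (c1 + c2, next_id, merged))
                next_id += 1
            return heap[0][2]

        data = "ECAL ECAL ECAL readout data"
        codes = huffman_codes(Counter(data))
        encoded = "".join(codes[ch] for ch in data)
        print(len(encoded), "bits instead of", 8 * len(data))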

  17. Development of embedded PC and FPGA based systems with virtual hardware

    Science.gov (United States)

    Zabołotny, Wojciech M.

    2012-05-01

    This paper discusses a methodology for developing digital systems based on both an embedded computer and a tightly coupled FPGA, using virtual hardware. This approach allows design concepts, and even parts of the firmware and software, to be tested and developed before the real hardware is built, and allows multiple developers to work simultaneously when only a limited number of prototype devices is available. The aim of this paper is to present the different available methods providing hardware-software co-simulation for the development of such digital systems, with emphasis on open source solutions, and to discuss their applicability.
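
    One lightweight flavour of the approach is a behavioural register-map model that stands in for the FPGA so driver and application code can be written before the device (or a full HDL co-simulation) is available. The register layout and completion behaviour below are invented purely for illustration.

        # Behavioural stand-in for an FPGA peripheral with two registers;
        # driver code written against it can later target the real hardware.
        class VirtualDevice:
            CTRL, STATUS = 0x00, 0x04

            def __init__(self):
                self.regs = {self.CTRL: 0, self.STATUS: 0}

            def write(self, addr, value):
                self.regs[addr] = value
                if addr == self.CTRL and value & 0x1:   # model: start bit completes at once
                    self.regs[self.STATUS] = 0x1        # done flag

            def read(self, addr):
                return self.regs[addr]

        dev = VirtualDevice()
        dev.write(VirtualDevice.CTRL, 0x1)
        assert dev.read(VirtualDevice.STATUS) & 0x1
        print("operation completed in the virtual device")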

  18. Analysis of Hardware Impairments on the Energy Harvesting Hybrid Relay Networks

    Science.gov (United States)

    Guo, K.; Guo, D.; Zhang, B.

    2018-01-01

    In real communication systems, hardware impairments must often be considered. When a system suffers from hardware impairments, its performance degrades. For this reason, this paper investigates the impact of hardware impairments on the performance of energy harvesting hybrid relay networks. In particular, a closed-form expression for the outage performance and an optimal analysis of the instantaneous throughput are derived; the analysis shows that the impairment level causes a significant loss in system performance. In addition, numerical results are presented to verify the correctness of the analysis.
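
    The abstract does not state the signal model, but a commonly used aggregate model for residual transceiver impairments (a reasonable assumption for this kind of analysis, not necessarily the paper's exact model) adds distortion noise at the transmitter and the receiver:

        y = h\,(x + \eta_t) + \eta_r + n, \qquad
        \eta_t \sim \mathcal{CN}(0, \kappa_t^2 P), \quad
        \eta_r \sim \mathcal{CN}(0, \kappa_r^2 P \lvert h \rvert^2)

    where P is the transmit power and the impairment levels κ_t, κ_r (often related to the error vector magnitude) reduce to the ideal-hardware case when set to zero; larger κ values cap the achievable signal-to-noise-and-distortion ratio and hence degrade the outage and throughput.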

  19. Hardware dependencies of GPU-accelerated beamformer performances for microwave breast cancer detection

    Directory of Open Access Journals (Sweden)

    Salomon Christoph J.

    2016-09-01

    Full Text Available UWB microwave imaging has proven to be a promising technique for early-stage breast cancer detection. The extensive image reconstruction time can be reduced by parallelizing the execution of the underlying beamforming algorithms. However, the efficiency of the parallelization will most likely depend on the degree of parallelism of the imaging algorithm and on the utilized hardware. This paper investigates the dependencies of two different beamforming algorithms on multiple hardware specifications across several graphics boards. The parallel implementation is realized using NVIDIA's CUDA. Three conclusions are drawn about the behavior of the parallel implementation and how to efficiently use the available hardware.
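
    The abstract does not spell the algorithms out, so the sketch below is a generic delay-and-sum beamformer of the kind used in UWB microwave imaging, written with NumPy. The per-voxel loop is what maps naturally onto the GPU (one thread per image point); the actual algorithms and data layout in the paper may differ.

        # Generic delay-and-sum beamformer: each image point sums the antenna
        # traces at its round-trip delays.  Voxels are independent, so the loop
        # parallelises one-thread-per-voxel on a GPU.
        import numpy as np

        def delay_and_sum(signals, delays, weights=None):
            """signals: (n_ant, n_samples); delays: (n_vox, n_ant) in samples."""
            n_vox, n_ant = delays.shape
            weights = np.ones(n_ant) if weights is None else weights
            image = np.zeros(n_vox)
            for v in range(n_vox):
                idx = np.clip(np.round(delays[v]).astype(int), 0, signals.shape[1] - 1)
                image[v] = np.sum(weights * signals[np.arange(n_ant), idx]) ** 2
            return image

        rng = np.random.default_rng(0)
        img = delay_and_sum(rng.standard_normal((8, 256)), rng.uniform(0, 255, (1000, 8)))
        print(img.shape)   # (1000,) energy values, one per image point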

  20. Integrated conception of hardware/software mixed systems used in nuclear instrumentation

    International Nuclear Information System (INIS)

    Dias, Ailton F.; Sorel, Yves; Akil, Mohamed

    2002-01-01

    Hardware/software codesign addresses the design of systems composed of a hardware portion, with specific components, and a software portion, with a microprocessor-based architecture. This paper describes the Algorithm Architecture Adequation (AAA) design methodology - originally oriented toward programmable multicomponent architectures - its extension to reconfigurable circuits, and its application to the design and development of nuclear instrumentation systems composed of programmable and configurable circuits. The AAA methodology uses a unified model, based on graph theory, to describe the algorithm, the architecture, and the implementation. The great advantage of the AAA methodology is the use of the same model from specification to implementation of hardware/software systems, reducing complexity and design time. (author)