WorldWideScience

Sample records for flight hardware processing

  1. Flight Hardware Virtualization for On-Board Science Data Processing

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  2. Flight Hardware Virtualization for On-Board Science Data Processing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilize Hardware Virtualization technology to benefit on-board science data processing by investigating new real time embedded Hardware Virtualization solutions and...

  3. Contamination Control and Hardware Processing Solutions at Marshall Space Flight Center

    Science.gov (United States)

    Burns, DeWitt H.; Hampton, Tammy; Huey, LaQuieta; Mitchell, Mark; Norwood, Joey; Lowrey, Nikki

    2012-01-01

    The Contamination Control Team of Marshall Space Flight Center's Materials and Processes Laboratory supports many Programs/Projects that design, manufacture, and test a wide range of hardware types that are sensitive to contamination and foreign object damage (FOD). Examples where contamination/FOD concerns arise include sensitive structural bondline failure, critical orifice blockage, seal leakage, and reactive fluid compatibility (liquid oxygen, hydrazine) as well as performance degradation of sensitive instruments or spacecraft surfaces such as optical elements and thermal control systems. During the design phase, determination of the sensitivity of a hardware system to different types or levels of contamination/FOD is essential. A contamination control and FOD control plan must then be developed and implemented through all phases of ground processing, and, sometimes, on-orbit use, recovery, and refurbishment. Implementation of proper controls prevents cost and schedule impacts due to hardware damage or rework and helps assure mission success. Current capabilities are being used to support recent and on-going activities for multiple Mission Directorates/Programs such as the International Space Station (ISS), James Webb Space Telescope (JWST), Space Launch System (SLS) elements (tanks, engines, booster), etc. The team also advances Green Technology initiatives and addresses materials obsolescence issues for NASA and external customers, most notably in the area of solvent replacement (e.g., aqueous cleaners containing hexavalent chrome, ozone depleting chemicals (CFCs and HCFCs), suspect carcinogens). The team evaluates new surface cleanliness inspection and cleaning technologies (e.g., plasma cleaning), and maintains databases for processing support materials as well as outgassing and optical compatibility test results for spaceflight environments.

  4. Novel Exercise Hardware Requirements, Development, and Selection Process for Long-Duration Space Flight

    Science.gov (United States)

    Weaver, Aaron S.; Funk, Justin H.; Funk, Nathan W.; Dewitt, John K.; Fincke, Renita S.; Newby, Nathaniel; Caldwell, Erin; Sheehan, Christopher C.; Moore, E. Cherice; Ploutz-Snyder, Lori; hide

    2014-01-01

    Long-duration space flight poses many hazards to the health of the crew. Among those hazards is the physiological deconditioning of the musculoskeletal and cardiovascular systems due to prolonged exposure to microgravity. To combat the physical toll that exploration space flight may take on the crew, NASA's Human Research Program is charged with developing exercise protocols and hardware to maintain astronaut health and fitness during long-term missions. The goal of this effort is to preserve the physical capability of the crew to perform mission critical tasks in transit and during planetary surface operations. As NASA aims toward space travel outside of low-Earth orbit (LEO), the constraints placed upon exercise equipment onboard the vehicle increase. Proposed vehicle architectures for transit to and from locations outside of LEO call for limits to equipment volume, mass, and power consumption. While NASA has made great strides in providing for the physical welfare of the crew, the equipment currently used onboard the ISS is too large, too massive, and too power hungry to consider for long-duration flight. The goal of the Advanced Exercise Concepts (AEC) project is to maintain the resistive and aerobic capabilities of the current ISS suite of exercise equipment while making reductions in size, mass, and power consumption in order to make the equipment suitable for long-duration missions.

  5. The Impact of Flight Hardware Scavenging on Space Logistics

    Science.gov (United States)

    Oeftering, Richard C.

    2011-01-01

    For a given fixed launch vehicle capacity, the logistics payload delivered to the Moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the Moon and thus low availability of spares for maintenance. This implies that lunar hardware is much scarcer and more costly per kilogram than ISS hardware, and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have had only a matter of hours of operation, yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications, including expanding the power infrastructure and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars' worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity, and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.

  6. Cognitive Processing Hardware Elements

    National Research Council Canada - National Science Library

    Widrow, Bernard; Eliashberg, Victor; Kamenetsky, Max

    2005-01-01

    The purpose of this research is to identify and develop cognitive information processing systems and algorithms that can be implemented with novel architectures and devices with the goal of achieving...

  7. Mechanics of Granular Materials (MGM) Flight Hardware

    Science.gov (United States)

    1997-01-01

    A test cell for the Mechanics of Granular Materials (MGM) experiment is shown in its on-orbit configuration in Spacehab during preparations for STS-89. The twin locker to the left contains the hydraulic system to operate the experiment. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Note: Because the image on the screen was muted in the original image, its brightness and contrast are boosted in this rendering to make the test cell more visible. Credit: NASA/Marshall Space Flight Center (MSFC)

  8. Autonomous Operations Design Guidelines for Flight Hardware

    Data.gov (United States)

    National Aeronautics and Space Administration — SSC experimentally modified an autonomous operations flexible system suite developed for a ground application for a flight system under development by JSC. The...

  9. Development of a Methodology to Conduct Usability Evaluation for Hand Tools that May Reduce the Amount of Small Parts that are Dropped During Installation while Processing Space Flight Hardware

    Science.gov (United States)

    Miller, Darcy

    2000-01-01

    Foreign object debris (FOD) is an important concern while processing space flight hardware. FOD can be defined as "The debris that is left in or around flight hardware, where it could cause damage to that flight hardware," (United Space Alliance, 2000). Just one small screw left unintentionally in the wrong place could delay a launch schedule while it is retrieved, increase the cost of processing, or cause a potentially fatal accident. At this time, there is not a single solution to help reduce the number of dropped parts such as screws, bolts, nuts, and washers during installation. Most of the effort is currently focused on training employees and on capturing the parts once they are dropped. Advances in ergonomics and hand tool design suggest that a solution may be possible, in the form of specialty hand tools, which secure the small parts while they are being handled. To assist in the development of these new advances, a test methodology was developed to conduct a usability evaluation of hand tools, while performing tasks with risk of creating FOD. The methodology also includes hardware in the form of a testing board and the small parts that can be installed onto the board during a test. The usability of new hand tools was determined based on efficiency and the number of dropped parts. To validate the methodology, participants were tested while performing a task that is representative of the type of work that may be done when processing space flight hardware. Test participants installed small parts using their hands and two commercially available tools. The participants were from three groups: (1) students, (2) engineers / managers and (3) technicians. The test was conducted to evaluate the differences in performance when using the three installation methods, as well as the difference in performance of the three participant groups.

  10. Recycling Flight Hardware Components and Systems to Reduce Next Generation Research Costs

    Science.gov (United States)

    Turner, Walt

    2011-01-01

    With the recent 'new direction' put forth by President Obama identifying NASA's new focus on research rather than continuing on a path to return to the Moon and Mars, the focus of work at Kennedy Space Center (KSC) may be changing dramatically. Research opportunities within the micro-gravity community potentially stand at the threshold of resurgence when the new direction of the agency takes hold for the next generation of experimenters. This presentation defines a strategy for recycling flight experiment components or part numbers in order to reduce research project costs, not just in component selection and fabrication, but in expediting qualification of hardware for flight. A key component of the strategy is effective communication of relevant flight hardware information and available flight hardware components to researchers, with the goal of 'short circuiting' the design process for flight experiments.

  11. Use of Heritage Hardware on MPCV Exploration Flight Test One

    Science.gov (United States)

    Rains, George Edward; Cross, Cynthia D.

    2011-01-01

    Due to an aggressive schedule for the first orbital test flight of an unmanned Orion capsule, known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so the inventory of many flight-certified Shuttle and MPLM components are available for other purposes. Two of these items are the Shuttle Ground Support Equipment Heat Exchanger (GSE Hx) and the MPLM cabin Positive Pressure Relief Assembly (PPRA). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be significantly more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE Hx had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the certification of the use of heritage hardware for EFT1.

  12. Physics of Colloids in Space (PCS) Flight Hardware Developed

    Science.gov (United States)

    Koudelka, John M.

    2001-01-01

    The Physics of Colloids in Space (PCS) investigation will be located in an Expedite the Process of Experiments to Space Station (EXPRESS) Rack. The investigation will be conducted in the International Space Station U.S. laboratory, Destiny, over a period of approximately 10 months during the station assembly period from flight 6A through flight UF-2. This experiment will gather data on the basic physical properties of colloids by studying three different colloid systems with the objective of understanding how they grow and what structures they form. A colloidal suspension consists of fine particles (micrometer to submicrometer) suspended in a fluid; examples include paints, milk, salad dressings, and aerosols. The long-term goal of this investigation is to learn how to steer the growth of colloidal suspensions to create new materials and new structures. This experiment is part of a two-stage investigation conceived by Professor David Weitz of Harvard University along with Professor Peter Pusey of the University of Edinburgh. The experiment hardware was developed by the NASA Glenn Research Center through contracts with Dynacs, Inc., and ZIN Technologies.

  13. Flight Hardware Packaging Design for Stringent EMC Radiated Emission Requirements

    Science.gov (United States)

    Lortz, Charlene L.; Huang, Chi-Chien N.; Ravich, Joshua A.; Steiner, Carl N.

    2013-01-01

    This packaging design approach can help heritage hardware meet a flight project's stringent EMC radiated emissions requirement. The approach requires only minor modifications to a hardware's chassis and mainly concentrates on its connector interfaces. The solution is to raise the surface area where the connector is mounted by a few millimeters using a pedestal, and then to wrap conductive tape from the cable backshell down to the surface-mounted connector. This design approach has been applied to JPL flight project subsystems. The EMC radiated emissions requirements for flight projects can vary from benign to mission critical. If the project's EMC requirements are stringent, the best approach to meet EMC requirements would be to design an EMC control program for the project early on and implement EMC design techniques starting with the circuit board layout. This is the ideal scenario for hardware that is built from scratch. Implementation of EMC radiated emissions mitigation techniques can mature as the design progresses, with minimal impact to the design cycle. The real challenge exists for hardware that is planned to be flown following a built-to-print approach, in which heritage hardware from a past project with a different set of requirements is expected to perform satisfactorily for a new project. With acceptance of heritage, the design would already be established (circuit board layout and components have already been pre-determined), and hence any radiated emissions mitigation techniques would only be applicable at the packaging level. The key is to take a heritage design with its known radiated emissions spectrum and repackage, or modify, its chassis design so that it would have a better chance of meeting the new project's radiated emissions requirements.

  14. Low extractable wipers for cleaning space flight hardware

    Science.gov (United States)

    Tijerina, Veronica; Gross, Frederick C.

    1986-01-01

    There is a need for low extractable wipers for solvent cleaning of space flight hardware. Soxhlet extraction is the method utilized today by most NASA subcontractors, but there may be alternate methods to achieve the same results. The need for low non-volatile residue materials, the history of Soxhlet extraction, and proposed alternate methods are discussed, as well as different types of wipers, test methods, and current standards.

  15. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for nonmating applications such as sensors and instruments evaluations requiring proximity or short range motion operations. The motion base is a hydraulic powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for the NASA Docking System testing and has been also used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  16. Test Hardware Design for Flight-Like Operation of Advanced Stirling Convertors

    Science.gov (United States)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  17. Life sciences flight hardware development for the International Space Station

    Science.gov (United States)

    Kern, V. D.; Bhattacharya, S.; Bowman, R. N.; Donovan, F. M.; Elland, C.; Fahlen, T. F.; Girten, B.; Kirven-Brooks, M.; Lagel, K.; Meeker, G. B.; Santos, O.

    During the construction phase of the International Space Station (ISS), early flight opportunities have been identified (including designated Utilization Flights, UF) on which early science experiments may be performed. The focus of NASA's and other agencies' biological studies on the early flight opportunities is cell and molecular biology, with UF-1 scheduled to fly in fall 2001, followed by flights 8A and UF-3. Specific hardware is being developed to verify design concepts, e.g., the Avian Development Facility for incubation of small eggs and the Biomass Production System for plant cultivation. Other hardware concepts will utilize those early research opportunities onboard the ISS, e.g., an Incubator for sample cultivation, the European Modular Cultivation System for research with small plant systems, and an Insect Habitat for support of insect species. Following the first Utilization Flights, additional equipment will be transported to the ISS to expand research opportunities and capabilities, e.g., a Cell Culture Unit, the Advanced Animal Habitat for rodents, an Aquatic Facility to support small fish and aquatic specimens, a Plant Research Unit for plant cultivation, and a specialized Egg Incubator for developmental biology studies. Host systems (Figure 1A, B), e.g., a 2.5 m Centrifuge Rotor (g-levels from 0.01-g to 2-g) for direct comparisons between μg and selectable g levels, the Life Sciences Glovebox for contained manipulations, and Habitat Holding Racks (Figure 1B), will provide electrical power, communication links, and cooling to the habitats. Habitats will provide food, water, light, air and waste management as well as humidity and temperature control for a variety of research organisms. Operators on Earth and the crew on the ISS will be able to send commands to the laboratory equipment to monitor and control the environmental and experimental parameters inside specific habitats. Common laboratory equipment such as microscopes, cryo freezers, radiation

  18. Hardware Development Process for Human Research Facility Applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and the HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as

  19. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such an algorithm. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and the way to find the appropriate algorithms. Finally, some results on computation time and the usefulness of median filtering in radiographic imaging are given.
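
    For illustration, the rank-order view of median filtering described above can be sketched in a few lines of serial code; this is only a minimal NumPy mock-up (window size, rank choices, and test image are hypothetical), not the paper's mapping onto pipeline hardware:

```python
import numpy as np

def rank_order_filter(image, k=3, rank=None):
    """Apply a k x k rank-order filter; rank = (k*k)//2 gives the median."""
    if rank is None:
        rank = (k * k) // 2                      # middle rank -> classic median filter
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            window = padded[r:r + k, c:c + k].ravel()
            out[r, c] = np.sort(window)[rank]    # complete sort of the k*k pixel values
    return out

noisy = np.random.randint(0, 256, (64, 64))
median_filtered = rank_order_filter(noisy, k=3)        # median
min_filtered = rank_order_filter(noisy, k=3, rank=0)   # another rank-order operator
```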

  20. Foam Protection of Flight Hardware From Impact Loads Due To Drops

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to several instances of flight hardware being dropped during shipment with expensive hits to cost and schedule, a methodology to normalize foam data was...

  1. Hardware development process for Human Research facility applications

    Science.gov (United States)

    Bauer, Liz

    2000-01-01

    The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.

  2. A Fuzzy Heater Control System Simulating Thermal Cycling of Flight Hardware for a Thermal Environmental Test

    Science.gov (United States)

    Chang, Chih-Li; Chen, Yow-Hwa; Pan, Hsu-Pin; Cheng, Robert; Hsiao, Chiuder

    2004-08-01

    The flight hardware suffers thermal cycling in the space environment. The temperature range of the hardware is controlled between -45 C and 85 C for the space-flight test environment in a thermal vacuum chamber on the ground. A Heater Control System (HCS) provides thirty heating points to simulate the thermal status of the flight hardware. The control is configured as a traditional PD algorithm and implemented on a workstation in a control room. Since the thermal mass differs among test articles, the pre-determined PD control parameters cannot fit all articles. Fuzzy logic is therefore proposed to adapt the control to the different articles. The fuzzy control is implemented with LabVIEW on a PXI industrial computer. The remote GPIB instruments in the highbay are interfaced to the PXI computer via Ethernet communication. In summary, the overall system takes advantage of standardized GPIB components, increased capabilities, adaptive control with a fuzzy algorithm, and a distributed control architecture.
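
    As a rough illustration of the adaptive idea described above, the sketch below scales fixed PD gains by a coarse fuzzy estimate of how sluggishly a test article responds to heater power; all gains, membership breakpoints, and limits are invented for the example and are not taken from the HCS itself:

```python
def fuzzy_gain_scale(resp_degC_per_sec_per_watt):
    """Two triangular memberships: articles that heat slowly (large thermal
    mass) are treated as 'heavy' and receive a larger gain multiplier."""
    heavy = max(0.0, min(1.0, (0.02 - resp_degC_per_sec_per_watt) / 0.02))
    light = 1.0 - heavy
    return 1.0 * light + 2.5 * heavy              # weighted-average defuzzification

def heater_command(setpoint, temp, prev_temp, dt, resp, kp=10.0, kd=50.0):
    """PD heater law with fuzzy-scaled gains; output clamped to 0-100 percent."""
    scale = fuzzy_gain_scale(resp)
    error = setpoint - temp
    d_temp = (temp - prev_temp) / dt
    power = scale * (kp * error - kd * d_temp)
    return max(0.0, min(100.0, power))

# One control step for a hypothetical article warming toward a +85 C soak
print(heater_command(setpoint=85.0, temp=60.0, prev_temp=59.5, dt=1.0, resp=0.005))
```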

  3. Parallel Processing with Digital Signal Processing Hardware and Software

    Science.gov (United States)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  4. Random vibration analysis of space flight hardware using NASTRAN

    Science.gov (United States)

    Thampi, S. K.; Vidyasagar, S. N.

    1990-01-01

    During liftoff and ascent flight phases, the Space Transportation System (STS) and payloads are exposed to the random acoustic environment produced by engine exhaust plumes and aerodynamic disturbances. The analysis of payloads for randomly fluctuating loads is usually carried out using the Miles' relationship. This approximation technique computes an equivalent load factor as a function of the natural frequency of the structure, the power spectral density of the excitation, and the magnification factor at resonance. Due to the assumptions inherent in Miles' equation, random load factors are often over-estimated by this approach. In such cases, the estimates can be refined using alternate techniques such as time domain simulations or frequency domain spectral analysis. Described here is the use of NASTRAN to compute more realistic random load factors through spectral analysis. The procedure is illustrated using Spacelab Life Sciences (SLS-1) payloads and certain unique features of this problem are described. The solutions are compared with Miles' results in order to establish trends at over or under prediction.
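
    For reference, Miles' approximation mentioned above reduces to a one-line calculation; the sketch below uses made-up values (an 80 Hz mode, Q = 10, and a 0.04 g^2/Hz input PSD) purely to show the form of the estimate that the NASTRAN spectral analysis refines:

```python
import math

def miles_grms(fn_hz, q, psd_g2_per_hz):
    """Miles' equation: RMS acceleration of a single-DOF system with natural
    frequency fn and amplification Q, driven by a flat PSD (g^2/Hz) near fn."""
    return math.sqrt(math.pi / 2.0 * fn_hz * q * psd_g2_per_hz)

grms = miles_grms(fn_hz=80.0, q=10.0, psd_g2_per_hz=0.04)
design_load = 3.0 * grms          # common practice: carry the 3-sigma level as the load factor
print(f"{grms:.1f} g RMS -> {design_load:.1f} g equivalent load factor")
```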

  5. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  6. Use of Heritage Hardware on Orion MPCV Exploration Flight Test One

    Science.gov (United States)

    Rains, George Edward; Cross, Cynthia D.

    2012-01-01

    Due to an aggressive schedule for the first space flight of an unmanned Orion capsule, currently known as Exploration Flight Test One (EFT1), combined with severe programmatic funding constraints, an effort was made within the Orion Program to identify heritage hardware, i.e., already existing, flight-certified components from previous manned space programs, which might be available for use on EFT1. With the end of the Space Shuttle Program, no current means exists to launch Multi-Purpose Logistics Modules (MPLMs) to the International Space Station (ISS), and so the inventory of many flight-certified Shuttle and MPLM components are available for other purposes. Two of these items are the MPLM cabin Positive Pressure Relief Assembly (PPRA), and the Shuttle Ground Support Equipment Heat Exchanger (GSE HX). In preparation for the utilization of these components by the Orion Program, analyses and testing of the hardware were performed. The PPRA had to be analyzed to determine its susceptibility to pyrotechnic shock, and vibration testing had to be performed, since those environments are predicted to be more severe during an Orion mission than those the hardware was originally designed to accommodate. The GSE HX had to be tested for performance with the Orion thermal working fluids, which are different from those used by the Space Shuttle. This paper summarizes the activities required in order to utilize heritage hardware for EFT1.

  7. CID-720 aircraft Langley Research Center preflight hardware tests: Development, flight acceptance and qualification

    Science.gov (United States)

    Pride, J. D.

    1986-01-01

    The testing conducted on LaRC-developed hardware for the controlled impact demonstration transport aircraft is discussed. To properly develop flight-qualified crash systems, two environments were considered: the aircraft flight environment, with a focus on vibration and temperature effects, and the crash environment, with long-pulse shock effects. Also, with the large quantity of fuel in the wing tanks, the possibility of fire was considered to be a threat to data retrieval, and thus fire tests were included in the development test process. The aircraft test successfully demonstrated the performance of the LaRC-developed heat shields. Good telemetered data (S-band) was received during the impact and slide-out phase, and even after the aircraft came to rest. The two onboard DAS tape recorders were protected from the intense fire and high quality tape data was recovered. The complete photographic system performed as planned throughout the 40.0 sec of film supply. The four photo power distribution pallets remained in good condition and all ten onboard 16 mm high speed (400 frames/sec) cameras produced good film data.

  8. AirSTAR Hardware and Software Design for Beyond Visual Range Flight Research

    Science.gov (United States)

    Laughter, Sean; Cox, David

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Airborne Subscale Transport Aircraft Research (AirSTAR) Unmanned Aerial System (UAS) is a facility developed to study the flight dynamics of vehicles in emergency conditions, in support of aviation safety research. The system was upgraded to have its operational range significantly expanded, going beyond the line of sight of a ground-based pilot. A redesign of the airborne flight hardware was undertaken, as well as significant changes to the software base, in order to provide appropriate autonomous behavior in response to a number of potential failures and hazards. Ground hardware and system monitors were also upgraded to include redundant communication links, including ADS-B based position displays and an independent flight termination system. The design included both custom and commercially available avionics, combined to allow flexibility in flight experiment design while still benefiting from tested configurations in reversionary flight modes. A similar hierarchy was employed in the software architecture, to allow research codes to be tested, with a fallback to more thoroughly validated flight controls. As a remotely piloted facility, ground systems were also developed to ensure the flight modes and system state were communicated to ground operations personnel in real-time. Presented in this paper is a general overview of the concept of operations for beyond visual range flight, and a detailed review of the airborne hardware and software design. This discussion is held in the context of the safety and procedural requirements that drove many of the design decisions for the AirSTAR UAS Beyond Visual Range capability.

  9. The " Daphnia" Lynx Mark I Suborbital Flight Experiment: Hardware Qualification at the Drop Tower Bremen

    Science.gov (United States)

    Knie, Miriam; Schoppmann, Kathrin; Eck, Hendrik; Ribeiro, Bernard Wolfschoon; Laforsch, Christian

    2016-06-01

    The Drop Tower Bremen, a ground-based facility enabling research under real microgravity conditions, is an excellent platform for testing new types of experimental hardware to ensure full performance when deployed in costly and rare flight opportunities such as suborbital flights. Here we describe the "Daphnia" experiment, which will fly on the XCOR Aerospace Lynx Mark I, and our experience from the hardware tests with the catapult system at the drop tower. The aim of the "Daphnia" experiment is to obtain data on the biological performance of daphnids and predator-prey interactions in microgravity, which are important for the development of aquatic bioregenerative life support systems (BLSS). The experiment consists of two subunits: the first unit is dedicated to predator-prey interactions, where behavioural analysis should reveal whether microgravity interferes with prey (Daphnia) detection or feeding and therefore may interrupt the trophic cascade. The functioning of such an artificial food web is indispensable for a long-lasting BLSS suitable for long-duration manned space missions or Earth-based explorations to extreme habitats. The second unit is designed to investigate the impact of microgravity on gene expression and the cytoskeleton in Daphnia. Next to data collection, the real microgravity conditions at the drop tower have helped to identify the weak points of the "Daphnia" experimental hardware and have led to further improvement. Hence, the drop tower is ideal for testing new experimental hardware, which is indispensable before implementation in suborbital flights.

  10. NPOESS Interface Data Processing Segment (IDPS) Hardware

    Science.gov (United States)

    Sullivan, W. J.; Grant, K. D.; Bergeron, C.

    2008-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume several orders of magnitude larger than that of current systems, in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This poster will illustrate and describe the IDPS hardware architecture that is necessary to meet these challenging design requirements. In addition, it will illustrate the expandability features of the architecture in support of future data processing and data distribution needs.

  11. Storage Information Management System (SIMS) Spaceflight Hardware Warehousing at Goddard Space Flight Center

    Science.gov (United States)

    Kubicko, Richard M.; Bingham, Lindy

    1995-01-01

    Goddard Space Flight Center (GSFC) on-site and leased warehouses contain thousands of items of ground support equipment (GSE) and flight hardware, including spacecraft, scaffolding, computer racks, stands, holding fixtures, test equipment, spares, etc. The control of these warehouses, and the management, accountability, and control of the items within them, is accomplished by the Logistics Management Division. To facilitate this management and tracking effort, the Logistics and Transportation Management Branch is developing a system to provide warehouse personnel, property owners, and managers with storage and inventory information. This paper will describe that PC-based system and address how it will improve GSFC warehouse and storage management.

  12. Mechanics of Granular Materials (MGM) Flight Hardware in Bench Test

    Science.gov (United States)

    2000-01-01

    Engineering bench system hardware for the Mechanics of Granular Materials (MGM) experiment is tested on a lab bench at the University of Colorado in Boulder. This is done in a horizontal arrangement to reduce pressure differences so the tests more closely resemble behavior in the microgravity of space. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).

  13. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  14. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.
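
    The pulse-coded readout described above can be caricatured in software; the toy model below maps each pixel's input current to a pulse rate and draws Poisson counts to stand in for the asynchronous pulse stream (the rate constant and currents are arbitrary, chosen only to show linear rate coding):

```python
import numpy as np

PULSES_PER_AMP = 1e14     # hypothetical scale: 1 pA maps to 100 pulses per second

def pulse_counts(currents_amps, integration_s=0.1, rng=np.random.default_rng(0)):
    """Pulse counts per pixel over one integration window; Poisson arrivals
    stand in for the unclocked, asynchronous pulse-coded data stream."""
    rates = currents_amps * PULSES_PER_AMP      # pulse rate is linear in current
    return rng.poisson(rates * integration_s)

frame = np.full((4, 4), 1e-12)                  # 1 pA of photocurrent on every pixel
print(pulse_counts(frame))
```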

  15. PW-Sat on-board flight computer, hardware and software design

    Science.gov (United States)

    Mosdorf, Michal; Kurowski, Michal; Mosdorf, Lukasz; Cichocki, Andrzej; Kocoń, Marcin

    2009-06-01

    This paper describes an inexpensive computer for CubeSat missions based on widely available commercial off-the-shelf components. The main requirements for the project were the ability to handle satellite onboard data, platform flexibility, and portability. The hardware design is based on an ARM microcontroller that is able to run the FreeRTOS real-time kernel, which is used as a base for the software design. The presented project is part of the PW-Sat satellite, which is scheduled to be launched during the Vega maiden flight.

  16. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  17. Automatic Optimization of Hardware Accelerators for Image Processing

    OpenAIRE

    Reiche, Oliver; Häublein, Konrad; Reichenbach, Marc; Hannig, Frank; Teich, Jürgen; Fey, Dietmar

    2015-01-01

    In the domain of image processing, often real-time constraints are required. In particular, in safety-critical applications, such as X-ray computed tomography in medical imaging or advanced driver assistance systems in the automotive domain, timing is of utmost importance. A common approach to maintain real-time capabilities of compute-intensive applications is to offload those computations to dedicated accelerator hardware, such as Field Programmable Gate Arrays (FPGAs). Programming such arc...

  18. System for processing an encrypted instruction stream in hardware

    Science.gov (United States)

    Griswold, Richard L.; Nickless, William K.; Conrad, Ryan C.

    2016-04-12

    A system and method of processing an encrypted instruction stream in hardware is disclosed. Main memory stores the encrypted instruction stream and unencrypted data. A central processing unit (CPU) is operatively coupled to the main memory. A decryptor is operatively coupled to the main memory and located within the CPU. The decryptor decrypts the encrypted instruction stream upon receipt of an instruction fetch signal from a CPU core. Unencrypted data is passed through to the CPU core without decryption upon receipt of a data fetch signal.
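
    A minimal software model of the fetch-path behavior described in the abstract is sketched below; the XOR "cipher", key, and memory contents are placeholders (the actual system is implemented in hardware inside the CPU), but the sketch shows the essential rule: decrypt on instruction fetch, pass data through untouched:

```python
from itertools import cycle

KEY = b"example-key"                              # hypothetical key material

def xor_cipher(block: bytes) -> bytes:
    """Toy stand-in for the decryptor; XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(block, cycle(KEY)))

def fetch(memory: dict, addr: int, is_instruction_fetch: bool) -> bytes:
    """Decrypt only when the CPU core signals an instruction fetch."""
    word = memory[addr]
    return xor_cipher(word) if is_instruction_fetch else word

memory = {
    0x1000: xor_cipher(b"ADD R1,R2"),             # encrypted instruction word
    0x2000: b"plain data",                        # unencrypted data word
}
print(fetch(memory, 0x1000, is_instruction_fetch=True))   # b'ADD R1,R2'
print(fetch(memory, 0x2000, is_instruction_fetch=False))  # b'plain data'
```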

  19. NASA's Rodent Research Project: Validation of Flight Hardware, Operations and Science Capabilities for Conducting Long Duration Experiments in Space

    Science.gov (United States)

    Choi, S. Y.; Beegle, J. E.; Wigley, C. L.; Pletcher, D.; Globus, R. K.

    2015-01-01

    Research using rodents is an essential tool for advancing biomedical research on Earth and in space. Rodent Research (RR)-1 was conducted to validate flight hardware, operations, and science capabilities that were developed at the NASA Ames Research Center. Twenty C57BL/6J adult female mice were launched on Sept 21, 2014 in a Dragon Capsule (SpaceX-4), then transferred to the ISS for a total time of 21-22 days (10 commercial mice) or 37 days (10 validation mice). Tissues collected on-orbit were either rapidly frozen or preserved in RNAlater at less than or equal to -80 C (n=2/group) until their return to Earth. Remaining carcasses were rapidly frozen for dissection post-flight. The three control groups at Kennedy Space Center consisted of: Basal mice, euthanized at the time of launch; Vivarium controls, housed in standard cages; and Ground Controls (GC), housed in flight hardware within an environmental chamber. FLT mice appeared more physically active on-orbit than GC, and behavior analyses are in progress. Upon return to Earth, there were no differences in body weights between FLT and GC at the end of the 37 days in space. RNA was of high quality (RIN greater than 8.5). Liver enzyme activity levels of FLT mice and all control mice were similar in magnitude to those of the samples that were optimally processed in the laboratory. Liver samples collected from the intact frozen FLT carcasses had RNA RIN of 7.27 +/- 0.52, which was lower than that of the samples processed on-orbit, but similar to those obtained from the control group intact carcasses. Nonetheless, the RNA samples from the intact carcasses were acceptable for the most demanding transcriptomic analyses. Adrenal glands, thymus and spleen (organs associated with stress response) showed no significant difference in weights between FLT and GC. Enzymatic activity was also not significantly different. Over 3,000 tissues collected from the four groups of mice have become available for the Biospecimen Sharing

  20. Getting expert systems off the ground: Lessons learned from integrating model-based diagnostics with prototype flight hardware

    Science.gov (United States)

    Stephan, Amy; Erikson, Carol A.

    1991-11-01

    As an initial attempt to introduce expert system technology into an onboard environment, a model-based diagnostic system using the TRW MARPLE software tool was integrated with prototype flight hardware and its corresponding control software. Because this experiment was designed primarily to test the effectiveness of the model-based reasoning technique used, the expert system ran on a separate hardware platform, and interactions between the control software and the model-based diagnostics were limited. While this project met its objective of showing that model-based reasoning can effectively isolate failures in flight hardware, it also identified the need for an integrated development path for expert system and control software for onboard applications. In developing expert systems that are ready for flight, developers must evaluate artificial intelligence techniques to determine whether they offer a real advantage onboard, identify which diagnostic functions should be performed by the expert system and which are better left to the procedural software, and work closely with both the hardware and the software developers from the beginning of a project to produce a well designed and thoroughly integrated application.

  1. Alternative, Green Processes for the Precision Cleaning of Aerospace Hardware

    Science.gov (United States)

    Maloney, Phillip R.; Grandelli, Heather Eilenfield; Devor, Robert; Hintze, Paul E.; Loftin, Kathleen B.; Tomlin, Douglas J.

    2014-01-01

    Precision cleaning is necessary to ensure the proper functioning of aerospace hardware, particularly those systems that come in contact with liquid oxygen or hypergolic fuels. Components that have not been cleaned to the appropriate levels may experience problems ranging from impaired performance to catastrophic failure. Traditionally, this has been achieved using various halogenated solvents. However, as information on the toxicological and/or environmental impacts of each came to light, they were subsequently regulated out of use. The solvent currently used in Kennedy Space Center (KSC) precision cleaning operations is Vertrel MCA. Environmental sampling at KSC indicates that continued use of this or similar solvents may lead to high remediation costs that must be borne by the Program for years to come. In response to this problem, the Green Solvents Project seeks to develop state-of-the-art, green technologies designed to meet KSC's precision cleaning needs. Initially, 23 solvents were identified as potential replacements for the current Vertrel MCA-based process. Highly halogenated solvents were deliberately omitted since historical precedents indicate that as the long-term consequences of these solvents become known, they will eventually be regulated out of practical use, often with significant financial burdens for the user. Three solvent-less cleaning processes (plasma, supercritical carbon dioxide, and carbon dioxide snow) were also chosen since they produce essentially no waste stream. Next, experimental and analytical procedures were developed to compare the relative effectiveness of these solvents and technologies to the current KSC standard of Vertrel MCA. Individually numbered Swagelok fittings were used to represent the hardware in the cleaning process. First, the fittings were cleaned using Vertrel MCA in order to determine their true cleaned mass. Next, the fittings were dipped into stock solutions of five commonly encountered contaminants and were

  2. Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter

    Science.gov (United States)

    Aggarwal, Pravin; Hull, Patrick V.

    2015-01-01

    Design and manufacturing of space flight vehicle structures is a skillset that has grown considerably at NASA during the last several years. Beginning with the Ares program and followed by the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multipurpose Crew Vehicle (MPCV) spacecraft adapter (MSA). Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure for both Marshall Space Flight Center (MSFC) and Michoud Assembly Facility (MAF), improved skillsets in both analysis and design, and hands-on experience in building and testing full scale MSA hardware. The hardware design and development processes, from initiation to CDR and finally flight, resulted in many challenges and experiences that produced valuable lessons. This paper builds on these recent NASA experiences in designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, from the initial design, loads estimation and mass constraints, to structural optimization and affordability, to release of production drawings and hardware manufacturing. While there are many documented design processes which a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.

  3. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    Science.gov (United States)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

    The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact, low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to be reconfigured into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.

  4. Reconfigurable Signal Processing and Hardware Architecture for Broadband Wireless Communications

    Directory of Open Access Journals (Sweden)

    Liang Ying-Chang

    2005-01-01

    This paper proposes a broadband wireless transceiver which can be reconfigured to any type of cyclic-prefix (CP)-based communication system, including orthogonal frequency-division multiplexing (OFDM), the single-carrier cyclic-prefix (SCCP) system, multicarrier (MC) code-division multiple access (MC-CDMA), MC direct-sequence CDMA (MC-DS-CDMA), CP-based CDMA (CP-CDMA), and CP-based direct-sequence CDMA (CP-DS-CDMA). A hardware platform is proposed and the reusable common blocks in such a transceiver are identified. The emphasis is on the equalizer design for mobile receivers. It is found that after the block-despreading operation, MC-DS-CDMA and CP-DS-CDMA have the same equalization blocks as OFDM and SCCP systems, respectively; therefore, hardware and software sharing is possible for these systems. An attempt has also been made to map the functional reconfigurable transceiver onto the proposed hardware platform. The different functional entities which will be required to perform the reconfiguration and realize the transceiver are explained.
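    The shared equalization structure noted above can be sketched in software: after cyclic-prefix removal, a received block is transformed to the frequency domain and corrected with one complex tap per subcarrier, and an extra IFFT recovers time-domain symbols for the single-carrier variants. The block length, channel taps and MMSE regularization below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def one_tap_equalize(rx_block, h, noise_var=1e-3):
        """Frequency-domain MMSE equalization of one CP-based block."""
        N = len(rx_block)
        H = np.fft.fft(h, N)                           # channel frequency response
        R = np.fft.fft(rx_block)                       # received block in the frequency domain
        W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # one MMSE tap per subcarrier
        X_hat = W * R                                  # equalized subcarriers (OFDM-style output)
        x_hat = np.fft.ifft(X_hat)                     # extra IFFT recovers time-domain symbols (SCCP-style)
        return X_hat, x_hat

    # Toy example: one OFDM block through a 3-tap channel (CP modeled as circular convolution)
    N = 64
    sym = np.random.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)
    tx = np.fft.ifft(sym)
    h = np.array([1.0, 0.5, 0.2])
    rx = np.fft.ifft(np.fft.fft(tx) * np.fft.fft(h, N))   # channel output after CP removal
    X_hat, _ = one_tap_equalize(rx, h)
    print(np.round(X_hat[:4]))
    ```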

  5. Using Automation to Improve the Flight Software Testing Process

    Science.gov (United States)

    ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control, and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  6. NASA HUNCH Hardware

    Science.gov (United States)

    Hall, Nancy R.; Wagner, James; Phelps, Amanda

    2014-01-01

    What is NASA HUNCH? High School Students United with NASA to Create Hardware (HUNCH) is an instructional partnership between NASA and educational institutions. This partnership benefits both NASA and students. NASA receives cost-effective hardware and soft goods, while students receive real-world hands-on experiences. The 2014-2015 school year was the 12th year of the HUNCH Program. NASA Glenn Research Center joined the program that already included NASA Johnson Space Center, Marshall Space Flight Center, Langley Research Center and Goddard Space Flight Center. The program included 76 schools in 24 states, and NASA Glenn worked with the following five schools in the HUNCH Build to Print Hardware Program: Medina Career Center, Medina, OH; Cattaraugus Allegheny-BOCES, Olean, NY; Orleans Niagara-BOCES, Medina, NY; Apollo Career Center, Lima, OH; Romeo Engineering and Tech Center, Washington, MI. The schools built various parts of an International Space Station (ISS) middeck stowage locker and learned about manufacturing processes and how best to build these components to NASA specifications. For the 2015-2016 school year the schools will be part of a larger group of schools building flight hardware consisting of 20 ISS middeck stowage lockers for the ISS Program. The HUNCH Program consists of: Build to Print Hardware; Build to Print Soft Goods; Design and Prototyping; Culinary Challenge; Implementation: Web Page and Video Production.

  7. Hardware embedded fiber sensor interrogation system using intensive digital signal processing

    OpenAIRE

    Wang, Yujuan; Negri, Lucas H.; Kalinowski, Hypolito J.; Mattos, Daniel S.; Negri, Gabriel H.; Paterno, Aleksander S.

    2014-01-01

    The description of an interrogation system for fiber Bragg grating sensors is reported. The full implementation of the required signal processing in hardware is proposed and made publicly available. The hardware description is implemented on a field-programmable gate array (FPGA) development kit, and the processing units allow one to control an optoelectronic interrogation system that uses the tunable-filter method. Since the signal that drives the Fabry-Perot filter (FFP) used, using a digital...

  8. Parabolic Flight Investigation for Advanced Exercise Concept Hardware Hybrid Ultimate Lifting Kit (HULK)

    Science.gov (United States)

    Weaver, A. S.; Funk, J. H.; Funk, N. W.; Sheehan, C. C.; Humphreys, B. T.; Perusek, G. P.

    2015-01-01

    Long-duration space flight poses many hazards to the health of the crew. Among those hazards is the physiological deconditioning of the musculoskeletal and cardiovascular systems due to prolonged exposure to microgravity. To combat this erosion of the crew's physical condition during space flight, the Human Research Program (HRP) is charged with developing Advanced Exercise Concepts to maintain astronaut health and fitness during long-term missions, while keeping device mass, power, and volume to a minimum. The goal of this effort is to preserve the physical capability of the crew to perform mission critical tasks in transit and during planetary surface operations. The HULK is a pneumatic-based exercise system, which provides both resistive and aerobic modes to protect against human deconditioning in microgravity. Its design targeted the high-level performance characteristics of the International Space Station (ISS) Advanced Resistive Exercise Device (ARED) and provides up to 600 foot-pounds of resistive loading, with the capability to allow eccentric-to-concentric (E:C) ratios higher than 1:1 through a DC motor assist component. The device's rowing mode allows for high-cadence aerobic activity. The HULK parabolic flight campaign, conducted through the NASA Flight Opportunities Program at Ellington Field, resulted in the creation of device-specific data sets including low-fidelity motion capture, accelerometry, and both inline and ground reaction forces. These data provide a critical link in understanding how to vibration-isolate the device in both ISS and space transit applications. Secondarily, the study of human exercise and associated body kinematics in microgravity allows for a more complete understanding of human-to-machine interface designs, to allow for maximum functionality of the device in microgravity.

  9. Rapid VLIW Processor Customization for Signal Processing Applications Using Combinational Hardware Functions

    Directory of Open Access Journals (Sweden)

    Hoare Raymond R

    2006-01-01

    This paper presents an architecture that combines VLIW (very long instruction word) processing with the capability to introduce application-specific customized instructions and highly parallel combinational hardware functions for the acceleration of signal processing applications. To support this architecture, a compilation and design automation flow is described for algorithms written in C. The key contributions of this paper are as follows: (1) a 4-way VLIW processor implemented in an FPGA, (2) large speedups through hardware functions, (3) a hardware/software interface with zero overhead, (4) a design methodology for implementing signal processing applications on this architecture, (5) tractable design automation techniques for extracting and synthesizing hardware functions. Several design tradeoffs for the architecture were examined, including the number of VLIW functional units and the register file size. The architecture was implemented on an Altera Stratix II FPGA. The Stratix II device was selected because it offers a large number of high-speed DSP (digital signal processing) blocks that execute multiply-accumulate operations. Using the MediaBench benchmark suite, we tested our methodology and architecture to accelerate software. Our combined VLIW processor with hardware functions was compared to software executing on a RISC processor, specifically the soft-core embedded NIOS II processor. For software kernels converted into hardware functions, we show a hardware performance multiplier of up to times that of software, with an average times faster. For the entire application, in which only a portion of the software is converted to hardware, the performance improvement is as much as 30X faster than the nonaccelerated application, with a 12X improvement on average.

  10. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    Science.gov (United States)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as the range, speed and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA. The chosen FPGA is the Xilinx Virtex-6 (XC6VLX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are also considered for resource optimization in the process of implementation. The algorithms in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
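    As a rough software counterpart of the MATLAB verification stage described above, pulse compression can be sketched as FFT-based matched filtering against the transmitted LFM chirp; the sample rate, pulse width and bandwidth below are assumed values, not parameters from the paper. Pulse-Doppler processing would then apply a further FFT across the slow-time samples of each range bin.

    ```python
    import numpy as np

    fs, T, B = 10e6, 20e-6, 5e6                        # assumed sample rate, pulse width, chirp bandwidth
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t**2)        # LFM reference pulse

    def pulse_compress(rx, ref):
        """Matched filtering via FFT-based correlation (fast convolution)."""
        n = len(rx) + len(ref) - 1
        RX = np.fft.fft(rx, n)
        REF = np.fft.fft(ref, n)
        return np.fft.ifft(RX * np.conj(REF))          # peak location gives the target range bin

    # Simulated echo: the chirp delayed by 40 samples inside a longer receive window
    rx = np.zeros(1024, dtype=complex)
    rx[40:40 + len(chirp)] = chirp
    profile = np.abs(pulse_compress(rx, chirp))
    print("range bin:", int(np.argmax(profile)))       # ~40
    ```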

  11. TAP II Processing System Final Report. Hardware Documentation

    Science.gov (United States)

    1977-05-01

    System operating instructions. Separate operation and maintenance manuals are available for the HDDR electronics and the recorders; the Analogic A... has been used on prior shipboard systems in conjunction with the Lambda array. Emphasis is given in this manual to the new equipment and processing, including an S.E./8-channel differential multiplexer. Referenced Floating Point Systems documentation includes FPS-7309, the AP-120B Internal Interface Manual, and FPS-7322 (AP-120B).

  12. Comparative Modal Analysis of Sieve Hardware Designs

    Science.gov (United States)

    Thompson, Nathaniel

    2012-01-01

    The CMTB Thwacker hardware operates as a testbed analogue for the Flight Thwacker and Sieve components of CHIMRA, a device on the Curiosity Rover. The sieve separates particles with a diameter smaller than 150 microns for delivery to onboard science instruments. The sieving behavior of the testbed hardware should be similar to the Flight hardware for the results to be meaningful. The elastodynamic behavior of both sieves was studied analytically using the Rayleigh-Ritz method in conjunction with classical plate theory. Finite element models were used to determine the mode shapes of both designs, and comparisons between the natural frequencies and mode shapes were made. The analysis predicts that the performance of the CMTB Thwacker will closely resemble the performance of the Flight Thwacker within the expected steady state operating regime. Excitations of the testbed hardware that will mimic the flight hardware were recommended, as were those that will improve the efficiency of the sieving process.
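    For context on the classical-plate-theory step mentioned above, the natural frequencies of a thin, simply supported rectangular plate follow a closed form, f_mn = (pi/2)[(m/a)^2 + (n/b)^2] sqrt(D/(rho*h)); the dimensions and material properties in this sketch are placeholders, not the CHIMRA sieve values.

    ```python
    import numpy as np

    # Placeholder plate properties (not the actual sieve geometry)
    E, nu, rho = 70e9, 0.33, 2700.0        # aluminum-like modulus, Poisson ratio, density
    a, b, h = 0.20, 0.15, 0.001            # plate length, width, thickness (m)

    D = E * h**3 / (12 * (1 - nu**2))      # flexural rigidity

    def natural_frequency(m, n):
        """f_mn for a simply supported plate under classical plate theory."""
        omega = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
        return omega / (2 * np.pi)

    for m in (1, 2):
        for n in (1, 2):
            print(f"mode ({m},{n}): {natural_frequency(m, n):.1f} Hz")
    ```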

  13. Marshall Space Flight Center Materials and Processes Laboratory

    Science.gov (United States)

    Tramel, Terri L.

    2012-01-01

    Marshall's Materials and Processes Laboratory has been a core capability for NASA for over fifty years. MSFC has a proven heritage and recognized expertise in materials and manufacturing that are essential to enable and sustain space exploration. Marshall provides a "systems-wise" capability for applied research, flight hardware development, and sustaining engineering. Our history of leadership and achievements in materials, manufacturing, and flight experiments includes Apollo, Skylab, Mir, Spacelab, Shuttle (Space Shuttle Main Engine, External Tank, Reusable Solid Rocket Motor, and Solid Rocket Booster), Hubble, Chandra, and the International Space Station. MSFC's National Center for Advanced Manufacturing (NCAM) facilitates major M&P advanced manufacturing partnership activities with academia, industry and other local, state and federal government agencies. The Materials and Processes Laboratory's principal competencies in metals, composites, ceramics, additive manufacturing, materials and process modeling and simulation, space environmental effects, non-destructive evaluation, and fracture and failure analysis provide products ranging from materials research in space to fully integrated solutions for large complex systems challenges. Marshall's materials research, development and manufacturing capabilities assure that NASA and National missions have access to cutting-edge, cost-effective engineering design and production options that are frugal in using design margins and are verified as safe and reliable. These are all critical factors in both future mission success and affordability.

  14. Event-driven processing for hardware-efficient neural spike sorting

    Science.gov (United States)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting accuracies can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
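    A minimal sketch of the level-crossing representation described above: the signal is encoded only when it moves by one level spacing, producing asynchronous (index, direction) events rather than uniformly spaced samples, so quiet channels generate almost no data. The level spacing and test waveform are assumptions for illustration.

    ```python
    import numpy as np

    def level_crossing_encode(x, delta):
        """Encode signal x as events (sample index, +1/-1) emitted whenever x moves
        by one level spacing `delta` relative to the last crossed level."""
        events = []
        ref = x[0]
        for i in range(1, len(x)):
            while x[i] >= ref + delta:             # upward crossings
                ref += delta
                events.append((i, +1))
            while x[i] <= ref - delta:             # downward crossings
                ref -= delta
                events.append((i, -1))
        return events

    # Example: a synthetic "spike" generates a burst of events, the baseline almost none
    t = np.linspace(0, 1, 1000)
    signal = 0.05 * np.sin(2 * np.pi * 3 * t) + np.exp(-((t - 0.5) ** 2) / 1e-4)
    events = level_crossing_encode(signal, delta=0.1)
    print(len(events), "events for", len(signal), "samples")
    ```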

  15. DEVELOPMENT OF SIGNAL PROCESSING TOOLS AND HARDWARE FOR PIEZOELECTRIC SENSOR DIAGNOSTIC PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    OVERLY, TIMOTHY G. [Los Alamos National Laboratory]; PARK, GYUHAE [Los Alamos National Laboratory]; FARRAR, CHARLES R. [Los Alamos National Laboratory]

    2007-02-09

    This paper presents a piezoelectric sensor diagnostic and validation procedure that performs in-situ monitoring of the operational status of piezoelectric (PZT) sensor/actuator arrays used in structural health monitoring (SHM) applications. The validation of the proper function of a sensor/actuator array during operation is a critical component of a complete and robust SHM system, especially with the large number of active sensors typically involved. The technique assesses the health of the PZT transducers by tracking their capacitive value, which manifests in the imaginary part of the measured electrical admittance. Degradation of the mechanical/electrical properties of a PZT sensor/actuator, as well as bonding defects between a PZT patch and a host structure, can be identified with the proposed procedure. However, it was found that temperature variations and changes in sensor boundary conditions manifest themselves in similar ways in the measured electrical admittances. The authors therefore examined the effects of temperature variation and sensor boundary conditions on the sensor diagnostic process. The objective of this study is to quantify and classify several key characteristics of temperature change and to develop efficient signal processing techniques to account for those variations in the sensor diagnostic process. In addition, the authors developed hardware capable of making the necessary measurements to perform the sensor diagnostics and to make impedance-based SHM measurements. The paper concludes with experimental results that demonstrate the effectiveness of the proposed technique.
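    The capacitance-tracking idea can be illustrated with a short sketch: for a free PZT patch the imaginary part of the electrical admittance is approximately Im(Y) = omega*C, so a least-squares fit of Im(Y) against angular frequency gives a capacitance estimate that can be trended to detect degradation. The frequency sweep and nominal capacitance below are illustrative assumptions.

    ```python
    import numpy as np

    def estimate_capacitance(freq_hz, admittance):
        """Least-squares fit of Im(Y) = 2*pi*f*C (through the origin) to estimate C."""
        omega = 2 * np.pi * np.asarray(freq_hz)
        im_y = np.imag(np.asarray(admittance))
        return np.dot(omega, im_y) / np.dot(omega, omega)

    # Example with a synthetic 20 nF sensor plus measurement noise
    f = np.linspace(1e3, 20e3, 50)
    C_true = 20e-9
    Y = 1j * 2 * np.pi * f * C_true + np.random.normal(0, 1e-6, f.size)
    print(f"estimated C = {estimate_capacitance(f, Y) * 1e9:.2f} nF")
    ```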

  16. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department or a scientist and engineer who is doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  17. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    Science.gov (United States)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have available to them motion capture systems for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize the traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the
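    A minimal sketch of the kind of COTS-camera tracking this record describes, assuming brightly colored markers that can be segmented with an HSV threshold (the color bounds, file name and OpenCV 4 return signatures are assumptions, not details from the project):

    ```python
    import cv2
    import numpy as np

    # Placeholder HSV bounds for a brightly colored marker
    LOWER = np.array([40, 80, 80])
    UPPER = np.array([80, 255, 255])

    def track_marker(video_path):
        """Return the (x, y) pixel centroid of the marker in each frame."""
        cap = cv2.VideoCapture(video_path)
        centroids = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, LOWER, UPPER)          # segment the marker color
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                c = max(contours, key=cv2.contourArea)     # largest blob = marker
                m = cv2.moments(c)
                if m["m00"] > 0:
                    centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        cap.release()
        return centroids

    print(len(track_marker("exercise_trial.avi")), "frames with a detected marker")
    ```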

  18. Flight Avionics Hardware Roadmap

    Science.gov (United States)

    Hodson, Robert; McCabe, Mary; Paulick, Paul; Ruffner, Tim; Some, Rafi; Chen, Yuan; Vitalpur, Sharada; Hughes, Mark; Ling, Kuok; Redifer, Matt

    2013-01-01

    As part of NASA's Avionics Steering Committee's stated goal to advance the avionics discipline ahead of program and project needs, the committee initiated a multi-Center technology roadmapping activity to create a comprehensive avionics roadmap. The roadmap is intended to strategically guide avionics technology development to effectively meet future NASA mission needs. The scope of the roadmap aligns with the twelve avionics elements defined in the ASC charter, but is subdivided into the following five areas: Foundational Technology (including devices and components), Command and Data Handling, Spaceflight Instrumentation, Communication and Tracking, and Human Interfaces.

  19. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH)

    Directory of Open Access Journals (Sweden)

    P. Lyons

    2006-07-01

    Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development: an overall architectural view, a computational view, and a resource and scheduling view.

  20. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH)

    Directory of Open Access Journals (Sweden)

    Johnston CT

    2006-01-01

    Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development: an overall architectural view, a computational view, and a resource and scheduling view.

  1. Joint preprocessor-based detector for cooperative networks with limited hardware processing capability

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2015-02-01

    In this letter, a joint detector for cooperative communication networks is proposed for the case when the destination has limited hardware processing capability. The transmitter sends its symbols with the help of L relays. As the destination has limited hardware, only U out of the L signals are processed and the energy of the remaining relays is lost. To solve this problem, a joint preprocessing-based detector is proposed. This joint preprocessor-based detector operates on the principle of minimizing the symbol error rate (SER). For a realistic assessment, pilot-symbol-aided channel estimation is incorporated for the proposed detector. From our simulations, it can be observed that our proposed detector achieves the same SER performance as that of the maximum likelihood (ML) detector with all participating relays. Additionally, our detector outperforms selection combining (SC), the channel shortening (CS) scheme and reduced-rank techniques when using the same U. Our proposed scheme has low computational complexity.

  2. Fibonacci-based hardware post-processing for non-autonomous signum hyperchaotic system

    KAUST Repository

    Mansingka, Abhinav S.

    2013-12-01

    This paper presents a hardware implementation of a robust non-autonomous hyperchaotic-based PRNG driven by a 256-bit LFSR. The original chaotic output is post-processed using a novel technique based on the Fibonacci series, bitwise XOR, rotation, and feedback. The proposed post-processing technique preserves the throughput of the system and enhances the randomness of the output, which is verified by successfully passing all NIST SP 800-22 tests. The system is realized on a Xilinx Virtex 4 FPGA, achieving throughput up to 13.165 Gbits/s for a 16-bit bus width, surpassing previously reported CB-PRNGs. © 2013 IEEE.
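    A rough software sketch of this style of post-processing (not the exact published design): each raw chaotic word is XORed with a rotated copy of the previous post-processed word, giving feedback, with rotation amounts stepped through a Fibonacci-like schedule; the word width and schedule here are assumptions.

    ```python
    WIDTH = 16
    MASK = (1 << WIDTH) - 1

    def rotl(x, r):
        """Rotate a WIDTH-bit word left by r positions."""
        r %= WIDTH
        return ((x << r) | (x >> (WIDTH - r))) & MASK

    def post_process(raw_words):
        """XOR each raw chaotic word with a rotated copy of the previous output
        (feedback); rotation amounts follow a Fibonacci-like schedule (assumed)."""
        fib_a, fib_b = 1, 2
        prev_out = 0
        out = []
        for w in raw_words:
            prev_out = (w ^ rotl(prev_out, fib_a)) & MASK
            out.append(prev_out)
            fib_a, fib_b = fib_b, (fib_a + fib_b) % WIDTH or 1   # next rotation amount
        return out

    # Example: a deliberately poor (constant) source still yields varied output words
    raw = [0x00FF] * 8
    print([hex(x) for x in post_process(raw)])
    ```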

  3. Sterilization of space hardware.

    Science.gov (United States)

    Pflug, I. J.

    1971-01-01

    Discussion of various techniques of sterilization of space flight hardware using either destructive heating or the action of chemicals. Factors considered in the dry-heat destruction of microorganisms include the effects of microbial water content, temperature, the physicochemical properties of the microorganism and adjacent support, and nature of the surrounding gas atmosphere. Dry-heat destruction rates of microorganisms on the surface, between mated surface areas, or buried in the solid material of space vehicle hardware are reviewed, along with alternative dry-heat sterilization cycles, thermodynamic considerations, and considerations of final sterilization-process design. Discussed sterilization chemicals include ethylene oxide, formaldehyde, methyl bromide, dimethyl sulfoxide, peracetic acid, and beta-propiolactone.

  4. Hardware Acceleration of SQL-Queries Processing in MDM-Systems Based on MISDSolution

    Directory of Open Access Journals (Sweden)

    V. E. Podol'skii

    2015-01-01

    In this article we examine the possibility of hardware support for functions of a mobile device management platform (MDM-platform) using a Multiple Instructions and Single Data stream computer system (MISD System), developed within the framework of a project at Bauman Moscow State Technical University. At universities the MDM-platform is used to provide various mobile services for the faculty, students and administration to facilitate the learning process: a mobile schedule, document sharing, text messages, and other interactive activities. Most of these services are provided by the extensive use of data stored in MDM-platform databases. When accessing the databases, SQL queries are commonly used. These queries comprise operators of the SQL language that are based on mathematical set theory. Hardware support for operations on sets is implemented in the MISD System, which allows performance improvement of algorithms and operations on sets. Thus, hardware support for the processing of SQL queries in the MISD system allows us to benefit from the implementation of SQL queries in the MISD paradigm. The scientific novelty of the work lies in the fact that it is the first time a set of algorithms for basic SQL statements has been presented in a format supported by the MISD system. In addition, for the first time the operators INNER JOIN, LEFT JOIN and LEFT OUTER JOIN have been implemented for the MISD system and tested for it (testing was done on an FPGA Xilinx Virtex-II Pro XC2VP30 implementation of the MISD system). The practical significance of the work lies in the fact that the results of the study will be used in the project "Development of the Russian analogue of the system software for centralized management of personal devices and platforms in enterprise networks" of the St. Petersburg Polytechnic University (with the financial support of the state represented by the Ministry of Education and Science of the Russian
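    To make the set-theoretic view of SQL operators concrete (a plain-software analogue, not the MISD hardware algorithms themselves), an INNER JOIN reduces to matching rows on a shared key, while UNION and INTERSECT map directly onto set operations; the tables below are made-up examples.

    ```python
    students = [{"id": 1, "name": "Ivanov"}, {"id": 2, "name": "Petrov"}]
    grades = [{"id": 1, "grade": 5}, {"id": 1, "grade": 4}, {"id": 3, "grade": 3}]

    def inner_join(left, right, key):
        """INNER JOIN: keep only row pairs whose key values match."""
        index = {}
        for row in right:
            index.setdefault(row[key], []).append(row)
        return [{**l, **r} for l in left for r in index.get(l[key], [])]

    print(inner_join(students, grades, "id"))
    # [{'id': 1, 'name': 'Ivanov', 'grade': 5}, {'id': 1, 'name': 'Ivanov', 'grade': 4}]

    # UNION / INTERSECT on single-column results map directly onto Python sets
    ids_a = {row["id"] for row in students}
    ids_b = {row["id"] for row in grades}
    print(ids_a | ids_b, ids_a & ids_b)      # union and intersection of the key sets
    ```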

  5. Real-Time Processing Library for Open-Source Hardware Biomedical Sensors.

    Science.gov (United States)

    Molina-Cantero, Alberto J; Castro-García, Juan A; Lebrato-Vázquez, Clara; Gómez-González, Isabel M; Merino-Monge, Manuel

    2018-03-29

    Applications involving data acquisition from sensors need samples at a preset frequency rate, the filtering out of noise and/or analysis of certain frequency components. We propose a novel software architecture, based on open-source hardware platforms, which allows programmers to create data streams from input channels and easily implement filters and frequency analysis objects. The performance of the different classes, given as the amount of memory allocated and the execution time (number of clock cycles), was analyzed on the low-cost platform Arduino Genuino. In addition, 11 people took part in an experiment in which they had to implement several exercises and complete a usability test. Sampling rates under 250 Hz (typical for many biomedical applications) make it feasible to implement filters, sliding windows and Fourier analysis operating in real time. Participants rated software usability at 70.2 out of 100, and the ease of use when implementing several signal processing applications was rated at just over 4.4 out of 5. Participants showed their intention of using this software because it was perceived as useful and very easy to use. The performance of the library showed that it may be appropriate for implementing small biomedical real-time applications or for human movement monitoring, even in a simple open-source hardware device like the Arduino Genuino. The general perception of this library is that it is easy to use and intuitive.

  6. FPGA implementation of hardware processing modules as coprocessors in brain-machine interfaces.

    Science.gov (United States)

    Wang, Dong; Hao, Yaoyao; Zhu, Xiaoping; Zhao, Ting; Wang, Yiwen; Chen, Yaowu; Chen, Weidong; Zheng, Xiaoxiang

    2011-01-01

    Real-time computation, portability and flexibility are crucial for practical brain-machine interface (BMI) applications. In this work, we proposed Hardware Processing Modules (HPMs) as a method for accelerating BMI computation. Two HPMs have been developed. One is the field-programmable gate array (FPGA) implementation of spike sorting based on probabilistic neural network (PNN), and the other is the FPGA implementation of neural ensemble decoding based on Kalman filter (KF). These two modules were configured under the same framework and tested with real data from motor cortex recording in rats performing a lever-pressing task for water rewards. Due to the parallelism feature of FPGA, the computation time was reduced by several dozen times, while the results are almost the same as those from Matlab implementations. Such HPMs provide a high performance coprocessor for neural signal computation.
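    A compact software reference for the Kalman-filter decoding stage of such an HPM; the state and observation dimensions, model matrices and noise covariances below are generic placeholders rather than the trained parameters from the rat experiments.

    ```python
    import numpy as np

    class KalmanDecoder:
        """Linear Kalman filter: state x (e.g., lever kinematics), observation z (neural firing rates)."""

        def __init__(self, A, H, Q, R, x0, P0):
            self.A, self.H, self.Q, self.R = A, H, Q, R
            self.x, self.P = x0, P0

        def step(self, z):
            # Predict
            x_pred = self.A @ self.x
            P_pred = self.A @ self.P @ self.A.T + self.Q
            # Update with the new neural observation
            S = self.H @ P_pred @ self.H.T + self.R
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            self.x = x_pred + K @ (z - self.H @ x_pred)
            self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
            return self.x

    # Toy example: a 2-D state decoded from 4 simulated "units"
    A = np.eye(2)
    H = np.random.randn(4, 2)
    dec = KalmanDecoder(A, H, Q=0.01 * np.eye(2), R=0.1 * np.eye(4),
                        x0=np.zeros(2), P0=np.eye(2))
    for _ in range(5):
        z = H @ np.array([1.0, -0.5]) + 0.1 * np.random.randn(4)
        estimate = dec.step(z)
    print(estimate)
    ```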

  7. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    Science.gov (United States)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

    "Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle real-time extraction, processing and reduction required when the LRF algorithm is applied not to single picture images but rather to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.

  8. Life Improvement of Pot Hardware in Continuous Hot Dipping Processes Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Xingbo Liu

    2006-01-18

    The process of continuous galvanizing of rolled sheet steel includes immersion into a bath of molten zinc/aluminum alloy. The steel strip is dipped into the molten bath through a series of driving motors and rollers which control the speed and tension of the strip, with the ability to modify both the amount of coating applied to the steel and the thickness and width of the sheet being galvanized. Three rolls are used to guide the steel strip through the molten metal bath. The rolls that operate in the molten Zn/Al are subject to a severely corrosive environment and require frequent changing. The performance of this equipment, the metallic hardware submerged in the molten Zn/Al bath, is the focus of this research. The primary objective of this research is to extend the performance life of the metallic hardware components of the molten Zn/Al pot by an order of magnitude. Typical galvanizing operations experience downtimes on the order of every two weeks to change the metallic hardware submerged in the molten metal bath. This is an expensive process for industry which takes upwards of 3 days for a complete turnaround to resume normal operation. Each roll bridle consists of a sink, stabilizer, and corrector roll with accompanying bearing components. The cost of the bridle rig with all components is as much as $25,000 for materials alone. These inefficiencies are of concern to the steel coating companies and serve as a potential market for many materials suppliers. This research effort served as a bridge between the market potential and the industry need, providing an objective analytical and mechanistic approach to the problem of wear and corrosion of molten metal bath hardware in a continuous sheet galvanizing line. The approach of the investigators was to provide a means of testing and analysis that was both expeditious and cost effective. The consortium of researchers from West Virginia University and Oak Ridge National Laboratory developed

  9. Flight Dynamics Mission Support and Quality Assurance Process

    Science.gov (United States)

    Oh, InHwan

    1996-01-01

    This paper summarizes the quality assurance approach of the Computer Sciences Corporation Flight Dynamics Operation (FDO) in supporting the National Aeronautics and Space Administration Goddard Space Flight Center Flight Dynamics Support Branch. Historically, a strong need has existed for developing systematic quality assurance using methods that account for the unique nature and environment of satellite Flight Dynamics mission support. Over the past few years FDO has developed and implemented proactive quality assurance processes applied to each of the six phases of the Flight Dynamics mission support life cycle: systems and operations concept, system requirements and specifications, software development support, operations planning and training, launch support, and on-orbit mission operations. Rather than performing quality assurance as a final step after work is completed, quality assurance has been built in as work progresses in the form of process assurance. Process assurance activities occur throughout the Flight Dynamics mission support life cycle. The FDO Product Assurance Office developed process checklists for prephase process reviews, mission team orientations, in-progress reviews, and end-of-phase audits. This paper will outline the evolving history of FDO quality assurance approaches, discuss the tailoring of Computer Sciences Corporation's process assurance cycle procedures, describe some of the quality assurance approaches that have been or are being developed, and present some of the successful results.

  10. Consort 1 flight results - A synopsis. [low gravity materials processing

    Science.gov (United States)

    Wessling, Francis C.; Lundquist, Charles A.; Maybee, George W.

    1989-01-01

    All six experiments performed onboard Consort 1, the first low gravity materials processing payload to be launched by a commercially licensed rocket in the U.S., are evaluated. The six experiments were carried out as planned during approximately seven minutes of suborbital, low gravity flight and were returned in excellent condition within four hours of launch. Nearly 150 physical samples, supported by measurements and photographs made during the flight, were taken for analysis. The rocket flight and payload configuration are described, along with experiment objectives, methods, results, and conclusions.

  11. Hardware malware

    CERN Document Server

    Krieg, Christian

    2013-01-01

    In our digital world, integrated circuits are present in nearly every moment of our daily life. Even when using the coffee machine in the morning, or driving our car to work, we interact with integrated circuits. The increasing spread of information technology in virtually all areas of life in the industrialized world offers a broad range of attack vectors. So far, mainly software-based attacks have been considered and investigated, while hardware-based attacks have attracted comparatively little interest. The design and production process of integrated circuits is mostly decentralized due to

  12. Managing the Testing Process Practical Tools and Techniques for Managing Hardware and Software Testing

    CERN Document Server

    Black, Rex

    2011-01-01

    New edition of one of the most influential books on managing software and hardware testing In this new edition of his top-selling book, Rex Black walks you through the steps necessary to manage rigorous testing programs of hardware and software. The preeminent expert in his field, Mr. Black draws upon years of experience as president of both the International and American Software Testing Qualifications boards to offer this extensive resource of all the standards, methods, and tools you'll need. The book covers core testing concepts and thoroughly examines the best test management practices

  13. Software and hardware complex for research and management of the separation process

    Science.gov (United States)

    Borisov, A. P.

    2018-01-01

    The article is devoted to the development of a program for studying the operation of an asynchronous electric drive with vector-algorithmic switching of the windings, as well as a hardware-software complex for monitoring parameters and controlling the rotation speed of an asynchronous electric drive, used to investigate the operation of a cyclone. To study the operation of the asynchronous electric drive, a method was used in which the average value of the flux linkage is found, and a method was developed for the vector-algorithmic calculation of the power and electromagnetic torque of an asynchronous electric drive fed from a single-phase network with vector-algorithmic commutation, together with software for calculating the parameters. The software part of the complex makes it possible to regulate the rotation speed of the motor by vector-algorithmic switching of the transistors or, using pulse-width modulation (PWM), to set any engine speed. Sensors are also connected to the hardware-software complex at the inlet and outlet of the cyclone. The developed cyclone with the integrated complex achieves high product-separation efficiency at various inlet speeds; at an inlet air speed of 18 m/s, the cyclone's maximum efficiency is achieved, for which the asynchronous electric drive must be run at a frequency of 45 Hz.

  14. An Efficient Technique for Hardware/Software Partitioning Process in Codesign

    Directory of Open Access Journals (Sweden)

    Imene Mhadhbi

    2016-01-01

    Codesign methodology deals with the problem of designing complex embedded systems, where automatic hardware/software partitioning is one key issue. The research efforts on this issue are focused on exploring new automatic partitioning methods which consider only binary or extended partitioning problems. The main contribution of this paper is to propose a hybrid FCMPSO partitioning technique, based on the Fuzzy C-Means (FCM) and Particle Swarm Optimization (PSO) algorithms, suitable for mapping embedded applications onto both binary and multicore target architectures. Our FCMPSO optimization technique has been compared using different graphical models with a large number of instances. Performance analysis reveals that FCMPSO outperforms the PSO algorithm as well as the Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), and standard FCM metaheuristic-based techniques, and also hybrid solutions including PSO then GA, GA then SA, GA then ACO, ACO then SA, FCM then GA, FCM then SA, and finally ACO followed by FCM.

  15. Multi-processing CTH: Porting legacy FORTRAN code to MP hardware

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.L.; Elrick, M.G.; Hertel, E.S. Jr.

    1996-12-31

    CTH is a family of codes developed at Sandia National Laboratories for use in modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH has historically been run on systems where the data are directly accessible to the cpu, such as workstations and vector supercomputers. Multiple cpus can be used if all data are accessible to all cpus. This is accomplished by placing compiler directives or subroutine calls within the source code. The CTH team has implemented this scheme for Cray shared memory machines under the Unicos operating system. This technique is effective, but difficult to port to other (similar) shared memory architectures because each vendor has a different format of directives or subroutine calls. A different model of high performance computing is one where many (> 1,000) cpus work on a portion of the entire problem and communicate by passing messages that contain boundary data. Most, if not all, codes that run effectively on parallel hardware were written with a parallel computing paradigm in mind. Modifying an existing code written for serial nodes poses a significantly different set of challenges that will be discussed. CTH, a legacy FORTRAN code, has been modified to allow for solutions on distributed memory parallel computers such as the IBM SP2, the Intel Paragon, Cray T3D, or a network of workstations. The message passing version of CTH will be discussed and example calculations will be presented along with performance data. Current timing studies indicate that CTH is 2--3 times faster than equivalent C++ code written specifically for parallel hardware. CTH on the Intel Paragon exhibits linear speed up with problems that are scaled (constant problem size per node) for the number of parallel nodes.

  16. A signal pre-processing algorithm designed for the needs of hardware implementation of neural classifiers used in condition monitoring

    DEFF Research Database (Denmark)

    Dabrowski, Dariusz; Hashemiyan, Zahra; Adamczyk, Jan

    2015-01-01

    Gearboxes have a significant influence on the durability and reliability of a power transmission system. Currently, extensive research studies are being carried out to increase the reliability of gearboxes working in the energy industry, especially with a focus on planetary gears in wind turbines and bucket wheel excavators. In this paper, a signal pre-processing algorithm designed for condition monitoring of planetary gears working in non-stationary operation is presented. The algorithm is intended for hardware implementation on Field Programmable Gate Arrays (FPGAs). The purpose of the algorithm

  17. Parabolic flights as Earth analogue for surface processes on Mars

    Science.gov (United States)

    Kuhn, Nikolaus J.

    2017-04-01

    The interpretation of landforms and environmental archives on Mars with regards to habitability and preservation of traces of life requires a quantitative understanding of the processes that shaped them. Commonly, qualitative similarities in sedimentary rocks between Earth and Mars are used as an analogue to reconstruct the environments in which they formed on Mars. However, flow hydraulics and sedimentation differ between Earth and Mars, requiring a recalibration of models describing runoff, erosion, transport and deposition. Simulation of these processes on Earth is limited because gravity cannot be changed and the trade-off between adjusting e.g. fluid or particle density generates other mismatches, such as fluid viscosity. Computational Fluid Dynamics offer an alternative, but would also require a certain degree of calibration or testing. Parabolic flights offer a possibility to amend the shortcomings of these approaches. Parabolas with reduced gravity last up to 30 seconds, which allows the simulation of sedimentation processes and the measurement of flow hydraulics. This study summarizes the experience gathered during four campaigns of parabolic flights, aimed at identifying potential and limitations of their use as an Earth analogue for surface processes on Mars.

  18. Development of a software-hardware complex for studying the process of grinding by a pendulum deformer

    Science.gov (United States)

    Borisov, A. P.

    2018-01-01

    The article is devoted to the development of a software and hardware complex for investigating the grinding process on a pendulum deformer. The hardware part of this complex is the Raspberry Pi model 2B platform, to which are connected a contactless angle sensor, which provides data on the deflection angle of the pendulum surface, USB cameras, which capture images of the grain before and after grinding, and stepper motors, which allow the pendulum surface to be raised and the clearance between the pendulum and supporting surfaces to be adjusted. The software part of the complex is written in C# and allows data to be received from the sensor and the USB cameras, the received data to be processed, and the stepper motors to be controlled in manual and automatic modes. The studies conducted show that the rational mode is a deflection of the pendulum surface by an angle of 40 degrees, with the grain located in the central zone of the support surface, regardless of the orientation of the grain in space. The contactless angle sensor also makes it possible to calculate the energy consumption for grinding, the speed and acceleration of the pendulum surface, and the vitreousness of the grain. With the help of the photographs obtained from the USB cameras, the work of the pendulum deformer, based on the Rebinder formula, and the grain area before and after grinding are determined.

  19. Hardware System for Real-Time EMG Signal Acquisition and Separation Processing during Electrical Stimulation.

    Science.gov (United States)

    Hsueh, Ya-Hsin; Yin, Chieh; Chen, Yan-Hong

    2015-09-01

    The study aimed to develop a real-time electromyography (EMG) signal acquisition and processing device that can acquire signals during electrical stimulation. Since the electrical stimulation output can affect EMG signal acquisition, integrating the two elements into one system requires modification of the EMG signal transmission and processing method. The whole system was designed in a user-friendly and flexible manner. For EMG signal processing, the system applied an Altera Field Programmable Gate Array (FPGA) as the core to process the hybrid EMG signal in real time and output the isolated signal in a highly efficient way. The system used the power spectral density to evaluate the accuracy of signal processing, and the cross-correlation showed that the delay of real-time processing was only 250 μs.
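    The reported 250 μs latency is the kind of figure obtained by cross-correlating the input and output signals; a simple software version of that delay estimate, using made-up signals and an assumed sampling rate rather than the study's EMG data, looks like this:

    ```python
    import numpy as np

    def estimate_delay(reference, delayed, fs):
        """Estimate the lag (in seconds) of `delayed` relative to `reference` via cross-correlation."""
        corr = np.correlate(delayed, reference, mode="full")
        lag = np.argmax(corr) - (len(reference) - 1)     # lag in samples
        return lag / fs

    fs = 4000.0                                          # assumed sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    ref = np.sin(2 * np.pi * 50 * t) * np.exp(-t)        # stand-in for an EMG burst
    processed = np.roll(ref, 1)                          # 1-sample (250 us) pipeline delay
    print(f"delay = {estimate_delay(ref, processed, fs) * 1e6:.0f} microseconds")
    ```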

  20. Hardware-software and algorithmic provision of multipoint systems for long-term monitoring of dynamic processes

    Science.gov (United States)

    Yakunin, A. G.; Hussein, H. M.

    2017-08-01

    An example is presented of information-measuring systems for climate monitoring and operational control of energy resource consumption on a university campus, which have been functioning at the Altai State Technical University since 2009. The advantages of using such systems for studying various physical processes are discussed. General principles for constructing similar systems, as well as their software, hardware and algorithmic support, are considered. It is shown that their fundamental difference from traditional SCADA systems is the use of databases with a specialized data structure for storing the observation results, together with preprocessing of the input signal for compression. Another difference is the absence of clear criteria for detecting anomalies in the time series of the observed process. Examples of algorithms that solve this problem are given.

  1. Development of a software and hardware system for monitoring the air cleaning process using a cyclone-separator

    Science.gov (United States)

    Nicolaeva, B. K.; Borisov, A. P.; Zlochevskiy, V. L.

    2017-08-01

    The article is devoted to the development of a hardware-software complex for monitoring and controlling the process of air purification by means of a cyclone-separator. The hardware part of this complex is the Arduino platform, to which pressure sensors, air velocity sensors and dust meters are connected, allowing the main parameters of the cyclone-separator to be monitored. A frequency converter was also developed to regulate the rotation speed of the asynchronous motor needed to adjust the flow rate, with its control signals coming from the Arduino. The software part of the complex is written as a web application in JavaScript, with CSS and HTML for the user interface. This program receives data from the sensors, builds dependencies in real time and controls the rotation speed of the asynchronous electric drive. The experiment conducted shows that the cleaning efficiency is 95-99.9%, with an airflow of 16-18 m/s at the cyclone inlet and 50-70 m/s at the exit.

  2. Process Scheduling for Performance Estimation and Synthesis of Hardware/Software Systems

    DEFF Research Database (Denmark)

    Eles, Petru; Kuchcinski, Krzysztof; Peng, Zebo

    1998-01-01

    The paper presents an approach to process scheduling for embedded systems. Target architectures consist of several processors and ASICs connected by shared busses. We have developed algorithms for process graph scheduling based on list scheduling and branch-and-bound strategies. One essential

  3. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection currently require capturing, storing and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, for example) are involved in these applications. Such techniques often require a significant number of operations, depending on the algorithm complexity and the video ...

  4. Real-time medical video processing, enabled by hardware accelerated correlations

    DEFF Research Database (Denmark)

    Savarimuthu, T. R.; Kjaer-Nielsen, A.; Sorensen, A. S.

    2011-01-01

    Image processing involving correlation-based filter algorithms has proved extremely useful for image enhancement, feature extraction and recognition in a wide range of medical applications, but is almost exclusively used with still images due to the amount of computation required by the correlations..., while the second method employs an embedded FPGA. We will discuss the major differences between the two approaches and their suitability for clinical use. The system presented detects blood vessels in human forearms in images from an NIR camera setup for use in a clinical environment.

  5. DAX - The Next Generation: Towards One Million Processes on Commodity Hardware.

    Science.gov (United States)

    Damon, Stephen M; Boyd, Brian D; Plassard, Andrew J; Taylor, Warren; Landman, Bennett A

    2017-01-01

    Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.

  6. Generalized hardware post-processing technique for chaos-based pseudorandom number generators

    KAUST Repository

    Barakat, Mohamed L.

    2013-06-01

    This paper presents a generalized post-processing technique for enhancing the pseudorandomness of digital chaotic oscillators through a nonlinear XOR-based operation with rotation and feedback. The technique allows full utilization of the chaotic output as pseudorandom number generators and improves throughput without a significant area penalty. Digital design of a third-order chaotic system with maximum function nonlinearity is presented with verified chaotic dynamics. The proposed post-processing technique eliminates statistical degradation in all output bits, thus maximizing throughput compared to other processing techniques. Furthermore, the technique is applied to several fully digital chaotic oscillators with performance surpassing previously reported systems in the literature. The enhancement in the randomness is further examined in a simple image encryption application resulting in a better security performance. The system is verified through experiment on a Xilinx Virtex 4 FPGA with throughput up to 15.44 Gbit/s and logic utilization less than 0.84% for 32-bit implementations. © 2013 ETRI.
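
    The following is a minimal sketch of an XOR/rotate post-processing stage applied to raw chaotic output words, in the spirit of the technique described; the logistic-map source, word width, rotation amounts, and feedback seed are placeholders, not the paper's third-order oscillator or its exact feedback network.

        # Sketch of an XOR/rotate post-processing stage for a chaotic bit source.
        # The logistic map and the rotation amounts are stand-ins, not the paper's design.
        WIDTH = 32
        MASK = (1 << WIDTH) - 1

        def rotl(x, r):
            return ((x << r) | (x >> (WIDTH - r))) & MASK

        def chaotic_words(x0=0.123456, n=10):
            x = x0
            for _ in range(n):
                x = 3.99 * x * (1.0 - x)            # logistic map as a stand-in source
                yield int(x * (1 << WIDTH)) & MASK

        def post_process(words):
            fb = 0xA5A5A5A5                          # feedback register (arbitrary seed)
            for w in words:
                out = w ^ rotl(w, 7) ^ rotl(fb, 13)  # mixing via XOR and rotation
                fb = out                             # feed the result back
                yield out

        for v in post_process(chaotic_words()):
            print(f"{v:08X}")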

  7. DAX - the next generation: towards one million processes on commodity hardware

    Science.gov (United States)

    Damon, Stephen M.; Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.

    2017-03-01

    Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors from 65040 seconds to 229 seconds. DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds, which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.

  8. Open Hardware at CERN

    CERN Multimedia

    CERN Knowledge Transfer Group

    2015-01-01

    CERN is actively making its knowledge and technology available for the benefit of society and does so through a variety of different mechanisms. Open hardware has in recent years established itself as a very effective way for CERN to make electronics designs, and in particular printed circuit board layouts, accessible to anyone, while also facilitating collaboration and design re-use. It is creating an impact on many levels, from companies producing and selling products based on hardware designed at CERN, to new projects being released under the CERN Open Hardware Licence. Today the open hardware community includes large research institutes, universities, individual enthusiasts and companies. Many of the companies are actively involved in the entire process from design to production, delivering services and consultancy and even making their own products available under open licences.

  9. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    Science.gov (United States)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to queue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field-of-view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a ...
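
    A minimal sketch of the differential-motion idea: with a fixed-stare sensor, a GEO object stays on (nearly) the same pixel from frame to frame while stars drift through, so a per-pixel temporal median over a frame stack suppresses the moving celestial background. The frames, drift rate, and brightness values below are synthetic and are not drawn from the WASSS field test.

        # Star rejection by differential motion for a fixed-stare sensor: a
        # per-pixel temporal median keeps static (GEO-like) sources and rejects
        # stars that transit each pixel for only a few frames.  Synthetic data.
        import numpy as np

        rng = np.random.default_rng(1)
        frames = rng.normal(100.0, 2.0, size=(20, 64, 64))     # 20 noisy frames

        for k in range(20):
            frames[k, 32, 40] += 50.0                 # "GEO object": fixed pixel
            frames[k, 10, (5 + 2 * k) % 64] += 200.0  # "star": drifts 2 px/frame

        static = np.median(frames, axis=0)       # star light rejected by the median
        detection = static - np.median(static)   # remove the background level
        print(np.unravel_index(detection.argmax(), detection.shape))  # -> (32, 40)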

  10. NASA-STD-(I)-6016, Standard Materials and Processes Requirements for Spacecraft

    Science.gov (United States)

    Pedley, Michael; Griffin, Dennis

    2006-01-01

    This document is directed toward Materials and Processes (M&P) used in the design, fabrication, and testing of flight components for all NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. All flight hardware is covered by the M&P requirements of this document, including vendor designed, off-the-shelf, and vendor furnished items. Materials and processes used in interfacing ground support equipment (GSE); test equipment; hardware processing equipment; hardware packaging; and hardware shipment shall be controlled to prevent damage to or contamination of flight hardware.

  11. Knowledge-based processing for aircraft flight control

    Science.gov (United States)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative, and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  12. Experience in adjustment of hardware for technological process on-line control system at a NPP power unit with the WWER-1000 reactor

    International Nuclear Information System (INIS)

    Morozov, B.P.

    1989-01-01

    The problem of adjustment of the unified hardware complex included in the technological process on-line control system for a power unit with the WWER-1000 type reactor is discussed. The adjustment of the complex takes place in two stages: input control and independent live adjustment. 1 fig.; 2 tabs

  13. Do the design concepts used for the space flight hardware directly affect cell structure and/or cell function ground based simulations

    Science.gov (United States)

    Chapman, David K.

    1989-01-01

    The use of clinostats and centrifuges to explore the hypogravity range between zero and 1 g is described. Different types of clinostat configurations and clinostat-centrifuge combinations are compared. Some examples selected from the literature and current research in gravitational physiology are presented to show plant responses in the simulated hypogravity region of the g-parameter (0 < g < 1). The validation of clinostat simulation is discussed. Examples in which flight data can be compared to clinostat data are presented. The data from 3 different laboratories using 3 different plant species indicate that clinostat simulations in some cases were qualitatively similar to flight data, but in all cases were quantitatively different. The need to conduct additional tests in weightlessness is emphasized.

  14. Design and flight experience with a digital fly-by-wire control system using Apollo guidance system hardware on an F-8 aircraft.

    Science.gov (United States)

    Deets, D. A.; Szalai, K. J.

    1972-01-01

    This paper discusses the design and initial flight tests of the first digital fly-by-wire system to be flown in an aircraft. The system, which used components from the Apollo guidance system, was installed in an F-8 aircraft. A lunar module guidance computer is the central element in the three-axis, single-channel, multimode, digital, primary control system. An electrohydraulic triplex system providing unaugmented control of the F-8 aircraft is the only backup to the digital system. Emphasis is placed on the digital system in its role as a control augmentor, a logic processor, and a failure detector. A sampled-data design synthesis example is included to demonstrate the role of various analytical and simulation methods. The use of a digital system to implement conventional control laws was shown to be practical for flight. Logic functions coded as an integral part of the control laws were found to be advantageous. Verification of software required an extensive effort, but confidence in the software was achieved. Initial flight results showed highly successful system operation, although quantization of pilot's stick and trim were areas of minor concern from the piloting standpoint.

  15. Knowledge-based processing for aircraft flight control

    Science.gov (United States)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations, but, also, knowledge which is less precise, but more like common-sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor data bases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.
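
    As a toy illustration only (the report's shell architecture is far richer), the sketch below maps a linguistic clearance onto a short sequence of autopilot commands with simple pattern rules; the phrase patterns and command names are invented.

        # Toy knowledge-based outer loop: turn a linguistic clearance into a short
        # sequence of autopilot commands.  Patterns and command names are invented.
        import re

        RULES = [
            (re.compile(r"climb and maintain (\d+)"),
             lambda m: [("SET_ALT_FT", int(m.group(1))), ("MODE", "ALT_HOLD")]),
            (re.compile(r"turn (left|right) heading (\d+)"),
             lambda m: [("SET_HDG_DEG", int(m.group(2))), ("TURN_DIR", m.group(1).upper())]),
            (re.compile(r"reduce speed to (\d+)"),
             lambda m: [("SET_IAS_KT", int(m.group(1)))]),
        ]

        def clearance_to_commands(text):
            commands = []
            for pattern, action in RULES:
                m = pattern.search(text.lower())
                if m:
                    commands.extend(action(m))
            return commands

        print(clearance_to_commands("Turn right heading 270, climb and maintain 5000"))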

  16. Manned Flight Simulator (MFS)

    Data.gov (United States)

    Federal Laboratory Consortium — The Aircraft Simulation Division, home to the Manned Flight Simulator (MFS), provides real-time, high fidelity, hardware-in-the-loop flight simulation capabilities...

  17. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists that oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center, provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as, the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010; and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  18. Introduction to Hardware Security

    OpenAIRE

    Yier Jin

    2015-01-01

    Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain an...

  19. Intersection points for the driving of applier processes of the hardware control of the ZEUS forward detector

    International Nuclear Information System (INIS)

    Siemon, T.

    1992-08-01

    The ZEUS forward detector is built of drift- and transition-radiation chambers which are supported by many peripheral devices. The resulting complex system has to be monitored and controlled continuously to preserve safety and to achieve optimal performance. For this task a Hardware-Control-System (HWC) has been developed. Ten VME and OS9-based microprocessors which are connected by Ethernet and VME-bus are provided to run the control- and monitoring tasks. Special attention has been paid to the development of efficient user-interfaces: RDT, an object-oriented database-toolkit, serves as an interface to the data of the HWC. The concept and the usage of this interface are outlined. Finally, special features that may be useful for other applications are discussed. (orig.) [de

  20. Real-Time Hardware-in-the-Loop Laboratory Testing for Multisensor Sense and Avoid Systems

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-01-01

    Full Text Available This paper focuses on a hardware-in-the-loop facility aimed at real-time testing of architectures and algorithms of multisensor sense and avoid systems. It was developed within a research project aimed at flight demonstration of autonomous non-cooperative collision avoidance for Unmanned Aircraft Systems. In this framework, an optionally piloted Very Light Aircraft was used as experimental platform. The flight system is based on multiple-sensor data integration and it includes a Ka-band radar, four electro-optical sensors, and two dedicated processing units. The laboratory test system was developed with the primary aim of prototype validation before multi-sensor tracking and collision avoidance flight tests. System concept, hardware/software components, and operating modes are described in the paper. The facility has been built with a modular approach including both flight hardware and simulated systems and can work on the basis of experimentally tested or synthetically generated scenarios. Indeed, hybrid operating modes are also foreseen which enable performance assessment also in the case of alternative sensing architectures and flight scenarios that are hardly reproducible during flight tests. Real-time multisensor tracking results based on flight data are reported, which demonstrate reliability of the laboratory simulation while also showing the effectiveness of radar/electro-optical fusion in a non-cooperative collision avoidance architecture.

  1. Hardware for dynamic quantum computing.

    Science.gov (United States)

    Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A

    2017-10-01

    We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
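
    A minimal sketch of the readout step described above, assuming a simple single-quadrature discriminator: digitally demodulate a heterodyne record at the intermediate frequency, integrate, and threshold the result into a 0/1 state assignment. The sample rate, IF, noise level, and threshold are invented numbers, not the parameters of the hardware described in the record.

        # Illustrative qubit-readout step: demodulate a heterodyne record at the
        # intermediate frequency, integrate, and threshold into a state assignment.
        import numpy as np

        fs, f_if, n = 1.0e9, 50.0e6, 2000           # sample rate, IF, record length
        t = np.arange(n) / fs
        ref = np.exp(-2j * np.pi * f_if * t)        # digital local oscillator

        def assign_state(record, threshold=0.25):
            iq = np.mean(record * ref)              # demodulate and integrate
            return 1 if iq.real > threshold else 0  # simple one-dimensional threshold

        rng = np.random.default_rng(2)
        excited = 1.0 * np.cos(2 * np.pi * f_if * t) + rng.normal(0, 0.3, n)
        ground  = 0.1 * np.cos(2 * np.pi * f_if * t) + rng.normal(0, 0.3, n)
        print(assign_state(excited), assign_state(ground))   # expect: 1 0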

  2. An open-source hardware and software system for acquisition and real-time processing of electrophysiology during high field MRI.

    Science.gov (United States)

    Purdon, Patrick L; Millan, Hernan; Fuller, Peter L; Bonmassar, Giorgio

    2008-11-15

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low noise (tested up to 7 T) and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7T examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3T fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level.

  3. Image processing for flight crew enhanced situation awareness

    Science.gov (United States)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  4. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  5. Process Improvement for Next Generation Space Flight Vehicles: MSFC Lessons Learned

    Science.gov (United States)

    Housch, Helen

    2008-01-01

    This viewgraph presentation reviews the lessons learned from process improvement for Next Generation Space Flight Vehicles. The contents include: 1) Organizational profile; 2) Process Improvement History; 3) Appraisal Preparation; 4) The Appraisal Experience; 5) Useful Tools; and 6) Is CMMI working?

  6. A Kinematic Calibration Process for Flight Robotic Arms

    Science.gov (United States)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
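
    A toy version of the "standard differential formulation and linear parameter optimization" mentioned above: estimate two link lengths of a planar 2R arm from measured tip positions by linearized least squares. The geometry and the simulated metrology data are invented; the MSL model additionally carries stiffness, thermal-expansion, and backlash terms not shown here.

        # Toy kinematic calibration by linearized least squares: fit two link
        # lengths of a planar 2R arm to "metrology" data.  Synthetic example only.
        import numpy as np

        def tip(l, q):                     # forward kinematics of a planar 2R arm
            return np.array([l[0]*np.cos(q[0]) + l[1]*np.cos(q[0]+q[1]),
                             l[0]*np.sin(q[0]) + l[1]*np.sin(q[0]+q[1])])

        true_l = np.array([1.00, 0.80])
        guess  = np.array([0.95, 0.85])
        rng = np.random.default_rng(3)
        joints = rng.uniform(-1.5, 1.5, size=(20, 2))
        meas = np.array([tip(true_l, q) for q in joints])   # simulated metrology

        for _ in range(5):                 # Gauss-Newton on the link lengths
            J, r = [], []
            for q, m in zip(joints, meas):
                # columns of d(tip)/d(link length) for a 2R arm
                J.append([[np.cos(q[0]), np.cos(q[0]+q[1])],
                          [np.sin(q[0]), np.sin(q[0]+q[1])]])
                r.append(m - tip(guess, q))
            J = np.vstack(J)
            r = np.concatenate(r)
            guess = guess + np.linalg.lstsq(J, r, rcond=None)[0]

        print(guess)                        # converges to ~[1.00, 0.80]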

  7. Introduction to Hardware Security

    Directory of Open Access Journals (Sweden)

    Yier Jin

    2015-10-01

    Full Text Available Hardware security has become a hot topic recently with more and more researchers from related research domains joining this area. However, the understanding of hardware security is often mixed with cybersecurity and cryptography, especially cryptographic hardware. For the same reason, the research scope of hardware security has never been clearly defined. To help researchers who have recently joined in this area better understand the challenges and tasks within the hardware security domain and to help both academia and industry investigate countermeasures and solutions to solve hardware security problems, we will introduce the key concepts of hardware security as well as its relations to related research topics in this survey paper. Emerging hardware security topics will also be clearly depicted through which the future trend will be elaborated, making this survey paper a good reference for the continuing research efforts in this area.

  8. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Guide to data processing and revision: Part 3, Hardware component failure data entry and revision procedures

    International Nuclear Information System (INIS)

    Gilmore, W.E.; Gertman, D.I.; Gilbert, B.G.; Reece, W.J.

    1988-11-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability (HEP) and hardware component failure data (HCFD). The NUCLARR system software resides on an IBM (or compatible) personal micro-computer. Users can perform data base searches to furnish HEP estimates and HCFD rates. In this manner, the NUCLARR system can be used to support a variety of risk assessment activities. This volume, Volume 3 of a 5-volume series, presents the procedures used to process HEP and HCFD for entry in NUCLARR and describes how to modify the existing NUCLARR taxonomy in order to add equipment types or action verbs. Volume 3 also specifies the various roles of the administrative staff on assignment to the NUCLARR Clearinghouse who are tasked with maintaining the data base, dealing with user requests, and processing NUCLARR data.

  9. Flight Computer Processing Avionics for Space Station Microgravity Experiments: A Risk Assessment of Commercial Off-the-Shelf Utilization

    Science.gov (United States)

    Estes, Howard; Liggin, Karl; Crawford, Kevin; Humphries, Rick (Technical Monitor)

    2001-01-01

    NASA/Marshall Space Flight Center (MSFC) is continually looking for ways to reduce the costs and schedule and minimize the technical risks during the development of microgravity programs. One of the more prominent ways to minimize cost and schedule is to use off-the-shelf hardware (OTS). However, the use of OTS often increases the risk. This paper addresses relevant factors considered during the selection and utilization of commercial off-the-shelf (COTS) flight computer processing equipment for the control of space station microgravity experiments. The paper will also discuss how to minimize the technical risks when using COTS processing hardware. Two microgravity experiments for which the COTS processing equipment is being evaluated are the Equiaxed Dendritic Solidification Experiment (EDSE) and the Self-diffusion in Liquid Elements (SDLE) experiment. Since MSFC is the lead center for Microgravity research, EDSE and SDLE processor selection will be closely watched by other experiments that are being designed to meet payload carrier requirements. This includes the payload carriers planned for the International Space Station (ISS). The purpose of EDSE is to continue to investigate microstructural evolution of, and thermal interactions between, multiple dendrites growing under diffusion controlled conditions. The purpose of SDLE is to determine accurate self-diffusivity data as a function of temperature for liquid elements selected as representative of class-like structures. In 1999 MSFC initiated a Center Director's Discretionary Fund (CDDF) effort to investigate and determine the optimal commercial data bus architecture that could lead to faster, better, and lower cost data acquisition systems for the control of microgravity experiments. As part of this effort various commercial data acquisition systems were acquired and evaluated. This included equipment with various form factors (3U, 6U, others) and equipment that utilized various bus structures (VME ...

  10. Mathematical Modeling of Aerodynamic Space -to - Surface Flight with Trajectory for Avoid Intercepting Process

    OpenAIRE

    Gornev, Serge

    2006-01-01

    A model has been created for a Space-to-Surface system, defining an optimal terminal-phase targeting trajectory that avoids an intercepting process. The modeling includes models for simulating the atmosphere, the speed of sound, aerodynamic flight, and navigation by an infrared system. The modeling and simulation include statistical analysis of the modeling results.
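
    For the atmosphere and speed-of-sound sub-models mentioned in the abstract, a standard-atmosphere-style sketch is given below; the constants are ordinary ISA troposphere values and are assumed, not taken from the cited modeling.

        # Simple ISA-troposphere sketch of atmosphere and speed-of-sound sub-models.
        # Constants are standard ISA values, not necessarily those used by the author.
        import math

        T0, P0, L, R, G, GAMMA = 288.15, 101325.0, 0.0065, 287.05, 9.80665, 1.4

        def isa(h_m):
            """Temperature [K], pressure [Pa], density [kg/m^3], and speed of
            sound [m/s] for altitudes in the troposphere (h < 11 km)."""
            T = T0 - L * h_m
            P = P0 * (T / T0) ** (G / (R * L))
            rho = P / (R * T)
            a = math.sqrt(GAMMA * R * T)
            return T, P, rho, a

        print(isa(10000.0))   # roughly (223 K, 26.4 kPa, 0.41 kg/m^3, 299 m/s)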

  11. Test Hardware Design for Flightlike Operation of Advanced Stirling Convertors (ASC-E3)

    Science.gov (United States)

    Oriti, Salvatore M.

    2012-01-01

    NASA Glenn Research Center (GRC) has been supporting development of the Advanced Stirling Radioisotope Generator (ASRG) since 2006. A key element of the ASRG project is providing life, reliability, and performance testing of the Advanced Stirling Convertor (ASC). For this purpose, the Thermal Energy Conversion branch at GRC has been conducting extended operation of a multitude of free-piston Stirling convertors. The goal of this effort is to generate long-term performance data (tens of thousands of hours) simultaneously on multiple units to build a life and reliability database. The test hardware for operation of these convertors was designed to permit in-air investigative testing, such as performance mapping over a range of environmental conditions. With this, there was no requirement to accurately emulate the flight hardware. For the upcoming ASC-E3 units, the decision has been made to assemble the convertors into a flight-like configuration. This means the convertors will be arranged in the dual-opposed configuration in a housing that represents the fit, form, and thermal function of the ASRG. The goal of this effort is to enable system level tests that could not be performed with the traditional test hardware at GRC. This offers the opportunity to perform these system-level tests much earlier in the ASRG flight development, as they would normally not be performed until fabrication of the qualification unit. This paper discusses the requirements, process, and results of this flight-like hardware design activity.

  12. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    Science.gov (United States)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to their limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires the rethinking of the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
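
    The project's FEC scheme is not detailed in this record; purely as a generic illustration of forward error correction at the bit level, the sketch below encodes 4-bit nibbles into Hamming(7,4) codewords and corrects a single flipped bit on decode.

        # Generic FEC illustration (Hamming(7,4)), not the End2End100 scheme: encode
        # a 4-bit nibble into 7 bits and correct any single bit error on decode.
        def hamming74_encode(d):                  # d = [d1, d2, d3, d4]
            p1 = d[0] ^ d[1] ^ d[3]
            p2 = d[0] ^ d[2] ^ d[3]
            p3 = d[1] ^ d[2] ^ d[3]
            return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

        def hamming74_decode(c):
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3       # 1-based position of the error
            if syndrome:
                c = list(c)
                c[syndrome - 1] ^= 1              # correct the flipped bit
            return [c[2], c[4], c[5], c[6]]

        word = [1, 0, 1, 1]
        code = hamming74_encode(word)
        code[4] ^= 1                              # inject a single bit error
        print(hamming74_decode(code) == word)     # True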

  13. Management Process of a Frequency Response Flight Test for Rotorcraft Flying Qualities Evaluation

    Directory of Open Access Journals (Sweden)

    João Otávio Falcão Arantes Filho

    2016-07-01

    Full Text Available This paper applies the frequency response methodology to characterize and analyze the flying qualities of the longitudinal and lateral axes of a rotary-wing aircraft, the AS355-F2. Using the results, it is possible to check the suitability of the aircraft in accordance with the ADS-33E-PRF standard, whose flying qualities specification criteria are based on parameters in the frequency domain. The key steps addressed in the study involve obtaining the closed-loop dynamic responses by means of flight test data, including the design of the instrumentation and specification of the sensors to be used in the flight test campaign, the definition of appropriate maneuver characteristics for excitation of the aircraft, the planning and execution of the flight test to collect the data, and the proper data treatment, processing and analysis after the flight. After treatment of the collected data, single-input/single-output spectral analysis is performed. The results permit the analysis of the flying qualities characteristics, anticipation of the demands to which the pilot will be subjected during closed-loop evaluations, and a check of compliance with the aforementioned standard, within the range of consistent excitation frequencies for flight tests, setting the agility level of the test aircraft.
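
    A minimal sketch of the single-input/single-output spectral analysis step, assuming the common H = Sxy/Sxx estimate from input/output time histories; the first-order plant and signals are synthetic, not AS355-F2 flight data, and a real analysis would average the spectra over windowed segments.

        # Minimal SISO frequency-response estimate H(f) = Sxy / Sxx from synthetic
        # input/output records (illustration only; real processing uses averaging).
        import numpy as np

        fs, n = 100.0, 8192
        rng = np.random.default_rng(4)
        x = rng.normal(size=n)                    # broadband excitation input

        a = 0.9                                   # synthetic first-order plant
        y = np.zeros(n)
        for k in range(1, n):
            y[k] = a * y[k - 1] + (1.0 - a) * x[k]

        X = np.fft.rfft(x)
        Y = np.fft.rfft(y)
        Sxx = X * np.conj(X)
        Sxy = Y * np.conj(X)
        H = Sxy / Sxx                             # frequency response estimate
        f = np.fft.rfftfreq(n, d=1.0 / fs)
        print(f[10], 20 * np.log10(abs(H[10])))   # magnitude [dB] at one frequency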

  14. Contamination Examples and Lessons from Low Earth Orbit Experiments and Operational Hardware

    Science.gov (United States)

    Pippin, Gary; Finckenor, Miria M.

    2009-01-01

    Flight experiments flown on the Space Shuttle, the International Space Station, Mir, Skylab, and free flyers such as the Long Duration Exposure Facility, the European Retrievable Carrier, and the EFFU, provide multiple opportunities for the investigation of molecular contamination effects. Retrieved hardware from the Solar Maximum Mission satellite, Mir, and the Hubble Space Telescope has also provided the means of gaining insight into contamination processes. Images from the above mentioned hardware show contamination effects due to materials processing, hardware storage, pre-flight cleaning, as well as on-orbit events such as outgassing, mechanical failure of hardware in close proximity, impacts from man-made debris, and changes due to natural environment factors. Contamination effects include significant changes to thermal and electrical properties of thermal control surfaces, optics, and power systems. Data from several flights has been used to develop a rudimentary estimate of asymptotic values for absorptance changes due to long-term solar exposure (4000-6000 Equivalent Sun Hours) of silicone-based molecular contamination deposits of varying thickness. Recommendations and suggestions for processing changes and constraints based on the on-orbit observed results will be presented.

  15. Parameter Validation for Evaluation of Spaceflight Hardware Reusability

    Science.gov (United States)

    Childress-Thompson, Rhonda; Dale, Thomas L.; Farrington, Phillip

    2017-01-01

    Within recent years, there has been an influx of companies around the world pursuing reusable systems for space flight. Much like NASA, many of these new entrants are learning that reusable systems are complex and difficult to achieve. For instance, in its first attempts to retrieve spaceflight hardware for future reuse, SpaceX unsuccessfully tried to land on a barge at sea, resulting in a crash-landing. As this new generation of launch developers continues to develop concepts for reusable systems, having a systematic approach for determining the most effective systems for reuse is paramount. Three factors that influence the effective implementation of reusability are cost, operability and reliability. Therefore, a method that integrates these factors into the decision-making process must be utilized to adequately determine whether hardware used in space flight should be reused or discarded. Previous research has identified seven features that contribute to the successful implementation of reusability for space flight applications, defined reusability for space flight applications, highlighted the importance of reusability, and presented areas that hinder successful implementation of reusability. The next step is to ensure that the list of reusability parameters previously identified is comprehensive, and any duplication is either removed or consolidated. The characteristics to judge the seven features as good indicators for successful reuse are identified and then assessed using multiattribute decision making. Next, discriminators in the form of metrics or descriptors are assigned to each parameter. This paper explains the approach used to evaluate these parameters, define the Measures of Effectiveness (MOE) for reusability, and quantify these parameters. Using the MOEs, each parameter is assessed for its contribution to the reusability of the hardware. Potential data sources needed to validate the approach will be identified.

  16. Flight Path Recovery System (FPRS) design study

    International Nuclear Information System (INIS)

    1978-09-01

    The study contained herein presents a design for a Flight Path Recovery System (FPRS) for use in the NURE Program which will be more accurate than systems presently used, provide position location data in digital form suitable for automatic data processing, and provide for flight path recovery in a more economic and operationally suitable manner. The design is based upon the use of presently available hardware and technology, and presents little, if any, development risk. In addition, a Flight Test Plan designed to test the FPRS design concept is presented.

  17. Flight Path Recovery System (FPRS) design study

    Energy Technology Data Exchange (ETDEWEB)

    1978-09-01

    The study contained herein presents a design for a Flight Path Recovery System (FPRS) for use in the NURE Program which will be more accurate than systems presently used, provide position location data in digital form suitable for automatic data processing, and provide for flight path recovery in a more economic and operationally suitable manner. The design is based upon the use of presently available hardware and technology, and presents little, if any, development risk. In addition, a Flight Test Plan designed to test the FPRS design concept is presented.

  18. Neural computation to predict in-flight particle characteristic dependences from processing parameters in the APS process

    Science.gov (United States)

    Guessasma, Sofiane; Montavon, Ghislain; Coddet, Christian

    2004-12-01

    In-flight particle sensors for thermal spraying are used for real-time monitoring of coating manufacture. However, such tools do not offer facilities to tune the processing parameters when the monitoring reveals fluctuations or instabilities in the thermal jet. To complete the process control, any diagnostic sensors need to be coupled with a predictive system to separate the effect of each processing parameter on the in-flight particle characteristics. In this work, a nonlinear dynamic system based on an artificial neural network (ANN) model is proposed to play this role. It consists of a method that relates the processing parameters to the particle emitted signal characteristics recorded with a DPV2000 (TECNAR Automation, St-Bruno, QC, Canada) optical sensing device. In such a way, a database was built to train and optimize an ANN structure. The in-flight particle average velocity, temperature, and diameter of an alumina-13wt.%titania feedstock were correlated to the injection and power parameters. Correlations are discussed on the basis of these predictive results.
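
    As a stand-in for the ANN described (whose architecture and DPV2000 training database are not reproduced here), the sketch below trains a tiny feed-forward network by plain gradient descent to map two scaled processing parameters to a scaled particle temperature; all data are synthetic.

        # Tiny feed-forward network mapping two (scaled) processing parameters to a
        # (scaled) in-flight particle temperature.  Synthetic data, illustration only.
        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.uniform(0, 1, size=(200, 2))              # e.g. arc current, gas flow
        y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 + 0.02 * rng.normal(size=200)

        W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
        lr = 0.5

        for epoch in range(2000):                         # plain batch gradient descent
            h = np.tanh(X @ W1 + b1)
            pred = (h @ W2 + b2).ravel()
            err = pred - y
            dW2 = h.T @ err[:, None] / len(X); db2 = err.mean(keepdims=True)
            dh = err[:, None] @ W2.T * (1 - h ** 2)
            dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

        print(float(np.sqrt(np.mean(err ** 2))))          # RMS error after training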

  19. Mechanics of Granular Materials labeled hardware

    Science.gov (United States)

    2000-01-01

    Mechanics of Granular Materials (MGM) flight hardware takes two twin double locker assemblies in the Space Shuttle middeck or the Spacehab module. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: NASA/MSFC).

  20. A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, Dale; Farrington, Philip

    2016-01-01

    Within the past few years, there has been a renewed interest in reusability as it applies to space flight hardware. Commercial companies such as Space Exploration Technologies Corporation (SpaceX), Blue Origin, and United Launch Alliance (ULA) are pursuing reusable hardware. Even foreign companies are pursuing this option. The Indian Space Research Organization (ISRO) launched a reusable space plane technology demonstrator and Airbus Defense and Space is planning to recover the main engines and avionics from its Advanced Expendable Launcher with Innovative engine Economy [1] [2]. To date, the Space Shuttle remains the only Reusable Launch Vehicle (RLV) to have flown repeated missions and the Space Shuttle Main Engine (SSME) is the only demonstrated reusable engine. Whether the hardware being considered for reuse is a launch vehicle (fully reusable), a first stage (partially reusable), or a booster engine (single component), the overall governing process is the same; it must be recovered and recertified for flight. Therefore, there is a need to identify the key factors in determining the reusability of flight hardware. This paper begins with defining reusability to set the context, addresses the significance of reuse, and discusses areas that limit successful implementation. Finally, this research identifies the factors that should be considered when incorporating reuse.

  1. Small Satellite Proximity Operations Hardware-in-the-Loop Test Bed Development

    Data.gov (United States)

    National Aeronautics and Space Administration — With the proliferation of small satellites resulting from CubeSat standardization of flight hardware elements, new mission architectures involving automated small...

  2. Hardware protection through obfuscation

    CERN Document Server

    Bhunia, Swarup; Tehranipoor, Mark

    2017-01-01

    This book introduces readers to various threats faced during design and fabrication by today’s integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or “IC Overproduction,” insertion of malicious circuits, referred to as “Hardware Trojans,” which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and on-chip infrastructure needed for secure exchange o...

  3. Advances in flexible optrode hardware for use in cybernetic insects

    Science.gov (United States)

    Register, Joseph; Callahan, Dennis M.; Segura, Carlos; LeBlanc, John; Lissandrello, Charles; Kumar, Parshant; Salthouse, Christopher; Wheeler, Jesse

    2017-08-01

    Optogenetic manipulation is widely used to selectively excite and silence neurons in laboratory experiments. Recent efforts to miniaturize the components of optogenetic systems have enabled experiments on freely moving animals, but further miniaturization is required for freely flying insects. In particular, miniaturization of high channel-count optical waveguides is needed for high-resolution interfaces. Thin flexible waveguide arrays are needed to bend light around tight turns to access small anatomical targets. We present the design of lightweight miniaturized optogenetic hardware and supporting electronics for the untethered steering of dragonfly flight. The system is designed to enable autonomous flight and includes processing, guidance sensors, solar power, and light stimulators. The system will weigh less than 200mg and be worn by the dragonfly as a backpack. The flexible implant has been designed to provide stimuli around nerves through micron scale apertures of adjacent neural tissue without the use of heavy hardware. We address the challenges of lightweight optogenetics and the development of high contrast polymer waveguides for this purpose.

  4. IDD Archival Hardware Architecture and Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Mendonsa, D; Nekoogar, F; Martz, H

    2008-10-09

    This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. The document describes the step-by-step process of image data being received at LLNL, then being processed and made available to authorized personnel and collaborators. Throughout this document references will be made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.

  5. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

  6. Powder Processing of High Temperature Cermets and Carbides at Marshall Space Flight Center

    Science.gov (United States)

    Salvail, Pat; Panda, Binayak; Hickman, Robert R.

    2007-01-01

    The Materials and Processing Laboratory at NASA Marshall Space Flight Center is developing Powder Metallurgy (PM) processing techniques for high temperature cermet and carbide material consolidation. This new group of materials would be utilized in the nuclear core for Nuclear Thermal Rockets (NTR). Cermet materials offer several advantages for NTR such as retention of fission products and fuels, better thermal shock resistance, hydrogen compatibility, high thermal conductivity, and high strength. Carbide materials offer the highest operating temperatures but are sensitive to thermal stresses and are difficult to process. To support the effort, a new facility has been set up to process refractory metal, ceramic, carbide and depleted uranium-based powders. The facility includes inert atmosphere glove boxes for the handling of reactive powders, a high temperature furnace, and powder processing equipment used for blending, milling, and sieving. The effort is focused on basic research to identify the most promising compositions and processing techniques. Several PM processing methods including Cold and Hot Isostatic Pressing are being evaluated to fabricate samples for characterization and hot hydrogen testing.

  7. Implementation of an ergonomics intervention in a Swedish flight baggage handling company-A process evaluation.

    Science.gov (United States)

    Bergsten, Eva L; Mathiassen, Svend Erik; Larsson, Johan; Kwak, Lydia

    2018-01-01

    To conduct a process evaluation of the implementation of an ergonomics training program aimed at increasing the use of loading assist devices in flight baggage handling. Feasibility related to the process items recruitment, reach, context, dose delivered (training time and content); dose received (participants' engagement); satisfaction with training; intermediate outcomes (skills, confidence and behaviors); and barriers and facilitators of the training intervention were assessed by qualitative and quantitative methods. Implementation proved successful regarding dose delivered, dose received and satisfaction. Confidence among participants in the training program in using and talking about devices, observed use of devices among colleagues, and internal feedback on work behavior increased significantly (p<0.01). Main facilitators were self-efficacy, motivation, and perceived utility of training among the trainees; barriers included lack of peer support, opportunities to observe and practice behaviors, and follow-up activities, as well as staff reduction and job insecurity. In identifying important barriers and facilitators for a successful outcome, this study can help support the effectiveness of future interventions. Our results suggest that barriers caused by organizational changes may likely be alleviated by recruiting motivated trainees and securing strong organizational support for the implementation.

  8. A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, Dale; Farrington, Phillip

    2016-01-01

    Within the space flight community, reusability has taken center stage as the new buzzword. In order for reusable hardware to be competitive with its expendable counterpart, two major elements must be closely scrutinized. First, recovery and refurbishment costs must be lower than the development and acquisition costs. Additionally, the reliability for reused hardware must remain the same (or nearly the same) as "first use" hardware. Therefore, it is imperative that a systematic approach be established to enhance the development of reusable systems. However, before the decision can be made on whether it is more beneficial to reuse hardware or to replace it, the parameters that are needed to deem hardware worthy of reuse must be identified. For reusable hardware to be successful, the factors that must be considered are reliability (integrity, life, number of uses), operability (maintenance, accessibility), and cost (procurement, retrieval, refurbishment). These three factors are essential to the successful implementation of reusability while enabling the ability to meet performance goals. Past and present strategies and attempts at reuse within the space industry will be examined to identify important attributes of reusability that can be used to evaluate hardware when contemplating reusable versus expendable options. This paper will examine why reuse must be stated as an initial requirement rather than included as an afterthought in the final design. Late in the process, changes in the overall objective/purpose of components typically have adverse effects that potentially negate the benefits. A methodology for assessing the viability of reusing hardware will be presented by using the Space Shuttle Main Engine (SSME) to validate the approach. Because reliability, operability, and costs are key drivers in making this critical decision, they will be used to assess requirements for reuse as applied to components of the SSME.

  9. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  10. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  11. Heat Capacity Mapping Radiometer (HCMR) data processing algorithm, calibration, and flight performance evaluation

    Science.gov (United States)

    Bohse, J. R.; Bewtra, M.; Barnes, W. L.

    1979-01-01

    The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.
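
    A generic two-point radiometric calibration sketch in the spirit of the count-to-radiance correction described: derive gain and offset from a cold (space) and a hot (blackbody) reference reading and apply them to scene counts. The reference values are invented, not actual HCMR coefficients.

        # Generic two-point radiometric calibration sketch: convert detector counts
        # to radiance using a cold (space) and a hot (blackbody) reference reading.
        def calibrate(counts, c_space, c_bb, L_space, L_bb):
            """Linear count-to-radiance conversion from two reference points."""
            gain = (L_bb - L_space) / (c_bb - c_space)
            offset = L_space - gain * c_space
            return gain * counts + offset

        # hypothetical reference readings for one scan line
        c_space, c_bb = 120.0, 3900.0     # counts viewing space and the blackbody
        L_space, L_bb = 0.0, 9.5          # corresponding radiances [W m-2 sr-1 um-1]
        print(calibrate(2000.0, c_space, c_bb, L_space, L_bb))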

  12. Hardware characteristic and application

    International Nuclear Information System (INIS)

    Gu, Dong Hyeon

    1990-03-01

    The contents of this book cover the system board (memory, performance, system timer, system clock, and specifications); the coprocessor, including its programming interface and hardware interface; the power supply (input and output, protection for DC outputs, and the Power Good signal); the 84-key and 101/102-key keyboards; the BIOS system; the 80286 instruction set and the 80287 coprocessor; characters, keystrokes, and colors; and the communication and compatibility of the IBM personal computer with respect to application direction, multitasking, and code for distinguishing between systems.

  13. Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)

    Science.gov (United States)

    Niewoehner, Kevin R.; Carter, John (Technical Monitor)

    2001-01-01

    The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.

  14. 75 FR 73014 - Notice of Public Meeting: Updating the Flight Instructor Renewal Process To Enhance Safety of Flight

    Science.gov (United States)

    2010-11-29

    ..., questions regarding the logistics of the meeting, and any technical questions should be directed to... been reviewing indicators that suggest that the processes currently in place may lack sufficient...

  15. Robotic welding at the Marshall Space Flight Center

    Science.gov (United States)

    Jones, Clyde S.

    1992-01-01

    The Marshall Space Flight Center is developing welding and robotics technologies to improve manufacturing of space hardware. Commercial robots are used for these development programs, but they are teamed with advanced sensors, process controls, and computer simulation to form highly productive manufacturing systems. Application of welding robotics and controls to structural welding for the space shuttle and space station Freedom programs is addressed. Several advanced welding process sensors under development for application to space hardware are discussed, as well as the application of commercial robotic simulation software to provide offline programming.

  16. Implementation of an ergonomics intervention in a Swedish flight baggage handling company—A process evaluation

    Science.gov (United States)

    Mathiassen, Svend Erik; Larsson, Johan; Kwak, Lydia

    2018-01-01

    Objective To conduct a process evaluation of the implementation of an ergonomics training program aimed at increasing the use of loading assist devices in flight baggage handling. Methods Feasibility related to the following process items was assessed by qualitative and quantitative methods: recruitment; reach; context; dose delivered (training time and content); dose received (participants’ engagement); satisfaction with training; intermediate outcomes (skills, confidence and behaviors); and barriers and facilitators of the training intervention. Results Implementation proved successful regarding dose delivered, dose received and satisfaction. Confidence among participants in the training program in using and talking about devices, observed use of devices among colleagues, and internal feedback on work behavior increased significantly (p<0.01). Main facilitators were self-efficacy, motivation, and perceived utility of training among the trainees. Barriers included lack of peer support, opportunities to observe and practice behaviors, and follow-up activities, as well as staff reduction and job insecurity. Conclusions In identifying important barriers and facilitators for a successful outcome, this study can help support the effectiveness of future interventions. PMID:29513671

  17. COMPUTER HARDWARE MARKING

    CERN Multimedia

    Groupe de protection des biens

    2000-01-01

    As part of the campaign to protect CERN property and for insurance reasons, all computer hardware belonging to the Organization must be marked with the words 'PROPRIETE CERN'.IT Division has recently introduced a new marking system that is both economical and easy to use. From now on all desktop hardware (PCs, Macintoshes, printers) issued by IT Division with a value equal to or exceeding 500 CHF will be marked using this new system.For equipment that is already installed but not yet marked, including UNIX workstations and X terminals, IT Division's Desktop Support Service offers the following services free of charge:Equipment-marking wherever the Service is called out to perform other work (please submit all work requests to the IT Helpdesk on 78888 or helpdesk@cern.ch; for unavoidable operational reasons, the Desktop Support Service will only respond to marking requests when these coincide with requests for other work such as repairs, system upgrades, etc.);Training of personnel designated by Division Leade...

  18. Using the World Wide Web for GIDEP Problem Data Processing at Marshall Space Flight Center

    Science.gov (United States)

    McPherson, John W.; Haraway, Sandra W.; Whirley, J. Don

    1999-01-01

    Since April 1997, Marshall Space Flight Center has been using electronic transfer and the web to support our processing of the Government-Industry Data Exchange Program (GIDEP) and NASA ALERT information. Specific aspects include: (1) Extraction of ASCII text information from GIDEP for loading into Word documents for e-mail to ALERT actionees; (2) Downloading of GIDEP form image formats in Adobe Acrobat (.pdf) for internal storage and display on the MSFC ALERT web page; (3) Linkage of stored GIDEP problem forms with summary information for access from the MSFC ALERT Distribution Summary Chart or from an html table of released MSFC ALERTs; (4) Archival of historic ALERTs for reference by GIDEP ID, MSFC ID, or MSFC release date; (5) On-line tracking of ALERT response status using a Microsoft Access database and the web; (6) On-line response to ALERTs from MSFC actionees through interactive web forms. The technique, benefits, effort, coordination, and lessons learned for each aspect are covered herein.

  19. Software control and system configuration management - A process that works

    Science.gov (United States)

    Petersen, K. L.; Flores, C., Jr.

    1983-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  20. [Adaptive process in Vietnamese military pilots during the flights on modern Russian aircraft].

    Science.gov (United States)

    Ushakov, I V; Pham Xuan, Nihn; Bukhtiaiarov, I V; Ushakov, B N

    2013-04-01

    The health status of 156 Vietnamese military pilots flying modern Russian jet aircraft (Su-22, Su-27, Su-30, MiG-21B) was studied. The results showed that unfavorable factors in the working environment (acceleration, radiation, high temperature, humidity, noise) affect the health of pilots during flight, leading to deterioration of professional health and of physiological functions (cardiovascular, respiratory and nervous systems) and to obesity in pilots over 35 years of age. Based on these studies, measures are suggested for protecting health, ensuring flight safety and prolonging the flight careers of pilots (training in a decompression chamber, vestibular training), together with a balanced food ration for the prevention of occupational diseases.

  1. Open hardware for open science

    CERN Document Server

    CERN Bulletin

    2011-01-01

    Inspired by the open source software movement, the Open Hardware Repository was created to enable hardware developers to share the results of their R&D activities. The recently published CERN Open Hardware Licence offers the legal framework to support this knowledge and technology exchange.   Two years ago, a group of electronics designers led by Javier Serrano, a CERN engineer, working in experimental physics laboratories created the Open Hardware Repository (OHR). This project was initiated in order to facilitate the exchange of hardware designs across the community in line with the ideals of “open science”. The main objectives include avoiding duplication of effort by sharing results across different teams that might be working on the same need. “For hardware developers, the advantages of open hardware are numerous. For example, it is a great learning tool for technologies some developers would not otherwise master, and it avoids unnecessary work if someone ha...

  2. Foundations of hardware IP protection

    CERN Document Server

    Torres, Lionel

    2017-01-01

    This book provides a comprehensive and up-to-date guide to the design of security-hardened, hardware intellectual property (IP). Readers will learn how IP can be threatened, as well as protected, by using means such as hardware obfuscation/camouflaging, watermarking, fingerprinting (PUF), functional locking, remote activation, hidden transmission of data, hardware Trojan detection, protection against hardware Trojan, use of secure element, ultra-lightweight cryptography, and digital rights management. This book serves as a single-source reference to design space exploration of hardware security and IP protection. · Provides readers with a comprehensive overview of hardware intellectual property (IP) security, describing threat models and presenting means of protection, from integrated circuit layout to digital rights management of IP; · Enables readers to transpose techniques fundamental to digital rights management (DRM) to the realm of hardware IP security; · Introduce designers to the concept of salutar...

  3. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
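
    As a brief, hedged illustration of the recurrence and traceback that such an accelerator implements in hardware (a generic Needleman-Wunsch software sketch with illustrative scoring values, not the published architecture), consider:

        # Minimal global alignment with traceback (Needleman-Wunsch).
        def global_align(a, b, match=2, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            # score[i][j]: best score aligning a[:i] with b[:j]
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            # Traceback from the bottom-right corner to recover the alignment.
            out_a, out_b, i, j = [], [], n, m
            while i > 0 or j > 0:
                if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
                    out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
                elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                    out_a.append(a[i - 1]); out_b.append('-'); i -= 1
                else:
                    out_a.append('-'); out_b.append(b[j - 1]); j -= 1
            return score[n][m], ''.join(reversed(out_a)), ''.join(reversed(out_b))

        print(global_align("GATTACA", "GCATGCU"))

    The record describes its architecture as space-efficient; this sketch, by contrast, stores the full score matrix, which is exactly the kind of memory bottleneck the hardware design is said to avoid.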

  4. Processes and Procedures of the Higher Education Programs at Marshall Space Flight Center

    Science.gov (United States)

    Heard, Pamala D.

    2002-01-01

    The purpose of my research was to investigate the policies, processes, procedures and timelines for the higher education programs at Marshall Space Flight Center. The three higher education programs that comprised this research included: the Graduate Student Researchers Program (GSRP), the National Research Council/Resident Research Associateships Program (NRC/RRA) and the Summer Faculty Fellowship Program (SFFP). The GSRP awards fellowships each year to promising U.S. graduate students whose research interests coincide with NASA's mission. Fellowships are awarded for one year and are renewable for up to three years to competitively selected students. Each year, the award provides students the opportunity to spend a period in residence at a NASA center using that installation's unique facilities. The program is renewable for three years; students must reapply. The National Research Council conducts the Resident Research Associateships Program (NRC/RRA), a national competition to identify outstanding recent postdoctoral scientists and engineers, and experienced senior scientists and engineers, for tenure as guest researchers at NASA centers. The Resident Research Associateship Program provides an opportunity for recipients of doctoral degrees to concentrate their research in association with NASA personnel, often as a culmination to formal career preparation. The program also affords established scientists and engineers an opportunity for research without the interruptions and distracting assignments generated by permanent career positions. All opportunities for research at NASA Centers are open to citizens of the U.S. and to legal permanent residents. The Summer Faculty Fellowship Program (SFFP) is conducted each summer. NASA awards research fellowships to university faculty through the NASA/American Society for Engineering Education. The program is designed to promote an exchange of ideas between university faculties, NASA scientists and engineers. Selected

  5. Hardware Support for Embedded Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    The general Java runtime environment is resource hungry and unfriendly for real-time systems. To reduce the resource consumption of Java in embedded systems, direct hardware support of the language is a valuable option. Furthermore, an implementation of the Java virtual machine in hardware enables worst-case execution time analysis of Java programs. This chapter gives an overview of current approaches to hardware support for embedded and real-time Java.

  6. Mission Management Computer and Sequencing Hardware for RLV-TD HEX-01 Mission

    Science.gov (United States)

    Gupta, Sukrat; Raj, Remya; Mathew, Asha Mary; Koshy, Anna Priya; Paramasivam, R.; Mookiah, T.

    2017-12-01

    The Reusable Launch Vehicle-Technology Demonstrator Hypersonic Experiment (RLV-TD HEX-01) mission posed some unique challenges in the design and development of avionics hardware. This work presents the details of the mission-critical avionics hardware, mainly the Mission Management Computer (MMC) and the sequencing hardware. The Navigation, Guidance and Control (NGC) chain for RLV-TD is dual redundant, with cross-strapped Remote Terminals (RTs) interfaced through a MIL-STD-1553B bus. The MMC is the Bus Controller on the 1553 bus and performs the functions of GPS-aided navigation, guidance, digital autopilot and sequencing for the RLV-TD launch vehicle at different periodicities (10, 20, 500 ms). Digital autopilot execution in the MMC with a periodicity of 10 ms (in the ascent phase) was introduced for the first time and successfully demonstrated in the flight. The MMC is built around the Intel i960 processor and has inbuilt fault tolerance features such as ECC for memories. Fault Detection and Isolation schemes are implemented to isolate a failed MMC. The sequencing hardware comprises the Stage Processing System (SPS) and the Command Execution Module (CEM). The SPS is an RT on the 1553 bus which receives the sequencing and control related commands from the MMCs and posts them to downstream modules, after proper error handling, for final execution. The SPS is designed as a high reliability system by incorporating various fault tolerance and fault detection features. The CEM is a relay-based module for sequence command execution.
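
    As a hedged illustration of the multi-rate execution mentioned above (the record does not describe the MMC scheduler internals; the task names and structure here are hypothetical), a simple rate-group arrangement built on a 10 ms minor frame might look like:

        # Illustrative rate-group scheduler: 10 ms minor frame, with slower
        # groups run every 2nd and every 50th frame (20 ms and 500 ms).
        # Task names are placeholders, not the actual MMC software.
        MINOR_FRAME_MS = 10

        def run_autopilot():   pass   # 10 ms group (e.g. ascent-phase digital autopilot)
        def run_navigation():  pass   # 20 ms group (e.g. navigation and guidance)
        def run_sequencing():  pass   # 500 ms group (e.g. sequencing checks)

        def minor_frame(k):
            run_autopilot()              # every frame   -> 10 ms period
            if k % 2 == 0:
                run_navigation()         # every 2nd     -> 20 ms period
            if k % 50 == 0:
                run_sequencing()         # every 50th    -> 500 ms period

        for k in range(100):             # simulate one second of minor frames
            minor_frame(k)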

  7. High-precision optical systems with inexpensive hardware: a unified alignment and structural design approach

    Science.gov (United States)

    Winrow, Edward G.; Chavez, Victor H.

    2011-09-01

    High-precision opto-mechanical structures have historically been plagued by high costs for both hardware and the associated alignment and assembly process. This problem is especially true for space applications where only a few production units are produced. A methodology for optical alignment and optical structure design is presented which shifts the mechanism of maintaining precision from tightly toleranced, machined flight hardware to reusable, modular tooling. Using the proposed methodology, optical alignment error sources are reduced by the direct alignment of optics through their surface retroreflections (pips) as seen through a theodolite. Optical alignment adjustments are actualized through motorized, sub-micron precision actuators in 5 degrees of freedom. Optical structure hardware costs are reduced through the use of simple shapes (tubes, plates) and repeated components. This approach produces significantly cheaper hardware and more efficient assembly without sacrificing alignment precision or optical structure stability. The design, alignment plan and assembly of a 4" aperture, carbon fiber composite, Schmidt-Cassegrain concept telescope is presented.

  8. CHeCS (Crew Health Care Systems): International Space Station (ISS) Medical Hardware Catalog. Version 10.0

    Science.gov (United States)

    2011-01-01

    The purpose of this catalog is to provide a detailed description of each piece of hardware in the Crew Health Care System (CHeCS), including subpacks associated with the hardware, and to briefly describe the interfaces between the hardware and the ISS. The primary user of this document is the Space Medicine/Medical Operations ISS Biomedical Flight Controllers (ISS BMEs).

  9. Hardware assisted hypervisor introspection.

    Science.gov (United States)

    Shi, Jiangyong; Yang, Yuexiang; Tang, Chuan

    2016-01-01

    In this paper, we introduce hypervisor introspection, an out-of-box way to monitor the execution of hypervisors. Similar to virtual machine introspection, which has been proposed to protect virtual machines in an out-of-box way over the past decade, hypervisor introspection can be used to protect hypervisors, which are the basis of cloud security. Virtual machine introspection tools are usually deployed either in the hypervisor or in privileged virtual machines, which might also be compromised. By utilizing hardware support including nested virtualization, EPT protection and #BP, we are able to monitor all hypercalls belonging to the virtual machines of one hypervisor, including those of the privileged virtual machine, even when the hypervisor is compromised. In addition, a hypercall injection method is used to simulate hypercall-based attacks and evaluate the performance of our method. Experiment results show that our method can effectively detect hypercall-based attacks with some performance cost. Lastly, we discuss our future approaches to reducing the performance cost and preventing a compromised hypervisor from detecting the existence of our introspector, along with some new scenarios in which to apply our hypervisor introspection system.

  10. LHCb: Hardware Data Injector

    CERN Multimedia

    Delord, V; Neufeld, N

    2009-01-01

    The LHCb High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 1 MHz of events which have been selected previously by the first-level hardware trigger. The selected events are consolidated into files and then sent to permanent storage for subsequent analysis on the Grid. The goal of the upgrade of the LHCb readout is to lift the limitation to 1 MHz. This means speeding up the DAQ to 40 MHz. Such a DAQ system will certainly employ 10 Gigabit or faster technologies and might also need new networking protocols: a customized TCP or proprietary solutions. A test module is presented which integrates into the existing LHCb infrastructure. It is a 10-Gigabit traffic generator, flexible enough to generate LHCb's raw data packets using dummy data or simulated data. These data are seen by the DAQ as real data coming from the sub-detectors. The implementation is based on an FPGA with a 10 Gigabit Ethernet interface. This module is integrated into the experiment control system. The architecture, ...

  11. Quantum neuromorphic hardware for quantum artificial intelligence

    Science.gov (United States)

    Prati, Enrico

    2017-08-01

    The development of machine learning methods based on deep learning has boosted the field of artificial intelligence towards unprecedented achievements and applications in several fields. Such prominent results were obtained in parallel with the first successful demonstrations of fault tolerant hardware for quantum information processing. To what extent deep learning can take advantage of hardware based on qubits behaving as a universal quantum computer is an open question under investigation. Here I review the convergence between the two fields towards the implementation of advanced quantum algorithms, including quantum deep learning.

  12. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces the successful application of soft computing techniques to solve the many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  13. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  14. Processing of acquisition data for a time of flight positron tomograph

    International Nuclear Information System (INIS)

    Robert, G.

    1987-10-01

    After a review of basic principles concerning the time of flight positron tomography, the LETI positron tomograph is briefly described. For performance optimization (acquisition, calibration, image reconstruction), various specialized operators have been designed: the realization of the acquisition system is presented [fr

  15. Secure coupling of hardware components

    NARCIS (Netherlands)

    Hoepman, J.H.; Joosten, H.J.M.; Knobbe, J.W.

    2011-01-01

    A method and a system for securing communication between at least a first and a second hardware components of a mobile device is described. The method includes establishing a first shared secret between the first and the second hardware components during an initialization of the mobile device and,

  16. MSAP Hardware Verification: Testing Multi-Mission System Architecture Platform Hardware Using Simulation and Bench Test Equipment

    Science.gov (United States)

    Crossin, Kent R.

    2005-01-01

    The Multi-Mission System Architecture Platform (MSAP) project aims to develop a system of hardware and software that will provide the core functionality necessary in many JPL missions and can be tailored to accommodate mission-specific requirements. The MSAP flight hardware is being developed in the Verilog hardware description language, allowing developers to simulate their design before releasing it to a field programmable gate array (FPGA). FPGAs can be updated in a matter of minutes, drastically reducing the time and expense required to produce traditional application-specific integrated circuits. Bench test equipment connected to the FPGAs can then probe the hardware and run Tcl scripts on it. The Verilog and Tcl code can be reused or modified with each design. These steps are effective in confirming that the design operates according to specifications.

  17. 2nd Generation QUATARA Flight Computer Project

    Science.gov (United States)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single-core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting/receiving from/to a ground station. In addition, this flight computer will be designed to be fault tolerant, both by creating a robust physical hardware connection and by using a software voting scheme to determine the processors' performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in a low-Earth orbit.
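
    The voting idea can be sketched as follows; this is a minimal illustration assuming four redundant channels and a simple agreement tolerance, not the SLS-derived scheme referenced above:

        # Hedged sketch of a majority vote across redundant processor outputs.
        def vote(values, tolerance=1e-6):
            """Return the value agreed on by a strict majority of channels, else None."""
            groups = []                      # cluster outputs that agree within tolerance
            for v in values:
                for g in groups:
                    if abs(g[0] - v) <= tolerance:
                        g.append(v)
                        break
                else:
                    groups.append([v])
            best = max(groups, key=len)
            if len(best) > len(values) // 2:     # e.g. at least 3 of 4 channels agree
                return sum(best) / len(best)
            return None                          # no agreement: flag a fault

        print(vote([1.000000, 1.0000004, 0.999999, 7.3]))   # -> about 1.0
        print(vote([1.0, 2.0, 3.0, 4.0]))                   # -> None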

  18. Hardware descriptions of the I and C systems for NPP

    International Nuclear Information System (INIS)

    Lee, Cheol Kwon; Oh, In Suk; Park, Joo Hyun; Kim, Dong Hoon; Han, Jae Bok; Shin, Jae Whal; Kim, Young Bak

    2003-09-01

    The hardware specifications for the I and C systems of the SNPP (Standard Nuclear Power Plant) are reviewed in order to acquire the hardware requirements and specifications for KNICS (Korea Nuclear Instrumentation and Control System). In the study, we investigated hardware requirements, hardware configuration, hardware specifications, man-machine hardware requirements, interface requirements with other systems, and data communication requirements that are applicable to the SNPP. We reviewed these aspects for control systems, protection systems, monitoring systems, information systems, and process instrumentation systems. Through the study, we described the requirements and specifications of digital systems, focusing on the microprocessor and communication interface, and repeated this for analog systems, focusing on the manufacturing companies. It is expected that the experience acquired from this research will provide vital input for the development of KNICS.

  19. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors’ approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  20. In-line monitoring of effluents from HTGR fuel particle preparation processes using a time-of-flight mass spectrometer

    International Nuclear Information System (INIS)

    Lee, D.A.; Costanzo, D.A.; Stinton, D.P.; Carpenter, J.A.; Rainey, W.T. Jr.; Canada, D.C.; Carter, J.A.

    1976-08-01

    The carbonization, conversion, and coating processes in the manufacture of HTGR fuel particles have been studied with the use of a time-of-flight mass spectrometer. Non-condensable effluents from these fluidized-bed processes have been monitored continuously from the beginning to the end of the process. The processes which have been monitored are these: uranium-loaded ion exchange resin carbonization, the carbothermic reduction of UO2 to UC2, buffer and low temperature isotropic pyrocarbon coatings of fuel kernels, SiC coating of the kernels, and high-temperature particle annealing. Changes in concentrations of significant molecules with time and temperature have been useful in the interpretation of reaction mechanisms and optimization of process procedures.

  1. Hardware and software for image acquisition in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, E.L.; Vilar, G.; Silva, H.S.

    1992-01-01

    A system for image acquisition and processing in nuclear medicine is presented, including the hardware and the software for acquisition. The hardware consists of an analog-to-digital conversion card, developed using wire-wrap. Its function is to digitize the analog signals provided by the gamma camera. Acquisitions are made in list or frame mode. (C.G.C.)

  2. Server hardware trends

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will cover the status of the current and upcoming offers on server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be quickly covered.

  3. Space Flight Operations Center local area network

    Science.gov (United States)

    Goodman, Ross V.

    1988-01-01

    The existing Mission Control and Computer Center at JPL will be replaced by the Space Flight Operations Center (SFOC). One part of the SFOC is the LAN-based distribution system. The purpose of the LAN is to distribute the processed data among the various elements of the SFOC. The SFOC LAN will provide a robust subsystem that will support the Magellan launch configuration and future project adaptation. Its capabilities include (1) a proven cable medium as the backbone for the entire network; (2) hardware components that are reliable, varied, and follow OSI standards; (3) accurate and detailed documentation for fault isolation and future expansion; and (4) proven monitoring and maintenance tools.

  4. NDAS Hardware Translation Layer Development

    Science.gov (United States)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project is aimed at replacing all DAS software for NASA's rocket testing facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used at E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
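
    The per-stand driver-package idea described above can be sketched roughly as a plugin registry; all class and function names below are illustrative assumptions, not the actual NDAS code:

        # A registry maps each test stand to its driver package; packages can
        # share drivers for hardware common to several stands (e.g. the
        # Preston 8300AU used at A1, A2, and B2).
        class Preston8300AUDriver:
            def read(self, channel):
                return 0.0               # placeholder acquisition call

        class E3DriverPackage:
            drivers = {}                 # E3-specific hardware drivers would go here

        class B2DriverPackage:
            drivers = {"signal_conditioner": Preston8300AUDriver()}

        DRIVER_PACKAGES = {"E3": E3DriverPackage, "B2": B2DriverPackage}

        def load_drivers(stand):
            # The DAS software simply points at the package for the stand in use.
            return DRIVER_PACKAGES[stand]

        package = load_drivers("B2")
        print(package.drivers["signal_conditioner"].read(channel=3))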

  5. Computer hardware for radiologists: Part I

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called “buses”. The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute “programs”. A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  6. Computer hardware for radiologists: Part I

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology information system (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processor unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration.

  7. DARPA/USAF/USN J-UCAS X-45A System Demonstration Program: A Review of Flight Test Site Processes and Personnel

    Science.gov (United States)

    Cosentino, Gary B.

    2008-01-01

    The Joint Unmanned Combat Air Systems (J-UCAS) program is a collaborative effort between the Defense Advanced Research Projects Agency (DARPA), the US Air Force (USAF) and the US Navy (USN). Together they have reviewed X-45A flight test site processes and personnel as part of a system demonstration program for the UCAV-ATD Flight Test Program. The goal was to provide a disciplined, controlled process for system integration and testing and for demonstration flight tests. NASA's Dryden Flight Research Center (DFRC) acted as the project manager during this effort and was tasked with the responsibilities of range and ground safety, the provision of flight test support and infrastructure, and the monitoring of technical and engineering tasks. DFRC also contributed its engineering knowledge through contributions in the areas of autonomous ground taxi control development, structural dynamics testing and analysis, and the provision of other flight test support including telemetry data, tracking radars, and communications and control support equipment. The Air Force Flight Test Center acted as the Deputy Project Manager in this effort and was responsible for the provision of system safety support and airfield management and air traffic control services, among other supporting roles. The T-33 served as a J-UCAS surrogate aircraft and demonstrated flight characteristics similar to those of the X-45A. The surrogate served as a significant risk reduction resource, providing mission planning verification, range safety mission assessment and team training, among other contributions.

  8. Electronic Flight Bag (EFB) 2015 Industry Survey.

    Science.gov (United States)

    2015-10-01

    This document provides an overview of Electronic Flight Bag (EFB) hardware and software capabilities, including portable electronic devices (PEDs) used as EFBs, as of July 2015. This document updates and replaces the Volpe Center's previous EFB ind...

  9. Residual mean first-passage time for jump processes: theory and applications to Levy flights and fractional Brownian motion

    International Nuclear Information System (INIS)

    Tejedor, V; Benichou, O; Voituriez, R; Metzler, Ralf

    2011-01-01

    We derive a functional equation for the mean first-passage time (MFPT) of a generic self-similar Markovian continuous process to a target in a one-dimensional domain and obtain its exact solution. We show that the obtained expression of the MFPT for continuous processes is actually different from the large system size limit of the MFPT for discrete jump processes allowing leapovers. In the case considered here, the asymptotic MFPT admits non-vanishing corrections, which we call residual MFPT. The case of Levy flights with diverging variance of jump lengths is investigated in detail, in particular, with respect to the associated leapover behavior. We also show numerically that our results apply with good accuracy to fractional Brownian motion, despite its non-Markovian nature.

  10. Hardware independence checkout software

    Science.gov (United States)

    Cameron, Barry W.; Helbig, H. R.

    1990-01-01

    ACSI has developed a program utilizing CLIPS to assess compliance with various programming standards. Essentially the program parses C code to extract the names of all function calls. These are asserted as CLIPS facts which also include information about line numbers, source file names, and called functions. Rules have been devised to establish functions called that have not been defined in any of the source parsed. These are compared against lists of standards (represented as facts) using rules that check intersections and/or unions of these. By piping the output into other processes the source is appropriately commented by generating and executing parsed scripts.
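
    The same idea can be sketched in a few lines of Python (the original tool is built on CLIPS; the regular expressions and the sample source below are illustrative assumptions):

        # Extract the names of functions called in C source and report calls
        # that are neither defined locally nor on an allowed standards list.
        import re

        CALL = re.compile(r'\b([A-Za-z_]\w*)\s*\(')
        KEYWORDS = {"if", "for", "while", "switch", "return", "sizeof"}

        def check_calls(c_source, allowed):
            defined = set(re.findall(r'\b([A-Za-z_]\w*)\s*\([^;{]*\)\s*\{', c_source))
            called = set(CALL.findall(c_source)) - KEYWORDS
            return sorted(called - defined - set(allowed))

        src = """
        int helper(int x) { return x + 1; }
        int main(void) { int y = helper(2); printf("%d", vendor_call(y)); return 0; }
        """
        print(check_calls(src, allowed={"printf"}))   # -> ['vendor_call']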

  11. Operational experience and design recommendations for teleoperated flight hardware

    Science.gov (United States)

    Burgess, T. W.; Kuban, D. P.; Hankins, W. W.; Mixon, R. W.

    1988-01-01

    Teleoperation (remote manipulation) will someday supplement/minimize astronaut extravehicular activity in space to perform such tasks as satellite servicing and repair, and space station construction and servicing. This technology is being investigated by NASA with teleoperation of two space-related tasks having been demonstrated at the Oak Ridge National Lab. The teleoperator experiments are discussed and the results of these experiments are summarized. The related equipment design recommendations are also presented. In addition, a general discussion of equipment design for teleoperation is also presented.

  12. Fault Tolerant Hardware/Software Architecture for Flight Critical Function

    Science.gov (United States)

    1985-09-01

    The Ada language is named after Augusta Ada Byron, Countess of Lovelace, often recognized as the first programmer; the first Ada language reference manual was published in 1981. Referenced material recoverable from the scanned record includes "A Survivable Distributed Computing System for Embedded Application Programs Written in Ada," Ada Letters, Vol. III, No. 3, November/December 1983, and sections on dependable avionic data transmission and multi-computer fault-tolerant systems using Ada.

  13. Environmental Friendly Coatings & Corrosion Prevention for Flight Hardware

    Data.gov (United States)

    National Aeronautics and Space Administration — The objectives for this project are to identify, test, and develop qualification criteria for environmentally friendly corrosion protective coatings and corrosion...

  14. Image Processing Software

    Science.gov (United States)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  15. Hardware for mammography

    International Nuclear Information System (INIS)

    Rozhkova, N.I.; Chikirdin, Eh.G.; Ryudiger, Yu.G.; Kochetova, G.P.; Lisachenko, I.V.; Yakobs, O.Eh.

    2000-01-01

    Comparative studies of various visualization means, in particular intensifying screens and films, were carried out using quantitative methods for determining small details on images, including measurements of the corresponding exposures and absorbed doses and verification of the conclusions through analysis of clinical observations. It is shown that the technical equipment of a modern mammography room should include an X-ray mammographic apparatus providing high image quality at low dose loads, special film holders meeting mammography requirements, the corresponding X-ray film, and automatic photolaboratory processing, all provided by the same company. Under such conditions the quality of the images is guaranteed and defects and errors in image interpretation are excluded. Modern computerized information technologies for work with medical images, based on new generations of diagnostic instrumentation with digital video channels and computerized workstations, resolve many medical, technological, organizational and financial problems [ru]

  16. Flight and Integrated Vehicle Testing: Laying the Groundwork for the Next Generation of Space Exploration Launch Vehicles

    Science.gov (United States)

    Taylor, J. L.; Cockrell, C. E.

    2009-01-01

    Integrated vehicle testing will be critical to ensuring proper vehicle integration of the Ares I crew launch vehicle and Ares V cargo launch vehicle. The Ares Projects, based at Marshall Space Flight Center in Alabama, created the Flight and Integrated Test Office (FITO) as a separate team to ensure that testing is an integral part of the vehicle development process. As its name indicates, FITO is responsible for managing flight testing for the Ares vehicles. FITO personnel are well on the way toward assembling and flying the first flight test vehicle of Ares I, the Ares I-X. This suborbital development flight will evaluate the performance of Ares I from liftoff to first stage separation, testing flight control algorithms, vehicle roll control, separation and recovery systems, and ground operations. Ares I-X is now scheduled to fly in summer 2009. The follow-on flight, Ares I-Y, will test a full five-segment first stage booster and will include cryogenic propellants in the upper stage, an upper stage engine simulator, and an active launch abort system. The following flight, Orion 1, will be the first flight of an active upper stage and upper stage engine, as well as the first uncrewed flight of an Orion spacecraft into orbit. The Ares Projects are using an incremental buildup of flight capabilities prior to the first operational crewed flight of Ares I and the Orion crew exploration vehicle in 2015. In addition to flight testing, the FITO team will be responsible for conducting hardware, software, and ground vibration tests of the integrated launch vehicle. These efforts will include verifying hardware, software, and ground handling interfaces. Through flight and integrated testing, the Ares Projects will identify and mitigate risks early as the United States prepares to take its next giant leaps to the Moon and beyond.

  17. Raspberry Pi hardware projects 1

    CERN Document Server

    Robinson, Andrew

    2013-01-01

    Learn how to take full advantage of all of Raspberry Pi's amazing features and functions-and have a blast doing it! Congratulations on becoming a proud owner of a Raspberry Pi, the credit-card-sized computer! If you're ready to dive in and start finding out what this amazing little gizmo is really capable of, this ebook is for you. Taken from the forthcoming Raspberry Pi Projects, Raspberry Pi Hardware Projects 1 contains three cool hardware projects that let you have fun with the Raspberry Pi while developing your Raspberry Pi skills. The authors - PiFace inventor, Andrew Robinson and Rasp

  18. Data processing workflow for time of flight polarized neutrons inelastic measurements

    Energy Technology Data Exchange (ETDEWEB)

    Savici, Andrei T [ORNL; Zaliznyak, Igor [Brookhaven National Laboratory (BNL); Garlea, Vasile O [ORNL; Winn, Barry L [ORNL

    2017-01-01

    We discuss the data processing workflow for polarized neutron scattering measurements performed at HYSPEC spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory. The effects of the focusing Heusler crystal polarizer and the wide-angle supermirror transmission polarization analyzer are added to the data processing flow of the non-polarized case. The implementation is done using the Mantid software package.

  19. Commodity hardware and software summary

    International Nuclear Information System (INIS)

    Wolbers, S.

    1997-04-01

    A review is given of the talks and papers presented in the Commodity Hardware and Software Session at the CHEP97 conference. An examination of the trends leading to the consideration of PC's for HEP is given, and a status of the work that is being done at various HEP labs and Universities is given

  20. Hardware Algorithm Implementation for Mission Specific Processing

    Science.gov (United States)

    2008-03-01

    have new equipment in the field in a matter of days, as opposed to the old way of doing business, which could take 1-2 years for a weapons system to be...difficult for the War Fighter to do their mission without wondering if their batteries are going to sustain them throughout their mission. There is a need to...knowledge about VLSI technology and an understanding of VHDL, scripting, and integrating the script into the Cadence® software program or ModelSim®. The main

  1. Reconfigurable Hardware Adapts to Changing Mission Demands

    Science.gov (United States)

    2003-01-01

    A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.

  2. Test Program for Stirling Radioisotope Generator Hardware at NASA Glenn Research Center

    Science.gov (United States)

    Lewandowski, Edward J.; Bolotin, Gary S.; Oriti, Salvatore M.

    2015-01-01

    Stirling-based energy conversion technology has demonstrated the potential of high efficiency and low mass power systems for future space missions. This capability is beneficial, if not essential, to making certain deep space missions possible. Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-W radioisotope power system. A variety of flight-like hardware, including Stirling convertors, controllers, and housings, was designed and built under the ASRG flight development project. To support future Stirling-based power system development, NASA has proposals that, if funded, will allow this hardware to go on test at the NASA Glenn Research Center. While future flight hardware may not be identical to the hardware developed under the ASRG flight development project, many components will likely be similar, and system architectures may have heritage to ASRG. Thus, the importance of testing the ASRG hardware to the development of future Stirling-based power systems cannot be overstated. This proposed testing will include performance testing, extended operation to establish an extensive reliability database, and characterization testing to quantify subsystem and system performance and better understand system interfaces. This paper details this proposed test program for Stirling radioisotope generator hardware at NASA Glenn. It explains the rationale behind the proposed tests and how these tests will meet the stated objectives.

  3. Neutron Imaging for Selective Laser Melting Inconel Hardware with Internal Passages

    Science.gov (United States)

    Tramel, Terri L.; Norwood, Joseph K.; Bilheux, Hassina

    2014-01-01

    Additive Manufacturing is showing great promise for the development of new innovative designs and a large potential life cycle cost reduction for the Aerospace Industry. However, more development work is required to move this technology into space flight hardware production. With selective laser melting (SLM), hardware that once consisted of multiple, carefully machined and inspected pieces joined together can be made in one part. However, standard inspection techniques cannot be used to verify that the internal passages are within dimensional tolerances or surface finish requirements. NASA/MSFC traveled to Oak Ridge National Laboratory's (ORNL) Spallation Neutron Source to perform non-destructive, proof-of-concept imaging measurements to assess the capability to characterize internal dimensional tolerances and internal passage surface roughness. This presentation will describe 1) the goals of this proof-of-concept testing, 2) the lessons learned when designing and building the Inconel 718 test specimens to minimize beam time, 3) the neutron imaging test setup and test procedure used to obtain the images, 4) the initial results in images, a volume reconstruction and a video, 5) an assessment of using this imaging technique to gather real data for designing internal flow passages in SLM-manufactured aerospace hardware, and lastly 6) how proper cleaning of the internal passages is critically important. In summary, the initial results are very promising and continued development of a technique to assist in SLM development for aerospace components is desired by both NASA and ORNL. A plan forward that benefits both ORNL and NASA will also be presented, based on the promising initial results. The initial images and volume reconstruction showed that clean, clear images of the internal passage geometry are obtainable. These clear images of the internal passages of simple geometries will be compared to the build model to determine any differences. One surprising result was that a new cleaning

  4. Miracle Flights

    Science.gov (United States)

    Miracle Flights (800-359-1711): "Thousands of children have been saved, but we still have miles to go." Request a flight or donate.

  5. Solid Rocket Booster (SRB) Flight System Integration at Its Best

    Science.gov (United States)

    Wood, T. David; Kanner, Howard S.; Freeland, Donna M.; Olson, Derek T.

    2011-01-01

    The Solid Rocket Booster (SRB) element integrates all the subsystems needed for ascent flight, entry, and recovery of the combined Booster and Motor system. These include the structures, avionics, thrust vector control, pyrotechnic, range safety, deceleration, thermal protection, and retrieval systems. This represents the only human-rated, recoverable and refurbishable solid rocket ever developed and flown. Challenges included subsystem integration, thermal environments and severe loads (including water impact), sometimes resulting in hardware attrition. Several of the subsystems evolved during the program through design changes. These included the thermal protection system, range safety system, parachute/recovery system, and others. Because the system was recovered, the SRB was ideal for data and imagery acquisition, which proved essential for understanding loads, environments and system response. The three main parachutes that lower the SRBs to the ocean are the largest parachutes ever designed, and the SRBs are the largest structures ever to be lowered by parachutes. SRB recovery from the ocean was a unique process and represented a significant operational challenge; requiring personnel, facilities, transportation, and ground support equipment. The SRB element achieved reliability via extensive system testing and checkout, redundancy management, and a thorough postflight assessment process. However, the in-flight data and postflight assessment process revealed the hardware was affected much more strongly than originally anticipated. Assembly and integration of the booster subsystems required acceptance testing of reused hardware components for each build. Extensive testing was done to assure hardware functionality at each level of stage integration. Because the booster element is recoverable, subsystems were available for inspection and testing postflight, unique to the Shuttle launch vehicle. Problems were noted and corrective actions were implemented as needed

  6. Between Longing and Flight – Migratory processes in mountain areas, particularly in the European Alps

    Directory of Open Access Journals (Sweden)

    Heinz Veit

    2011-04-01

    Full Text Available Mountain areas, including the alpine region, have always seen a great deal of migration movements. Current migratory processes, however, are related to urban population's new lifestyles and housing needs, to the construction of second homes and to international tourism. They present new challenges to many alpine regions. On 20 November 2009 the Swiss Interacademic Commission for Alpine Studies (ICAS invited experts to discuss issues of Migration in Mountain Areas, particularly in the alpine ...

  7. The principles of computer hardware

    CERN Document Server

    Clements, Alan

    2000-01-01

    Principles of Computer Hardware, now in its third edition, provides a first course in computer architecture or computer organization for undergraduates. The book covers the core topics of such a course, including Boolean algebra and logic design; number bases and binary arithmetic; the CPU; assembly language; memory systems; and input/output methods and devices. It then goes on to cover the related topics of computer peripherals such as printers; the hardware aspects of the operating system; and data communications, and hence provides a broader overview of the subject. Its readable, tutorial-based approach makes it an accessible introduction to the subject. The book has extensive in-depth coverage of two microprocessors, one of which (the 68000) is widely used in education. All chapters in the new edition have been updated. Major updates include: powerful software simulations of digital systems to accompany the chapters on digital design; a tutorial-based introduction to assembly language, including many exam...

  8. BIOLOGICALLY INSPIRED HARDWARE CELL ARCHITECTURE

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed is a system comprising: - a reconfigurable hardware platform; - a plurality of hardware units defined as cells adapted to be programmed to provide self-organization and self-maintenance of the system by means of implementing a program expressed in a programming language defined as DNA language, where each cell is adapted to communicate with one or more other cells in the system, and where the system further comprises a converter program adapted to convert keywords from the DNA language to a binary DNA code; where the self-organisation comprises that the DNA code is transmitted to one or more of the cells, and each of the one or more cells is adapted to determine its function in the system; where if a fault occurs in a first cell and the first cell ceases to perform its function, self-maintenance is performed in that the system transmits information to the cells that the first cell has...

  9. Signal processing for airborne doppler radar detection of hazardous wind shear as applied to NASA 1991 radar flight experiment data

    Science.gov (United States)

    Baxa, Ernest G., Jr.

    1992-01-01

    Radar data collected during the 1991 NASA flight tests have been selectively analyzed to support research directed at developing both improved as well as new algorithms for detecting hazardous low-altitude windshear. Analysis of aircraft attitude data from several flights indicated that platform stability bandwidths were small compared to the data rate bandwidths, which should support an assumption that radar returns can be treated as short time stationary. Various approaches to detecting weather returns in the presence of ground clutter are being investigated. Non-conventional clutter rejection through spectrum mode tracking and classification algorithms is a subject of continuing research. Based upon autoregressive modeling of the radar return time sequence, this approach may offer an alternative to overcome errors in conventional pulse-pair estimates. Adaptive filtering is being evaluated as a means of rejecting clutter with emphasis on low signal-to-clutter ratio situations, particularly in the presence of discrete clutter interference. An analysis of out-of-range clutter returns is included to illustrate effects of ground clutter interference due to range aliasing for aircraft on final approach. Data are presented to indicate how aircraft groundspeed might be corrected from the radar data as well as point to an observed problem of groundspeed estimate bias variation with radar antenna scan angle. A description of how recorded clutter return data are mixed with simulated weather returns is included. This enables the researcher to run controlled experiments to test signal processing algorithms. In the summary, research efforts involving improved modelling of radar ground clutter returns and a Bayesian approach to hazard factor estimation are mentioned.
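    The conventional pulse-pair estimate referred to above derives mean Doppler velocity from the lag-one autocorrelation of the complex radar return. The following is only a rough illustrative sketch of that textbook estimator, not the flight-experiment code; the PRF and wavelength values are placeholders.

        import numpy as np

        def pulse_pair_velocity(iq, prf_hz, wavelength_m):
            """Estimate mean Doppler velocity from complex (I/Q) pulse samples
            using the classical pulse-pair (lag-1 autocorrelation) estimator."""
            r1 = np.mean(iq[1:] * np.conj(iq[:-1]))       # lag-1 autocorrelation
            # Mean velocity is proportional to the phase of R(1)
            return wavelength_m * prf_hz * np.angle(r1) / (4.0 * np.pi)

        # Toy usage with a synthetic return at +10 m/s (placeholder radar parameters)
        prf, lam = 4000.0, 0.032        # 4 kHz PRF, ~3.2 cm (X-band) wavelength
        t = np.arange(64) / prf
        true_v = 10.0
        iq = np.exp(1j * 4 * np.pi * true_v * t / lam) \
             + 0.05 * (np.random.randn(64) + 1j * np.random.randn(64))
        print(pulse_pair_velocity(iq, prf, lam))          # close to 10 m/s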

  10. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    Science.gov (United States)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, there has not been support for the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.

  11. Optical Properties of Nanosatellite Hardware

    Science.gov (United States)

    Finckenor, M. M.; Coker, R. F.

    2014-01-01

    Over the last decade, a number of very small satellites have been launched into space. These have been called nanosatellites (generally of a weight between 1 and 10 kg) or picosatellites (weight below 1 kg). Optical property measurements were made on hardware for the Space and Missile Defense Command-Operational Nanosatellite Effect (SMDC-ONE) and the Edison Demonstration of Smallsat Networks (EDSN) nanosatellites. These optical property measurements are documented here in hopes that they may benefit future nanosatellite and picosatellite programs and aid thermal analysis to ensure project goals are met, with the understanding that material properties may vary by vendor, batch, manufacturing process, and preflight handling. Where possible, complementary data are provided from ground simulations of the space environment and flight experiments, such as the Materials on International Space Station Experiment (MISSE) series. NASA gives no recommendation, endorsement, or preference, either expressed or implied, concerning materials and vendors used. Solar absorptance was calculated from spectral reflectance measurements made from 250 to 2,800 nm with an AZ Technology Laboratory Portable Spectroreflectometer (LPSR) model 300. ASTM E-903 was the test method used under normal laboratory conditions, and ASTM E-490 was the solar spectral irradiance data used to calculate solar absorptance. Most of the samples were flat, but stray light was minimized as much as possible with either a blackbody or black cloth as sample background. The LPSR has repeatability of approximately +/-1%, where solar absorptance is given as a range, that is, from actual measurements taken across the sample. Infrared emittance measurements were made with an AZ Technology TEMP 2000A infrared reflectometer. This instrument measures the total hemispheric reflectance averaged over 3-35 micrometer wavelengths. ASTM E-408 was the test method used under normal laboratory conditions. Stray light was minimized as much as possible. The TEMP 2000A has repeatability of approximately +/-0.5%, where infrared emittance is given as a range.
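    The solar absorptance calculation described above is essentially a solar-spectrum-weighted average of (1 - reflectance). The sketch below is only an illustration of that weighting, not the laboratory's own reduction software; the reflectance and irradiance arrays are placeholders standing in for measured ASTM E-903 data and the ASTM E-490 irradiance table.

        import numpy as np

        def solar_absorptance(wavelength_nm, reflectance, solar_irradiance):
            """Solar absorptance as the irradiance-weighted average of (1 - R)
            over the measured band, integrated with the trapezoid rule."""
            absorbed = (1.0 - reflectance) * solar_irradiance
            return np.trapz(absorbed, wavelength_nm) / np.trapz(solar_irradiance, wavelength_nm)

        # Placeholder spectra: a flat 10% reflectance sample under a crude irradiance curve
        wl = np.linspace(250.0, 2800.0, 200)
        refl = np.full_like(wl, 0.10)
        irr = np.exp(-((wl - 550.0) / 700.0) ** 2)   # stand-in for ASTM E-490 data
        print(solar_absorptance(wl, refl, irr))       # 0.90 for this flat sample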

  12. Hunting for hardware changes in data centres

    Science.gov (United States)

    Coelho dos Santos, M.; Steers, I.; Szebenyi, I.; Xafi, A.; Barring, O.; Bonfillou, E.

    2012-12-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  13. Hunting for hardware changes in data centres

    International Nuclear Information System (INIS)

    Coelho dos Santos, M; Steers, I; Szebenyi, I; Xafi, A; Barring, O; Bonfillou, E

    2012-01-01

    With many servers and server parts the environment of warehouse sized data centres is increasingly complex. Server life-cycle management and hardware failures are responsible for frequent changes that need to be managed. To manage these changes better a project codenamed “hardware hound” focusing on hardware failure trending and hardware inventory has been started at CERN. By creating and using a hardware oriented data set - the inventory - with detailed information on servers and their parts as well as tracking changes to this inventory, the project aims at, for example, being able to discover trends in hardware failure rates.

  14. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight.

    Science.gov (United States)

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2015-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has been already successfully tested to measure WM capacity in complex environment with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased
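    The study above bases its Kalman filter on the Boynton hemodynamic response model with covariances estimated from a calibration experiment. The fragment below is only a simplified scalar sketch of the general idea, using a random-walk state model rather than the authors' hemodynamic model; the noise variances q and r are placeholders for calibrated values.

        import numpy as np

        def kalman_smooth(y, q=1e-4, r=1e-2):
            """Scalar Kalman filter with a random-walk state model:
               x_k = x_{k-1} + w_k (variance q),  y_k = x_k + v_k (variance r)."""
            x_hat = np.zeros_like(y)
            x, p = y[0], 1.0
            for k, yk in enumerate(y):
                p = p + q                        # predict
                k_gain = p / (p + r)             # Kalman gain
                x = x + k_gain * (yk - x)        # update with measurement yk
                p = (1.0 - k_gain) * p
                x_hat[k] = x
            return x_hat

        # Toy usage: a slow hemodynamic-like drift buried in measurement noise
        t = np.linspace(0, 60, 600)
        signal = 0.5 * (1 - np.cos(2 * np.pi * t / 60))
        noisy = signal + 0.2 * np.random.randn(t.size)
        smoothed = kalman_smooth(noisy)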

  15. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight.

    Directory of Open Access Journals (Sweden)

    Gautier eDurantin

    2016-01-01

    Full Text Available Working memory is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor working memory as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces. We used functional near infrared spectroscopy as it has been already successfully tested to measure working memory capacity in complex environment with air traffic controllers, pilots or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with 9 participants involving a basic working memory task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with air traffic controller instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in working

  16. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    Science.gov (United States)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  17. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
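    As a loose software analogy to the redundant-multithreading scheme described above (the patent targets GPU hardware; this is only an illustration with hypothetical names), the same computation is issued multiple times and the outputs are compared before being accepted.

        def redundant_run(fn, *args, copies=2):
            """Run fn several times and accept the result only if all copies agree."""
            outputs = [fn(*args) for _ in range(copies)]
            if any(out != outputs[0] for out in outputs[1:]):
                raise RuntimeError("redundant outputs disagree; computation not verified")
            return outputs[0]

        # Toy usage
        print(redundant_run(sum, [1, 2, 3]))   # 6, verified by two redundant executions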

  18. Algorithms for Hardware-Based Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Müller Dietmar

    2004-01-01

    Full Text Available Nonlinear spatial transforms and fuzzy pattern classification with unimodal potential functions are established in signal processing. They have proved to be excellent tools in feature extraction and classification. In this paper, we will present a hardware-accelerated image processing and classification system which is implemented on one field-programmable gate array (FPGA). Nonlinear discrete circular transforms generate a feature vector. The features are analyzed by a fuzzy classifier. This principle can be used for feature extraction, pattern recognition, and classification tasks. Implementation in radix-2 structures is possible, allowing fast calculations at low computational complexity. Furthermore, the pattern separability properties of these transforms are better than those achieved with the well-known method based on the power spectrum of the Fourier Transform, or on several other transforms. Using different signal flow structures, the transforms can be adapted to different image and signal processing applications.

  19. Post flight analysis of NASA standard star trackers recovered from the solar maximum mission

    Science.gov (United States)

    Newman, P.

    1985-01-01

    The flight hardware returned after the Solar Maximum Mission Repair Mission was analyzed to determine the effects of 4 years in space. The NASA Standard Star Tracker would be a good candidate for such analysis because it is moderately complex and had a very elaborate calibration during the acceptance procedure. However, the recovery process extensively damaged the cathode of the image dissector detector making proper operation of the tracker and a comparison with preflight characteristics impossible. Otherwise, the tracker functioned nominally during testing.

  20. Threats and Challenges in Reconfigurable Hardware Security

    OpenAIRE

    Kastner, Ryan; Huffmire, Ted

    2008-01-01

    Computing systems designed using reconfigurable hardware are now used in many sensitive applications, where security is of utmost importance. Unfortunately, a strong notion of security is not currently present in FPGA hardware and software design flows. In the following, we discuss the security implications of using reconfigurable hardware in sensitive applications, and outline problems, attacks, solutions and topics for future research.

  1. The Generalized Support Software (GSS) Domain Engineering Process: An Object-Oriented Implementation and Reuse Success at Goddard Space Flight Center

    Science.gov (United States)

    Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren

    1997-01-01

    The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions.

  2. Computer hardware for radiologists: Part 2

    International Nuclear Information System (INIS)

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future
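    As a worked example of the capacity relationship described above (illustrative geometry only, not a specific product), the capacity of a CHS-addressed drive is simply the product of its geometry parameters and the sector size.

        def drive_capacity_bytes(sides, tracks_per_side, sectors_per_track, bytes_per_sector=512):
            """Capacity of a CHS-addressed drive: product of its geometry parameters."""
            return sides * tracks_per_side * sectors_per_track * bytes_per_sector

        # Example geometry: 16 sides x 16,383 tracks/side x 63 sectors/track x 512 bytes/sector
        print(drive_capacity_bytes(16, 16383, 63) / 1e9, "GB")   # about 8.5 GB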

  3. Computer hardware for radiologists: Part 2

    Directory of Open Access Journals (Sweden)

    Indrajit I

    2010-01-01

    Full Text Available Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  4. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2×10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine

  5. Hardware complications in scoliosis surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bagchi, Kaushik; Mohaideen, Ahamed [Department of Orthopaedic Surgery and Musculoskeletal Services, Maimonides Medical Center, Brooklyn, NY (United States); Thomson, Jeffrey D. [Connecticut Children's Medical Center, Department of Orthopaedics, Hartford, CT (United States); Foley, Christopher L. [Department of Radiology, Connecticut Children's Medical Center, Hartford, Connecticut (United States)

    2002-07-01

    Background: Scoliosis surgery has undergone a dramatic evolution over the past 20 years with the advent of new surgical techniques and sophisticated instrumentation. Surgeons have realized scoliosis is a complex multiplanar deformity that requires thorough knowledge of spinal anatomy and pathophysiology in order to manage patients afflicted by it. Nonoperative modalities such as bracing and casting still play roles in the treatment of scoliosis; however, it is the operative treatment that has revolutionized the treatment of this deformity that affects millions worldwide. As part of the evolution of scoliosis surgery, newer implants have resulted in improved outcomes with respect to deformity correction, reliability of fixation, and paucity of complications. Each technique and implant has its own set of unique complications, and the surgeon must appreciate these when planning surgery. Materials and methods: Various surgical techniques and types of instrumentation typically used in scoliosis surgery are briefly discussed. Though scoliosis surgery is associated with a wide variety of complications, only those that directly involve the hardware are discussed. The current literature is reviewed and several illustrative cases of patients treated for scoliosis at the Connecticut Children's Medical Center and the Newington Children's Hospital in Connecticut are briefly presented. Conclusion: Spine surgeons and radiologists should be familiar with the different types of instrumentation in the treatment of scoliosis. Furthermore, they should recognize the clinical and roentgenographic signs of hardware failure as part of prompt and effective treatment of such complications. (orig.)

  6. Travel Software using GPU Hardware

    CERN Document Server

    Szalwinski, Chris M; Dimov, Veliko Atanasov; CERN. Geneva. ATS Department

    2015-01-01

    Travel is the main multi-particle tracking code being used at CERN for the beam dynamics calculations through hadron and ion linear accelerators. It uses two routines for the calculation of space charge forces, namely, rings of charges and point-to-point. This report presents the studies to improve the performance of Travel using GPU hardware. The studies showed that the performance of Travel with the point-to-point simulations of space-charge effects can be sped up at least 72 times using current GPU hardware. Simple recompilation of the source code using an Intel compiler can improve performance at least 4 times without GPU support. The limited memory of the GPU is the bottleneck. Two algorithms were investigated on this point: repeated computation and tiling. The repeated computation algorithm is simpler and is the currently recommended solution. The tiling algorithm was more complicated and degraded performance. Both build and test instructions for the parallelized version of the software are inclu...

  7. Physics of Colloids in Space--Plus (PCS+) Experiment Completed Flight Acceptance Testing

    Science.gov (United States)

    Doherty, Michael P.

    2004-01-01

    The Physics of Colloids in Space--Plus (PCS+) experiment successfully completed system-level flight acceptance testing in the fall of 2003. This testing included electromagnetic interference (EMI) testing, vibration testing, and thermal testing. PCS+, an Expedite the Processing of Experiments to Space Station (EXPRESS) Rack payload, will deploy a second set of colloid samples within the PCS flight hardware system that flew on the International Space Station (ISS) from April 2001 to June 2002. PCS+ is slated to return to the ISS in late 2004 or early 2005.

  8. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  9. Hardware interface unit for control of shuttle RMS vibrations

    Science.gov (United States)

    Lindsay, Thomas S.; Hansen, Joseph M.; Manouchehri, Davoud; Forouhar, Kamran

    1994-01-01

    Vibration of the Shuttle Remote Manipulator System (RMS) increases the time for task completion and reduces task safety for manipulator-assisted operations. If the dynamics of the manipulator and the payload can be physically isolated, performance should improve. Rockwell has developed a self contained hardware unit which interfaces between a manipulator arm and payload. The End Point Control Unit (EPCU) is built and is being tested at Rockwell and at the Langley/Marshall Coupled, Multibody Spacecraft Control Research Facility in NASA's Marshall Space Flight Center in Huntsville, Alabama.

  10. Biomechanics of bird flight.

    Science.gov (United States)

    Tobalske, Bret W

    2007-09-01

    Power output is a unifying theme for bird flight and considerable progress has been accomplished recently in measuring muscular, metabolic and aerodynamic power in birds. The primary flight muscles of birds, the pectoralis and supracoracoideus, are designed for work and power output, with large stress (force per unit cross-sectional area) and strain (relative length change) per contraction. U-shaped curves describe how mechanical power output varies with flight speed, but the specific shapes and characteristic speeds of these curves differ according to morphology and flight style. New measures of induced, profile and parasite power should help to update existing mathematical models of flight. In turn, these improved models may serve to test behavioral and ecological processes. Unlike terrestrial locomotion that is generally characterized by discrete gaits, changes in wing kinematics and aerodynamics across flight speeds are gradual. Take-off flight performance scales with body size, but fully revealing the mechanisms responsible for this pattern awaits new study. Intermittent flight appears to reduce the power cost for flight, as some species flap-glide at slow speeds and flap-bound at fast speeds. It is vital to test the metabolic costs of intermittent flight to understand why some birds use intermittent bounds during slow flight. Maneuvering and stability are critical for flying birds, and design for maneuvering may impinge upon other aspects of flight performance. The tail contributes to lift and drag; it is also integral to maneuvering and stability. Recent studies have revealed that maneuvers are typically initiated during downstroke and involve bilateral asymmetry of force production in the pectoralis. Future study of maneuvering and stability should measure inertial and aerodynamic forces. It is critical for continued progress into the biomechanics of bird flight that experimental designs are developed in an ecological and evolutionary context.

  11. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...

  12. SAMBA: hardware accelerator for biological sequence comparison.

    Science.gov (United States)

    Guerdoux-Jamet, P; Lavenier, D

    1997-12-01

    SAMBA (Systolic Accelerator for Molecular Biological Applications) is a 128 processor hardware accelerator for speeding up the sequence comparison process. The short-term objective is to provide a low-cost board to boost PC or workstation performance on this class of applications. This paper places SAMBA amongst other existing systems and highlights the original features. Real performance obtained from the prototype is demonstrated. For example, a sequence of 300 amino acids is scanned against SWISS-PROT-34 (21 210 389 residues) in 30 s using the Smith and Waterman algorithm. More time-consuming applications, like the bank-to-bank comparison, are computed in a few hours instead of days on standard workstations. Technology allows the prototype to fit onto a single PCI board for plugging into any PC or workstation. SAMBA can be tested on the WEB server at URL http://www.irisa.fr/SAMBA/.
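    The Smith and Waterman algorithm that SAMBA accelerates is a standard local-alignment dynamic program. The sketch below is a plain software version with a simple match/mismatch/linear-gap scoring scheme (my own illustrative parameters, not the systolic-array implementation or SAMBA's actual scoring), shown only to make the recurrence concrete.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Best local-alignment score between a and b with a linear gap penalty.
            h[i][j] is the best score of an alignment ending at a[i-1], b[j-1]."""
            rows, cols = len(a) + 1, len(b) + 1
            h = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))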

  13. Compressive Sensing Image Sensors-Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Shahram Shirani

    2013-04-01

    Full Text Available The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.
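    For readers unfamiliar with the CS measurement model, the sketch below illustrates the basic idea in software only (it does not correspond to any of the reviewed hardware architectures): a sparse signal is sensed through a random measurement matrix and then recovered with a simple Orthogonal Matching Pursuit routine; all sizes and the sparsity level are placeholders.

        import numpy as np

        def omp(phi, y, k):
            """Orthogonal Matching Pursuit: recover a k-sparse x from y = phi @ x."""
            residual, support = y.copy(), []
            x_hat = np.zeros(phi.shape[1])
            for _ in range(k):
                # Pick the column most correlated with the current residual
                idx = int(np.argmax(np.abs(phi.T @ residual)))
                if idx not in support:
                    support.append(idx)
                # Least-squares fit on the selected support
                coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
                residual = y - phi[:, support] @ coef
            x_hat[support] = coef
            return x_hat

        # Toy demo: a 3-sparse signal of length 128 sensed with 32 random measurements
        rng = np.random.default_rng(0)
        n, m, k = 128, 32, 3
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        y = phi @ x                                      # compressive measurements
        print(np.allclose(omp(phi, y, k), x, atol=1e-6)) # exact recovery expected here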

  14. Flight Planning in the Cloud

    Science.gov (United States)

    Flores, Sarah L.; Chapman, Bruce D.; Tung, Waye W.; Zheng, Yang

    2011-01-01

    This new interface will enable Principal Investigators (PIs), as well as UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) members to do their own flight planning and time estimation without having to request flight lines through the science coordinator. It uses an all-in-one Google Maps interface, a JPL hosted database, and PI flight requirements to design an airborne flight plan. The application will enable users to see their own flight plan being constructed interactively through a map interface, and then the flight planning software will generate all the files necessary for the flight. Afterward, the UAVSAR team can complete the flight request, including calendaring and supplying requisite flight request files in the expected format for processing by NASA's airborne science program. Some of the main features of the interface include drawing flight lines on the map, nudging them, adding them to the current flight plan, and reordering them. The user can also search and select takeoff, landing, and intermediate airports. As the flight plan is constructed, all of its components are constantly being saved to the database, and the estimated flight times are updated. Another feature is the ability to import flight lines from previously saved flight plans. One of the main motivations was to make this Web application as simple and intuitive as possible, while also being dynamic and robust. This Web application can easily be extended to support other airborne instruments.
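    The time estimation mentioned above boils down, at its simplest, to dividing leg distance by an assumed cruise speed. The helper below is purely hypothetical (it is not the UAVSAR tool's algorithm, and the cruise speed and coordinates are placeholders); it only illustrates a great-circle distance and time estimate for one flight leg.

        import math

        def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
            """Great-circle distance between two lat/lon points (haversine formula)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * radius_km * math.asin(math.sqrt(a))

        def leg_time_hours(lat1, lon1, lat2, lon2, cruise_kmh=750.0):
            """Rough flight-time estimate for one leg at an assumed cruise speed."""
            return great_circle_km(lat1, lon1, lat2, lon2) / cruise_kmh

        # Example: one leg from roughly Palmdale, CA to the San Francisco area
        print(round(leg_time_hours(34.63, -118.08, 37.62, -122.38), 2), "h")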

  15. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  16. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reducing costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  17. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  18. Hardware Resource Allocation for Hardware/Software Partitioning in the LYCOS System

    DEFF Research Database (Denmark)

    Grode, Jesper Nicolai Riis; Madsen, Jan; Knudsen, Peter Voigt

    1998-01-01

    This paper presents a novel hardware resource allocation technique for hardware/software partitioning. It allocates hardware resources to the hardware data-path using information such as data-dependencies between operations in the application, and profiling information. The algorithm is useful as a designer's/design tool's aid to generate good hardware allocations for use in hardware/software partitioning. The algorithm has been implemented in a tool under the LYCOS system. The results show that the allocations produced by the algorithm come close to the best allocations obtained by exhaustive search.

  19. Overview of Additive Manufacturing Initiatives at NASA Marshall Space Flight Center

    Science.gov (United States)

    Clinton, R. G., Jr.

    2018-01-01

    NASA's In Space Manufacturing Initiative (ISM) includes: The case for ISM - why; ISM path to exploration - results from the 3D Printing In Zero-G Technology Demonstration - ISM challenges; In-space Robotic Manufacturing and Assembly (IRMA); Additive construction. Additive Manufacturing (AM) development for liquid rocket engine space flight hardware. MSFC standard and specification for additively manufactured space flight hardware. Summary.

  20. A control and data processing system for neutron time-of-flight experiments at the Harwell linear accelerator based on a PDP-11/45 mini-computer

    International Nuclear Information System (INIS)

    Chapman, W.S.; Boyce, D.A.; Brisland, J.B.; Langman, A.E.; Morris, D.V.; Schomberg, M.G.; Webb, D.A.

    1977-05-01

    The subject is treated in sections, entitled: introduction (experimental method, need for the PDP-11/45 based system); features required in the control and data processing system; description of the selected system configuration (PDP 11/45 mini-computer and RSX-11 D operating system, the single parameter experimental stations (the CAMAC units, the time-of-flight scaler)); description of the applications software; system performance. (U.K.)

  1. A pre-processing strategy for liquid chromatography time-of-flight mass spectrometry metabolic fingerprinting data

    DEFF Research Database (Denmark)

    Nielsen, Nikoline Juul; Tomasi, Giorgio; Frandsen, Rasmus John Normand

    2010-01-01

    A series of simple and robust operations for handling large chromatographic time-of-flight mass spectrometry fingerprinting data has been established and applied to data from extracts of Fusarium graminearum genotypes modified in a non-ribosomal peptide synthase gene by over-expression and deleti...

  2. Hardware support for the tumult real-time scheduler

    NARCIS (Netherlands)

    van der Bij, H.C.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1989-01-01

    This article describes the hardware which is designed for speeding up and supporting the schedule routines of the TUMULT multi-tasking operating system. TUMULT uses a “priority running up” schedule algorithm which automatically increases the priority of a process when (part of) it must be finished

  3. Hardware and software maintenance strategies for upgrading vintage computers

    International Nuclear Information System (INIS)

    Wang, B.C.; Buijs, W.J.; Banting, R.D.

    1992-01-01

    The paper focuses on the maintenance of the computer hardware and software for digital control computers (DCC). Specific design and problems related to various maintenance strategies are reviewed. A foundation was required for a reliable computer maintenance and upgrading program to provide operation of the DCC with high availability and reliability for 40 years. This involved a carefully planned and executed maintenance and upgrading program, involving complementary hardware and software strategies. The computer system was designed on a modular basis, with large sections easily replaceable, to facilitate maintenance and improve availability of the system. Advances in computer hardware have made it possible to replace DCC peripheral devices with reliable, inexpensive, and widely available components from PC-based systems (PC = personal computer). By providing a high speed link from the DCC to a PC, it is now possible to use many commercial software packages to process data from the plant. 1 fig

  4. High-performance free-space optical modem hardware

    Science.gov (United States)

    Sluz, Joseph E.; Juarez, Juan C.; Bair, Chun-Huei; Oberc, Rachel L.; Venkat, Radha A.; Rollend, Derek; Young, David W.

    2012-06-01

    This paper describes key aspects of modem hardware designed to operate in free space optical (FSO) links of up to 200 km. The hardware serves as a bridge between 10 gigabit Ethernet client data systems and FSO terminals. The modem hardware alters the client data rate and format for optimal transmission and reception over the FSO link by applying forward error correction (FEC) processing and differential phase shift keying (DPSK) modulation. Optical automatic gain control (OAGC) is also used. The result of these features provide sensitivities approaching -48 dBm with 60 dB of error-free dynamic range while in the presence of turbulent optical conditions to deal with large dynamic range optical power fades.

  5. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware specifications of common sensors reveals, however, that other equally important culprits exist, such as the reception and processing energy. Hence, there is a need for a more complete hardware abstraction of a sensor node to reduce effectively the total energy consumption of the network by designing energy-efficient protocols that use such an abstraction, as well as mechanisms to optimize a communication protocol in terms of energy consumption. The problem is modeled for different feedback-based techniques, where sensors are connected to a base station, either directly or through relays. We show that for four example...

  6. XOR-FREE Implementation of Convolutional Encoder for Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Gaurav Purohit

    2016-01-01

    Full Text Available This paper presents a novel XOR-FREE algorithm to implement the convolutional encoder using reconfigurable hardware. The approach completely removes the XOR processing of a chosen nonsystematic, feedforward generator polynomial of larger constraint length. The hardware (HW) implementation of the new architecture uses a Lookup Table (LUT) for storing the parity bits. The design implements architectural reconfigurability by modifying the generator polynomial of the same constraint length and code rate to reduce the design complexity. The proposed architecture reduces the dynamic power up to 30% and improves the hardware cost and propagation delay by up to 20% and 32%, respectively. The performance of the proposed architecture is validated in MATLAB Simulink and tested on Zynq-7 series FPGA.
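    A minimal software model of the LUT idea described above (my own sketch, not the paper's RTL or its generator polynomials): the parity outputs for every possible shift-register value are precomputed once and stored in a table, so the encoding loop itself performs only table reads and shifts, never XORs.

        def build_parity_lut(generators, constraint_length):
            """Precompute encoder outputs for every shift-register value.
            Register value = current input bit in the MSB, previous bits below it."""
            lut = []
            for reg in range(1 << constraint_length):
                # Parity (XOR reduction) of the register bits selected by each
                # generator, computed once here so the encoder never XORs.
                lut.append(tuple(bin(reg & g).count("1") & 1 for g in generators))
            return lut

        def encode(bits, generators=(0o7, 0o5), constraint_length=3):
            """Rate-1/2 convolutional encoder driven purely by LUT reads."""
            lut = build_parity_lut(generators, constraint_length)
            state, out = 0, []
            for b in bits:
                reg = (b << (constraint_length - 1)) | state
                out.extend(lut[reg])
                state = reg >> 1        # shift: newest bit moves toward the LSB
            return out

        print(encode([1, 0, 1, 1]))     # [1, 1, 1, 0, 0, 0, 0, 1] for the (7,5) code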

  7. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
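    The following is a plain software sketch of the edge-directed idea described above, not either of the compared FPGA architectures; the window size, disparity range, and edge threshold are placeholder values. SAD matching is performed only at pixels flagged as edges, which is the search-space reduction the paper exploits.

        import numpy as np

        def edge_directed_disparity(left, right, max_disp=16, win=3, edge_thresh=30.0):
            """Fixed-window SAD stereo restricted to edge pixels of the left image.
            Non-edge pixels are simply skipped (disparity left at 0)."""
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            h, w = left.shape
            # Crude edge map: horizontal intensity gradient above a threshold
            grad = np.abs(np.diff(left, axis=1, prepend=left[:, :1]))
            edges = grad > edge_thresh
            disp = np.zeros((h, w), dtype=np.int32)
            r = win // 2
            for y in range(r, h - r):
                for x in range(r + max_disp, w - r):
                    if not edges[y, x]:
                        continue                      # search-space reduction
                    patch = left[y - r:y + r + 1, x - r:x + r + 1]
                    costs = [np.abs(patch - right[y - r:y + r + 1,
                                                  x - d - r:x - d + r + 1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))
            return disp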

  8. On-Chip Reconfigurable Hardware Accelerators for Popcount Computations

    Directory of Open Access Journals (Sweden)

    Valery Sklyarov

    2016-01-01

    Full Text Available Popcount computations are widely used in such areas as combinatorial search, data processing, statistical analysis, and bio- and chemical informatics. In many practical problems the size of initial data is very large and an increase in throughput is important. The paper suggests two types of hardware accelerators that are (1) designed in FPGAs and (2) implemented in Zynq-7000 all programmable systems-on-chip with partitioning of algorithms that use popcounts between software of the ARM Cortex-A9 processing system and advanced programmable logic. A three-level system architecture that includes a general-purpose computer, the problem-specific ARM, and reconfigurable hardware is then proposed. The results of experiments and comparisons with existing benchmarks demonstrate that although throughput of popcount computations is increased in FPGA-based designs interacting with general-purpose computers, communication overheads (in experiments with PCI express) are significant and actual advantages can be gained if not only popcount but also other types of relevant computations are implemented in hardware. The comparison of software/hardware designs for Zynq-7000 all programmable systems-on-chip with pure software implementations in the same Zynq-7000 devices demonstrates increase in performance by a factor ranging from 5 to 19 (taking into account all the involved communication overheads between the programmable logic and the processing systems).
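    As a small software reference for the popcount operation itself (my own sketch, unrelated to the paper's Zynq partitioning), the divide-and-conquer bit trick below counts the set bits of a 32-bit word without a loop; it is a common starting point for both software and hardware implementations.

        def popcount32(x):
            """Count set bits in a 32-bit word with the divide-and-conquer (SWAR) trick."""
            x &= 0xFFFFFFFF
            x = x - ((x >> 1) & 0x55555555)                  # 2-bit partial sums
            x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit partial sums
            x = (x + (x >> 4)) & 0x0F0F0F0F                  # 8-bit partial sums
            return ((x * 0x01010101) & 0xFFFFFFFF) >> 24     # add the four byte sums

        assert popcount32(0b1011_0010_1111) == 8
        assert popcount32(0xFFFFFFFF) == 32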

  9. W-026 acceptance test plan plant control system hardware (submittal #216)

    Energy Technology Data Exchange (ETDEWEB)

    Watson, T.L., Fluor Daniel Hanford

    1997-02-14

    Acceptance Testing of the WRAP 1 Plant Control System Hardware will be conducted throughout the construction of WRAP 1, with the final testing on the Process Area hardware being completed in November 1996. The hardware tests will be broken out by the following functional areas: Local Control Units, Operator Control Stations in the WRAP Control Room, DMS Server, PCS Server, Operator Interface Units, printers, DNS terminals, WRAP Local Area Network/Communications, and bar code equipment. This document will contain completed copies of each of the hardware tests along with the applicable test logs and completed test exception reports.

  10. Spaceborne computer executive routine functional design specification. Volume 1: Functional design of a flight computer executive program for the reusable shuttle

    Science.gov (United States)

    Curran, R. T.

    1971-01-01

    A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.

  11. Implementation of an Adaptive Controller System from Concept to Flight Test

    Science.gov (United States)

    Larson, Richard R.; Burken, John J.; Butler, Bradley S.; Yokum, Steve

    2009-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) is conducting ongoing flight research using adaptive controller algorithms. A highly modified McDonnell-Douglas NF-15B airplane called the F-15 Intelligent Flight Control System (IFCS) is used to test and develop these algorithms. Modifications to this airplane include adding canards and changing the flight control systems to interface a single-string research controller processor for neural network algorithms. Research goals include demonstration of revolutionary control approaches that can efficiently optimize aircraft performance in both normal and failure conditions and advancement of neural-network-based flight control technology for new aerospace system designs. This report presents an overview of the processes utilized to develop adaptive controller algorithms during a flight-test program, including a description of initial adaptive controller concepts and a discussion of modeling formulation and performance testing. Design finalization led to integration with the system interfaces, verification of the software, validation of the hardware to the requirements, design of failure detection, development of safety limiters to minimize the effect of erroneous neural network commands, and creation of flight test control room displays to maximize human situational awareness; these are also discussed.

  12. Imaging Sensor Flight and Test Equipment Software

    Science.gov (United States)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  13. The failure analysis, redesign, and final preparation of the Brilliant Eyes Thermal Storage Unit for flight testing

    Science.gov (United States)

    Lamkin, T.; Whitney, Brian

    1995-01-01

    This paper describes the engineering thought process behind the failure analysis, redesign, and rework of the flight hardware for the Brilliant Eyes Thermal Storage Unit (BETSU) experiment. This experiment was designed to study the zero-g performance of 2-methylpentane as a suitable phase change material. This hydrocarbon served as the cryogenic storage medium for the BETSU experiment which was flown 04 Mar 94 on board Shuttle STS-62. Ground testing had indicated satisfactory performance of the BETSU at the 120 Kelvin design temperature. However, questions remained as to the micro-gravity performance of this unit; potential deviations in ground (1 g) versus space flight (0 g) performance, and how the unit would operate in a realistic space environment undergoing cyclical operation. The preparations and rework performed on the BETSU unit, which failed initial flight qualification, give insight and lessons learned to successfully develop and qualify a space flight experiment.

  14. Treatment alternatives for non-fuel-bearing hardware

    Energy Technology Data Exchange (ETDEWEB)

    Ross, W.A.; Clark, L.L.; Oma, K.H.

    1987-01-01

    This evaluation compared four alternatives for the treatment or processing of non-fuel bearing hardware (NFBH) to reduce its volume and prepare it for disposal. These treatment alternatives are: shredding; shredding and low pressure compaction; shredding and supercompaction; and melting. These alternatives are compared on the basis of system costs, waste form characteristics, and process considerations. The study recommends that melting and supercompaction alternatives be further considered and that additional testing be conducted for these two alternatives.

  15. Hardware image assessment for wireless endoscopy capsules.

    Science.gov (United States)

    Khorsandi, M A; Karimi, N; Samavi, S; Hajabdollahi, M; Soroushmehr, S M R; Ward, K; Najarian, K

    2016-08-01

    Wireless capsule endoscopy is a new technology in the realm of telemedicine that has many advantages over traditional endoscopy systems. Transmitted images should aid in the diagnosis of diseases of the gastrointestinal tract. Two important technical challenges for the manufacturers of these capsules are power consumption and the size of the circuitry. The system must also be fast enough for real-time processing of image or video data. To address these constraints, many hardware designs have been proposed for the implementation of the image processing unit. In this paper we propose an architecture that could be used for the assessment of endoscopy images. The assessment allows avoidance of transmission of medically useless images; hence, the volume of data is reduced for more efficient transmission of images by the endoscopy capsule. This is done by color space conversion and moment calculation of images captured by the capsule. The inputs of the proposed architecture are RGB image frames and the outputs are images with converted colors and calculated image moments. Experimental results indicate that the proposed architecture has low complexity and is appropriate for a real-time application.
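
    As a rough illustration of the processing steps named in the abstract (color space conversion followed by image moment calculation), the following Python sketch shows one plausible software reference; the YCbCr conversion, the moment orders, and all function names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical software sketch of the assessment steps described above:
# convert an RGB frame to another color space and compute low-order image
# moments that a capsule could use to decide whether a frame is worth sending.
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB frame to YCbCr (one possible color space)."""
    rgb = rgb.astype(np.float64)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return np.stack([y, cb, cr], axis=-1)

def raw_moments(channel, max_order=2):
    """Raw image moments m_pq = sum over x, y of x^p * y^q * I(x, y)."""
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return {(p, q): float(np.sum((xs ** p) * (ys ** q) * channel))
            for p in range(max_order + 1) for q in range(max_order + 1)}

frame = np.random.randint(0, 256, (240, 240, 3), dtype=np.uint8)  # stand-in frame
ycbcr = rgb_to_ycbcr(frame)
moments = raw_moments(ycbcr[..., 0])
print(moments[(0, 0)])  # zeroth moment = total intensity of the luma plane
```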

  16. Hardware Reuse Improvement through the Domain Specific Language dHDL.

    OpenAIRE

    Sánchez Marcos, Miguel Ángel; López Vallejo, Marisa; Iglesias Fernandez, Carlos Angel

    2012-01-01

    The dHDL language has been defined to improve hardware design productivity. This is achieved through the definition of a better reuse interface (including parameters, attributes and macroports) and the creation of control structures that help the designer in the hardware generation process.

  17. ACTEX flight experiment: development issues and lessons learned

    Science.gov (United States)

    Schubert, S. R.

    1993-09-01

    The ACTEX flight experiment is scheduled for launch and to begin its on orbit operations in early 1994. The objective of the ACTEX experiment is to demonstrate active vibration control in space, using the smart structure technology. This paper discusses primarily the hardware development and program management issues associated with delivering low cost flight experiments.

  18. Development and flight test of a helicopter compact, portable, precision landing system concept

    Science.gov (United States)

    Bull, J. S.; Clary, G. R.; Davis, T. J.; Chisholm, J. P.

    1984-01-01

    An airborne, radar-based precision approach concept is being developed and flight tested as part of NASA's Rotorcraft All-Weather Operations Research Program. A transponder-based beacon landing system (BLS), applying state-of-the-art X-band radar technology and digital processing techniques, has been built and is being flight tested to demonstrate the concept's feasibility. The BLS airborne hardware consists of an add-on microprocessor, installed in conjunction with the aircraft weather/mapping radar, which analyzes the radar beacon receiver returns and determines range, localizer deviation, and glide slope deviation. The ground station is an inexpensive, portable unit which can be quickly deployed at a landing site. Results from the flight test program show that the BLS concept has significant potential for providing rotorcraft with low cost, precision instrument approach capability in remote areas.
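
    The deviations mentioned above can be illustrated with a simplified geometric sketch; the selected glideslope angle, the reference course, and the function name below are hypothetical and are not taken from the BLS design.

```python
# Toy geometry: convert beacon-relative measurements into localizer and
# glide-slope deviations relative to an assumed approach course and glideslope.
import math

def approach_deviations(slant_range_m, azimuth_deg, elevation_deg,
                        glideslope_deg=6.0, course_deg=0.0):
    loc_dev_deg = azimuth_deg - course_deg          # lateral angular deviation
    gs_dev_deg  = elevation_deg - glideslope_deg    # vertical angular deviation
    cross_track_m = slant_range_m * math.sin(math.radians(loc_dev_deg))
    return loc_dev_deg, gs_dev_deg, cross_track_m

print(approach_deviations(1500.0, 2.0, 5.0))
```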

  19. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Development of approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, which has great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  20. Hardware-in-the-Loop Testing

    Data.gov (United States)

    Federal Laboratory Consortium — RTC has a suite of Hardware-in-the-Loop facilities that include three operational facilities that provide performance assessment and production acceptance testing of...

  1. Advanced planning for ISS payload ground processing

    Science.gov (United States)

    Page, Kimberly A.

    2000-01-01

    Ground processing at John F. Kennedy Space Center (KSC) is the concluding phase of the payload/flight hardware development process and is the final opportunity to ensure safe and successful realization of mission objectives. Planning for the ground processing of on-orbit flight hardware elements and payloads for the International Space Station is a responsibility taken seriously at KSC. Realizing that entering this operational environment can be an enormous undertaking for a payload customer, KSC continually works to improve the process by instituting new and improved services for the payload developer/owner, applying state-of-the-art technologies to the advanced planning process, and incorporating lessons learned from payload ground processing planning to ensure complete customer satisfaction. This paper will present an overview of the KSC advanced planning activities for ISS hardware/payload ground processing. It will focus on when and how KSC begins to interact with the payload developer/owner, how that interaction changes (and grows) throughout the planning process, and how KSC ensures that advanced planning is successfully implemented at the launch site. It will also briefly consider the type of advanced planning conducted by the launch site that is transparent to the payload user but essential to the successful processing of the payload (e.g., resource allocation, executing documentation, etc.).

  2. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Science.gov (United States)

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-01-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high-speed computation. PMID:24189331
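
    For readers unfamiliar with the two algorithms named above, the following plain-software sketch (not the FPGA design) shows GHA feature extraction via Sanger's rule followed by a standard FCM clustering loop; the array sizes, learning rate, and cluster count are arbitrary illustration values.

```python
# Software sketch of the two stages: GHA for principal-component features,
# then fuzzy C-means clustering of the projected spike waveforms.
import numpy as np

def gha_features(spikes, n_components=2, lr=1e-3, epochs=20):
    """Generalized Hebbian Algorithm (Sanger's rule) on spike snippets."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(n_components, spikes.shape[1]))
    for _ in range(epochs):
        for x in spikes:
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return spikes @ W.T  # projected features

def fcm(features, n_clusters=3, m=2.0, iters=50):
    """Fuzzy C-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(1)
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        um = u ** m
        centers = (um.T @ features) / um.sum(axis=0)[:, None]
    return u.argmax(axis=1), centers

spikes = np.random.randn(200, 32)            # stand-in spike snippets
labels, centers = fcm(gha_features(spikes))
```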

  3. Efficient Architecture for Spike Sorting in Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Sheng-Ying Lai

    2013-11-01

    Full Text Available This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high-speed computation.

  4. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

    Full text: Timing is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as heartbeats or breathing. The use of hardware-implemented devices in such a system has advantages given the high speed of information processing combined with arbitrarily low cost on the market. This article describes a hardware system based on electronic programmable logic (FPGA), model Cyclone II from ALTERA Corporation. The hardware was implemented on the UP3 ALTERA kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp and Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The test shows that it is possible in principle to build CT image reconstruction systems for any reasonable amount of input data by arranging parallel operation of hardware units as tested here. However, further studies are necessary for better understanding of the error propagation from tomographic projections to the reconstructed image within the implemented method. (author)
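
    A partially connected network with unit weights that sums projection values along rays behaves like unfiltered backprojection; the sketch below illustrates that operation in plain Python under an assumed parallel-beam geometry, and is not the authors' FPGA implementation. The 60-projection, 100-sample sizes mirror the test described above; everything else is a placeholder.

```python
# Unfiltered backprojection: each reconstructed pixel accumulates, with unit
# weight, the projection bins whose rays pass through it.
import numpy as np

def backproject(sinogram, image_size=100):
    """sinogram: (n_angles, n_bins) parallel-beam projections."""
    n_angles, n_bins = sinogram.shape
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    coords = np.arange(image_size) - image_size / 2.0
    xs, ys = np.meshgrid(coords, coords)
    image = np.zeros((image_size, image_size))
    for a, theta in enumerate(angles):
        # Detector coordinate of every pixel for this view angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + n_bins / 2.0
        bins = np.clip(np.round(t).astype(int), 0, n_bins - 1)
        image += sinogram[a, bins]          # unit-weight accumulation
    return image / n_angles

recon = backproject(np.random.rand(60, 100))
```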

  5. Cooperative communications hardware, channel and PHY

    CERN Document Server

    Dohler, Mischa

    2010-01-01

    Facilitating Cooperation for Wireless Systems Cooperative Communications: Hardware, Channel & PHY focuses on issues pertaining to the PHY layer of wireless communication networks, offering a rigorous taxonomy of this dispersed field, along with a range of application scenarios for cooperative and distributed schemes, demonstrating how these techniques can be employed. The authors discuss hardware, complexity and power consumption issues, which are vital for understanding what can be realized at the PHY layer, showing how wireless channel models differ from more traditional

  6. Designing Secure Systems on Reconfigurable Hardware

    OpenAIRE

    Huffmire, Ted; Brotherton, Brett; Callegari, Nick; Valamehr, Jonathan; White, Jeff; Kastner, Ryan; Sherwood, Ted

    2008-01-01

    The extremely high cost of custom ASIC fabrication makes FPGAs an attractive alternative for deployment of custom hardware. Embedded systems based on reconfigurable hardware integrate many functions onto a single device. Since embedded designers often have no choice but to use soft IP cores obtained from third parties, the cores operate at different trust levels, resulting in mixed trust designs. The goal of this project is to evaluate recently proposed security primitives for reconfigurab...

  7. LWH and ACH Helmet Hardware Study

    Science.gov (United States)

    2015-11-30

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6355--15-9642, LWH & ACH Helmet Hardware Study, November 30, 2015, Ronald L. Holtz, Peter... The report covers screws and nuts used with the Light Weight Helmet (LWH) and Advanced Combat Helmet (ACH). The testing included basic dimensional measurements, Rockwell

  8. A Hardware Track Finder for ATLAS Trigger

    CERN Document Server

    Volpi, G; The ATLAS collaboration; Andreazza, A; Citterio, M; Favareto, A; Liberali, V; Meroni, C; Riva, M; Sabatini, F; Stabile, A; Annovi, A; Beretta, M; Castegnaro, A; Bevacqua, V; Crescioli, F; Francesco, C; Dell'Orso, M; Giannetti, P; Magalotti, D; Piendibene, M; Roda, C; Sacco, I; Tripiccione, R; Fabbri, L; Franchini, M; Giorgi, F; Giannuzzi, F; Lasagni, F; Sbarra, C; Valentinetti, S; Villa, M; Zoccoli, A; Lanza, A; Negri, A; Vercesi, V; Bogdan, M; Boveia, A; Canelli, F; Cheng, Y; Dunford, M; Li, H L; Kapliy, A; Kim, Y K; Melachrinos, C; Shochet, M; Tang, F; Tang, J; Tuggle, J; Tompkins, L; Webster, J; Atkinson, M; Cavaliere, V; Chang, P; Kasten, M; McCarn, A; Neubauer, M; Hoff, J; Liu, T; Okumura, Y; Olsen, J; Penning, B; Todri, A; Wu, J; Drake, G; Proudfoot, J; Zhang, J; Blair, R; Anderson, J; Auerbach, B; Blazey, G; Kimura, N; Yorita, K; Sakurai, Y; Mitani, T; Iizawa, T

    2012-01-01

    The existing three-level ATLAS trigger system is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to ~400 Hz for permanent storage at the LHC design luminosity of 10^34 cm^-2 s^-1. When the LHC reaches beyond the design luminosity, the load on the Level-2 trigger system will significantly increase due to both the need for more sophisticated algorithms to suppress background and the larger event sizes. The Fast TracKer (FTK) is a custom electronics system that will operate at the full Level-1 accepted rate of 100 kHz and provide high quality tracks at the beginning of processing in the Level-2 trigger, by performing track reconstruction in hardware with massive parallelism of associative memories and FPGAs. The performance in important physics areas including b-tagging, tau-tagging and lepton isolation will be demonstrated with the ATLAS MC simulation at different LHC luminosities. The system design will be overviewed. The latest R&D progress of individual components...

  9. Channel Communication and Reconfigurable Hardware

    NARCIS (Netherlands)

    Bos, M.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Karelse, F.

    2000-01-01

    Many applications can be structured as a set of processes or threads that communicate via channels. These threads can be executed on various platforms (e.g. general purpose CPU, DSP, FPGA, etc). In our research we apply channels as a basic communication mechanism between threads in a reconfigurable

  10. Hard- and software complex for laser time-of-flight mass spectrometer

    International Nuclear Information System (INIS)

    Sysoev, A.A.; Kas'yanov, V.B.; Poteshin, S.S.; Sil'nikov, E.E.; Sysoev, A.A.; Trofimov, A.S.

    2007-01-01

    The two-level principle serves as the basis for the design of the hardware and software system for the laser time-of-flight mass spectrometer. At the upper level, the PC ensures on-line control of the recording and processing of the mass spectra, which have the highest priority. The controllers, representing the second control level, are responsible for monitoring and stabilization. The exchange between the controllers and the PC takes place in the periods free from recording and processing of the mass spectra. The hardware and software system makes it possible to form short mass peaks down to 10 ns duration at half-height, to reduce the scatter of the relative sensitivity coefficients from 2-3 orders of magnitude to 1 order, and to extend the dynamic range of the recorded mass spectra up to 1x10^9.

  11. Implementation of a Hardware Ray Tracer for digital design education

    OpenAIRE

    Eggen, Jonas Agentoft

    2017-01-01

    Digital design is a large and complex field of electronic engineering, and learning digital design requires maturing over time. The learning process can be facilitated by making use of a single learning platform throughout a whole course. A learning platform built around a hardware ray tracer can be used in illustrating many important aspects of digital design. A unified learning platform allows students to delve into intricate details of digital design while still seeing the bigger pictur...

  12. Toward Composable Hardware Agnostic Communications Blocks Lessons Learned

    Science.gov (United States)

    2016-11-01

    Processing through a common threading, scheduling, IPC, and memory management approach; hardware-specific optimization abstraction; and flow-based block composition, in which each block may receive multiple inputs and generate multiple outputs to different blocks, enabling flow-based usage. The briefing concludes with a high-level block complexity analysis under assumptions such as infinite memory / all accesses in L1 cache and hand assembly (no function call overhead/stack

  13. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested against known cryptanalysis attacks and for different block sizes. When implemented on a Virtex-IV FPGA, the system showed high throughput while utilizing a small area. Having passed all tests successfully, the system proved to be secure for all block sizes. © 2011 IEEE.
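
    As a generic illustration only (the paper's cipher design is not reproduced here), the sketch below shows how a chaotic map such as the logistic map can generate key-dependent bytes that are mixed with fixed-size blocks of image data; the key values, map choice, and block size are placeholders.

```python
# Generic chaos-driven block encryption sketch: iterate the logistic map,
# quantize its states to bytes, and XOR them with padded data blocks.
import numpy as np

def logistic_bytes(x0, r, n):
    """Iterate x <- r*x*(1-x) and quantize each state to one byte."""
    out, x = np.empty(n, dtype=np.uint8), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt_blocks(data, key=(0.3141592, 3.99), block_size=16):
    pad = (-len(data)) % block_size                       # pad to whole blocks
    buf = np.frombuffer(data + b"\x00" * pad, dtype=np.uint8).copy()
    ks = logistic_bytes(key[0], key[1], len(buf))
    return (buf ^ ks).tobytes()                           # mix keystream with blocks

cipher = encrypt_blocks(b"example image block data")
```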

  14. Peculiarities of hardware implementation of generalized cellular tetra automaton

    OpenAIRE

    Аноприенко, Александр Яковлевич; Федоров, Евгений Евгениевич; Иваница, Сергей Васильевич; Альрабаба, Хамза

    2015-01-01

    Cellular automata are widely used in many fields of knowledge for the study of a variety of complex real processes: computer engineering and computer science, cryptography, mathematics, physics, chemistry, ecology, biology, medicine, epidemiology, geology, architecture, sociology, and the theory of neural networks. Thus, cellular automata (CA) and tetra automata are gaining relevance with respect to both hardware and software solutions. There is also a marked trend towards an increase in the number of p...

  15. Hardware Trojans - Prevention, Detection, Countermeasures (A Literature Review)

    Science.gov (United States)

    2011-07-01

    manufacturing process in-house is infeasible for all but the smallest Application Specific Integrated Circuit (ASIC) designs. Our reliance on the globalisation of the electronics industry is critical for developing both our commercial and... on the detection mechanism used, a Hardware Trojan may be either definitively identified, or a statistical measure may be provided indicating the

  16. Atomic memory access hardware implementations

    Science.gov (United States)

    Ahn, Jung Ho; Erez, Mattan; Dally, William J

    2015-02-17

    Atomic memory access requests are handled using a variety of systems and methods. According to one example method, a data-processing circuit having an address-request generator that issues requests to a common memory implements a method of processing the requests using a memory-access intervention circuit coupled between the generator and the common memory. The method identifies a current atomic-memory access request from a plurality of memory access requests. A data set is stored that corresponds to the current atomic-memory access request in a data storage circuit within the intervention circuit. It is determined whether the current atomic-memory access request corresponds to at least one previously-stored atomic-memory access request. In response to determining correspondence, the current request is implemented by retrieving data from the common memory. The data is modified in response to the current request and at least one other access request in the memory-access intervention circuit.
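
    The behavior described in the abstract can be approximated by the following software model, in which atomic requests to the same address are coalesced so that the common memory is accessed once and each queued modification is then applied. The function names and the exact coalescing semantics are assumptions made for illustration, not a rendering of the patented circuit.

```python
# Behavioral sketch of an atomic-memory-access intervention stage.
from collections import defaultdict

memory = defaultdict(int)                 # stand-in for the common memory
pending = defaultdict(list)               # address -> queued atomic operations

def issue_atomic(addr, op):
    """Queue an atomic read-modify-write; ops on one address coalesce."""
    pending[addr].append(op)

def drain(addr):
    value = memory[addr]                  # single access to the common memory
    for op in pending.pop(addr, []):
        value = op(value)                 # apply each queued modification in order
    memory[addr] = value
    return value

issue_atomic(0x40, lambda v: v + 1)
issue_atomic(0x40, lambda v: v + 5)
print(drain(0x40))                        # -> 6
```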

  17. 14 CFR 417.311 - Flight safety crew roles and qualifications.

    Science.gov (United States)

    2010-01-01

    ... the knowledge, skills, and abilities needed to operate the flight safety system hardware in accordance with § 417.113. (1) A flight safety crew must have knowledge of: (i) All flight safety system assets... knowledge of and be capable of resolving malfunctions in: (i) The application of safety support systems such...

  18. SPORT FACILITIES - SPORT ACTIVITIES HARDWARE

    Directory of Open Access Journals (Sweden)

    Zoran Mašić

    2008-08-01

    Full Text Available The realisation of sport activities has always demanded certain conditions, among which sports facilities are certainly necessary. Given the important changes in the training process, in successful performance, and in the results achieved by sportsmen, there is a need for adequate sports facilities that include a whole variety of systems, equipment and supporting items. Nowadays, sports facilities are not only "the place of the event" but also a precondition for achieving the best sporting results. These facilities are required to be comfortable, absolutely secure, and able to accommodate broadcasts of the opening, the course of the sports activities, and the announcement of the winner. The kind of sports activity and the age, sex, and "sports level" of the competitors determine the specific demands towards sports facilities.

  19. Hardware Implementation of a Bilateral Subtraction Filter

    Science.gov (United States)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts: (a) an image pixel pipeline with a 9×9-pixel window generator, (b) an array of processing elements, (c) an adder tree, (d) a smoothing-and-delaying unit, and (e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for
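
    A software reference of the filter's function (the flight version is an FPGA pipeline) might look like the sketch below: an edge-preserving weighted average over a 9×9 window is computed and then subtracted from the original image. The Gaussian spatial and range weighting used here is an assumption, since the abstract only says the weights depend on pixel values and window size.

```python
# Bilateral smoothing over a 9x9 window followed by subtraction, which
# suppresses low-frequency background while preserving edges.
import numpy as np

def bilateral_subtract(img, window=9, sigma_s=3.0, sigma_r=20.0):
    half = window // 2
    padded = np.pad(img.astype(np.float64), half, mode="edge")
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # window-based weight
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))  # value-based weight
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return img - out                       # subtraction step

highpass = bilateral_subtract(np.random.rand(64, 64) * 255)
```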

  20. Hardware in the loop simulation of arbitrary magnitude shaped correlated radar clutter

    CSIR Research Space (South Africa)

    Strydom, JJ

    2014-10-01

    Full Text Available This paper describes a simple process for the generation of arbitrary probability distributions of complex data with correlation from sample to sample, optimized for hardware in the loop radar environment simulation. Measured radar clutter is used...
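
    One common software approach to the problem described, shown below as a hedged sketch rather than the paper's own method, is to filter white complex Gaussian noise to impose sample-to-sample correlation and then map it through the inverse CDF of the desired amplitude distribution; the Weibull target, filter taps, and correlation length are arbitrary choices.

```python
# Correlated non-Gaussian clutter via a memoryless nonlinear transform of
# correlated Gaussian samples (probability-integral transform).
import numpy as np
from scipy import signal, stats

def correlated_clutter(n, corr_len=10.0, dist=stats.weibull_min(c=1.5)):
    g = np.random.randn(n) + 1j * np.random.randn(n)        # white complex Gaussian
    taps = np.exp(-np.arange(50) / corr_len)                 # correlating FIR filter
    g = signal.lfilter(taps, [1.0], g)                       # sample-to-sample correlation
    u = stats.norm.cdf(g.real / np.std(g.real))              # uniform via Gaussian CDF
    amplitude = dist.ppf(np.clip(u, 1e-9, 1 - 1e-9))         # target magnitude law
    phase = np.angle(g)                                      # keep the correlated phase
    return amplitude * np.exp(1j * phase)

iq = correlated_clutter(4096)
```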

  1. Features of the Test Automation Software-Hardware Data Protection Tools

    Directory of Open Access Journals (Sweden)

    Tatiana Mikhailovna Borisova

    2013-06-01

    Full Text Available The author discusses the various types of testing from the standpoint of automating this process, the advantages and disadvantages of automated testing, and the ways it can be applied, particularly to software-hardware data protection tools.

  2. A Hardware Abstraction Layer in Java

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Korsholm, Stephan; Kalibera, Tomas

    2011-01-01

    Embedded systems use specialized hardware devices to interact with their environment, and since they have to be dependable, it is attractive to use a modern, type-safe programming language like Java to develop programs for them. Standard Java, as a platform-independent language, delegates access to devices, direct memory access, and interrupt handling to some underlying operating system or kernel, but in the embedded systems domain resources are scarce and a Java Virtual Machine (JVM) without an underlying middleware is an attractive architecture. The contribution of this article is a proposal for Java packages with hardware objects and interrupt handlers that interface to such a JVM. We provide implementations of the proposal directly in hardware, as extensions of standard interpreters, and finally with an operating system middleware. The latter solution is mainly seen as a migration path...

  3. MFTF supervisory control and diagnostics system hardware

    International Nuclear Information System (INIS)

    Butner, D.N.

    1979-01-01

    The Supervisory Control and Diagnostics System (SCDS) for the Mirror Fusion Test Facility (MFTF) is a multiprocessor minicomputer system designed so that for most single-point failures, the hardware may be quickly reconfigured to provide continued operation of the experiment. The system is made up of nine Perkin-Elmer computers - a mixture of 8/32's and 7/32's. Each computer has ports on a shared memory system consisting of two independent shared memory modules. Each processor can signal other processors through hardware external to the shared memory. The system communicates with the Local Control and Instrumentation System, which consists of approximately 65 microprocessors. Each of the six system processors has facilities for communicating with a group of microprocessors; the groups consist of from four to 24 microprocessors. There are hardware switches so that if an SCDS processor communicating with a group of microprocessors fails, another SCDS processor takes over the communication

  4. Hardware Accelerators for Elliptic Curve Cryptography

    Directory of Open Access Journals (Sweden)

    C. Puttmann

    2008-05-01

    Full Text Available In this paper we explore different hardware accelerators for cryptography based on elliptic curves. Furthermore, we present a hierarchical multiprocessor system-on-chip (MPSoC platform that can be used for fast integration and evaluation of novel hardware accelerators. In respect of two application scenarios the hardware accelerators are coupled at different hierarchy levels of the MPSoC platform. The whole system is implemented in a state of the art 65 nm standard cell technology. Moreover, an FPGA-based rapid prototyping system for fast system verification is presented. Finally, a metric to analyze the resource efficiency by means of chip area, execution time and energy consumption is introduced.

  5. A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems.

    Science.gov (United States)

    Brüderle, Daniel; Petrovici, Mihai A; Vogginger, Bernhard; Ehrlich, Matthias; Pfeil, Thomas; Millner, Sebastian; Grübl, Andreas; Wendt, Karsten; Müller, Eric; Schwartz, Marc-Olivier; de Oliveira, Dan Husmann; Jeltsch, Sebastian; Fieres, Johannes; Schilling, Moritz; Müller, Paul; Breitwieser, Oliver; Petkov, Venelin; Muller, Lyle; Davison, Andrew P; Krishnamurthy, Pradeep; Kremkow, Jens; Lundqvist, Mikael; Muller, Eilif; Partzsch, Johannes; Scholze, Stefan; Zühl, Lukas; Mayr, Christian; Destexhe, Alain; Diesmann, Markus; Potjans, Tobias C; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2011-05-01

    In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
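
    The simulator independence that the workflow builds on can be illustrated with a short PyNN script; the toy network below runs against a software reference backend, and in the authors' flow the import line would be swapped for the hardware's own PyNN backend module (whose name is not assumed here). Parameters and population sizes are placeholders.

```python
# Toy PyNN network; only the backend import changes between simulators and
# the neuromorphic hardware in a PyNN-based flow.
import pyNN.nest as sim  # software reference backend (assumed to be installed)

sim.setup(timestep=0.1)
stimulus = sim.Population(20, sim.SpikeSourcePoisson(rate=10.0))
excitatory = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="exc")
sim.Projection(stimulus, excitatory, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))
excitatory.record("spikes")
sim.run(1000.0)                      # one second of biological time
spiketrains = excitatory.get_data().segments[0].spiketrains
sim.end()
```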

  6. Ethernet for Space Flight Applications

    Science.gov (United States)

    Webb, Evan; Day, John H. (Technical Monitor)

    2002-01-01

    NASA's Goddard Space Flight Center (GSFC) is adapting current data networking technologies to fly on future spaceflight missions. The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reduction in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability with developments ongoing in the commercial world. The networking effort is a comprehensive one encompassing missions ranging from small University Explorer (UNEX) class spacecraft to large observatories such as the Next Generation Space Telescope (NGST). Mission aspects such as flight hardware and software, ground station hardware and software, operations, RF communications, and security (physical and electronic) are all being addressed to ensure a complete end-to-end system solution. One of the current networking development efforts at GSFC is the SpaceLAN (Spacecraft Local Area Network) project, development of a space-qualifiable Ethernet network. To this end we have purchased an IEEE 802.3-compatible 10/100/1000 Media Access Control (MAC) layer Intellectual Property (IP) core and are designing a network node interface (NNI) and associated network components such as a switch. These systems will ultimately allow the replacement of the typical MIL-STD-1553/1773 and custom interfaces that inhabit most spacecraft. In this paper we will describe our current Ethernet NNI development along with a novel new space qualified physical layer that will be used in place of the standard interfaces. We will outline our plans for development of space qualified network components that will allow future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer. There will be a brief discussion of some issues surrounding system implications of a flight Ethernet. Finally, we will

  7. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs. This software (APRON is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  8. Hardware support for CSP on a Java chip multiprocessor

    DEFF Research Database (Denmark)

    Gruian, Flavius; Schoeberl, Martin

    2013-01-01

    Due to memory bandwidth limitations, chip multiprocessors (CMPs) adopting the convenient shared memory model for their main memory architecture scale poorly. On-chip core-to-core communication is a solution to this problem that can lead to further performance increases for a number of multithreaded applications. Programmatically, the Communicating Sequential Processes (CSP) paradigm provides a sound computational model for such an architecture with message-based communication. In this paper we explore hardware support for CSP in the context of an embedded Java CMP. The hardware support for CSP consists of on-chip communication channels, implemented by a ring-based network-on-chip (NoC), to reduce the memory bandwidth pressure on the shared memory. The presented solution is scalable and also specific to our limited resources and real-time predictability requirements. CMP architectures of three to eight processors were...

  9. Verification of OpenSSL version via hardware performance counters

    Science.gov (United States)

    Bruska, James; Blasingame, Zander; Liu, Chen

    2017-05-01

    Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of unknown downgrade attacks in real time. Our experimental results indicate this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
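
    A minimal sketch of the detection idea follows, using synthetic stand-ins for the hardware event counts and an off-the-shelf classifier; the paper's exact feature set, labels, and model are not reproduced here.

```python
# Feature vectors of hardware-event counts recorded during handshakes,
# labeled by protocol version, feed an ordinary supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns could be counts of events such as instructions, branches, and cache
# misses sampled over a handshake; here they are synthetic stand-ins.
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 2, size=1000)          # 0 = expected version, 1 = downgraded

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("accuracy on held-out runs:", clf.score(X_te, y_te))
```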

  10. Outline of a fast hardware implementation of Winograd's DFT algorithm

    Science.gov (United States)

    Zohar, S.

    1980-01-01

    The main characteristics of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are being operated on simultaneously, each batch undergoing one of 5 processing phases.

  11. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, to include hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  12. NASA Langley's AirSTAR Testbed: A Subscale Flight Test Capability for Flight Dynamics and Control System Experiments

    Science.gov (United States)

    Jordan, Thomas L.; Bailey, Roger M.

    2008-01-01

    As part of the Airborne Subscale Transport Aircraft Research (AirSTAR) project, NASA Langley Research Center (LaRC) has developed a subscaled flying testbed in order to conduct research experiments in support of the goals of NASA s Aviation Safety Program. This research capability consists of three distinct components. The first of these is the research aircraft, of which there are several in the AirSTAR stable. These aircraft range from a dynamically-scaled, twin turbine vehicle to a propeller driven, off-the-shelf airframe. Each of these airframes carves out its own niche in the research test program. All of the airplanes have sophisticated on-board data acquisition and actuation systems, recording, telemetering, processing, and/or receiving data from research control systems. The second piece of the testbed is the ground facilities, which encompass the hardware and software infrastructure necessary to provide comprehensive support services for conducting flight research using the subscale aircraft, including: subsystem development, integrated testing, remote piloting of the subscale aircraft, telemetry processing, experimental flight control law implementation and evaluation, flight simulation, data recording/archiving, and communications. The ground facilities are comprised of two major components: (1) The Base Research Station (BRS), a LaRC laboratory facility for system development, testing and data analysis, and (2) The Mobile Operations Station (MOS), a self-contained, motorized vehicle serving as a mobile research command/operations center, functionally equivalent to the BRS, capable of deployment to remote sites for supporting flight tests. The third piece of the testbed is the test facility itself. Research flights carried out by the AirSTAR team are conducted at NASA Wallops Flight Facility (WFF) on the Eastern Shore of Virginia. The UAV Island runway is a 50 x 1500 paved runway that lies within restricted airspace at Wallops Flight Facility. The

  13. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  14. Femoral neck fracture following hardware removal.

    Science.gov (United States)

    Shaer, James A; Hileman, Barbara M; Newcomer, Jill E; Hanes, Marina C

    2012-01-16

    It is uncommon for femoral neck fractures to occur after proximal femoral hardware removal; age, osteoporosis, and technical error are often noted as the causes of this type of fracture. However, excessive alcohol consumption and failure to comply with protected weight bearing for 6 weeks increase the risk of femoral neck fracture. This article describes a case of a 57-year-old man with a high-energy ipsilateral intertrochanteric hip fracture, comminuted distal third femoral shaft fracture, and displaced lateral tibial plateau fracture. Cephalomedullary fixation was used to fix the ipsilateral femur fractures after medical stabilization and evaluation of the patient. The patient healed clinically and radiographically at 6 months. Despite conservative treatment for painful proximal hardware, elective hip screw removal was performed 22.5 months after injury. Seven weeks later, he sustained a nontraumatic femoral neck fracture. In this case, it is unlikely that the femoral neck fracture occurred as a result of hardware removal. We assumed that, in addition to the patient's alcohol abuse and tobacco use, stress fractures may have contributed to the femoral neck fracture. We recommend using a shorter hip screw to minimize hardware prominence, or possibly off-label use of an injectable bone filler such as calcium phosphate cement. Copyright 2012, SLACK Incorporated.

  15. QCE : A Simulator for Quantum Computer Hardware

    NARCIS (Netherlands)

    Michielsen, Kristel; Raedt, Hans De

    2003-01-01

    The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms.

  16. The fast Amsterdam multiprocessor (FAMP) system hardware

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Kieft, G.; Kisielewski, B.; Wiggers, L.W.; Engster, C.; Koningsveld, L. van

    1981-01-01

    The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications. The system is based on the MC 68000 microprocessor from Motorola. Emphasis is placed on hardware aspects, in particular modularity, processor communication, and interfacing, whereas the system software and the applications will be described in separate articles. (orig.)

  17. Microprocessor Design Using Hardware Description Language

    Science.gov (United States)

    Mita, Rosario; Palumbo, Gaetano

    2008-01-01

    The following paper has been conceived to deal with the contents of some lectures aimed at enhancing courses on digital electronic, microelectronic or VLSI systems. Those lectures show how to use a hardware description language (HDL), such as the VHDL, to specify, design and verify a custom microprocessor. The general goal of this work is to teach…

  18. CAMAC high energy physics electronics hardware

    International Nuclear Information System (INIS)

    Kolpakov, I.F.

    1977-01-01

    CAMAC hardware for high energy physics large spectrometers and control systems is reviewed as is the development of CAMAC modules at the High Energy Laboratory, JINR (Dubna). The total number of crates used at the Laboratory is 179. The number of CAMAC modules of 120 different types exceeds 1700. The principles of organization and the structure of developed CAMAC systems are described. (author)

  19. Enabling Open Hardware through FOSS tools

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Software developers often take open file formats and tools for granted. When you publish code on github, you do not ask yourself if somebody will be able to open it and modify it. We need the same freedom in the open hardware world, to make it truly accessible for everyone.

  20. Hardware Acceleration of Sparse Cognitive Algorithms

    Science.gov (United States)

    2016-05-01

    It is clear that these emerging algorithms, which can support unsupervised or lightly supervised learning as well as incremental learning, map poorly... Subject terms: Cortical Algorithms; Machine Learning; Hardware; VLSI; ASIC.

  1. Optimizing Investment Strategies with the Reconfigurable Hardware Platform RIVYERA

    Directory of Open Access Journals (Sweden)

    Christoph Starke

    2012-01-01

    Full Text Available The hardware structure of a processing element used for optimization of an investment strategy for financial markets is presented. It is shown how this processing element can be multiply implemented on the massively parallel FPGA-machine RIVYERA. This leads to a speedup of a factor of about 17,000 in comparison to one single high-performance PC, while saving more than 99% of the consumed energy. Furthermore, it is shown for a special security and different time periods that the optimized investment strategy delivers an outperformance between 2 and 14 percent in relation to a buy and hold strategy.

  2. NASA-STD-6016 Standard Materials and Processes Requirements for Spacecraft

    Science.gov (United States)

    Hirsch, David B.

    2009-01-01

    The standards for materials and processes surrounding spacecraft are discussed. The presentation focused on minimum requirements for Materials and Processes (M&P) used in design, fabrication, and testing of flight components for NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. Included is information on flammability, offgassing, compatibility requirements, and processes; both metallic and non-metallic materials are mentioned.

  3. IN-SITU PROBING OF RADIATION-INDUCED PROCESSING OF ORGANICS IN ASTROPHYSICAL ICE ANALOGS-NOVEL LASER DESORPTION LASER IONIZATION TIME-OF-FLIGHT MASS SPECTROSCOPIC STUDIES

    Energy Technology Data Exchange (ETDEWEB)

    Gudipati, Murthy S.; Yang Rui, E-mail: gudipati@jpl.nasa.gov, E-mail: ryang73@ustc.edu [University of Maryland (United States)

    2012-09-01

    Understanding the evolution of organic molecules in ice grains in the interstellar medium (ISM) under cosmic rays, stellar radiation, and local electrons and ions is critical to our understanding of the connection between ISM and solar systems. Our study is aimed at reaching this goal of looking directly into radiation-induced processing in these ice grains. We developed a two-color laser-desorption laser-ionization time-of-flight mass spectroscopic method (2C-MALDI-TOF), similar to matrix-assisted laser desorption and ionization time-of-flight (MALDI-TOF) mass spectroscopy. Results presented here with polycyclic aromatic hydrocarbon (PAH) probe molecules embedded in water-ice at 5 K show for the first time that hydrogenation and oxygenation are the primary chemical reactions that occur in astrophysical ice analogs when subjected to Lyα radiation. We found that hydrogenation can occur over several unsaturated bonds and the product distribution corresponds to their stabilities. Multiple hydrogenation efficiency is found to be higher at higher temperatures (100 K) compared to 5 K, close to the interstellar ice temperatures. Hydroxylation is shown to have similar efficiencies at 5 K or 100 K, indicating that addition of O atoms or OH radicals to pre-ionized PAHs is a barrierless process. These studies, the first glimpses into interstellar ice chemistry through analog studies, show that once accreted onto ice grains PAHs lose their PAH spectroscopic signatures through radiation chemistry, which could be one of the reasons for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks.

  4. IN-SITU PROBING OF RADIATION-INDUCED PROCESSING OF ORGANICS IN ASTROPHYSICAL ICE ANALOGS—NOVEL LASER DESORPTION LASER IONIZATION TIME-OF-FLIGHT MASS SPECTROSCOPIC STUDIES

    International Nuclear Information System (INIS)

    Gudipati, Murthy S.; Yang Rui

    2012-01-01

    Understanding the evolution of organic molecules in ice grains in the interstellar medium (ISM) under cosmic rays, stellar radiation, and local electrons and ions is critical to our understanding of the connection between ISM and solar systems. Our study is aimed at reaching this goal of looking directly into radiation-induced processing in these ice grains. We developed a two-color laser-desorption laser-ionization time-of-flight mass spectroscopic method (2C-MALDI-TOF), similar to matrix-assisted laser desorption and ionization time-of-flight (MALDI-TOF) mass spectroscopy. Results presented here with polycyclic aromatic hydrocarbon (PAH) probe molecules embedded in water-ice at 5 K show for the first time that hydrogenation and oxygenation are the primary chemical reactions that occur in astrophysical ice analogs when subjected to Lyα radiation. We found that hydrogenation can occur over several unsaturated bonds and the product distribution corresponds to their stabilities. Multiple hydrogenation efficiency is found to be higher at higher temperatures (100 K) compared to 5 K, close to the interstellar ice temperatures. Hydroxylation is shown to have similar efficiencies at 5 K or 100 K, indicating that addition of O atoms or OH radicals to pre-ionized PAHs is a barrierless process. These studies, the first glimpses into interstellar ice chemistry through analog studies, show that once accreted onto ice grains PAHs lose their PAH spectroscopic signatures through radiation chemistry, which could be one of the reasons for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks.

  5. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.

  6. Space Flight Software Development Software for Intelligent System Health Management

    Science.gov (United States)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  7. Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

    OpenAIRE

    Ojani, Amin; Caglar, Ahmet

    2008-01-01

    A major bottleneck, for performance as well as power consumption, for graphics hardware in mobile devices is the amount of data that needs to be transferred to and from memory. In, for example, hardware-accelerated 3D graphics, a large part of the memory accesses are due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed using the RGB color format. For both 3D graphics rasterization and image composition, several pixels need to be read ...

  8. An Interview with Joe McMann: Lessons Learned from Fifty Years of Observing Hardware and Human Behavior

    Science.gov (United States)

    McMann, Joe

    2011-01-01

    Pica Kahn conducted "An Interview with Joe McMann: Lessons Learned in Human and Hardware Behavior" on August 16, 2011. With more than 40 years of experience in the aerospace industry, McMann has gained a wealth of knowledge. This presentation focused on lessons learned in human and hardware behavior. During his many years in the industry, McMann observed that the hardware development process was intertwined with human influences, which impacted the outcome of the product.

  9. Testing of hardware implementation of infrared image enhancing algorithm

    Science.gov (United States)

    Dulski, R.; Sosnowski, T.; Piątkowski, T.; Trzaskawka, P.; Kastek, M.; Kucharz, J.

    2012-10-01

    The interpretation of IR images depends on the radiative properties of the observed objects and the surrounding scenery. The skills and experience of the observer are also of great importance. One way to improve the effectiveness of observation is to use an image enhancement algorithm capable of improving image quality and, with it, the effectiveness of object detection. The paper presents results of testing the hardware implementation of an IR image enhancement algorithm based on histogram processing. The main issue in hardware implementation of complex image enhancement procedures is their high computational cost. As a result, implementation of complex algorithms using general-purpose processors and software usually does not bring satisfactory results. Because of the high efficiency requirements and the need for parallel operation, ALTERA's EP2C35F672 FPGA device was used. It provides sufficient processing speed combined with relatively low power consumption. A digital image processing and control module was designed and constructed around two main integrated circuits: an FPGA device and a microcontroller. The programmable FPGA device performs image data processing operations which require considerable computing power. It also generates the control signals for array readout, performs NUC correction and bad pixel mapping, generates the control signals for the display module, and finally executes complex image processing algorithms. The implemented adaptive algorithm is based on plateau histogram equalization. Tests were performed on real IR images of different types of objects registered in different spectral bands. The simulations and laboratory experiments proved the correct operation of the designed system in executing the sophisticated image enhancement.
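
    A plain-software reference of plateau histogram equalization, the adaptive method named above, is sketched below; the plateau value, input bit depth, and output levels are placeholders, and the FPGA pipeline itself is not modeled.

```python
# Plateau histogram equalization: clip the histogram at a plateau value before
# building the cumulative mapping, which limits over-enhancement of background.
import numpy as np

def plateau_equalize(ir_frame, plateau=500, out_levels=256):
    """ir_frame: 2-D array of raw detector counts (e.g. 14-bit)."""
    hist = np.bincount(ir_frame.ravel(), minlength=ir_frame.max() + 1)
    clipped = np.minimum(hist, plateau)              # plateau clipping
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint16)
    return lut[ir_frame]                             # map raw counts to display levels

frame = np.random.randint(0, 16384, (240, 320))
display = plateau_equalize(frame)
```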

  10. Rodent Research-1 (RR1) NASA Validation Flight: Mouse liver transcriptomic proteomic and epigenomic data

    Data.gov (United States)

    National Aeronautics and Space Administration — RR-1 is a validation flight to evaluate the hardware operational and science capabilities of the Rodent Research Project on the ISS. RNA DNA and protein were...

  11. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  12. Hardware Implementation of COTS Avionics System on Unmanned Aerial Vehicle Platforms

    Science.gov (United States)

    Yeh, Yoo-Hsiu; Kumar, Parth; Ishihara, Abraham; Ippolito, Corey

    2010-01-01

    Unmanned Aerial Vehicles (UAVs) can serve as low cost and low risk platforms for flight testing in Aeronautics research. The NASA Exploration Aerial Vehicle (EAV) and Experimental Sensor-Controlled Aerial Vehicle (X-SCAV) UAVs were developed in support of control systems research at NASA Ames Research Center. The avionics hardware for both systems has been redesigned and updated, and the structure of the EAV has been further strengthened. Preliminary tests show the avionics operate properly in the new configuration. A linear model for the EAV also was estimated from flight data, and was verified in simulation. These modifications and results prepare the EAV and X-SCAV to be used in a wide variety of flight research projects.

  13. Particle Transport Simulation on Heterogeneous Hardware

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    CPUs and GPGPUs. About the speaker Vladimir Koylazov is CTO and founder of Chaos Software and one of the original developers of the V-Ray raytracing software. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. He participated in the implementation of algorithms for accurate light simulations and support for different hardware platforms, including CPU and GPGPU, as well as distributed calculat...

  14. Hardware-Independent Proofs of Numerical Programs

    Science.gov (United States)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation, whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are proved entirely and automatically.
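
    To make the kind of guarantee described above concrete, the sketch below accumulates a first-order worst-case bound on the rounding error of a summation, assuming only IEEE-754 binary64 with round-to-nearest (unit roundoff 2^-53) and exact inputs. It illustrates the general idea of an environment-independent error bound; it is not the Frama-C approach itself.

```python
# First-order rounding-error bound for a running sum under IEEE-754 binary64
# with round-to-nearest (unit roundoff U = 2**-53). Inputs are treated as exact;
# this is only an illustration of environment-independent error bounding.
U = 2.0 ** -53

def sum_with_bound(xs):
    """Return the computed sum and a first-order bound on its accumulated rounding error."""
    s, err = 0.0, 0.0
    for x in xs:
        s = s + x
        err += U * abs(s)   # each addition contributes at most U * |rounded result|
    return s, err

if __name__ == "__main__":
    total, bound = sum_with_bound([0.1] * 10)
    print(total, "error bound:", bound)
```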

  15. Trends in computer hardware and software.

    Science.gov (United States)

    Frankenfeld, F M

    1993-04-01

    Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.

  16. A Hardware Lab Anywhere At Any Time

    Directory of Open Access Journals (Sweden)

    Tobias Schubert

    2004-12-01

    Full Text Available Scientific technical courses are an important component of any student's education. These courses are usually characterised by the fact that the students carry out experiments in special laboratories, which leads to extremely high costs and a reduction in the maximum number of possible participants. From this traditional point of view, it does not seem possible to realise the concepts of a Virtual University in the context of sophisticated technical courses, since the students must be "on the spot". In this paper we introduce the so-called Mobile Hardware Lab, which makes student participation possible at any time and from any place while still conveying a feeling of being present in a laboratory. This is accomplished with a special Learning Management System in combination with hardware components, corresponding to a fully equipped laboratory workstation, which are lent out to the students for the duration of the lab. The experiments are performed and solved at home, then handed in electronically; judging and marking are also performed electronically. Since 2003 the Mobile Hardware Lab has been offered in a completely web-based form.

  17. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering, but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computer systems. Books on software engineering typically portray software as if it exists in a vacuum, with no relationship to the wider system. This is wrong because a system is more than software: it comprises people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  18. Hardware and software constructs for a vibration analysis network

    International Nuclear Information System (INIS)

    Cook, S.A.; Crowe, R.D.; Toffer, H.

    1985-01-01

    Vibration level monitoring and analysis has been initiated at N Reactor, the dual purpose reactor operated at Hanford, Washington by UNC Nuclear Industries (UNC) for the Department of Energy (DOE). The machinery to be monitored was located in several buildings scattered over the plant site, necessitating an approach using satellite stations to collect, monitor and temporarily store data. The satellite stations are, in turn, linked to a centralized processing computer for further analysis. The advantages of a networked data analysis system are discussed in this paper along with the hardware and software required to implement such a system

  19. SYNTHESIS OF INFORMATION SYSTEM FOR SMART HOUSE HARDWARE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Vikentyeva Olga Leonidovna

    2017-10-01

    Full Text Available Subject: smart house maintenance requires taking into account a number of factors: resource saving, reduction of operational expenditures, safety enhancement, and the provision of comfortable working and leisure conditions. Automation of the corresponding engineering systems for illumination, climate control and security, as well as of communication systems and networks, via contemporary technologies (e.g., IoT - the Internet of Things) poses a significant challenge related to the storage and processing of an overwhelmingly massive volume of data whose utilization extent is extremely low nowadays. Since a building's lifespan is long and exceeds the lifespan of the codes and standards that take into account the requirements of safety, comfort, energy saving, etc., it is necessary to consider management aspects in the context of rational use of large data at the stage of information modeling. Research objectives: increase the efficiency of managing smart building hardware subsystems on the basis of a web-based information system that has a flexible multi-level architecture with several control loops and an adaptation model. Materials and methods: since a smart house belongs to man-machine systems, the cybernetic approach is considered the basic method for the design and research of the information management system. Instrumental research methods are represented by set-theoretical modelling, automata theory and architectural principles of organization of information management systems. Results: a flexible architecture of an information system for management of smart house hardware subsystems has been synthesized. This architecture encompasses several levels: client level, application level and data level, as well as three layers: presentation layer, actuating device layer and analytics layer. The problem of the growing volume of information processed by the real-time message controller is addressed by employing sensors and actuating mechanisms with configurable

  20. Development and flight test of a helicopter, X-band, portable precision landing system concept

    Science.gov (United States)

    Davis, T. J.; Clary, G. R.; Chisholm, J. P.; Macdonald, S. L.

    1985-01-01

    A beacon landing system (BLS) is being developed and flight tested as a part of NASA's Rotorcraft All-Weather Operations Research Program. The system is based on state-of-the-art X-band radar technology and digital processing techniques. The BLS airborne hardware consists of an X-band receiver and a small microprocessor, installed in conjunction with the aircraft instrument landing system (ILS) receiver. The microprocessor analyzes the X-band BLS pulses and outputs ILS-compatible localizer and glide slope signals. Range information is obtained using an on-board weather/mapping radar in conjunction with the BLS. The ground station is an inexpensive, portable unit; it weighs less than 70 lb and can be quickly deployed at a landing site. Results from the flight-test program show that the BLS has significant potential for providing rotorcraft with a low-cost, precision instrument approach capability in remote areas.

  1. Hardware Efficient Architecture with Variable Block Size for Motion Estimation

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2016-01-01

    Full Text Available Video coding standards such as MPEG-x and H.26x incorporate variable block size motion estimation (VBSME), which is highly time consuming and extremely complex from a hardware implementation perspective due to the huge amount of computation. In this paper, we discuss basic aspects of video coding and study and compare existing architectures for VBSME. Architectures with different pixel scanning patterns give a variety of performance results for motion vector (MV) generation, showing a tradeoff between macroblocks processed per second and the resources required for computation. The aim of this paper is to design a VBSME architecture which uses optimal resources to minimize chip area and offers an adequate frame processing rate for real-time implementation. The speed of computation is improved by accessing the 16 pixels of a 4 × 4 base macroblock in a single clock cycle using a z scanning pattern. The widely adopted cost function for hardware implementation, the sum of absolute differences (SAD), is used in a VBSME architecture with a multiplexer-based absolute difference calculator and partial summation term reduction (PSTR) based multioperand adders. Device utilization of the proposed implementation is only 22k gates, and it can process 179 HD (1920 × 1080) resolution frames per second in the best case and 47 HD resolution frames per second in the worst case. Due to this high throughput, the design is well suited for real-time implementation.
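
    The cost function named above, the sum of absolute differences (SAD) over a 4 × 4 base block, is simple to state in software. The sketch below shows the computation that such hardware parallelizes; the block size, search range and test data are made-up values, and no claim is made about the record's scanning order or adder structure.

```python
# Minimal sketch of the SAD cost used in variable block size motion estimation:
# SADs of 4x4 base blocks can later be summed to form the costs of larger
# partitions (8x4, 8x8, 16x16, ...). Search range and test data are assumptions.
import numpy as np

def sad_4x4(cur: np.ndarray, ref: np.ndarray, bx: int, by: int, mvx: int, mvy: int) -> int:
    """SAD between the 4x4 current block at (by, bx) and the reference block displaced by (mvy, mvx)."""
    c = cur[by:by + 4, bx:bx + 4].astype(np.int32)
    r = ref[by + mvy:by + mvy + 4, bx + mvx:bx + mvx + 4].astype(np.int32)
    return int(np.abs(c - r).sum())

def best_motion_vector(cur, ref, bx, by, search=4):
    """Exhaustive search over a small window; returns (best_sad, (mvy, mvx))."""
    best = (float("inf"), (0, 0))
    for mvy in range(-search, search + 1):
        for mvx in range(-search, search + 1):
            cost = sad_4x4(cur, ref, bx, by, mvx, mvy)
            best = min(best, (cost, (mvy, mvx)))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    cur = np.roll(ref, shift=(1, 2), axis=(0, 1))   # current frame = reference shifted by (1, 2)
    print(best_motion_vector(cur, ref, bx=8, by=8)) # expected motion vector: (-1, -2)
```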

  2. Analog Exercise Hardware to Implement a High Intensity Exercise Program During Bed Rest

    Science.gov (United States)

    Loerch, Linda; Newby, Nate; Ploutz-Snyder, Lori

    2012-01-01

    Background: In order to evaluate novel countermeasure protocols in a space flight analog prior to validation on the International Space Station (ISS), NASA's Human Research Program (HRP) is sponsoring a multi-investigator bedrest campaign that utilizes a combination of commercial and custom-made exercise training hardware to conduct daily resistive and aerobic exercise protocols. This paper will describe these pieces of hardware and how they are used to support current bedrest studies at NASA's Flight Analog Research Unit in Galveston, TX. Discussion: To implement candidate exercise countermeasure studies during extended bed rest studies the following analog hardware are being utilized: Stand alone Zero-Gravity Locomotion Simulator (sZLS) -- a custom built device by NASA, the sZLS allows bedrest subjects to remain supine as they run on a vertically-oriented treadmill (0-15 miles/hour). The treadmill includes a pneumatic subject loading device to provide variable body loading (0-100%) and a harness to keep the subject in contact with the motorized treadmill to provide a ground reaction force at their feet that is quantified by a Kistler Force Plate. Supine Cycle Ergometer -- a commercially available supine cycle ergometer (Lode, Groningen, Netherlands) is used for all cycle ergometer sessions. The ergometer has adjustable shoulder supports and handgrips to help stabilize the subject during exercise. Horizontal Squat Device (HSD) -- a custom built device by Quantum Fitness Corp (Stafford, TX), the HSD allows for squat exercises to be performed while lying in a supine position. The HSD can provide 0 to 600 pounds of force in selectable 5 lb increments, and allows hip translation in both the vertical and horizontal planes. Prone Leg Curl -- a commercially available prone leg curl machine (Cybex International Inc., Medway, MA) is used to complete leg curl exercises. Horizontal Leg Press -- a commercially available horizontal leg press (Quantum Fitness Corporation) is

  3. A Framework for Hardware-Accelerated Services Using Partially Reconfigurable SoCs

    Directory of Open Access Journals (Sweden)

    MACHIDON, O. M.

    2016-05-01

    Full Text Available The current trend towards "Everything as a Service" fosters a new approach on reconfigurable hardware resources. This innovative, service-oriented approach has the potential of bringing a series of benefits for both reconfigurable and distributed computing fields by favoring a hardware-based acceleration of web services and increasing service performance. This paper proposes a framework for accelerating web services by offloading the compute-intensive tasks to reconfigurable System-on-Chip (SoC) devices, as integrated IP (Intellectual Property) cores. The framework provides a scalable, dynamic management of the tasks and hardware processing cores, based on dynamic partial reconfiguration of the SoC. We have enhanced security of the entire system by making use of the built-in detection features of the hardware device and also by implementing active counter-measures that protect the sensitive data.

  4. GPM GROUND VALIDATION FLIGHT SUMMARIES AND FLIGHT TRACKS IMAGERY MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Flight Summaries and Flight Tracks Imagery dataset for MC3E provides processed summaries from University of North Dakota including sonde maps, a radar animation,...

  5. Hardware implementation of a GFSR pseudo-random number generator

    Science.gov (United States)

    Aiello, G. R.; Budinich, M.; Milotti, E.

    1989-12-01

    We describe the hardware implementation of a pseudo-random number generator of the "Generalized Feedback Shift Register" (GFSR) type. After brief theoretical considerations we describe two versions of the hardware, the tests done and the performances achieved.
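
    For readers unfamiliar with GFSR generators, the following software sketch shows the underlying recurrence x_n = x_{n-p} XOR x_{n-q}. The (p, q) = (250, 103) trinomial and the 32-bit word width are assumptions chosen for illustration (the classic "R250" parameters); the abstract does not give the parameters of the hardware described.

```python
# Minimal software sketch of a Generalized Feedback Shift Register (GFSR)
# generator, x_n = x_{n-p} XOR x_{n-q}. Polynomial and word width are assumed.
import random

P, Q, BITS = 250, 103, 32

class GFSR:
    def __init__(self, seed: int = 1):
        rng = random.Random(seed)
        # The register should be seeded with P linearly independent words;
        # random seeding is adequate for an illustration.
        self.state = [rng.getrandbits(BITS) for _ in range(P)]
        self.idx = 0

    def next(self) -> int:
        new = self.state[self.idx] ^ self.state[(self.idx + P - Q) % P]
        self.state[self.idx] = new
        self.idx = (self.idx + 1) % P
        return new

if __name__ == "__main__":
    g = GFSR(seed=42)
    print([g.next() for _ in range(5)])
```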

  6. Automated Flight Dynamics Product Generation for the EOS AM-1 Spacecraft

    Science.gov (United States)

    Matusow, Carla

    1999-01-01

    As part of NASA's Earth Science Enterprise, the Earth Observing System (EOS) AM-1 spacecraft is designed to monitor long-term, global, environmental changes. Because of the complexity of the AM-1 spacecraft, the mission operations center requires more than 80 distinct flight dynamics products (reports). To create these products, the AM-1 Flight Dynamics Team (FDT) will use a combination of modified commercial software packages (e.g., Analytical Graphics' Satellite Tool Kit) and NASA-developed software applications. While providing the most cost-effective solution to meeting the mission requirements, the integration of these software applications raises several operational concerns: (1) Routine product generation requires knowledge of multiple applications executing on a variety of hardware platforms. (2) Generating products is a highly interactive process requiring a user to interact with each application multiple times to generate each product. (3) Routine product generation requires several hours to complete. (4) User interaction with each application introduces the potential for errors, since users are required to manually enter filenames and input parameters as well as run applications in the correct sequence. Generating products requires some level of flight dynamics expertise to determine the appropriate inputs and sequencing. To address these issues, the FDT developed an automation software tool called AutoProducts, which runs on a single hardware platform and provides all necessary coordination and communication among the various flight dynamics software applications. AutoProducts autonomously retrieves necessary files, sequences and executes applications with correct input parameters, and delivers the final flight dynamics products to the appropriate customers. Although AutoProducts will normally generate pre-programmed sets of routine products, its graphical interface allows for easy configuration of customized and one-of-a-kind products. Additionally, Auto

  7. An Environment for Hardware-in-the-Loop Formation Navigation and Control

    Science.gov (United States)

    Burns, Rich; Naasz, Bo; Gaylor, Dave; Higinbotham, John

    2004-01-01

    Recent interest in formation flying satellite systems has spurred a considerable amount of research in the relative navigation and control of satellites. Development in this area has included new estimation and control algorithms as well as sensor and actuator development specifically geared toward the relative control problem. This paper describes a simulation facility, the Formation Flying Test Bed (FFTB) at NASA Goddard Space Flight Center, which allows engineers to test new algorithms for the formation flying problem with relevant GN&C hardware in a closed loop simulation. The FFTB currently supports the inclusion of GPS receiver hardware in the simulation loop. Support for satellite crosslink ranging technology is at a prototype stage. This closed-loop, hardware inclusive simulation capability permits testing of navigation and control software in the presence of the actual hardware with which the algorithms must interact. This capability provides the navigation or control developer with a perspective on how the algorithms perform as part of the closed-loop system. In this paper, the overall design and evolution of the FFTB are presented. Each component of the FFTB is then described. Interfaces between the components of the FFTB are shown and the interfaces to and between navigation and control software are described. Finally, an example of closed-loop formation control with GPS receivers in the loop is presented.

  8. An Environment for Hardware-in-the-Loop Formation Navigation and Control Simulation

    Science.gov (United States)

    Burns, Rich

    2004-01-01

    Recent interest in formation flying satellite systems has spurred a considerable amount of research in the relative navigation and control of satellites. Development in this area has included new estimation and control algorithms as well as sensor and actuator development specifically geared toward the relative control problem. This paper describes a simulation facility, the Formation Flying Testbed (FFTB) at NASA's Goddard Space Flight Center, which allows engineers to test new algorithms for the formation flying problem with relevant GN&C hardware in a closed loop simulation. The FFTB currently supports the injection of GPS receiver hardware into the simulation loop, and support for satellite crosslink ranging technology is at a prototype stage. This closed-loop, hardware inclusive simulation capability permits testing of navigation and control software in the presence of the actual hardware with which the algorithms must interact. This capability provides the navigation or control developer with a perspective on how the algorithms perform as part of the closed-loop system. In this paper, the overall design and evolution of the FFTB are presented. Each component of the FFTB is then described in detail. Interfaces between the components of the FFTB are shown and the interfaces to and between navigation and control software are described in detail. Finally, an example of closed-loop formation control with GPS receivers in the loop is presented and results are analyzed.

  9. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  10. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Directory of Open Access Journals (Sweden)

    Carvalho Paulo F.

    2018-01-01

    Full Text Available Controlled nuclear fusion aims to obtain energy by particles collision confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside of plasma that is kept at high temperatures (millions of Celsius degrees). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments processes. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amount of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experience itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, decisions taken to acknowledge and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency

  11. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    Science.gov (United States)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy by particles collision confined inside a nuclear reactor (Tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements inside of plasma that is kept at high temperatures (millions of Celsius degrees). Due to high temperatures and magnetic confinement, plasma is exposed to several sources of instabilities which require a set of procedures by the control and data acquisition systems throughout fusion experiments processes. Control and data acquisition systems often used in nuclear fusion experiments are based on the Advanced Telecommunication Computer Architecture (AdvancedTCA®) standard introduced by the Peripheral Component Interconnect Industrial Manufacturers Group (PICMG®), to meet the demands of telecommunications that require large amount of data (TB) transportation at high transfer rates (Gb/s), to ensure high availability including features such as reliability, serviceability and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process it, store for later analysis, make critical decisions in real time and provide status reports either from the experience itself or the electronic instrumentation involved. Moreover, systems should also ensure the correct handling of detected anomalies and identified faults, notify the system operator of occurred events, decisions taken to acknowledge and implemented changes. Therefore, for everything to work in compliance with specifications it is required that the instrumentation includes hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases with hardware system components in use and maintenance, store collected information, update firmware and installed software modules, configure and handle alarms to detect possible system failures and prevent emergency scenarios

  12. Space station common module network topology and hardware development

    Science.gov (United States)

    Anderson, P.; Braunagel, L.; Chwirka, S.; Fishman, M.; Freeman, K.; Eason, D.; Landis, D.; Lech, L.; Martin, J.; Mccorkle, J.

    1990-01-01

    Conceptual space station common module power management and distribution (SSM/PMAD) network layouts and detailed network evaluations were developed. Individual pieces of hardware to be developed for the SSM/PMAD test bed were identified. A technology assessment was developed to identify pieces of equipment requiring development effort. Equipment lists were developed from the previously selected network schematics. Additionally, functional requirements for the network equipment as well as other requirements which affected the suitability of specific items for use on the Space Station Program were identified. Assembly requirements were derived based on the SSM/PMAD developed requirements and on the selected SSM/PMAD network concepts. Basic requirements and simplified design block diagrams are included. DC remote power controllers were successfully integrated into the DC Marshall Space Flight Center breadboard. Two DC remote power controller (RPC) boards experienced mechanical failure of UES 706 stud-mounted diodes during mechanical installation of the boards into the system. These broken diodes caused input to output shorting of the RPC's. The UES 706 diodes were replaced on these RPC's which eliminated the problem. The DC RPC's as existing in the present breadboard configuration do not provide ground fault protection because the RPC was designed to only switch the hot side current. If ground fault protection were to be implemented, it would be necessary to design the system so the RPC switched both the hot and the return sides of power.

  13. Methodology for Assessing Reusability of Spaceflight Hardware

    Science.gov (United States)

    Childress-Thompson, Rhonda; Thomas, L. Dale; Farrington, Phillip

    2017-01-01

    In 2011 the Space Shuttle, the only Reusable Launch Vehicle (RLV) in the world, returned to earth for the final time. Upon retirement of the Space Shuttle, the United States (U.S.) no longer possessed a reusable vehicle or the capability to send American astronauts to space. With the National Aeronautics and Space Administration (NASA) out of the RLV business and now only pursuing Expendable Launch Vehicles (ELV), not only did companies within the U.S. start to actively pursue the development of either RLVs or reusable components, but entities around the world began to venture into the reusable market. For example, SpaceX and Blue Origin are developing reusable vehicles and engines. The Indian Space Research Organization is developing a reusable space plane and Airbus is exploring the possibility of reusing its first stage engines and avionics housed in the flyback propulsion unit referred to as the Advanced Expendable Launcher with Innovative engine Economy (Adeline). Even United Launch Alliance (ULA) has announced plans for eventually replacing the Atlas and Delta expendable rockets with a family of RLVs called Vulcan. Reuse can be categorized as either fully reusable, the situation in which the entire vehicle is recovered, or partially reusable such as the National Space Transportation System (NSTS) where only the Space Shuttle, Space Shuttle Main Engines (SSME), and Solid Rocket Boosters (SRB) are reused. With this influx of renewed interest in reusability for space applications, it is imperative that a systematic approach be developed for assessing the reusability of spaceflight hardware. The partially reusable NSTS offered many opportunities to glean lessons learned; however, when it came to efficient operability for reuse the Space Shuttle and its associated hardware fell short primarily because of its two to four-month turnaround time. Although there have been several attempts at designing RLVs in the past with the X-33, Venture Star and Delta Clipper

  14. Open Hardware for CERN's accelerator control systems

    International Nuclear Information System (INIS)

    Bij, E van der; Serrano, J; Wlostowski, T; Cattin, M; Gousiou, E; Sanchez, P Alvarez; Boccardi, A; Voumard, N; Penacoba, G

    2012-01-01

    The accelerator control systems at CERN will be upgraded and many electronics modules such as analog and digital I/O, level converters and repeaters, serial links and timing modules are being redesigned. The new developments are based on the FPGA Mezzanine Card, PCI Express and VME64x standards while the Wishbone specification is used as a system on a chip bus. To attract partners, the projects are developed in an 'Open' fashion. Within this Open Hardware project new ways of working with industry are being evaluated and it has been proven that industry can be involved at all stages, from design to production and support.

  15. Management of cladding hulls and fuel hardware

    International Nuclear Information System (INIS)

    1985-01-01

    The reprocessing of spent fuel from power reactors based on chop-leach technology produces a solid waste product of cladding hulls and other metallic residues. This report describes the current situation in the management of fuel cladding hulls and hardware. Information is presented on the material composition of such waste together with the heating effects due to neutron-induced activation products and fuel contamination. As no country has established a final disposal route and the corresponding repository, this report also discusses possible disposal routes and various disposal options under consideration at present

  16. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate Muon tracks in the drift tubes in real time, improving significantly the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.
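
    The Legendre transform segment finder mentioned above is, in essence, a Hough-style vote over line parameters: each drift-tube hit (wire position plus drift radius) is compatible with the family of tangent lines d = x·cosθ + y·sinθ ± r. The sketch below illustrates only that principle in software; the binning, units, and peak finding of the actual FPGA design are assumptions.

```python
# Software sketch of Legendre-transform segment finding for drift tubes: each
# hit votes along two curves in (theta, d) space and a segment appears as a peak.
# Illustrative only; binning and the toy hit pattern are made-up values.
import numpy as np

def legendre_segment(hits, n_theta=180, n_d=200, d_max=100.0):
    """hits: iterable of (x, y, drift_radius). Returns (theta, d) of the most-voted bin."""
    acc = np.zeros((n_theta, n_d), dtype=np.int32)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y, r in hits:
        for it, th in enumerate(thetas):
            base = x * np.cos(th) + y * np.sin(th)
            for d in (base + r, base - r):          # both tangent lines of the drift circle
                idx = int((d + d_max) / (2 * d_max) * n_d)
                if 0 <= idx < n_d:
                    acc[it, idx] += 1
    it, idx = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[it], (idx + 0.5) / n_d * 2 * d_max - d_max

if __name__ == "__main__":
    # Tube centres along a straight line y = 0.5*x + 10, all with drift radius 1.5.
    hits = [(x, 0.5 * x + 10.0, 1.5) for x in np.arange(0.0, 50.0, 5.0)]
    print(legendre_segment(hits))
```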

  17. Development of Hardware Dual Modality Tomography System

    Directory of Open Access Journals (Sweden)

    R. M. Zain

    2009-06-01

    Full Text Available The paper describes the hardware development and performance of the Dual Modality Tomography (DMT) system. DMT consists of optical and capacitance sensors. The optical sensors consist of 16 LEDs and 16 photodiodes. The Electrical Capacitance Tomography (ECT) electrode design uses eight electrode plates as the detecting sensor. The digital timing and control unit has been developed to control the light projection of the optical emitters, switch the capacitance electrodes, and synchronize the operation of data acquisition. As a result, the developed system is able to deliver a maximum of 529 data sets per second from the signal conditioning circuit to the computer.

  18. Hardware-efficient autonomous quantum memory protection.

    Science.gov (United States)

    Leghtas, Zaki; Kirchmair, Gerhard; Vlastakis, Brian; Schoelkopf, Robert J; Devoret, Michel H; Mirrahimi, Mazyar

    2013-09-20

    We propose to encode a quantum bit of information in a superposition of coherent states of an oscillator, with four different phases. Our encoding in a single cavity mode, together with a protection protocol, significantly reduces the error rate due to photon loss. This protection is ensured by an efficient quantum error correction scheme employing the nonlinearity provided by a single physical qubit coupled to the cavity. We describe in detail how to implement these operations in a circuit quantum electrodynamics system. This proposal directly addresses the task of building a hardware-efficient quantum memory and can lead to important shortcuts in quantum computing architectures.
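
    For reference, one common way of writing this kind of encoding (a qubit stored in superpositions of coherent states with four phases) uses even cat states; the notation below follows a standard textbook convention and is not necessarily the exact notation of the paper:

```latex
% One common convention for a cat-state encoding with four coherent-state phases
% (notation assumed): the logical basis is built from even cat states of
% amplitudes alpha and i*alpha, and a single photon loss maps even cats to odd
% cats, which a parity measurement can flag without reading out the logical state.
\[
  |C^{\pm}_{\beta}\rangle \propto |\beta\rangle \pm |{-\beta}\rangle , \qquad
  |\psi_L\rangle = c_0\,|C^{+}_{\alpha}\rangle + c_1\,|C^{+}_{i\alpha}\rangle ,
\]
\[
  \hat{a}\,|C^{+}_{\alpha}\rangle \propto |C^{-}_{\alpha}\rangle , \qquad
  \hat{a}\,|C^{+}_{i\alpha}\rangle \propto |C^{-}_{i\alpha}\rangle .
\]
```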

  19. A Hardware Track Trigger (FTK) for the ATLAS Trigger

    CERN Document Server

    Zhang, J; The ATLAS collaboration

    2014-01-01

    The design and performance studies of the ATLAS hardware Fast TracKer (FTK) are presented. The existing trigger system of the ATLAS experiment is deployed to reduce the event rate from the bunch crossing rate of 40 MHz to < 1 kHz for permanent storage at the LHC design luminosity of 10^34 cm^-2 s^-1. The LHC has performed exceptionally well, routinely exceeding the design luminosity, and from 2015 is due to operate with still higher luminosities. This will place a significant load on the High Level Trigger (HLT) system, both due to the need for more sophisticated algorithms to reject background and from the larger data volumes that will need to be processed. The Fast TracKer is a custom electronics system that will operate at the full Level-1 accepted rate of 100 kHz and provide high quality tracks at the beginning of processing in the HLT. This will be performed by track reconstruction in hardware with massive parallelism, using associative memories (AM) and FPGAs. The availability of the full...

  20. The FTK: A Hardware Track Finder for the ATLAS Trigger

    CERN Document Server

    Alison, J; Anderson, J; Andreani, A; Andreazza, A; Annovi, A; Antonelli, M; Atkinson, M; Auerbach, B; Baines, J; Barberio, E; Beccherle, R; Beretta, M; Biesuz, N V; Blair, R; Blazey, G; Bogdan, M; Boveia, A; Britzger, D; Bryant, P; Burghgrave, B; Calderini, G; Cavaliere, V; Cavasinni, V; Chakraborty, D; Chang, P; Cheng, Y; Cipriani, R; Citraro, S; Citterio, M; Crescioli, F; Dell'Orso, M; Donati, S; Dondero, P; Drake, G; Gadomski, S; Gatta, M; Gentsos, C; Giannetti, P; Giulini, M; Gkaitatzis, S; Howarth, J W; Iizawa, T; Kapliy, A; Kasten, M; Kim, Y K; Kimura, N; Klimkovich, T; Kordas, K; Korikawa, T; Krizka, K; Kubota, T; Lanza, A; Lasagni, F; Liberali, V; Li, H L; Love, J; Luciano, P; Luongo, C; Magalotti, D; Melachrinos, C; Meroni, C; Mitani, T; Negri, A; Neroutsos, P; Neubauer, M; Nikolaidis, S; Okumura, Y; Pandini, C; Penning, B; Petridou, C; Piendibene, M; Proudfoot, J; Rados, P; Roda, C; Rossi, E; Sakurai, Y; Sampsonidis, D; Sampsonidou, D; Schmitt, S; Schoening, A; Shochet, M; Shojaii, S; Soltveit, H; Sotiropoulou, C L; Stabile, A; Tang, F; Testa, M; Tompkins, L; Vercesi, V; Villa, M; Volpi, G; Webster, J; Wu, X; Yorita, K; Yurkewicz, A; Zeng, J C; Zhang, J

    2014-01-01

    The ATLAS experiment trigger system is designed to reduce the event rate, at the LHC design luminosity of 10^34 cm^-2 s^-1, from the nominal bunch crossing rate of 40 MHz to less than 1 kHz for permanent storage. During Run 1, the LHC has performed exceptionally well, routinely exceeding the design luminosity. From 2015 the LHC is due to operate with higher still luminosities. This will place a significant load on the High Level Trigger system, both due to the need for more sophisticated algorithms to reject background, and from the larger data volumes that will need to be processed. The Fast TracKer is a hardware upgrade for Run 2, consisting of a custom electronics system that will operate at the full rate for Level-1 accepted events of 100 kHz and provide high quality tracks at the beginning of processing in the High Level Trigger. This will perform track reconstruction using hardware with massive parallelism using associative memories and FPGAs. The availability of the full tracking information will enable r...

  1. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors,...

  2. Visual basic application in computer hardware control and data ...

    African Journals Online (AJOL)

    ... hardware device control and data acquisition is experimented using Visual Basic and the Speech Application Programming Interface (SAPI) Software Development Kit. To control hardware using Visual Basic, all hardware requests were designed to go through Windows via the printer parallel ports which is accessed and ...

  3. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    with a hardware area constraint and the problem of minimizing hardware area with a system execution time constraint. The target architecture consists of a single microprocessor and a single hardware chip (ASIC, FPGA, etc.) which are connected by a communication channel. The algorithm incorporates a realistic...
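
    The core of this kind of partitioning problem (minimizing execution time subject to a hardware area budget) can be written as a small knapsack-style dynamic program. The sketch below illustrates only that core and omits the communication modelling that PACE includes; all numbers in it are made up.

```python
# Knapsack-style dynamic program for hardware/software partitioning: each block
# runs either in software (no area cost) or in hardware (uses area, runs faster),
# and total execution time is minimized under an area budget. Illustrative only;
# it does not model inter-block communication as the PACE algorithm does.
def partition(blocks, area_budget):
    """blocks: list of (sw_time, hw_time, hw_area). Returns (best_time, mapping)."""
    INF = float("inf")
    # best[a] = (minimal total time using exactly a area units, chosen mapping)
    best = [(0.0, [])] + [(INF, None)] * area_budget
    for sw_t, hw_t, area in blocks:
        new = [(INF, None)] * (area_budget + 1)
        for a in range(area_budget + 1):
            t, m = best[a]
            if m is None:
                continue
            if t + sw_t < new[a][0]:                 # option 1: run this block in software
                new[a] = (t + sw_t, m + ["SW"])
            if a + area <= area_budget and t + hw_t < new[a + area][0]:
                new[a + area] = (t + hw_t, m + ["HW"])   # option 2: run it in hardware
        best = new
    return min(best, key=lambda entry: entry[0])

if __name__ == "__main__":
    blocks = [(10.0, 2.0, 3), (8.0, 1.0, 4), (5.0, 4.0, 2)]   # (sw_time, hw_time, hw_area)
    print(partition(blocks, area_budget=5))
```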

  4. Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices

    Directory of Open Access Journals (Sweden)

    Ikbel Belaid

    2011-01-01

    Full Text Available Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.

  5. Generic Health Management: A System Engineering Process Handbook Overview and Process

    Science.gov (United States)

    Wilson, Moses Lee; Spruill, Jim; Hong, Yin Paw

    1995-01-01

    Health Management, a System Engineering Process, is one of the processes, techniques, and technologies used to define, design, analyze, build, verify, and operate a system from the viewpoint of preventing, or minimizing, the effects of failure or degradation. It supports all ground and flight elements during manufacturing, refurbishment, integration, and operation through the combined use of hardware, software, and personnel. This document integrates the Health Management Process (six phases) into five phases in such a manner that it is never a stand-alone task or effort that separately defines independent work functions.

  6. Analog Exercise Hardware to Implement a High Intensity Exercise Program During Bed Rest

    Science.gov (United States)

    Loerch, Linda; Newby, Nate; Sinka, Joe; Ploutz-Snyder, Lori

    2013-02-01

    To evaluate novel countermeasure protocols in a spaceflight analog setting before validation on the International Space Station, NASA’s Human Research Program is sponsoring a multi-investigator bed rest campaign that uses a combination of commercial and custom-made exercise training hardware to conduct daily resistance and aerobic exercise protocols. These devices include the stand alone zero-gravity locomotion simulator, horizontal squat device, Lode commercial supine cycle ergometer, Cybex commercial prone leg curl machine, and Quantum Fitness commercial horizontal leg press. This paper will describe these pieces of hardware that are used to support current bed rest studies at NASA’s Flight Analog Research Unit in Galveston, Texas, USA.

  7. Design and Hardware Verification of Canard Based Sounding Rocket Attitude Controller Using Adaptive Filter

    Science.gov (United States)

    Sawai, Shujiro; Matsuda, Seiji

    A canard-based controller using an adaptive notch filter is proposed to control the attitude of launch vehicles, including ISAS's sounding rocket `S-520'. As the characteristics of launch vehicles are time variant in nature, a conventional time-invariant controller is not suitable for this purpose. Here, an adaptive notch filter is proposed to handle this time-variant nature. The adaptive filter acts to null out the structural bending mode, which often causes instability of the attitude controller. The proposed adaptation law requires only a limited computational cost, which makes it easy to install in a real flight system. The hardware module that controls the attitude of the sounding rocket `S-520' was designed and verified not only by numerical simulations but also by hardware tests.
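
    As an illustration of what an adaptive notch of this kind does (tracking the bending-mode frequency on line and re-tuning a second-order notch to it), the sketch below uses one simple adaptation scheme: LMS estimation of cos(ω) from the sinusoid identity x[n] + x[n-2] = 2·cos(ω)·x[n-1]. This is not the record's own adaptation law, and the gains and test signal are made-up values.

```python
# Adaptive notch filter sketch: estimate the bending-mode frequency on line and
# keep a second-order notch tuned to it. Adaptation scheme, gains and test
# signal are illustrative assumptions, not the paper's design.
import math

def adaptive_notch(signal, fs, mu=5e-3, rho=0.95):
    c = 0.0                      # running estimate of cos(omega)
    x1 = x2 = y1 = y2 = 0.0      # filter delay line
    out = []
    for x in signal:
        # LMS update of the frequency estimate from x[n] + x[n-2] = 2*c*x[n-1].
        err = (x + x2) - 2.0 * c * x1
        c = max(-1.0, min(1.0, c + mu * x1 * err))
        # Second-order notch tuned to the current estimate (zeros on the unit circle).
        a = -2.0 * c
        y = x + a * x1 + x2 - rho * a * y1 - rho * rho * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out, math.acos(c) * fs / (2.0 * math.pi)

if __name__ == "__main__":
    fs = 200.0
    bending = [math.sin(2.0 * math.pi * 12.0 * n / fs) for n in range(4000)]  # 12 Hz mode
    filtered, f_hat = adaptive_notch(bending, fs)
    print("estimated mode frequency (Hz):", round(f_hat, 2))
    print("peak residual over the last second:", max(abs(v) for v in filtered[-int(fs):]))
```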

  8. On-board fault diagnostics for fly-by-light flight control systems using neural network flight processors

    Science.gov (United States)

    Urnes, James M., Sr.; Cushing, John; Bond, William E.; Nunes, Steve

    1996-10-01

    Fly-by-Light control systems offer higher performance for fighter and transport aircraft, with efficient fiber optic data transmission, electric control surface actuation, and multi-channel high capacity centralized processing combining to provide maximum aircraft flight control system handling qualities and safety. The key to efficient support for these vehicles is timely and accurate fault diagnostics of all control system components. These diagnostic tests are best conducted during flight when all facts relating to the failure are present. The resulting data can be used by the ground crew for efficient repair and turnaround of the aircraft, saving time and money in support costs. These difficult to diagnose (Cannot Duplicate) fault indications average 40 - 50% of maintenance activities on today's fighter and transport aircraft, adding significantly to fleet support cost. Fiber optic data transmission can support a wealth of data for fault monitoring; the most efficient method of fault diagnostics is accurate modeling of the component response under normal and failed conditions for use in comparison with the actual component flight data. Neural Network hardware processors offer an efficient and cost-effective method to install fault diagnostics in flight systems, permitting on-board diagnostic modeling of very complex subsystems. Task 2C of the ARPA FLASH program is a design demonstration of this diagnostics approach, using the very high speed computation of the Adaptive Solutions Neural Network processor to monitor an advanced Electrohydrostatic control surface actuator linked through a AS-1773A fiber optic bus. This paper describes the design approach and projected performance of this on-line diagnostics system.

  9. Comparison of two algorithmic data processing strategies for metabolic fingerprinting by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry.

    Science.gov (United States)

    Almstetter, Martin F; Appel, Inka J; Dettmer, Katja; Gruber, Michael A; Oefner, Peter J

    2011-09-28

    The alignment algorithm Statistical Compare (SC) developed by LECO Corporation for the processing of comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS) data was validated and compared to the in-house developed retention time correction and data alignment tool INCA (Integrative Normalization and Comparative Analysis) by a spike-in experiment and the comparative metabolic fingerprinting of a wild type versus a double mutant strain of Escherichia coli (E. coli). Starting with the same peak lists generated by LECO's ChromaTOF software, the accuracy of peak alignment and detection of 1.1- to 4-fold changes in metabolite concentration was assessed by spiking 20 standard compounds into an aqueous methanol extract of E. coli. To provide the same quality input signals for both alignment routines, the universal m/z 73 trace of the trimethylsilyl (TMS) group was used as a quantitative measure for all features. The performance of data processing and alignment was evaluated and illustrated by ROC curves. Statistical Compare performed marginally better at the lower fold changes, while INCA did so at the higher fold changes. Using SC, quantitative precision could be improved substantially by exploiting the signal intensities of metabolite-specific unique (U) m/z ion traces rather than the universal m/z 73 trace. A list of 56 features that distinguished the two E. coli strains was obtained by the SC alignment using m/z U with an estimated false discovery rate (FDR) of <0.05. Ultimately, 23 metabolites could be identified, one additional and five less than with INCA due to the failure of SC to extract unitized m/z U's across all fingerprints with suitable spectral intensities for the latter metabolites. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. ARM assembly language with hardware experiments

    CERN Document Server

    Elahi, Ata

    2015-01-01

    This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. · Introduces number systems and signal transmission methods · Reviews logic gates, registers, multiplexers, decoders and memory · Provides an overview and examples of the ARM instruction set · Uses Keil development tools for writing and debugging ARM assembly language programs · Hardware experiments using an Mbed NXP LPC1768 microcontroller, including General Purpose Input/Output (GPIO) configuration, real-time clock configuration, binary input to 7-segment display, creating ...

  11. Hardware system for man-machine interface

    International Nuclear Information System (INIS)

    Niki, Kiyoshi; Tai, Ichirou; Hiromoto, Hiroshi; Inubushi, Hiroyuki; Makino, Teruyuki.

    1988-01-01

    Keeping pace with recent advances in electronic technology, systems that can present more information to operators efficiently and in an orderly form have been adopted rapidly, in place of the conventional man-machine interface for power stations, which comprises indicators, switches and annunciators. With the introduction of new hardware and software, the form of the central control rooms of power stations and the sharing of roles between man and machine have been reexamined. In this report, the way the man-machine interface in power stations should be and the requirements for the role of operators are summarized; based on them, the role of man-machine equipment is considered, and the features and functions of typical new man-machine equipment that is used in power stations at present or can be applied there are described. Finally, examples of how this equipment is applied to power plants as an actual system are shown. The role of the man-machine system in power stations, recent operation monitoring and control, the sharing of roles between hardware and operators, the role of machines, recent typical man-machine interface hardware, and examples of the latest applications are reported. (K.I.)

  12. Introduction to Hardware Security and Trust

    CERN Document Server

    Wang, Cliff

    2012-01-01

    The emergence of a globalized, horizontal semiconductor business model raises a set of concerns involving the security and trust of the information systems on which modern society is increasingly reliant for mission-critical functionality. Hardware-oriented security and trust issues span a broad range including threats related to the malicious insertion of Trojan circuits designed, e.g., to act as a ‘kill switch’ to disable a chip, to integrated circuit (IC) piracy, and to attacks designed to extract encryption keys and IP from a chip. This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society...

  13. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    Science.gov (United States)

    Zinnecker, Alicia Mae; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink (R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL

  14. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    Science.gov (United States)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a SimulinkR library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  15. A Description of the Software Element of the NASA EME Flight Tests

    Science.gov (United States)

    Koppen, Sandra V.

    1996-01-01

    In support of NASA's Fly-By-Light/Power-By-Wire (FBL/PBW) program, a series of flight tests was conducted by NASA Langley Research Center in February 1995. The NASA Boeing 757 was flown past known RF transmitters to measure both external and internal radiated fields. The aircraft was instrumented with strategically located sensors for acquiring data on shielding effectiveness and internal coupling. The data are intended to support computational and statistical modeling codes used to predict internal field levels of an electromagnetic environment (EME) on aircraft. The software was an integral part of the flight tests, as well as the data reduction process. The software, which provided flight test instrument control, data acquisition, and a user interface, executes on a Hewlett Packard (HP) 300 series workstation and uses HP VEE test development software and the C programming language. Software tools were developed for data processing and analysis, and to provide a database organized by frequency bands, test runs, and sensors. This paper describes the data acquisition system on board the aircraft and concentrates on the software portion. Hardware and software interfaces are illustrated and discussed. Particular attention is given to data acquisition and data format. The data reduction process is discussed in detail to provide insight into the characteristics, quality, and limitations of the data. An analysis of obstacles encountered during the data reduction process is presented.

  16. Real time hardware vision processing for a bionic eye

    OpenAIRE

    Josh, Horace Edmund

    2017-01-01

    A recent objective in medical bionics research is to develop visual prostheses - devices that could potentially restore the sight of blind individuals. The Monash Vision Group is currently working towards implementing a fully autonomous direct-to-brain vision implant called the Gennaris. Although research in this field is progressing quickly, initial implementations of these devices will be quite naive, offering very basic levels of vision. The vision is anticipated to be binary - that is wit...

  17. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
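
    The GPU approach above parallelizes the classical autostereogram constraint: two pixels whose horizontal separation equals the depth-dependent parallax must share a colour. A small CPU sketch of that per-scanline constraint is shown below; the eye separation and depth scaling constants are illustrative, not taken from the paper.

        import random

        # CPU sketch of the per-scanline SIRDS constraint the paper maps onto
        # graphics hardware; eye separation (pixels) and depth scale are illustrative.
        def sirds_row(depth_row, eye_sep=90, mu=0.33):
            width = len(depth_row)
            same = list(range(width))                  # each pixel starts unconstrained
            for x in range(width):
                z = depth_row[x]                       # 0.0 = far plane, 1.0 = near plane
                sep = int(eye_sep * (1.0 - mu * z) / (2.0 - mu * z))
                left, right = x - sep // 2, x - sep // 2 + sep
                if left >= 0 and right < width:
                    same[right] = left                 # the pixel pair must share a colour
            row = [0] * width
            for x in range(width):
                row[x] = row[same[x]] if same[x] != x else random.randint(0, 1)
            return row

        print(sirds_row([0.0] * 30 + [0.8] * 20 + [0.0] * 30))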

  18. MRI - From basic knowledge to advanced strategies: Hardware

    International Nuclear Information System (INIS)

    Carpenter, T.A.; Williams, E.J.

    1999-01-01

    There have been remarkable advances in the hardware used for nuclear magnetic resonance imaging scanners. These advances have enabled an extraordinary range of sophisticated magnetic resonance MR sequences to be performed routinely. This paper focuses on the following particular aspects: (a) Magnet system. Advances in magnet technology have allowed superconducting magnets which are low maintenance and have excellent homogeneity and very small stray field footprints. (b) Gradient system. Optimisation of gradient design has allowed gradient coils which provide excellent field for spatial encoding, have reduced diameter and have technology to minimise the effects of eddy currents. These coils can now routinely provide the strength and switching rate required by modern imaging methods. (c) Radio-frequency (RF) system. The advances in digital electronics can now provide RF electronics which have low noise characteristics, high accuracy and improved stability, which are all essential to the formation of excellent images. The use of surface coils has increased with the availability of phased-array systems, which are ideal for spinal work. (d) Computer system. The largest advance in technology has been in the supporting computer hardware which is now affordable, reliable and with performance to match the processing requirements demanded by present imaging sequences. (orig.)

  19. A novel hardware implementation for detecting respiration rate using photoplethysmography.

    Science.gov (United States)

    Prinable, Joseph; Jones, Peter; Thamrin, Cindy; McEwan, Alistair

    2017-07-01

    Asthma is a serious public health problem. Continuous monitoring of breathing may offer an alternative way to assess disease status. In this paper we present a novel hardware implementation for the capture and storage of a photoplethysmography (PPG) signal. The LED duty cycle was altered to determine the effect on respiratory rate accuracy. The oximeter was mounted to the left index finger of ten healthy volunteers. The breathing rate derived from the oximeter was validated against a nasal airflow sensor. The duty cycle of a pulse oximeter was changed between 5%, 10% and 25% at a sample rate of 500 Hz. A PPG signal and a reference signal were captured for each duty cycle. The PPG signals were post processed in Matlab to derive a respiration rate using an existing Matlab toolbox. At a 25% duty cycle the RMSE was <2 breaths per minute for the top performing algorithm. The RMSE increased to over 5 breaths per minute when the duty cycle was reduced to 5%. The power consumed by the hardware for a 5%, 10% and 25% duty cycle was 5.4 mW, 7.8 mW, and 15 mW respectively. For clinical assessment of respiratory rate, an RMSE of <2 breaths per minute is recommended. Further work is required to determine utility in asthma management. However for non-clinical applications such as fitness tracking, lower accuracy may be sufficient to allow a reduced duty cycle setting.
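
    The accuracy criterion above is a root-mean-square error between the PPG-derived rate and the nasal-airflow reference. A minimal sketch of that computation follows; the rates below are made up for illustration, not study data.

        import math

        # RMSE between PPG-derived respiration rates and the reference, both in
        # breaths per minute; the sample values are hypothetical.
        def rmse(estimates, reference):
            return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference)) / len(estimates))

        ppg_rate = [14.2, 15.1, 13.8, 16.0]   # hypothetical PPG-derived rates
        ref_rate = [14.0, 15.5, 14.5, 15.0]   # hypothetical nasal airflow reference
        print(f"RMSE = {rmse(ppg_rate, ref_rate):.2f} breaths/min")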

  20. A Reusable and Adaptable Software Architecture for Embedded Space Flight System: The Core Flight Software System (CFS)

    Science.gov (United States)

    Wilmot, Jonathan

    2005-01-01

    The contents include the following: High availability. Hardware is in a harsh environment. Flight processors vary widely due to power and weight constraints. Software must be remotely modifiable and still operate while changes are being made. Many custom one-of-a-kind interfaces for one-of-a-kind missions. Sustaining engineering. Price of failure is high, tens to hundreds of millions of dollars.

  1. Handbook of hardware/software codesign

    CERN Document Server

    Teich, Jürgen

    2017-01-01

    This handbook presents fundamental knowledge on the hardware/software (HW/SW) codesign methodology. Contributing expert authors look at key techniques in the design flow as well as selected codesign tools and design environments, building on basic knowledge to consider the latest techniques. The book enables readers to gain real benefits from the HW/SW codesign methodology through explanations and case studies which demonstrate its usefulness. Readers are invited to follow the progress of design techniques through this work, which assists readers in following current research directions and learning about state-of-the-art techniques. Students and researchers will appreciate the wide spectrum of subjects that belong to the design methodology from this handbook.

  2. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

    For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium(R) 4 and Core(TM) i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  3. Battery Management System Hardware Concepts: An Overview

    Directory of Open Access Journals (Sweden)

    Markus Lelie

    2018-03-01

    Full Text Available This paper focuses on the hardware aspects of battery management systems (BMS) for electric vehicle and stationary applications. The purpose is giving an overview on existing concepts in state-of-the-art systems and enabling the reader to estimate what has to be considered when designing a BMS for a given application. After a short analysis of general requirements, several possible topologies for battery packs and their consequences for the BMS' complexity are examined. Four battery packs that were taken from commercially available electric vehicles are shown as examples. Later, implementation aspects regarding measurement of needed physical variables (voltage, current, temperature, etc.) are discussed, as well as balancing issues and strategies. Finally, safety considerations and reliability aspects are investigated.

  4. EPICS: Allen-Bradley hardware reference manual

    International Nuclear Information System (INIS)

    Nawrocki, G.

    1993-01-01

    This manual covers the following hardware: Allen-Bradley 6008-SV VMEbus I/O scanner; Allen-Bradley universal I/O chassis 1771-A1B, -A2B, -A3B, and -A4B; Allen-Bradley power supply module 1771-P4S; Allen-Bradley 1771-ASB remote I/O adapter module; Allen-Bradley 1771-IFE analog input module; Allen-Bradley 1771-OFE analog output module; Allen-Bradley 1771-IG(D) TTL input module; Allen-Bradley 1771-OG(D) TTL output module; Allen-Bradley 1771-IQ DC selectable input module; Allen-Bradley 1771-OW contact output module; Allen-Bradley 1771-IBD DC (10--30V) input module; Allen-Bradley 1771-OBD DC (10--60V) output module; Allen-Bradley 1771-IXE thermocouple/millivolt input module; and the Allen-Bradley 2705 RediPANEL push button module

  5. The double Chooz hardware trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi; Beissel, Franz; Reinhold, Bernd; Roth, Stefan; Stahl, Achim; Wiebusch, Christopher [RWTH Aachen (Germany)

    2008-07-01

    The double Chooz neutrino experiment aims to improve the present knowledge on the θ13 mixing angle using two similar detectors placed at approximately 280 m and 1 km, respectively, from the Chooz power plant reactor cores. The detectors measure the disappearance of reactor antineutrinos. The hardware trigger has to be very efficient for antineutrinos as well as for various types of background events. The triggering condition is based on discriminated PMT sum signals and the multiplicity of groups of PMTs. The talk gives an outlook to the double Chooz experiment and explains the requirements of the trigger system. The resulting concept and its performance are shown as well as first results from a prototype system.

  6. Fast Gridding on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2007-01-01

    The most commonly used algorithm for non-cartesian MRI reconstruction is the gridding algorithm [1]. It consists of three steps: 1) convolution with a gridding kernel and resampling on a cartesian grid, 2) inverse FFT, and 3) deapodization. On the CPU the convolution step is the far most time consuming of the three steps (Table 1). Modern graphics cards (GPUs) can be utilised as a fast parallel processor provided that algorithms are reformulated in a parallel solution. The purpose of this work is to test the hypothesis that a non-cartesian reconstruction can be efficiently implemented on graphics hardware giving a significant speedup compared to CPU based alternatives. We present a novel GPU implementation of the convolution step that overcomes the problems of memory bandwidth that has limited the speed of previous GPU gridding algorithms [2].
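
    For orientation, the following is a toy 1-D illustration of the three gridding steps named above (kernel convolution onto a Cartesian grid, inverse FFT, deapodization). It uses a Gaussian kernel instead of the Kaiser-Bessel kernel typically used in practice, and all sizes and sampling positions are illustrative.

        import numpy as np

        # Toy 1-D gridding: convolve non-uniform k-space samples onto a grid,
        # inverse-FFT, then divide out the kernel's image-domain roll-off.
        def grid_1d(k_pos, k_data, n=256, width=4, sigma=1.0):
            grid = np.zeros(n, dtype=complex)
            centre = n // 2
            for k, d in zip(k_pos, k_data):                     # step 1: convolution/resampling
                g = k + centre
                lo, hi = int(np.floor(g - width)), int(np.ceil(g + width)) + 1
                for i in range(max(lo, 0), min(hi, n)):
                    grid[i] += d * np.exp(-((i - g) ** 2) / (2 * sigma ** 2))
            img = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid)))   # step 2: inverse FFT
            x = np.arange(n) - centre
            apod = np.exp(-2 * (np.pi * sigma * x / n) ** 2)             # step 3: deapodization
            return img / apod

        samples = np.linspace(-100, 100, 401)          # hypothetical non-uniform sampling
        data = np.ones_like(samples, dtype=complex)    # constant k-space -> point-like object
        print(np.abs(grid_1d(samples, data)).argmax()) # peak should sit at the grid centre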

  7. Locating hardware faults in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  8. Marshall Space Flight Center's Virtual Reality Applications Program 1993

    Science.gov (United States)

    Hale, Joseph P., II

    1993-01-01

    A Virtual Reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. Other NASA Centers, most notably Ames Research Center (ARC), have contributed to the development of the VR enabling technologies and VR systems. This VR technology development has now reached a level of maturity where specific applications of VR as a tool can be considered. The objectives of the MSFC VR Applications Program are to develop, validate, and utilize VR as a Human Factors design and operations analysis tool and to assess and evaluate VR as a tool in other applications (e.g., training, operations development, mission support, teleoperations planning, etc.). The long-term goals of this technology program are to enable specialized Human Factors analyses earlier in the hardware and operations development process and develop more effective training and mission support systems. The capability to perform specialized Human Factors analyses earlier in the hardware and operations development process is required to better refine and validate requirements during the requirements definition phase. This leads to a more efficient design process where perturbations caused by late-occurring requirements changes are minimized. A validated set of VR analytical tools must be developed to enable a more efficient process for the design and development of space systems and operations. Similarly, training and mission support systems must exploit state-of-the-art computer-based technologies to maximize training effectiveness and enhance mission support. The approach of the VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical

  9. Trainable hardware for dynamical computing using error backpropagation through physical media.

    Science.gov (United States)

    Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter

    2015-03-24

    Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation, a crucial step for tuning such systems towards a specific task, can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.

  10. Neural Networks for Flight Control

    Science.gov (United States)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  11. Solar array flight dynamic experiment

    Science.gov (United States)

    Schock, Richard W.

    1987-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  12. STS-51, RSRM-033, 360T033 KSC processing configuration and data report

    Science.gov (United States)

    Hillard, Robert C.

    1993-12-01

    KSC Processing Configuration and Data Report is being provided as a historical document and as an enhancement to future RSRM manufacturing and processing operations. The following sections provide information on segment receipt, aft booster build up, motor assembly, and closeout for STS-51, RSRM flight set 360T033. Section 2.0 contains a summary of RSRM-033 processing. Section 3.0 discusses any significant problems or special issues that require special attention. Sections 4.0 through 6.0 contain narrative descriptions of all key events, including any related processing problems. Appendix A provides engineering specifications and changes. A list and matrix of all problem reports (PR's) pertinent to this flight set is provided in Appendix B. The matrix was provided by the Thiokol LSS Quality Engineering office. Copies of the PR's generated during the processing of RSRM-033 will be provided upon request. Appendix C contains the motor set status matrix, which provides milestone dates for the RSRM-033 flow. Section 7.0 provides recommendations for the improvement of flight hardware processing. Section 8.0 contains data sheets that provide flight hardware parts and consumable information installed during the booster build-up and stacking operations by location, lot/serial number, expiration and cure dates/times, and installation dates.

  13. STS-56, RSRM-031, 360L031 KSC processing configuration and data report

    Science.gov (United States)

    1993-12-01

    KSC Processing Configuration and Data Report is being provided as a historical document and as an enhancement to future RSRM manufacturing and processing operations. The following sections provide information on segment receipt, aft booster build-up, booster assembly, and closeout for STS-56, RSRM flight set 360L031. Section 2.0 contains a summary of RSRM-031 processing. Section 3.0 discusses any significant problems or special issues that require special attention. Sections 4.0 through 6.0 contain narrative descriptions of all key events, including any related processing problems. Appendix A provides engineering specifications and changes. A list and matrix of all problem reports (PR's) pertinent to this flight set is provided in Appendix B. The matrix was provided by the Thiokol LSS Quality Engineering office. Copies of the PR's generated during the processing of RSRM-031 will be provided upon request. Appendix C contains the motor set status matrix, which provides milestone dates for the RSRM-031 flow. Section 7.0 provides recommendations, if any, for the improvement of flight hardware processing. Section 8.0 contains data sheets that provide flight hardware parts and consumables information installed during the booster build-up and stacking operations by location, lot/serial number, expiration and cure dates/times, and installation dates.

  14. Flight Test Series 3: Flight Test Report

    Science.gov (United States)

    Marston, Mike; Sternberg, Daniel; Valkov, Steffi

    2015-01-01

    This document is a flight test report from the Operational perspective for Flight Test Series 3, a subpart of the Unmanned Aircraft System (UAS) Integration in the National Airspace System (NAS) project. Flight Test Series 3 testing began on June 15, 2015, and concluded on August 12, 2015. Participants included NASA Ames Research Center, NASA Armstrong Flight Research Center, NASA Glenn Research Center, NASA Langley Research Center, General Atomics Aeronautical Systems, Inc., and Honeywell. Key stakeholders analyzed their System Under Test (SUT) in two distinct configurations. Configuration 1, known as Pairwise Encounters, was subdivided into two parts: 1a, involving a low-speed UAS ownship and intruder(s), and 1b, involving a high-speed surrogate ownship and intruder. Configuration 2, known as Full Mission, involved a surrogate ownship, live intruder(s), and integrated virtual traffic. Table 1 is a summary of flights for each configuration, with data collection flights highlighted in green. Sections 2 and 3 of this report give an in-depth description of the flight test period, aircraft involved, flight crew, and mission team. Overall, Flight Test 3 gathered excellent data for each SUT. We attribute this successful outcome in large part to the experience that was acquired from the ACAS Xu SS flight test flown in December 2014. Configuration 1 was a tremendous success, thanks to the training, member participation, integration/testing, and in-depth analysis of the flight points. Although Configuration 2 flights were cancelled after 3 data collection flights due to various problems, the lessons learned from this will help the UAS in the NAS project move forward successfully in future flight phases.

  15. Accelerating epistasis analysis in human genetics with consumer graphics hardware

    Directory of Open Access Journals (Sweden)

    Cancare Fabio

    2009-07-01

    Full Text Available Abstract Background Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. Findings We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective

  16. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    Science.gov (United States)

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other
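
    For orientation, the following is a minimal CPU sketch of the two-locus MDR step that the GPU implementation parallelizes: bin samples by genotype combination, label each cell high- or low-risk against the overall case/control ratio, and score the SNP pair by classification accuracy. The genotype coding and data below are illustrative, not real study data.

        from itertools import combinations

        # Minimal two-locus MDR sketch; genotypes are coded 0/1/2 per SNP.
        def mdr_pair_accuracy(genotypes, status, i, j):
            cells = {}
            for g, s in zip(genotypes, status):
                cells.setdefault((g[i], g[j]), [0, 0])[s] += 1      # [controls, cases] per cell
            cases = sum(status)
            controls = len(status) - cases
            threshold = cases / controls                            # overall case/control ratio
            correct = 0
            for cell_controls, cell_cases in cells.values():
                high_risk = cell_cases >= threshold * cell_controls
                correct += cell_cases if high_risk else cell_controls
            return correct / len(status)

        genotypes = [(0, 1, 2), (1, 1, 0), (2, 0, 1), (0, 2, 2), (1, 0, 1), (2, 2, 0)]
        status = [1, 0, 1, 0, 1, 0]                                 # 1 = case, 0 = control
        best = max(combinations(range(3), 2),
                   key=lambda pair: mdr_pair_accuracy(genotypes, status, *pair))
        print("best SNP pair:", best)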

  17. Infrared Thermography Flight Experimentation

    Science.gov (United States)

    Blanchard, Robert C.; Carter, Matthew L.; Kirsch, Michael

    2003-01-01

    Analysis was done on IR data collected by DFRC on May 8, 2002. This includes the generation of a movie to initially examine the IR flight data. The production of the movie was challenged by the volume of data that needed to be processed, namely 40,500 images with each image (256 x 252) containing over 264 million points (pixel depth 4096). It was also observed during the initial analysis that the RTD surface coating has a different emissivity than the surroundings. This fact added unexpected complexity in obtaining a correlation between RTD data and IR data. A scheme was devised to generate IR data near the RTD location which is not affected by the surface coating. This scheme is valid as long as the surface temperature as measured does not change too much over a few pixel distances from the RTD location. After obtaining IR data near the RTD location, it is possible to make a direct comparison with the temperature as measured during the flight after adjusting for the camera's auto scaling. The IR data seems to correlate well to the flight temperature data at three of the four RTD locations. The maximum count intensity occurs close to the maximum temperature as measured during flight. At one location (RTD #3), there is poor correlation and this must be investigated before any further progress is possible. However, with successful comparisons at three locations, it seems there is great potential to be able to find a calibration curve for the data. Moreover, as such it will be possible to measure temperature directly from the IR data in the near future.

  18. Hardware Implementation of Maximum Power Point Tracking for Thermoelectric Generators

    Science.gov (United States)

    Maganga, Othman; Phillip, Navneesh; Burnham, Keith J.; Montecucco, Andrea; Siviter, Jonathan; Knox, Andrew; Simpson, Kevin

    2014-06-01

    This work describes the practical implementation of two maximum power point tracking (MPPT) algorithms, namely those of perturb and observe, and extremum seeking control. The proprietary dSPACE system is used to perform hardware in the loop (HIL) simulation whereby the two control algorithms are implemented using the MATLAB/Simulink (Mathworks, Natick, MA) software environment in order to control a synchronous buck-boost converter connected to two commercial thermoelectric modules. The process of performing HIL simulation using dSPACE is discussed, and a comparison between experimental and simulated results is highlighted. The experimental results demonstrate the validity of the two MPPT algorithms, and in conclusion the benefits and limitations of real-time implementation of MPPT controllers using dSPACE are discussed.
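
    The perturb-and-observe rule evaluated above can be stated in a few lines: perturb the converter duty cycle, keep the perturbation direction if output power rose, and reverse it if power fell. The Python sketch below uses mocked hardware interfaces, and the power curve and step size are illustrative assumptions rather than values from the paper.

        # Perturb-and-observe MPPT sketch; read_power/set_duty stand in for the
        # converter interface, and the toy power curve peaks at duty = 0.62.
        def perturb_and_observe(read_power, set_duty, duty=0.5, step=0.01, iterations=100):
            last_power = read_power()
            direction = 1
            for _ in range(iterations):
                duty = min(max(duty + direction * step, 0.0), 1.0)
                set_duty(duty)
                power = read_power()
                if power < last_power:          # power fell: reverse the perturbation
                    direction = -direction
                last_power = power
            return duty

        duty_state = {"d": 0.5}
        read_power = lambda: 10.0 - 50.0 * (duty_state["d"] - 0.62) ** 2   # hypothetical TEG + converter
        set_duty = lambda d: duty_state.update(d=d)
        print(perturb_and_observe(read_power, set_duty))                   # settles near 0.62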

  19. Design-to-fabricate: maker hardware requires maker software.

    Science.gov (United States)

    Schmidt, Ryan; Ratto, Matt

    2013-01-01

    As a result of consumer-level 3D printers' increasing availability and affordability, the audience for 3D-design tools has grown considerably. However, current tools are ill-suited for these users. They have steep learning curves and don't take into account that the end goal is a physical object, not a digital model. A new class of "maker"-level design tools is needed to accompany this new commodity hardware. However, recent examples of such tools achieve accessibility primarily by constraining functionality. In contrast, the meshmixer project is building tools that provide accessibility and expressive power by leveraging recent computer graphics research in geometry processing. The project members have had positive experiences with several 3D-design-to-print workshops and are exploring several design-to-fabricate problems. This article is part of a special issue on 3D printing.

  20. Graph based communication analysis for hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1999-01-01

    In this paper we present a coarse grain CDFG (Control/Data Flow Graph) model suitable for hardware/software partitioning of single processes and demonstrate how it is necessary to perform various transformations on the graph structure before partitioning in order to achieve a structure that allows for accurate estimation of communication overhead between nodes mapped to different processors. In particular, we demonstrate how various transformations of control structures can lead to a more accurate communication analysis and more efficient implementations. The purpose of the transformations is to obtain a CDFG structure that is sufficiently fine grained as to support a correct communication analysis but not more fine grained than necessary as this will increase partitioning and analysis time.

  1. CT and MRI techniques for imaging around orthopedic hardware

    International Nuclear Information System (INIS)

    Do, Thuy Duong; Skornitzke, Stephan; Weber, Marc-Andre; Sutter, Reto

    2018-01-01

    Orthopedic hardware impairs image quality in cross-sectional imaging. With an increasing number of orthopedic implants in an aging population, the need to mitigate metal artifacts in computed tomography and magnetic resonance imaging is becoming increasingly relevant. This review provides an overview of the major artifacts in CT and MRI and state-of-the-art solutions to improve image quality. All steps of image acquisition from device selection, scan preparations and parameters to image post-processing influence the magnitude of metal artifacts. Technological advances like dual-energy CT with the possibility of virtual monochromatic imaging (VMI) and new materials offer opportunities to further reduce artifacts in CT and MRI. Dedicated metal artifact reduction sequences contain algorithms to reduce artifacts and improve imaging of surrounding tissue and are essential tools in orthopedic imaging to detect postoperative complications in early stages.

  2. An integrable low-cost hardware random number generator

    Science.gov (United States)

    Ranasinghe, Damith C.; Lim, Daihyun; Devadas, Srinivas; Jamali, Behnam; Zhu, Zheng; Cole, Peter H.

    2005-02-01

    A hardware random number generator is different from a pseudo-random number generator; a pseudo-random number generator approximates the assumed behavior of a real hardware random number generator. Simple pseudo-random number generators suffice for most applications; however, demanding situations such as the generation of cryptographic keys require an efficient and cost-effective source of random numbers. Arbiter-based Physical Unclonable Functions (PUFs) proposed for physical authentication of ICs exploit statistical delay variation of wires and transistors across integrated circuits, as a result of process variations, to build a secret key unique to each IC. Experimental results and theoretical studies show that a sufficient amount of variation exists across ICs. This variation enables each IC to be identified securely. It is possible to exploit the unreliability of these PUF responses to build a physical random number generator. There exists measurement noise, which comes from the instability of an arbiter when it is in a racing condition. There exist challenges whose responses are unpredictable. Without environmental variations, the responses to these challenges are random in repeated measurements. Compared to other physical random number generators, the PUF-based random number generators can be a compact and low-power solution since the generator need only be turned on when required. A 64-stage PUF circuit costs less than 1000 gates and the circuit can be implemented using a standard IC manufacturing process. In this paper we have presented a fast and efficient random number generator and analysed the quality of the random numbers produced using an array of tests used by the National Institute of Standards and Technology to evaluate the randomness of random number generators designed for cryptographic applications.
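
    One of the simplest NIST randomness checks of the kind referred to above is the monobit frequency test, which asks whether ones and zeros are balanced in the output stream. A small sketch follows; the bit stream is generated locally purely for illustration and is not PUF output.

        import math
        import random

        # Monobit frequency test from NIST SP 800-22: for a good generator the
        # normalized excess of ones over zeros should be small.
        def monobit_p_value(bits):
            s = sum(1 if b else -1 for b in bits)
            s_obs = abs(s) / math.sqrt(len(bits))
            return math.erfc(s_obs / math.sqrt(2))          # usual pass criterion: p >= 0.01

        bits = [random.getrandbits(1) for _ in range(10_000)]   # stand-in bit stream
        print(f"monobit p-value: {monobit_p_value(bits):.3f}")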

  3. Swarm behavioral sorting based on robotic hardware variation

    OpenAIRE

    Shang, Beining; Crowder, Richard; Zauner, Klaus-Peter

    2014-01-01

    Swarm robotic systems can offer advantages of robustness, flexibility and scalability, just like social insects. One of the issues that researchers are facing is the hardware variation when implementing real robotic swarms. Identical software cannot guarantee identical behaviors among all robots due to hardware differences between swarm members. We propose a novel approach for sorting swarm robots according to their hardware differences. This method is based on the large number of interaction...

  4. Why Open Source Hardware matters and why you should care

    OpenAIRE

    Gürkaynak, Frank K.

    2017-01-01

    Open source hardware is currently where open source software was about 30 years ago. The idea is well received by enthusiasts, there is interest, and open source hardware has gained visible momentum recently, with several well-known universities including UC Berkeley, Cambridge and ETH Zürich actively working on large projects involving open source hardware, attracting the attention of companies big and small. But it is still not quite there yet. In this talk, based on my experience on the...

  5. Hardware/Software Co-design using Primitive Interface

    OpenAIRE

    Navin Chourasia; Puran Gaur

    2011-01-01

    Most engineering designs can be viewed as systems, i.e., as collections of several components whose combined operation provides useful services. Components can be heterogeneous in nature and their interaction may be regulated by some simple or complex means. Interface between Hardware & Software plays a very important role in co-design of the embedded system. Hardware/software co-design means meeting system-level objectives by exploiting the synergism of hardware and software through their co...

  6. Overview of Pre-Flight Physical Training, In-Flight Exercise Countermeasures and the Post-Flight Reconditioning Program for International Space Station Astronauts

    Science.gov (United States)

    Kerstman, Eric

    2011-01-01

    International Space Station (ISS) astronauts receive supervised physical training pre-flight, utilize exercise countermeasures in-flight, and participate in a structured reconditioning program post-flight. Despite recent advances in exercise hardware and prescribed exercise countermeasures, ISS crewmembers are still found to have variable levels of deconditioning post-flight. This presentation provides an overview of the astronaut medical certification requirements, pre-flight physical training, in-flight exercise countermeasures, and the post-flight reconditioning program. Astronauts must meet medical certification requirements on selection, annually, and prior to ISS missions. In addition, extensive physical fitness testing and standardized medical assessments are performed on long duration crewmembers pre-flight. Limited physical fitness assessments and medical examinations are performed in-flight to develop exercise countermeasure prescriptions, ensure that the crewmembers are physically capable of performing mission tasks, and monitor astronaut health. Upon mission completion, long duration astronauts must re-adapt to the 1 G environment, and be certified as fit to return to space flight training and active duty. A structured, supervised postflight reconditioning program has been developed to prevent injuries, facilitate re-adaptation to the 1 G environment, and subsequently return astronauts to training and space flight. The NASA reconditioning program is implemented by the Astronaut Strength, Conditioning, and Rehabilitation (ASCR) team and supervised by NASA flight surgeons. This program has evolved over the past 10 years of the International Space Station (ISS) program and has been successful in ensuring that long duration astronauts safely re-adapt to the 1 g environment and return to active duty. Lessons learned from this approach to managing deconditioning can be applied to terrestrial medicine and future exploration space flight missions.

  7. System-Level Testing of the Advanced Stirling Radioisotope Generator Engineering Hardware

    Science.gov (United States)

    Chan, Jack; Wiser, Jack; Brown, Greg; Florin, Dominic; Oriti, Salvatore M.

    2014-01-01

    To support future NASA deep space missions, a radioisotope power system utilizing Stirling power conversion technology was under development. This development effort was performed under the joint sponsorship of the Department of Energy and NASA, until its termination at the end of 2013 due to budget constraints. The higher conversion efficiency of the Stirling cycle compared with that of the Radioisotope Thermoelectric Generators (RTGs) used in previous missions (Viking, Pioneer, Voyager, Galileo, Ulysses, Cassini, Pluto New Horizons and Mars Science Laboratory) offers the advantage of a four-fold reduction in Pu-238 fuel, thereby extending its limited domestic supply. As part of closeout activities, system-level testing of flight-like Advanced Stirling Convertors (ASCs) with a flight-like ASC Controller Unit (ACU) was performed in February 2014. This hardware is the most representative of the flight design tested to date. The test fully demonstrates the following ACU and system functionality: system startup; ASC control and operation at nominal and worst-case operating conditions; power rectification; DC output power management throughout nominal and out-of-range host voltage levels; ACU fault management, and system command / telemetry via MIL-STD 1553 bus. This testing shows the viability of such a system for future deep space missions and bolsters confidence in the maturity of the flight design.

  8. Nanorobot Hardware Architecture for Medical Defense

    Directory of Open Access Journals (Sweden)

    Luiz C. Kretly

    2008-05-01

    Full Text Available This work presents a new approach with details on the integrated platform and hardware architecture for nanorobot application in epidemic control, which should enable real time in vivo prognosis of biohazard infection. The recent developments in the field of nanoelectronics, with transducers progressively shrinking down to smaller sizes through nanotechnology and carbon nanotubes, are expected to result in innovative biomedical instrumentation possibilities, with new therapies and efficient diagnosis methodologies. The use of integrated systems, smart biosensors, and programmable nanodevices is advancing nanoelectronics, enabling the progressive research and development of molecular machines. It should provide high precision pervasive biomedical monitoring with real time data transmission. The use of nanobioelectronics as embedded systems is the natural pathway towards a manufacturing methodology that brings nanorobot applications out of laboratories as soon as possible. To demonstrate the practical application of medical nanorobotics, a 3D simulation based on clinical data addresses how to integrate communication with nanorobots using RFID, mobile phones, and satellites, applied to long distance ubiquitous surveillance and health monitoring for troops in conflict zones. Therefore, the current model can also be used to help protect a population against a targeted epidemic disease.

  9. Live HDR video streaming on commodity hardware

    Science.gov (United States)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.

  10. 8-Channel Broadband Laser Ranging Hardware Development

    Science.gov (United States)

    Bennett, Corey; La Lone, Brandon; Younk, Patrick; Daykin, Ed; Rhodes, Michelle; Perry, Daniel; Tran, Vu; Miller, Edward

    2017-06-01

    Broadband Laser Ranging (BLR) is a new diagnostic being developed to precisely measure the position vs. time of surfaces, shock break out, particle clouds, jets, and debris moving at kilometers per second speeds. The instrument uses interferometry to encode distance into a modulation in the spectrum of pulses from a mode-locked fiber laser and uses a dispersive Fourier transformation to map the spectral modulation into time. Range information is thereby recorded on a fast oscilloscope at the repetition rate of the laser, approximately every 50 ns. Current R&D is focused on developing a compact 8-channel system utilizing one laser and one high-speed oscilloscope. This talk will emphasize the hardware being developed for applications at the Contained Firing Facility at LLNL, but has a common architecture being developed in collaboration with NSTec and LANL for applications at multiple other facilities. Prepared by LLNL under Contract DE-AC52-07NA27344, by LANL under Contract DE-AC52-06NA25396, and by NSTec Contract DE-AC52-06NA25946.

  11. Magnetic qubits as hardware for quantum computers

    International Nuclear Information System (INIS)

    Tejada, J.; Chudnovsky, E.; Barco, E. del

    2000-01-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)
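
    For case (2), the description above corresponds to the standard tunnel-split two-level picture. A minimal sketch of that effective model in LaTeX follows; the transverse-field dependence of the splitting is left symbolic and is an illustrative assumption rather than a result quoted from the abstract.

        % Effective two-level (qubit) description of the tunnel-split ground doublet:
        \[
          |0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}S\rangle + |{-}S\rangle\bigr), \qquad
          |1\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}S\rangle - |{-}S\rangle\bigr), \qquad
          H_{\mathrm{eff}} = -\tfrac{1}{2}\,\Delta(H_{\perp})\,\sigma_x ,
        \]
        \[
          E_1 - E_0 = \Delta(H_{\perp}), \qquad k_B T \ll \Delta(H_{\perp}).
        \]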

  12. Magnetic qubits as hardware for quantum computers

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, J.; Chudnovsky, E.; Barco, E. del [and others]

    2000-07-01

    We propose two potential realisations for quantum bits based on nanometre scale magnetic particles of large spin S and high anisotropy molecular clusters. In case (1) the bit-value basis states |0> and |1> are the ground and first excited spin states S_z = S and S-1, separated by an energy gap given by the ferromagnetic resonance (FMR) frequency. In case (2), when there is significant tunnelling through the anisotropy barrier, the qubit states correspond to the symmetric, |0>, and antisymmetric, |1>, combinations of the two-fold degenerate ground state S_z = ±S. In each case the temperature of operation must be low compared to the energy gap, Δ, between the states |0> and |1>. The gap Δ in case (2) can be controlled with an external magnetic field perpendicular to the easy axis of the molecular cluster. The states of different molecular clusters and magnetic particles may be entangled by connecting them by superconducting lines with Josephson switches, leading to the potential for quantum computing hardware. (author)

  13. Open Hardware For CERN's Accelerator Control Systems

    CERN Document Server

    van der Bij, E; Ayass, M; Boccardi, A; Cattin, M; Gil Soriano, C; Gousiou, E; Iglesias Gonsálvez, S; Penacoba Fernandez, G; Serrano, J; Voumard, N; Wlostowski, T

    2011-01-01

    The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned as the modules they will replace cannot be bought anymore or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have for each board access to the full hardware design and its firmware so that problems could quickly be resolved by CERN engineers or its ...

  14. Current trends in hardware and software for brain-computer interfaces (BCIs).

    Science.gov (United States)

    Brunner, P; Bianchi, L; Guger, C; Cincotti, F; Schalk, G

    2011-04-01

    A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.

  15. Survey of hardware supported by the Control System at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Coulter, K.J.; Nawrocki, G.J.

    1993-01-01

    The Experimental Physics and Industrial Control System (EPICS) has been under development at Los Alamos and Argonne National Laboratories for over six years. A wide variety of instrumentation is now supported. This presentation will give an overview of the types of hardware and subsystems which are currently supported and will discuss future plans for addressing additional hardware requirements at the APS. Supported systems to be discussed include: motion control, vacuum pump control and system monitoring, standard laboratory instrumentation (ADCs, DVMs, pulse generators, etc.), image processing, discrete binary and analog I/O, and standard temperature, pressure and flow monitoring.

  16. STS-79 Flight Day 6

    Science.gov (United States)

    1996-01-01

    On this sixth day of the STS-79 mission, the flight crew, Cmdr. William F. Readdy, Pilot Terrence W. Wilcutt, Mission Specialists, Thomas D. Akers, Shannon Lucid, Jay Apt, and Carl E. Walz, continue activities aboard Atlantis/Mir as the nine astronauts and cosmonauts work in their second full day of docked operations. The continuing transfer of logistical supplies and scientific hardware can be seen proceeding smoothly. Apt and Walz once again worked with the Active Rack Isolation System experiment to replace a broken pushrod. With that complete, Apt monitors the ARIS experiment as Readdy and Korzun fire small maneuvering jets on their spacecraft to test the ability of ARIS to damp out any disturbances created by the firings. Walz also is continuing his work with the Mechanics of Granular Materials experiment in Atlantis' double Spacehab module. The astronauts used the large format IMAX camera to conduct a photographic survey of Mir from the Shuttle's flight deck windows while Akers shot IMAX movie scenes of Readdy, Wilcutt, and Korzun in the Spektr module.

  17. Vulnerability Analysis Techniques of Hardware and Software Implementations of Cryptographic Algorithms

    OpenAIRE

    Roman Gevorkovich Korkikian; Evgeny Yurievich Rodionov; Alexander Vladimirovich Mamaev

    2014-01-01

    The article is a brief survey of hardware vulnerability analysis methods that might be applicable against cryptographic algorithm implementations. Those methods are based on physical properties of a device processing the algorithm. Focusing on the algorithms' mathematical background, the article helps the reader master various implementation-based attacks.

  18. Software development minimum guidance system. Algorithm and specifications of realizing special hardware processor data prefilter program

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Govorun, N.N.; Tkhang, T.L.; Shigaev, V.N.

    1982-01-01

    The software development of a minimum guidance system for measuring bubble chamber pictures on the basis of a scanner (HPD) and a special hardware processor (SHP) is described. The algorithm of a selective filter is proposed. The local software structure and functional specifications of its major parts are described. Some examples of processing pictures from HBC-1 (JINR) are also presented.

  19. Realtime generation of K-Distributed sea clutter for hardware in the loop radar evaluation

    CSIR Research Space (South Africa)

    Van der Merwe, Johannes R

    2016-10-01

    Full Text Available distributed random variable (RV) to the required RV. The clutter is correlated by means of a filter process before translation, and it is shown that this technique produces an amplitude distribution that is sufficiently accurate for Hardware in the Loop (HIL...
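
    The record above describes a filter-and-translate approach (correlate Gaussian samples with a filter, then map them to the required distribution). As a much simpler, hedged illustration of what a K-distributed amplitude looks like, the sketch below uses the compound (gamma texture times exponential speckle) representation instead of the cited method; the shape parameter and sample count are illustrative assumptions.

        import numpy as np

        # Illustrative only: compound representation of K-distributed clutter,
        # i.e. a gamma-distributed texture modulating exponential speckle power.
        # This is NOT the filter-and-translate method of the cited work.
        rng = np.random.default_rng(seed=1)
        nu = 1.5                  # assumed K-distribution shape parameter
        n_samples = 100_000
        texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n_samples)   # unit-mean texture
        speckle = rng.exponential(scale=1.0, size=n_samples)            # unit-mean speckle power
        amplitude = np.sqrt(texture * speckle)                          # K-distributed amplitude
        print(float(amplitude.mean()), float(amplitude.max()))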

  20. Towards the Development of a Model for Hardware Standards in Information Technology Procurement: Factors for Consideration

    Science.gov (United States)

    Ryan, David L.

    2010-01-01

    While research in academic and professional information technology (IT) journals addresses the need for strategic alignment and defined IT processes, there is little research about what factors should be considered when implementing specific IT hardware standards in an organization. The purpose of this study was to develop a set of factors for…

  1. Development of a driver information and warning system with vehicle hardware-in-the-loop simulations

    NARCIS (Netherlands)

    Gietelink, O.J.; Ploeg, J.; Schutter, B. de; Verhaegen, M.

    2009-01-01

    This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VeHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more

  2. Integrating communication protocol selection with partitioning in hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    frequencies of system components such as buses, CPU's, ASIC's, software code size, hardware area, and component prices. A distinct feature of the model is the modeling of driver processing of data (packing, splitting, compression, etc.) and its impact on communication throughput. The integration...

  3. Evaluation of state-of-the-art hardware architectures for fast cone-beam CT reconstruction

    CERN Document Server

    Scherl, Holger

    2011-01-01

    Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges that range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, cell broadband engine architecture, graphics processing units, and field programmable gate arrays.

  4. Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations

    NARCIS (Netherlands)

    Gietelink, O.J.; Ploeg, J.; Schutter, B.de; Verhaegen, M.

    2006-01-01

    This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations, the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and is more

  5. Lessons learned from hardware and software upgrades of IT-DB services

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of recent changes in CERN database infrastructure. The presentation describes database service evolution, in particular new hardware & storage installation, integration with Agile infrastructure, complexity of validation strategy and finally the migration and upgrade process concerning the most critical database services.

  6. The Cibola flight experiment

    Energy Technology Data Exchange (ETDEWEB)

    Caffrey, Michael Paul [Los Alamos National Laboratory]; Nelson, Anthony [Los Alamos National Laboratory]; Salazar, Anthony [Los Alamos National Laboratory]; Roussel-Dupre, Diane [Los Alamos National Laboratory]; Katko, Kim [Los Alamos National Laboratory]; Palmer, Joseph [ISE-3]; Robinson, Scott [Los Alamos National Laboratory]; Wirthlin, Michael [BRIGHAM YOUNG UNIV]; Howes, William [BRIGHAM YOUNG UNIV]; Richins, Daniel [BRIGHAM YOUNG UNIV]

    2009-01-01

    The Cibola Flight Experiment (CFE) is an experimental small satellite carrying a reconfigurable processing instrument developed at the Los Alamos National Laboratory that demonstrates the feasibility of using FPGA-based high-performance computing for sensor processing in the space environment. The CFE satellite was launched on March 8, 2007 into low-earth orbit and has operated extremely well since its deployment. The nine Xilinx Virtex FPGAs used in the payload have been used for several high-throughput sensor processing applications and for single-event upset (SEU) monitoring and mitigation. This paper will describe the CFE system and summarize its operational results. In addition, this paper will describe the results from several SEU detection circuits that were operated on the spacecraft.

  7. Steps Towards Scalable and Modularized Flight Software for Unmanned Aircraft Systems

    Directory of Open Access Journals (Sweden)

    Johann C. Dauer

    2014-05-01

    Full Text Available Unmanned aircraft (UA) applications impose a variety of computing tasks on the on-board computer system. From a research perspective, it is often more convenient to evaluate algorithms on bigger aircraft, as they are capable of lifting heavier loads and thus more powerful computational units. On the other hand, smaller systems are often less expensive, and their operation is less restricted in many countries. This paper thus presents a conceptual design for flight software that can be evaluated on a UA of convenient size. The integration effort required to transfer the algorithms to a UA of different size is significantly reduced. This scalability is achieved by using exchangeable payload modules and a flexible distribution of processes across different processing units. The presented approach is discussed using the example of the flight software of a 14 kg unmanned helicopter and a 1.5 kg equivalent. The proof of concept is shown by means of flight performance in a hardware-in-the-loop simulation.

  8. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    Science.gov (United States)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  9. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save the transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. Especially, it has low-complexity hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters. The VLSI architecture of this compression algorithm is also given. Its hardware design has been implemented in a 0.18 μm CMOS process.

  10. Efficient Hardware Implementation For Fingerprint Image Enhancement Using Anisotropic Gaussian Filter.

    Science.gov (United States)

    Khan, Tariq Mahmood; Bailey, Donald G; Khan, Mohammad A U; Kong, Yinan

    2017-05-01

    A real-time image filtering technique is proposed which could result in a faster implementation of fingerprint image enhancement. One major hurdle associated with fingerprint filtering techniques is the expensive nature of their hardware implementations. To circumvent this, a modified anisotropic Gaussian filter is efficiently adopted in hardware by decomposing the filter into two orthogonal Gaussians and an oriented line Gaussian. An architecture is developed for dynamically controlling the orientation of the line Gaussian filter. To further improve the performance of the filter, the input image is homogenized by a local image normalization. With the proposed structure, both the parallel compute-intensive and the real-time demands were met on a mid-range reconfigurable FPGA. The design speeds up the image-processing time and improves the resource utilization of the FPGA. Test results show an improved speed for the hardware architecture while maintaining reasonable enhancement benchmarks.

  11. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    ZhiHua Wang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save the transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. Especially, it has low complexity hardware overhead (only two line buffers) and supports real-time compressing. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters. In addition, the VLSI architecture of this compression algorithm is also given out. Its hardware design has been implemented in 0.18μm CMOS process.

  12. Integration and In-Field Gains Selection of Flight and Navigation Controller for Remotely Piloted Aircraft System

    Directory of Open Access Journals (Sweden)

    Słowik Maciej

    2017-03-01

    Full Text Available The paper presents the process of integrating a commercial flight and navigation controller into the authors' own aircraft. The autopilot integration was performed for a fixed-wing unmanned aerial vehicle with a high-wing layout and a puller (tractor) propulsion configuration. The equipment was integrated and appropriate software control algorithms were chosen. The correctness of the chosen hardware and software solution was verified in ground tests and experimental flights. PID controllers were selected for the longitudinal and lateral control channels. Proper deflections of the control surfaces and stabilization of the roll, pitch, and yaw angles were tested. In the next stage, operation of the telecommunication link and flight stabilization were verified. In the last part of the investigation, preliminary control gains and configuration parameters for the roll angle control loop were chosen. This enabled better behavior of the UAV during turns and also improved other flight modes, such as loiter (circling around a designated point) and auto mode, in which the aircraft executed a pre-programmed mission.

  13. Software and Hardware Infrastructure for Research in Electrophysiology

    Directory of Open Access Journals (Sweden)

    Roman Mouček

    2014-03-01

    Full Text Available As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of a software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized, and the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software.

  14. FPGA BASED HARDWARE KEY FOR TEMPORAL ENCRYPTION

    Directory of Open Access Journals (Sweden)

    B. Lakshmi

    2010-09-01

    Full Text Available In this paper, a novel encryption scheme with a time-based key technique on an FPGA is presented. The time-based key technique ensures that the right key is entered at the right time and, hence, vulnerability of the encryption through brute force attack is eliminated. Presently available encryption systems suffer from brute force attack, and in such a case the time taken for breaking a code depends on the system used for cryptanalysis. The proposed scheme provides an effective method in which time is taken as the second dimension of the key, so that the same system can defend against brute force attack more vigorously. In the proposed scheme, the key is rotated continuously and four bits are drawn from the key, with their concatenated value representing the delay the system has to wait. This forms the time-based key concept. The key-based selection of functions from a pool of functions enhances confusion and diffusion to defend against linear and differential attacks, while the inclusion of the time factor makes brute force attack nearly impossible. The key scheduler is implemented on an FPGA and generates the right key at the right time intervals; it is connected to a NIOS-II processor (a soft-core microcontroller instantiated on the Altera FPGA) that communicates the keys to a personal computer through JTAG (Joint Test Action Group) communication, and the computer is used to perform the encryption (or decryption). In this case the FPGA serves as a hardware key (dongle) for data encryption (or decryption).
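
    As a hedged illustration of the time-based key idea described above (the key is rotated continuously and four drawn bits set the delay the system has to wait), the following sketch models the scheduler in software; the key width, rotation amount and delay units are illustrative assumptions, not the FPGA design itself.

        def rotate_left(key: int, width: int, amount: int = 1) -> int:
            """Rotate a `width`-bit key left by `amount` bits."""
            amount %= width
            mask = (1 << width) - 1
            return ((key << amount) | (key >> (width - amount))) & mask

        def time_based_key_schedule(key: int, width: int = 128, steps: int = 4):
            """Yield (round_key, delay) pairs: four bits drawn from the rotating key
            give the wait (in arbitrary time units) before that key becomes valid."""
            for _ in range(steps):
                key = rotate_left(key, width)
                delay = key & 0xF          # concatenated value of the four drawn bits
                yield key, delay

        for round_key, delay in time_based_key_schedule(0x0123456789ABCDEF0123456789ABCDEF):
            print(hex(round_key), "wait", delay)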

  15. Navigation Doppler Lidar Sensor for Precision Altitude and Vector Velocity Measurements Flight Test Results

    Science.gov (United States)

    Pierrottet, Diego F.; Lockhard, George; Amzajerdian, Farzin; Petway, Larry B.; Barnes, Bruce; Hines, Glenn D.

    2011-01-01

    An all fiber Navigation Doppler Lidar (NDL) system is under development at NASA Langley Research Center (LaRC) for precision descent and landing applications on planetary bodies. The sensor produces high resolution line of sight range, altitude above ground, ground relative attitude, and high precision velocity vector measurements. Previous helicopter flight test results demonstrated the NDL measurement concepts, including measurement precision, accuracies, and operational range. This paper discusses the results obtained from a recent campaign to test the improved sensor hardware, and various signal processing algorithms applicable to real-time processing. The NDL was mounted in an instrumentation pod aboard an Erickson Air-Crane helicopter and flown over vegetation-free terrain. The sensor was one of several sensors tested in this field test by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project.

  16. Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Charles R. Farrar

    2006-01-01

    Full Text Available The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.
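
    A minimal, hedged sketch of the last two parts of the statistical pattern recognition paradigm named above (feature extraction, then statistical discrimination), assuming a simple RMS feature and a baseline-derived threshold; the feature choice, threshold rule and signals are illustrative, not the cited software package.

        import numpy as np

        def extract_feature(signal: np.ndarray) -> float:
            """Step 3 (illustrative): a simple RMS feature from one acceleration record."""
            return float(np.sqrt(np.mean(signal ** 2)))

        def fit_baseline_threshold(features: np.ndarray, n_sigma: float = 3.0) -> float:
            """Step 4 (illustrative): outlier threshold from undamaged-condition features."""
            return float(features.mean() + n_sigma * features.std())

        rng = np.random.default_rng(0)
        baseline = np.array([extract_feature(rng.normal(0.0, 1.0, 2048)) for _ in range(50)])
        threshold = fit_baseline_threshold(baseline)
        new_record = rng.normal(0.0, 1.4, 2048)      # hypothetical "damaged" response
        print("possible damage:", extract_feature(new_record) > threshold)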

  17. Hardware packet pacing using a DMA in a parallel computer

    Science.gov (United States)

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.

  18. The role of the visual hardware system in rugby performance ...

    African Journals Online (AJOL)

    This study explores the importance of the 'hardware' factors of the visual system in the game of rugby. A group of professional and club rugby players were tested and the results compared. The results were also compared with the established norms for elite athletes. The findings indicate no significant difference in hardware ...

  19. Human Integration Design Processes (HIDP)

    Science.gov (United States)

    Boyer, Jennifer

    2014-01-01

    The purpose of the Human Integration Design Processes (HIDP) document is to provide human-systems integration design processes, including methodologies and best practices that NASA has used to meet human systems and human rating requirements for developing crewed spacecraft. HIDP content is framed around human-centered design methodologies and processes in support of human-system integration requirements and human rating. NASA-STD-3001, Space Flight Human-System Standard, is a two-volume set of National Aeronautics and Space Administration (NASA) Agency-level standards established by the Office of the Chief Health and Medical Officer, directed at minimizing health and performance risks for flight crews in human space flight programs. Volume 1 of NASA-STD-3001, Crew Health, sets standards for fitness for duty, space flight permissible exposure limits, permissible outcome limits, levels of medical care, medical diagnosis, intervention, treatment and care, and countermeasures. Volume 2 of NASA-STD-3001, Human Factors, Habitability, and Environmental Health, focuses on human physical and cognitive capabilities and limitations and defines standards for spacecraft (including orbiters, habitats, and suits), internal environments, facilities, payloads, and related equipment, hardware, and software with which the crew interfaces during space operations. The NASA Procedural Requirements (NPR) 8705.2B, Human-Rating Requirements for Space Systems, specifies the Agency's human-rating processes, procedures, and requirements. The HIDP was written to share NASA's knowledge of processes directed toward achieving human certification of a spacecraft through implementation of human-systems integration requirements. Although the HIDP speaks directly to implementation of NASA-STD-3001 and NPR 8705.2B requirements, the human-centered design, evaluation, and design processes described in this document can be applied to any set of human-systems requirements and are independent of reference

  20. Amateur Radio on the International Space Station - Phase 2 Hardware System

    Science.gov (United States)

    Bauer, F.; McFadin, L.; Bruninga, B.; Watarikawa, H.

    2003-01-01

    The International Space Station (ISS) ham radio system has been on-orbit for over 3 years. Since its first use in November 2000, the first seven expedition crews and three Soyuz taxi crews have utilized the amateur radio station in the Functional Cargo Block (also referred to as the FGB or Zarya module) to talk to thousands of students in schools, to their families on Earth, and to amateur radio operators around the world. Early on, the Amateur Radio on the International Space Station (ARISS) international team devised a multi-phased hardware development approach for the ISS ham radio station. Three internal development phases (Initial Phase 1, Mobile Radio Phase 2, and Permanently Mounted Phase 3), plus an externally mounted system, were proposed and agreed to by the ARISS team. The Phase 1 system hardware development, which was started in 1996, has since been delivered to ISS. It is currently operational on 2 meters. The 70 cm system is expected to be installed and operated later this year. Since 2001, the ARISS international team has worked to bring the second generation ham system, called Phase 2, to flight qualification status. At this time, major portions of the Phase 2 hardware system have been delivered to ISS and will soon be installed and checked out. This paper intends to provide an overview of the Phase 1 system for background and then describe the capabilities of the Phase 2 radio system. It also describes the current plans to finalize Phase 1 and Phase 2 testing in Russia and outlines the plans to bring the Phase 2 hardware system to full operation.

  1. Embedded Hardware-Efficient Real-Time Classification With Cascade Support Vector Machines.

    Science.gov (United States)

    Kyrkou, Christos; Bouganis, Christos-Savvas; Theocharides, Theocharis; Polycarpou, Marios M

    2016-01-01

    Cascade support vector machines (SVMs) are optimized to efficiently handle problems, where the majority of the data belong to one of the two classes, such as image object classification, and hence can provide speedups over monolithic (single) SVM classifiers. However, SVM classification is a computationally demanding task and existing hardware architectures for SVMs only consider monolithic classifiers. This paper proposes the acceleration of cascade SVMs through a hybrid processing hardware architecture optimized for the cascade SVM classification flow, accompanied by a method to reduce the required hardware resources for its implementation, and a method to improve the classification speed utilizing cascade information to further discard data samples. The proposed SVM cascade architecture is implemented on a Spartan-6 field-programmable gate array (FPGA) platform and evaluated for object detection on 800×600 (Super Video Graphics Array) resolution images. The proposed architecture, boosted by a neural network that processes cascade information, achieves a real-time processing rate of 40 frames/s for the benchmark face detection application. Furthermore, the hardware-reduction method results in the utilization of 25% less FPGA custom-logic resources and 20% peak power reduction compared with a baseline implementation.
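
    The cascade idea summarized above (cheap early stages reject the dominant negative class, and only surviving samples reach the costlier stages) can be sketched as follows; the stage functions and thresholds are hypothetical stand-ins, not the proposed FPGA architecture.

        from typing import Callable, Sequence
        import numpy as np

        def cascade_classify(x: np.ndarray,
                             stages: Sequence[Callable[[np.ndarray], float]],
                             thresholds: Sequence[float]) -> bool:
            """Run a sample through cascade stages; reject at the first stage whose
            score falls below its threshold, accept only if all stages pass."""
            for stage, threshold in zip(stages, thresholds):
                if stage(x) < threshold:
                    return False         # early rejection: most negatives exit here cheaply
            return True                  # survived every stage -> positive class

        # Hypothetical stages: a cheap linear score first, a costlier nonlinear score later.
        stages = [lambda x: float(x.sum()), lambda x: float(np.tanh(x).sum())]
        thresholds = [0.0, 0.5]
        print(cascade_classify(np.array([0.2, 0.4, 0.1]), stages, thresholds))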

  2. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    Science.gov (United States)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit the air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object oriented Python-based infrastructure to safely automate hardware control and optical experiments; specifically, conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
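
    The pattern described in this record (context managers guaranteeing hardware shutdown, plus a separate monitoring process) can be sketched roughly as below; the device name and the monitoring loop are hypothetical and greatly simplified, not the HiCAT code.

        import multiprocessing as mp
        import time
        from contextlib import contextmanager

        @contextmanager
        def open_device(name: str):
            """Context manager guaranteeing the device is closed, even on errors."""
            print(f"opening {name}")
            try:
                yield object()             # stand-in for a real hardware handle
            finally:
                print(f"closing {name}")   # always runs, even on unhandled exceptions

        def safety_monitor(stop_event):
            """Child process: would poll humidity/power sensors and raise a flag."""
            while not stop_event.is_set():
                time.sleep(0.1)            # replace with real sensor polling

        if __name__ == "__main__":
            stop_event = mp.Event()
            monitor = mp.Process(target=safety_monitor, args=(stop_event,), daemon=True)
            monitor.start()
            with open_device("deformable_mirror"):
                pass                       # run the experiment; a safety event would abort it
            stop_event.set()
            monitor.join()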

  3. A Practical Introduction to Hardware/Software Codesign

    CERN Document Server

    Schaumont, Patrick R

    2013-01-01

    This textbook provides an introduction to embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions. Improvements in this second edition include labs and examples using modern FPGA environments from Xilinx and Altera, which make the material applicable to a greater number of courses where these tools are already in use.  Mo...

  4. The Hardware Topological Trigger of ATLAS: Commissioning and Operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226165; The ATLAS collaboration

    2018-01-01

    The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system with an output rate of 100 kHz and decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, muon trigger and a central trigger processor. To improve the physics potential reach in ATLAS, during the LHC shutdown after Run 1, the Level-1 trigger system was upgraded at hardware, firmware and software level. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. For individual blades, real-time information from the calorimeter and muon Level-1 trigger systems is processed by four individual state-of-the-art FPGAs. It needs to deal with a large input bandwidth of up to 6 Tb/s, optical connectivity and low processing latency on the real-time data path. The L1Topo firmware applies measurements of angles between jets and/or leptons and several...

  5. New hardware and software design for electrical impedance tomography

    Science.gov (United States)

    Goharian, Mehran

    find a regularization parameter. Our results show that the TRS algorithm has the advantage that it does not require any knowledge of the norm of the noise for its process. (4) The second part of the thesis discusses the design, implementation, and testing of a novel 48-channel multi-frequency EIT system. The system specifications proved to be comparable with existing EIT systems, with the capability of 3-D measurement over selectable frequencies. The proposed algorithms are finally tested under experimental conditions using the designed EIT hardware. The conductivity and permittivity images for different targets were reconstructed using four different approaches: dog-leg, principal component analysis (PCA), Gauss-Newton, and difference imaging. In the case of the multi-frequency analysis, the PCA-based approach provided a substantial improvement over the Gauss-Newton technique in terms of systematic error reduction. Our EIT system recovered a conductivity value of 0.08 Sm-1 for the 0.07 Sm-1 piece of cucumber (14% error).

  6. Veggie Hardware Validation Test Preliminary Results and Lessons Learned

    Science.gov (United States)

    Massa, Gioia D.; Dufour, Nicole F.; Smith, T. M.

    2014-01-01

    The Veggie hardware validation test, VEG-01, was conducted on the International Space Station during Expeditions 39 and 40 from May through June of 2014. The Veggie hardware and the VEG-01 experiment payload were launched to station aboard the SpaceX-3 resupply mission in April 2014. Veggie was installed in an Expedite-the-Processing-of-Experiments-to-Space-Station (ExPRESS) rack in the Columbus module, and the VEG-01 validation test was initiated. Veggie installation was successful, and power was supplied to the unit. The hardware was programmed and the root mat reservoir and plant pillows were installed without issue. As expected, a small amount of growth media was observed in the sealed bags which enclosed the plant pillows when they were destowed. Astronaut Steve Swanson used the wet/dry vacuum to clean up the escaped particles. Water insertion or priming of the first plant pillow was unsuccessful, as an issue prevented water movement through the quick disconnect. All subsequent pillows were successfully primed, and the initial pillow was replaced with a backup pillow and successfully primed. Six pillows were primed, but only five pillows had plants which germinated. After about a week and a half it was observed that the plants were not growing well and that the pillow wicks were dry. This indicated that the reservoir was not supplying sufficient water to the pillows via wicking, and so the team reverted to an operational fix which added water directly to the plant pillows. Direct watering of the pillows led to a recovery in several of the stressed plants, though a couple of them did not recover. An important lesson learned involved Veggie's bellows. The bellows tended to float and interfere with operations when opened, so Steve secured them to the baseplate during plant tending operations. Due to the perceived intensity of the LED lights, the crew found it challenging to both work under the lights and read crew procedures on their computer. Although the lights are not a safety

  7. On a Model for the Storage of Files on a Hardware: Statistics at a Fixed Time and Asymptotic Regimes

    Directory of Open Access Journals (Sweden)

    Vincent Bansaye

    2009-01-01

    Full Text Available We consider a version in continuous time of the parking problem of Knuth. Files arrive following a Poisson point process and are stored on a hardware identified with the real line, in the closest free portions to the right of the arrival location. We specify the distribution of the space of unoccupied locations at a fixed time and give asymptotic regimes as the hardware becomes full.

  8. Porting the Core Flight System to the Dellingr Cubesat

    Science.gov (United States)

    Cudmore, Alan

    2017-01-01

    Dellingr is a 6U Cubesat developed by NASA Goddard Space Flight Center. It was delivered to the International Space Station in August 2017, and is scheduled to be deployed in November 2017. Compared to a typical NASA satellite, the Dellingr Cubesat had an extremely low budget and short schedule. Although the Dellingr Cubesat has minimal hardware resources, the cFS was ultimately chosen for the flight software. Using the cFS on the Dellingr Cubesat presented a few challenges, but also offered opportunities to help speed up development and verify the ACS flight software. This presentation will cover the lessons learned in porting the cFS to the Dellingr Cubesat, including working with the limited hardware resources, porting the cFS to FreeRTOS, and overcoming limitations related to data storage and file transfer. This presentation will also cover how hardware abstraction was used to run the flight software on multiple platforms and interface with the 42 dynamic simulator.

  9. Characterization of the sources and processes of organic and inorganic aerosols in New York city with a high-resolution time-of-flight aerosol mass apectrometer

    Directory of Open Access Journals (Sweden)

    Y.-L. Sun

    2011-02-01

    Full Text Available Submicron aerosol particles (PM1) were measured in-situ using a High-Resolution Time-of-Flight Aerosol Mass Spectrometer during the summer 2009 Field Intensive Study at Queens College in New York, NY. Organic aerosol (OA) and sulfate are the two dominant species, accounting for 54% and 24%, respectively, of the total PM1 mass. The average mass-based size distribution of OA presents a small mode peaking at ~150 nm (Dva) and an accumulation mode (~550 nm) that is internally mixed with sulfate, nitrate, and ammonium. The diurnal cycles of both sulfate and OA peak between 01:00–02:00 p.m. EST due to photochemical production. The average (±σ) oxygen-to-carbon (O/C), hydrogen-to-carbon (H/C), and nitrogen-to-carbon (N/C) ratios of OA in NYC are 0.36 (±0.09), 1.49 (±0.08), and 0.012 (±0.005), respectively, corresponding to an average organic mass-to-carbon (OM/OC) ratio of 1.62 (±0.11). Positive matrix factorization (PMF) of the high resolution mass spectra identified two primary OA (POA) sources, traffic and cooking, and three secondary OA (SOA) components including a highly oxidized, regional low-volatility oxygenated OA (LV-OOA; O/C = 0.63), a less oxidized, semi-volatile SV-OOA (O/C = 0.38) and a unique nitrogen-enriched OA (NOA; N/C = 0.053) characterized with prominent CxH2x+2N+ peaks likely from amino compounds. Our results indicate that cooking and traffic are two distinct and mass-equivalent POA sources in NYC, together contributing ~30% of the total OA mass during this study. The OA composition is dominated by secondary species, especially during high PM events. SV-OOA and LV-OOA on average account for 34% and 30%, respectively, of the total OA mass. The chemical evolution of SOA in NYC appears to progress with a continuous oxidation from SV-OOA to LV-OOA, which is further supported by a gradual increase of O/C ratio and a simultaneous decrease of H/C ratio in total OOA. Detailed

  10. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    Science.gov (United States)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
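
    As a hedged illustration of the two procedures named above, the sketch below computes Shewhart-style control limits and a Cpk process capability index from a sample of a measured processing parameter; the data and specification limits are invented for the example and are not from the cited case study.

        import numpy as np

        def control_limits(samples: np.ndarray, n_sigma: float = 3.0):
            """Shewhart-style individuals chart limits (illustrative)."""
            mu, sigma = samples.mean(), samples.std(ddof=1)
            return mu - n_sigma * sigma, mu + n_sigma * sigma

        def cpk(samples: np.ndarray, lsl: float, usl: float) -> float:
            """Process capability index: Cpk = min(USL - mu, mu - LSL) / (3 * sigma)."""
            mu, sigma = samples.mean(), samples.std(ddof=1)
            return min(usl - mu, mu - lsl) / (3.0 * sigma)

        data = np.random.default_rng(2).normal(100.0, 1.5, size=200)   # hypothetical measurements
        print("control limits:", control_limits(data))
        print("Cpk:", round(cpk(data, lsl=94.0, usl=106.0), 2))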

  11. Flight Operations. [Zero Knowledge to Mission Complete]

    Science.gov (United States)

    Forest, Greg; Apyan, Alex; Hillin, Andrew

    2016-01-01

    Outline the process that takes new hires with zero knowledge all the way to the point of completing missions in Flight Operations. Audience members should be able to outline the attributes of a flight controller and instructor, outline the training flow for flight controllers and instructors, and identify how the flight controller and instructor attributes are necessary to ensure operational excellence in mission prep and execution. Identify how the simulation environment is used to develop crisis management, communication, teamwork, and leadership skills for SGT employees beyond what can be provided by classroom training.

  12. Time Manager Software for a Flight Processor

    Science.gov (United States)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, they can freewheel and maintain a fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure an even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
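
    A hedged sketch of the timestamping scheme described above: synchronize once to GPS, then freewheel on the processor's time-base register, with gettime combining the GPS epoch and the elapsed time-base ticks. The register read and time-base frequency below are stand-ins, not the actual flight processor interface.

        import time

        TIMEBASE_HZ = 50_000_000            # hypothetical time-base frequency

        def read_timebase() -> int:
            """Stand-in for reading the flight processor's time-base register."""
            return time.perf_counter_ns() * TIMEBASE_HZ // 1_000_000_000

        class TimeManager:
            def sync_to_gps(self, gps_seconds: float) -> None:
                """Record GPS time and the time-base count at the moment of sync."""
                self.gps_at_sync = gps_seconds
                self.tb_at_sync = read_timebase()

            def gettime(self) -> float:
                """Freewheeling timestamp: GPS epoch plus elapsed time-base ticks."""
                return self.gps_at_sync + (read_timebase() - self.tb_at_sync) / TIMEBASE_HZ

        tm = TimeManager()
        tm.sync_to_gps(1_300_000_000.0)     # hypothetical GPS time at the sync point
        print(tm.gettime())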

  13. Development of an active structure flight experiment

    Science.gov (United States)

    Manning, R. A.; Wyse, R. E.; Schubert, S. R.

    1993-02-01

    The design and development of the Air Force and TRW's Advanced Control Technology Experiment (ACTEX) flight experiment is described in this paper. The overall objective of ACTEX is to provide an active structure trailblazer which will demonstrate the compatibility of active structures with operational spacecraft performance and lifetime measures. At the heart of the experiment is an active tripod driven by a digitally-programmable analog control electronics subsystem. Piezoceramic sensors and actuators embedded in a graphite epoxy host material provide the sensing and actuation mechanism for the active tripod. Low noise ground-programmable electronics provide a virtually unlimited number of control schemes that can be implemented in the space environment. The flight experiment program provides the opportunity to gather performance, reliability, adaptability, and lifetime performance data on vibration suppression hardware for the next generation of DoD and NASA spacecraft.

  14. Electric propulsion flight experience and technology readiness

    Science.gov (United States)

    Pollard, J. E.; Jackson, D. E.; Marvin, D. C.; Jenkin, A. B.; Janson, S. W.

    1993-06-01

    Spacecraft electric propulsion technology is reviewed here to provide mission planners and potential users with a better appreciation of its capabilities and limitations. Flight experience provides the best measure of EP technology readiness. We describe and document the flight history and development status of EP in domestic, foreign, and commercial programs. Low-power resistojets, arcjets, ion engines, and plasma thrusters are applicable today for stationkeeping and drag compensation. Future high-power systems would enable large velocity-change maneuvers. The trade-space of EP encompasses significant performance benefits (reduced propellant mass, enhanced payload, system-level synergism), along with challenges (hardware development, system operations, non-technical issues). The choice of design parameters (thrust, specific impulse, input power) depends on how much of a change from traditional spacecraft operations is acceptable for a given mission - greater change will yield a greater payoff.

  15. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  16. Secure management of biomedical data with cryptographic hardware.

    Science.gov (United States)

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2012-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicitly, and potentially, identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations.

  17. Demonstrating Hybrid Learning in a Flexible Neuromorphic Hardware System.

    Science.gov (United States)

    Friedmann, Simon; Schemmel, Johannes; Grubl, Andreas; Hartel, Andreas; Hock, Matthias; Meier, Karlheinz

    2017-02-01

    We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in implementable learning mechanisms while keeping high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. This processor is operating in parallel with a fully parallel neuromorphic system consisting of an array of synapses connected to analog, continuous time neuron circuits. Novel analog correlation sensor circuits process spike events for each synapse in parallel and in real-time. The processor uses this pre-processing to compute new weights possibly using additional information following its program. Therefore, to a certain extent, learning rules can be defined in software giving a large degree of flexibility. Synapses realize correlation detection geared towards Spike-Timing Dependent Plasticity (STDP) as central computational primitive in the analog domain. Operating at a speed-up factor of 1000 compared to biological time-scale, we measure time-constants from tens to hundreds of micro-seconds. We analyze variability across multiple chips and demonstrate learning using a multiplicative STDP rule. We conclude that the presented approach will enable flexible and efficient learning as a platform for neuroscientific research and technological applications.
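
    As a hedged illustration of a multiplicative STDP rule of the kind mentioned above (potentiation scaled by the remaining headroom w_max - w, depression scaled by w), the sketch below applies one pair-based weight update; the amplitudes and time constant are illustrative, not the chip's parameters.

        import math

        def stdp_update(w: float, dt: float, a_plus: float = 0.01, a_minus: float = 0.012,
                        tau: float = 20e-3, w_max: float = 1.0) -> float:
            """One pair-based multiplicative STDP update (illustrative parameters).
            dt = t_post - t_pre in seconds; positive dt potentiates, negative dt depresses."""
            if dt > 0:
                return w + a_plus * (w_max - w) * math.exp(-dt / tau)
            return w - a_minus * w * math.exp(dt / tau)

        print(stdp_update(0.5, 5e-3), stdp_update(0.5, -5e-3))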

  18. Ground-facilities at the DLR Institute of Aerospace Medicine for preparation of flight experiments

    Science.gov (United States)

    Hemmersbach, Ruth; Hendrik Anken, Ralf; Hauslage, Jens; von der Wiesche, Melanie; Baerwalde, Sven; Schuber, Marianne

    In order to investigate the influence of altered gravity on biological systems and to identify gravisensitive processes, various experimental platforms have been developed which are useful for simulating weightlessness or are able to produce hypergravity. At the Institute of Aerospace Medicine, DLR Cologne, a broad spectrum of applications is offered to scientists: clinostats with one rotation axis and variable rotation speeds for the cultivation of small objects (including aquatic organisms) under simulated weightlessness conditions, for online microscopic observations and for online kinetic measurements. The institute's own research concentrates on comparative studies with other methods of simulating weightlessness that are also available at the institute: a Rotating Wall Vessel (RWV) for aquatic studies and a Random Positioning Machine (RPM; manufactured by Dutch Space, Leiden, The Netherlands). Correspondingly, various centrifuge devices are available to study different test objects under hypergravity conditions, such as NIZEMI, a slowly rotating centrifuge microscope, and MUSIC, a multi-sample centrifuge. Mainly for experiments with human test subjects (artificial gravity), but also for biological systems and for testing various kinds of flight hardware, the SAHC, a short-arm human centrifuge loaned by ESA, was installed in Cologne and completes the experimental scenario. Furthermore, owing to specific tasks such as providing laboratories during the German parabolic flight experiments starting from Cologne and serving as the Facility Responsible Center for BIOLAB, a science rack in the Columbus module aboard the ISS, scientists are given the possibility of optimally preparing their flight experiments.

  19. Dedicated hardware processor and corresponding system-on-chip design for real-time laser speckle imaging.

    Science.gov (United States)

    Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming

    2011-11-01

    Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.
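
    The processing behind LSI is commonly a local speckle-contrast computation, K = sigma/mean over a small sliding window of the raw speckle image. The sketch below shows that reference computation in software; it is not the FPGA-optimized variant of the cited work, and the window size and test frame are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw: np.ndarray, window: int = 7) -> np.ndarray:
            """Local contrast K = std/mean over a sliding window (software reference)."""
            mean = uniform_filter(raw, size=window)
            mean_sq = uniform_filter(raw * raw, size=window)
            var = np.clip(mean_sq - mean * mean, 0.0, None)
            return np.sqrt(var) / np.maximum(mean, 1e-12)

        frame = np.random.default_rng(3).random((480, 640))   # stand-in for a raw CCD frame
        k_map = speckle_contrast(frame)
        print(k_map.shape, float(k_map.mean()))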

  20. Targeting multiple heterogeneous hardware platforms with OpenCL

    Science.gov (United States)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware
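
    One of the practices mentioned above, preprocessor-guarded optional optimizations selected at JIT-build time, can be sketched as follows; the kernel, macro name and device query are hypothetical, and the resulting option string would be handed to the OpenCL runtime's program-build step (clBuildProgram, or the equivalent call in a host binding).

        # Hypothetical kernel source: an optional local-memory path guarded by a macro.
        KERNEL_SRC = r"""
        __kernel void scale(__global const float *in, __global float *out, float a) {
        #ifdef USE_LOCAL_MEM
            __local float tile[64];                  /* optional optimization path */
            tile[get_local_id(0)] = in[get_global_id(0)];
            barrier(CLK_LOCAL_MEM_FENCE);
            out[get_global_id(0)] = a * tile[get_local_id(0)];
        #else
            out[get_global_id(0)] = a * in[get_global_id(0)];
        #endif
        }
        """

        def build_options(device_has_fast_local_mem: bool) -> str:
            """Choose preprocessor defines per device; the string is passed to the
            OpenCL JIT build of KERNEL_SRC."""
            opts = ["-cl-std=CL1.2"]
            if device_has_fast_local_mem:
                opts.append("-DUSE_LOCAL_MEM")
            return " ".join(opts)

        print(build_options(device_has_fast_local_mem=True))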

  1. Detailed requirements document for Stowage List and Hardware Tracking System (SLAHTS). [computer based information management system in support of space shuttle orbiter stowage configuration

    Science.gov (United States)

    Keltner, D. J.

    1975-01-01

    The stowage list and hardware tracking system, a computer based information management system, used in support of the space shuttle orbiter stowage configuration and the Johnson Space Center hardware tracking is described. The input, processing, and output requirements that serve as a baseline for system development are defined.

  2. LIDAR TS for ITER core plasma. Part I: layout & hardware

    Science.gov (United States)

    Salzmann, H.; Gowers, C.; Nielsen, P.

    2017-12-01

    The original time-of-flight design of the Thomson scattering diagnostic for the ITER core plasma has been given up by ITER. This decision was justified by insufficiencies of some of the components. In this paper we show that with available, present-day technology a LIDAR TS system is feasible which meets all the ITER specifications. As opposed to the conventional TS system, the LIDAR TS also measures the high field side of the plasma. The optical layout of the front end has been changed only a little in comparison with the latest one considered by ITER. The main change is that it offers optical collection without any vignetting over the low field side. The throughput of the system is defined only by the size and the angle of acceptance of the detectors. This, in combination with the fact that the LIDAR system uses only one set of spectral channels for the whole line of sight, means that no absolute calibration using Raman or Rayleigh scattering from a non-hydrogen isotope gas fill of the vessel is needed. Alignment of the system is easy since the collection optics view the footprint of the laser on the inner wall. In the described design we use, simultaneously, two different wavelength pulses from a Nd:YAG laser system. Its fundamental wavelength ensures measurements from 2 keV up to more than 40 keV, whereas the injection of the second harmonic enables measurements of low temperatures. As it is the purpose of this paper to show the technological feasibility of the LIDAR system, the hardware is considered in Part I of the paper. In Part II we demonstrate by numerical simulations that the accuracy of the measurements as required by ITER is maintained throughout the given plasma parameter range. The effect of enhanced background radiation in the wavelength range 400 nm-500 nm is considered. In Part III the recovery of calibration in case of changing spectral transmission of the front end is treated. We also investigate how to improve the spatial resolution at the

  3. FPGA Acceleration by Dynamically-Loaded Hardware Libraries

    DEFF Research Database (Denmark)

    Lomuscio, Andrea; Nannarelli, Alberto; Re, Marco

    Hardware acceleration is a viable solution to obtain energy efficiency in data-intensive computation. In this work, we present a hardware framework to dynamically load hardware libraries, HLL, on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. Results show that significant speed-up and energy efficiency can be obtained by HLL acceleration on systems-on-chip where reconfigurable fabric is placed next to the CPUs.

  4. Hardware support for collecting performance counters directly to memory

    Science.gov (United States)

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.

  5. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents a novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with a hardware and throughput performance surpassing known encryption systems.
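
    As a hedged, greatly simplified illustration of the general idea (a chaotic trajectory drives a keystream that is combined with the data), the sketch below uses a discrete logistic map rather than the continuous-time systems of the thesis; it is illustrative only and not secure for real use.

        def logistic_keystream(x0: float, r: float, n: int):
            """Discrete logistic map x <- r*x*(1-x); each state yields one keystream byte."""
            x = x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                yield int(x * 256) & 0xFF

        def xor_cipher(data: bytes, x0: float = 0.3141, r: float = 3.99) -> bytes:
            """Symmetric: the same call encrypts and decrypts (illustrative, not secure)."""
            return bytes(b ^ k for b, k in zip(data, logistic_keystream(x0, r, len(data))))

        pixels = bytes(range(16))                  # stand-in for image data
        encrypted = xor_cipher(pixels)
        assert xor_cipher(encrypted) == pixels     # decryption recovers the original bytes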

  6. X-37 Flight Demonstrator: X-40A Flight Test Approach

    Science.gov (United States)

    Mitchell, Dan

    2004-01-01

    The flight test objectives are: Evaluate the calculated air data system (CADS) experiment. Evaluate the Honeywell SIGI (GPS/INS) under flight conditions. Flight operation control center (FOCC) site integration and flight test operations. Flight test and tune GN&C algorithms. Conduct PID maneuvers to improve the X-37 aero database. Develop calculated air data system (CADS) flight data to support X-37 system design.

  7. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    International Nuclear Information System (INIS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-01-01

    The forecasting of reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many different factors, such as mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably reduces the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in the values of diagnostic parameters is introduced; then, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment's reliability and lifetime. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.
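
    For reference, the standard relations underlying such an approach (the characteristic function of the random parameter increment and the density recovered from it by the inverse transform) can be written as follows; the admissible limit x_adm is an illustrative symbol and the paper's specific aging equation is not reproduced here:

      % Characteristic function of the random increment X of the diagnostic
      % parameter, and the density recovered by the inverse Fourier transform:
      \varphi_X(t) = \mathbb{E}\!\left[e^{\,i t X}\right]
                   = \int_{-\infty}^{\infty} e^{\,i t x}\, f_X(x)\, \mathrm{d}x ,
      \qquad
      f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\,i t x}\, \varphi_X(t)\, \mathrm{d}t .
      % Reliability with respect to an assumed admissible limit x_{\mathrm{adm}}:
      R = \Pr\{X < x_{\mathrm{adm}}\} = \int_{-\infty}^{x_{\mathrm{adm}}} f_X(x)\, \mathrm{d}x .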

  8. A Perspective on Development Flight Instrumentation and Flight Test Analysis Plans for Ares I-X

    Science.gov (United States)

    Huebner, Lawrence D.; Richards, James S.; Brunty, Joseph A.; Smith, R. Marshall; Trombetta, Dominic R.

    2009-01-01

    NASA's Constellation Program will take a significant step toward completion of the Ares I crew launch vehicle with the flight test of Ares I-X and completion of the Ares I-X post-flight evaluation. The Ares I-X is an ascent development flight test that will acquire flight data early enough to impact the design and development of the Ares I. As the primary customer for flight data from the Ares I-X mission, Ares I has been the major driver in the definition of the Development Flight Instrumentation (DFI). This paper focuses on the DFI development process and the plans for post-flight evaluation of the resulting data to impact the Ares I design. Efforts for determining the DFI for Ares I-X began in the fall of 2005, and significant effort to refine and implement the Ares I-X DFI has been expended since that time. This paper will present a perspective on the development and implementation of the DFI. Emphasis will be placed on the process by which the list was established and changes were made to that list due to imposed constraints. The paper will also discuss the plans for the analysis of the DFI data following the flight and a summary of flight evaluation tasks to be performed in support of tools and models validation for design and development.

  9. Agile hardware and software systems engineering for critical military space applications

    Science.gov (United States)

    Huang, Philip M.; Knuth, Andrew A.; Krueger, Robert O.; Garrison-Darrin, Margaret A.

    2012-06-01

    The Multi Mission Bus Demonstrator (MBD) is a successful demonstration of agile program management and system engineering in a high-risk technology application where utilizing and implementing new, untraditional development strategies were necessary. MBD produced two fully functioning spacecraft for a military/DOD application in a record-breaking time frame and at dramatically reduced costs. This paper discloses the adaptation and application of concepts developed in agile software engineering to hardware product and system development for critical military applications. This challenging spacecraft did not use existing key technology (heritage hardware) and created a large paradigm shift from traditional spacecraft development. The insertion of new technologies and methods in space hardware has long been a problem due to long build times, the desire to use heritage hardware, and lack of effective process. The role of momentum in the innovative process can be exploited to tackle ongoing technology disruptions and allows risk interactions to be mitigated in a disciplined manner. Examples of how these concepts were used during the MBD program will be delineated. Maintaining project momentum was essential to assess the constant non-recurring technological challenges which needed to be retired rapidly from the engineering risk liens. Development never slowed due to tactical assessment of the hardware with the adoption of the SCRUM technique. We adapted this concept as a representation of mitigation of technical risk while allowing for design freeze later in the program's development cycle. By using Agile Systems Engineering and Management techniques which enabled decisive action, the product development momentum was effectively used to produce two novel space vehicles in a fraction of the time and at dramatically reduced cost.

  10. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit Ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  11. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a software radiocommunication application.

  12. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Full Text Available Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and an automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a software radiocommunication application.

  13. Towards hardware-intrinsic security foundations and practice

    CERN Document Server

    Sadeghi, Ahmad-Reza; Tuyls, Pim

    2010-01-01

    Hardware-intrinsic security is a young field dealing with secure secret key storage. This book features contributions from researchers and practitioners with backgrounds in physics, mathematics, cryptography, coding theory and processor theory.

  14. International Space Station (ISS) Addition of Hardware - Computer Generated Art

    Science.gov (United States)

    1995-01-01

    This computer generated scene of the International Space Station (ISS) represents the first addition of hardware following the completion of Phase II. The 8-A Phase shows the addition of the S-9 truss.

  15. Preventive Safety Measures: A Guide to Security Hardware.

    Science.gov (United States)

    Gottwalt, T. J.

    2003-01-01

    Emphasizes the importance of an annual security review of a school facility's door hardware and provides a description of the different types of locking devices typically used on schools and where they are best applied. (EV)

  16. Hardware device to physical structure binding and authentication

    Science.gov (United States)

    Hamlet, Jason R.; Stein, David J.; Bauer, Todd M.

    2013-08-20

    Detection and deterrence of device tampering and subversion may be achieved by including a cryptographic fingerprint unit within a hardware device for authenticating a binding of the hardware device and a physical structure. The cryptographic fingerprint unit includes an internal physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates an internal PUF value. Binding logic is coupled to receive the internal PUF value, as well as an external PUF value associated with the physical structure, and generates a binding PUF value, which represents the binding of the hardware device and the physical structure. The cryptographic fingerprint unit also includes a cryptographic unit that uses the binding PUF value to allow a challenger to authenticate the binding.
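
    The abstract does not state how the two PUF values are combined; a minimal illustrative sketch, assuming a simple XOR-then-hash binding and an HMAC-style challenge response, is:

      import hashlib, hmac

      def binding_puf_value(internal_puf: bytes, external_puf: bytes) -> bytes:
          """Combine the device PUF response with the structure PUF response.
          XOR-then-hash is an assumption for illustration; the patented binding
          logic is not specified in the abstract."""
          mixed = bytes(a ^ b for a, b in zip(internal_puf, external_puf))
          return hashlib.sha256(mixed).digest()

      def respond_to_challenge(binding_value: bytes, challenge: bytes) -> bytes:
          """A challenger authenticates the binding by checking this keyed response."""
          return hmac.new(binding_value, challenge, hashlib.sha256).digest()

      # Both sides derive the same response only if device and structure match.
      internal = bytes(16)            # stand-in for the on-device PUF response
      external = bytes(range(16))     # stand-in for the structure-derived PUF response
      key = binding_puf_value(internal, external)
      print(respond_to_challenge(key, b"nonce-42").hex())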

  17. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...

  18. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The performance of computer graphics systems is increasing faster than that of any other computing application. Algorithms for line clipping against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware development and the significant increase in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, formulated, and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
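
    The "positional code generator" is presumably a region-code unit of the Cohen-Sutherland type; a software sketch of that classic outcode scheme (with an illustrative clip window, not the authors' exact hardware) is:

      # Cohen-Sutherland style region codes, a common "positional code" scheme.
      INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

      def outcode(x, y, xmin, ymin, xmax, ymax):
          code = INSIDE
          if x < xmin:   code |= LEFT
          elif x > xmax: code |= RIGHT
          if y < ymin:   code |= BOTTOM
          elif y > ymax: code |= TOP
          return code

      def trivially_decidable(p0, p1, window):
          """Return 'accept', 'reject', or None (needs clipping) from the two codes."""
          c0 = outcode(*p0, *window)
          c1 = outcode(*p1, *window)
          if c0 == 0 and c1 == 0:
              return "accept"        # both endpoints inside the window
          if c0 & c1:
              return "reject"        # both endpoints beyond the same window edge
          return None                # segment must be clipped against the window

      print(trivially_decidable((1, 1), (3, 3), (0, 0, 10, 10)))    # accept
      print(trivially_decidable((-5, 2), (-1, 9), (0, 0, 10, 10)))  # reject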

  19. JPL's Space Flight Operations Center: Development project overview

    Science.gov (United States)

    Ebersole, M.

    1991-01-01

    The topics are covered in view graph form and include the following: (1) major elements of deep space flight programs; (2) development schedule; (3) primary design goals; (4) Space Flight Operations Center (SFOC) data systems architecture; (5) technical guidelines; (6) SFOC data system functional architecture; (7) typical SFOC node; (8) SFOC components; (9) SFOC software categories; (10) planned subsystem core diagram for Mars observer; (11) SFOC use of public domain/3rd party software; (12) SFOC hardware; (13) SFOC target six mission configuration; and (14) SFOC development status and plans.

  20. Activity on improving performance of time-of-flight detector at CDF

    International Nuclear Information System (INIS)

    Menzione, A.; Cerri, C.; Vataga, E.; Prokoshin, F.; Tokar, S.

    2002-01-01

    The paper describes activity on improving the time resolution of the Time-of-Flight detector at CDF. The main goal of the detector is the identification of kaons and pions for b-quark (B-meson) flavour tagging. Construction of the detector has been described as well as proposals on detector design changes to improve its time resolution. Monte Carlo simulation of the detector response to MIP was performed. The results of the simulation showed that the proposed modifications (at least with currently available materials) bring modest or no improvement of the detector time resolution. An automated set-up was assembled to test and check out the changes in the electronic readout system of the detector. Sophisticated software has been developed for this set-up to provide control of the system as well as processing and presentation of data from the detector. This software can perform various tests using different implementations of the hardware set-up

  1. Conceptual Design Approach to Implementing Hardware-based Security Controls in Data Communication Systems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad Salah; Jung, Jaecheon

    2016-01-01

    In the Korean Advanced Power Reactor (APR1400), the safety control systems network is electrically isolated and physically separated from the non-safety systems data network. Unidirectional gateways, which include data-diode fiber-optic cabling and computer-based servers, transmit the plant safety-critical parameters to the main control room (MCR) for control and monitoring processes. The data transmission is only one-way, from safety to non-safety. Reverse communication is blocked so that the safety systems network is protected from potential cyberattacks or intrusions from the non-safety side. Most commercial off-the-shelf (COTS) security devices are software-based solutions that require operating systems and processors to perform their functions. Field Programmable Gate Arrays (FPGAs) offer digital hardware solutions to implement security controls such as data packet filtering and deep packet inspection. This paper presents a conceptual design to implement hardware-based network security controls for maintaining the availability of gateway servers. A conceptual design of hardware-based network security controls is discussed in this paper. The proposed design aims at utilizing the hardware-based capabilities of FPGAs together with the filtering and DPI functions of COTS software-based firewalls and intrusion detection and prevention systems (IDPS). The proposed design implements a network security perimeter between the DCN-I zone and the gateway servers zone. The security control functions protect the gateway servers from potential DoS attacks that could affect data availability and integrity.

  2. System-level protection and hardware Trojan detection using weighted voting.

    Science.gov (United States)

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious, especially with the widespread use of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third-party intellectual property cores (IPs) during the design process. Recent research has been performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third-party IP cores is more challenging than in other logic modules, especially since there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third-party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.
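
    A minimal sketch of output voting across redundant untrusted IP implementations, with trust weights adjusted by agreement, might look like the following; the weighting and update rules are illustrative assumptions, not the authors' method:

      from collections import defaultdict

      def weighted_vote(outputs, weights):
          """Pick the output value backed by the largest total trust weight."""
          score = defaultdict(float)
          for ip, value in outputs.items():
              score[value] += weights[ip]
          return max(score, key=score.get)

      def update_weights(outputs, weights, decision, lr=0.1):
          """Reward IP cores that agreed with the decision, penalize the rest."""
          for ip, value in outputs.items():
              weights[ip] = max(0.0, weights[ip] + (lr if value == decision else -lr))

      weights = {"ipA": 1.0, "ipB": 1.0, "ipC": 1.0}    # start with equal trust
      for outputs in [{"ipA": 7, "ipB": 7, "ipC": 9},   # ipC deviates (possible Trojan payload)
                      {"ipA": 4, "ipB": 4, "ipC": 4},
                      {"ipA": 1, "ipB": 1, "ipC": 5}]:
          decision = weighted_vote(outputs, weights)
          update_weights(outputs, weights, decision)
      print(weights)   # trust in ipC decays as it keeps disagreeing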

  3. System-level protection and hardware Trojan detection using weighted voting

    Directory of Open Access Journals (Sweden)

    Hany A.M. Amin

    2014-07-01

    Full Text Available The problem of hardware Trojans is becoming more serious, especially with the widespread use of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third-party intellectual property cores (IPs) during the design process. Recent research has been performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third-party IP cores is more challenging than in other logic modules, especially since there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third-party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.

  4. Hardware Commissioning of the LHC Quality Assurance, follow-up and storing of the test results

    CERN Document Server

    Barbero, E

    2005-01-01

    During the commissioning of the LHC technical systems [1] (the so-called Hardware Commissioning) a large number of test sequences and procedures will be applied to the different systems and components of the accelerator. All the information related to the coordination of the Hardware Commissioning will be structured and managed towards the final objective of integrating all the data produced in the Manufacturing and Test Folders (MTF) [2] at both equipment level (i.e. individual system tests) and commissioning level (i.e. Hardware Commissioning). The MTF for Hardware Commissioning will be mainly used to archive the results of the tests (i.e. status, parameters and waveforms) which will be used later as reference during the operation with beam. Also it is an indispensable tool for monitoring the progress of the different tests and ensuring the proper follow-up of the procedures described in the engineering specifications; in this way, the Quality Assurance process will be completed. This paper describes the spe...

  5. Conceptual Design Approach to Implementing Hardware-based Security Controls in Data Communication Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Ahmad Salah; Jung, Jaecheon [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2016-10-15

    In the Korean Advanced Power Reactor (APR1400), the safety control systems network is electrically isolated and physically separated from the non-safety systems data network. Unidirectional gateways, which include data-diode fiber-optic cabling and computer-based servers, transmit the plant safety-critical parameters to the main control room (MCR) for control and monitoring processes. The data transmission is only one-way, from safety to non-safety. Reverse communication is blocked so that the safety systems network is protected from potential cyberattacks or intrusions from the non-safety side. Most commercial off-the-shelf (COTS) security devices are software-based solutions that require operating systems and processors to perform their functions. Field Programmable Gate Arrays (FPGAs) offer digital hardware solutions to implement security controls such as data packet filtering and deep packet inspection. This paper presents a conceptual design to implement hardware-based network security controls for maintaining the availability of gateway servers. A conceptual design of hardware-based network security controls is discussed in this paper. The proposed design aims at utilizing the hardware-based capabilities of FPGAs together with the filtering and DPI functions of COTS software-based firewalls and intrusion detection and prevention systems (IDPS). The proposed design implements a network security perimeter between the DCN-I zone and the gateway servers zone. The security control functions protect the gateway servers from potential DoS attacks that could affect data availability and integrity.

  6. Testing the LIGO inspiral analysis with hardware injections

    International Nuclear Information System (INIS)

    Brown, D A

    2004-01-01

    Injection of simulated binary inspiral signals into detector hardware provides an excellent test of the inspiral detection pipeline. By recovering the physical parameters of an injected signal, we test our understanding of both instrumental calibration and the data analysis pipeline. We describe an inspiral search code and results from hardware injection tests and demonstrate that injected signals can be recovered by the data analysis pipeline. The parameters of the recovered signals match those of the injected signals

  7. Fifty Years of Observing Hardware and Human Behavior

    Science.gov (United States)

    McMann, Joe

    2011-01-01

    During this half-day workshop, Joe McMann presented the lessons learned during his 50 years of experience in both industry and government, which included all U.S. manned space programs, from Mercury to the ISS. He shared his thoughts about hardware and people and what he has learned from first-hand experience. Included were such topics as design, testing, design changes, development, failures, crew expectations, hardware, requirements, and meetings.

  8. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  9. Accelerator Technology: Injection and Extraction Related Hardware: Kickers and Septa

    CERN Document Server

    Barnes, M J; Mertens, V

    2013-01-01

    This document is part of Subvolume C 'Accelerators and Colliders' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '8.7 Injection and Extraction Related Hardware: Kickers and Septa' of the Chapter '8 Accelerator Technology' with the content: 8.7 Injection and Extraction Related Hardware: Kickers and Septa 8.7.1 Fast Pulsed Systems (Kickers) 8.7.2 Electrostatic and Magnetic Septa

  10. Basics of spectroscopic instruments. Hardware of NMR spectrometer

    International Nuclear Information System (INIS)

    Sato, Hajime

    2009-01-01

    NMR is a powerful tool for structure analysis of small molecules, natural products, biological macromolecules, synthesized polymers, samples from material science, and so on. Magnetic Resonance Imaging (MRI) is applicable to plants and animals. Because most NMR experiments can be done in an automated mode, one can forget the hardware of NMR spectrometers. It would be good to understand the features and performance of NMR spectrometers. Here I present the hardware of a modern NMR spectrometer which is fully equipped with digital technology. (author)

  11. A Survey on Hardware Implementations of Visual Object Trackers

    OpenAIRE

    El-Shafie, Al-Hussein A.; Habib, S. E. D.

    2017-01-01

    Visual object tracking is an active topic in the computer vision domain with applications extending over numerous fields. The main sub-tasks required to build an object tracker (e.g. object detection, feature extraction and object tracking) are computation-intensive. In addition, real-time operation of the tracker is indispensable for almost all of its applications. Therefore, complete hardware or hardware/software co-design approaches are pursued for better tracker implementations. This pape...

  12. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. The triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the hig...

  13. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. The triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design track reconstruction can be performed only in limited regions of interest at L2 and the CPU requirements may limit this even further at the highest instantane...

  14. Hardware implementation of the ORNL fissile mass flow monitor

    International Nuclear Information System (INIS)

    McEvers, J.; Sumner, J.; Jones, R.; Ferrell, R.; Martin, C.; Uckan, T.; March-Leuba, J.

    1998-01-01

    This paper provides an overall description of the implementation of the Oak Ridge National Laboratory (ORNL) Fissile Mass Flow Monitor, which is part of a Blend Down Monitoring System (BDMS) developed by the US Department of Energy (DOE). The Fissile Mass Flow Monitor is designed to measure the mass flow of fissile material through a gaseous or liquid process stream. It consists of a source-modulator assembly, a detector assembly, and a cabinet that houses all control, data acquisition, and supporting electronics equipment. The development of this flow monitor was first funded by DOE/NE in September 95, and an initial demonstration by ORNL was described in previous INMM meetings. This methodology was chosen by DOE/NE for implementation in November 1996, and the hardware/software development is complete. Successful BDMS installation and operation of the complete BDMS has been demonstrated in the Paducah Gaseous Diffusion Plant (PGDP), which is operated by Lockheed Martin Utility Services, Inc. for the US Enrichment Corporation and regulated by the Nuclear Regulatory Commission. Equipment for two BDMS units has been shipped to the Russian Federation

  15. Commodity hardware and open source solutions in FTU data management

    International Nuclear Information System (INIS)

    Centioli, C.; Bracco, G.; Eccher, S.; Iannone, F.; Maslennikov, A.; Panella, M.; Vitale, V.

    2004-01-01

    Frascati Tokamak Upgrade (FTU) data management system underwent several developments in the last year, mainly due to the availability of huge amounts of open-source software and cheap commodity hardware. First of all, we replaced the old and expensive four SUN/SOLARIS servers running the AFS (Andrew File System) fusione.it cell with three SuperServer Supermicro SC-742. Secondly, the Linux 2.4 OS has been installed on our new cell servers and the OpenAFS 1.2.8 open-source distributed file system has replaced the commercial IBM/Transarc AFS. A pioneering solution - SGI's XFS file system for Linux - has been adopted to format one terabyte of FTU storage, on which the AFS volumes are based. Benchmark tests have shown the good performance of XFS compared to the classical ext3 Linux file system. Third, the data access software has been ported to Linux, together with the interfaces to Matlab and IDL, as well as the locally developed data display utility, SHOX. Finally, a new Object-Oriented Data Model (OODM) has been developed for FTU shot data to build and maintain a FTU data warehouse (DW). The FTU OODM has been developed using ROOT, an object-oriented data analysis framework well known in high-energy physics. Since large volumes of data are involved, a parallel data extraction process, developed in the ROOT framework, has been implemented, taking advantage of the AFS distributed environment of the FTU computing system.

  16. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper describes the development of a multiple-camera setup with a joint view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, in both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes considerably reduces the network latency, by up to 100 times compared to the software ORB.

  17. Compiling quantum circuits to realistic hardware architectures using temporal planners

    Science.gov (United States)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generate a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
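
    To make the nearest-neighbour constraint concrete, the toy routine below greedily inserts SWAPs on a linear qubit array so that every two-qubit gate acts on adjacent physical qubits; it illustrates the constraint only and is not one of the temporal planners evaluated in the paper:

      def compile_linear(gates, n_qubits):
          """gates: list of (q1, q2) logical two-qubit gates.
          Returns a schedule of SWAP and gate operations on a line of physical qubits.
          Greedy illustration of the adjacency constraint; a temporal planner
          would optimise the ordering and overall duration."""
          phys = list(range(n_qubits))             # phys[i] = logical qubit at site i
          pos = {q: i for i, q in enumerate(phys)}
          ops = []
          for a, b in gates:
              # move qubit a next to qubit b one SWAP at a time
              while abs(pos[a] - pos[b]) > 1:
                  i = pos[a]
                  j = i + (1 if pos[b] > i else -1)
                  phys[i], phys[j] = phys[j], phys[i]
                  pos[phys[i]], pos[phys[j]] = i, j
                  ops.append(("SWAP", i, j))
              ops.append(("GATE", pos[a], pos[b]))
          return ops

      print(compile_linear([(0, 3), (1, 2)], 4))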

  18. Space Flight Resource Management for ISS Operations

    Science.gov (United States)

    Schmidt, Larry; Slack, Kelley; O'Keefe, William; Huning, Therese; Sipes, Walter; Holland, Albert

    2011-01-01

    This slide presentation reviews the International Space Station (ISS) Operations space flight resource management, which was adapted to the ISS from the shuttle processes. It covers crew training and behavior elements.

  19. Defining and Enforcing Hardware Security Requirements

    Science.gov (United States)

    2011-12-01

  20. Implementation of Hardware Accelerators on Zynq

    DEFF Research Database (Denmark)

    Toft, Jakob Kenn

    benchmarks, a Monte Carlo simulation of European stock options and a Telco telephone billing application. Each of the accelerators tests different aspects of the Zynq platform in terms of floating-point and binary-coded decimal processing speed. The two accelerators are compared with the performance...

  1. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Bauce, Matteo; Dankel, Maik; Howard, Jacob; Kama, Sami

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. These data are processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predate co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this talk we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based Athena frame...

  2. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Dankel, Maik; The ATLAS collaboration; Howard, Jacob; Bauce, Matteo; Boing, Rene

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. This data is processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predates co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this proceedings we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based...

  3. Hardware stream cipher with controllable chaos generator for colour image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2014-01-01

    This study presents hardware realisation of chaos-based stream cipher utilised for image encryption applications. A third-order chaotic system with signum non-linearity is implemented and a new post processing technique is proposed to eliminate the bias from the original chaotic sequence. The proposed stream cipher utilises the processed chaotic output to mask and diffuse input pixels through several stages of XORing and bit permutations. The performance of the cipher is tested with several input images and compared with previously reported systems showing superior security and higher hardware efficiency. The system is experimentally verified on a Xilinx Virtex-4 field programmable gate array (FPGA) achieving small area utilisation and a throughput of 3.62 Gb/s. © The Institution of Engineering and Technology 2013.
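
    A toy software analogue of such a cipher (a logistic-map keystream XORed onto pixel bytes) is sketched below; it makes no claim to the security or to the specific third-order chaotic system and permutation stages of the paper:

      def logistic_keystream(seed: float, n: int, r: float = 3.99):
          """Generate n pseudo-random bytes from a logistic map.
          Illustrative only; the paper uses a third-order continuous chaotic
          generator with signum non-linearity plus bit permutations."""
          x = seed
          out = bytearray()
          for _ in range(n):
              x = r * x * (1.0 - x)
              out.append(int(x * 256) & 0xFF)
          return bytes(out)

      def xor_mask(pixels: bytes, seed: float) -> bytes:
          ks = logistic_keystream(seed, len(pixels))
          return bytes(p ^ k for p, k in zip(pixels, ks))

      img = bytes([10, 200, 33, 77, 250, 0, 128, 64])
      enc = xor_mask(img, seed=0.613)
      dec = xor_mask(enc, seed=0.613)     # XOR masking is its own inverse
      assert dec == img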

  4. Bat flight: aerodynamics, kinematics and flight morphology.

    Science.gov (United States)

    Hedenström, Anders; Johansson, L Christoffer

    2015-03-01

    Bats evolved the ability of powered flight more than 50 million years ago. The modern bat is an efficient flyer and recent research on bat flight has revealed many intriguing facts. By using particle image velocimetry to visualize wake vortices, both the magnitude and time-history of aerodynamic forces can be estimated. At most speeds the downstroke generates both lift and thrust, whereas the function of the upstroke changes with forward flight speed. At hovering and slow speed bats use a leading edge vortex to enhance the lift beyond that allowed by steady aerodynamics and an inverted wing during the upstroke to further aid weight support. The bat wing and its skeleton exhibit many features and control mechanisms that are presumed to improve flight performance. Whereas bats appear aerodynamically less efficient than birds when it comes to cruising flight, they have the edge over birds when it comes to manoeuvring. There is a direct relationship between kinematics and the aerodynamic performance, but there is still a lack of knowledge about how (and if) the bat controls the movements and shape (planform and camber) of the wing. Considering the relatively few bat species whose aerodynamic tracks have been characterized, there is scope for new discoveries and a need to study species representing more extreme positions in the bat morphospace. © 2015. Published by The Company of Biologists Ltd.

  5. Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development

    Science.gov (United States)

    Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu

    2008-01-01

    The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is the non-real-time flight avionics simulator that is designed to be completely software-based and run on a workstation class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. At the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.

  6. Man-rated flight software for the F-8 DFBW program

    Science.gov (United States)

    Bairnsfather, R. R.

    1975-01-01

    The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools--the all-digital simulator, the hybrid simulator, and the Iron Bird simulator--are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS inflight software, or hardware, failures.

  7. A Survey on Open-Source Flight Control Platforms of Unmanned Aerial Vehicle

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Skriver, Martin; Jin, Jie

    2017-01-01

    Recently, Unmanned Aerial Vehicles (UAVs), so-called drones, have gotten a lot of attention in academic research and commercial applications due to their simple structure, ease of operation and low-cost hardware components. The flight controller, an embedded electronics component, represents the core part of the drone. It aims at performing the main operations of the drone (e.g., autonomous control and navigation). There are various types of flight controllers and each of them has its own characteristics and features. This paper presents an extensive survey on the publicly available open-source flight controllers that can be used for academic research. The paper introduces the basics of the UAV system with its components. The survey fully covers both hardware and software open-source flight controller platforms and compares their main features.

  8. The stable isotopic composition of water vapour above Corsica during the HyMeX SOP1 campaign: insight into vertical mixing processes from lower-tropospheric survey flights

    Science.gov (United States)

    Sodemann, Harald; Aemisegger, Franziska; Pfahl, Stephan; Bitter, Mark; Corsmeier, Ulrich; Feuerle, Thomas; Graf, Pascal; Hankers, Rolf; Hsiao, Gregor; Schulz, Helmut; Wieser, Andreas; Wernli, Heini

    2017-05-01

    Stable isotopes of water vapour are powerful indicators of meteorological processes on a broad range of scales, reflecting evaporation, condensation, and air mass mixing processes. With the recent advent of fast laser-based spectroscopic methods, it has become possible to measure the stable isotopic composition of atmospheric water vapour in situ at a high temporal resolution. Here we present results from such comprehensive airborne spectroscopic isotope measurements in water vapour over the western Mediterranean at a high spatial and temporal resolution. Measurements have been acquired by a customized Picarro L2130-i cavity-ring down spectrometer deployed onboard the Dornier 128 D-IBUF aircraft together with a meteorological flux measurement package during the HyMeX SOP1 (Hydrological cycle in Mediterranean Experiment special observation period 1) field campaign in Corsica, France, during September and October 2012. Taking into account memory effects of the air inlet pipe, the typical time resolution of the measurements was about 15-30 s, resulting in an average horizontal resolution of about 1-2 km. Cross-calibration of the water vapour measurements from all humidity sensors showed good agreement under most flight conditions but the most turbulent ones. In total 21 successful stable isotope flights with 59 flight hours have been performed. Our data provide quasi-climatological autumn average conditions and vertical profiles of the stable isotope parameters δD, δ18O, and d-excess during the study period. A d-excess minimum in the overall average profile is reached in the region of the boundary-layer top, possibly caused by precipitation evaporation. This minimum is bracketed by higher d-excess values near the surface caused by non-equilibrium fractionation, and a maximum above the boundary layer related to the increasing d-excess in very depleted and dry high-altitude air masses. Repeated flights along the same pattern reveal pronounced day-to-day variability

  9. Sequential Principal Component Analysis -An Optimal and Hardware-Implementable Transform for Image Compression

    Science.gov (United States)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture compared to the state-of-the-art gradient-descent techniques. This algorithm is inherently amenable to a compact, low-power and high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because of its fixed transform characteristics, regardless of the data structure. On the other hand, the conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement PCA in compact VLSI hardware, due to its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating the Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior as an optimal data-dependent transform to the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.
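
    For orientation, a plain batch PCA compression of image blocks, the baseline against which a sequential dominant-term scheme would be compared, can be sketched with NumPy as follows; this is standard PCA, not JPL's SPCA algorithm:

      import numpy as np

      def pca_compress(blocks: np.ndarray, k: int):
          """blocks: (n_blocks, block_dim) matrix of flattened image blocks.
          Keep the k principal components; return coefficients, basis and mean."""
          mean = blocks.mean(axis=0)
          centered = blocks - mean
          _, _, vt = np.linalg.svd(centered, full_matrices=False)  # covariance via SVD
          basis = vt[:k]                       # (k, block_dim) principal directions
          coeffs = centered @ basis.T          # (n_blocks, k) compressed representation
          return coeffs, basis, mean

      def pca_reconstruct(coeffs, basis, mean):
          return coeffs @ basis + mean

      # usage with random stand-in data for flattened 8x8 blocks
      rng = np.random.default_rng(0)
      blocks = rng.normal(size=(500, 64))
      coeffs, basis, mean = pca_compress(blocks, k=8)
      print(np.mean((pca_reconstruct(coeffs, basis, mean) - blocks) ** 2))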

  10. The Mars Science Laboratory (MSL) Entry, Descent And Landing Instrumentation (MEDLI): Hardware Performance and Data Reconstruction

    Science.gov (United States)

    Little, Alan; Bose, Deepak; Karlgaard, Chris; Munk, Michelle; Kuhl, Chris; Schoenenberger, Mark; Antill, Chuck; Verhappen, Ron; Kutty, Prasad; White, Todd

    2013-01-01

    The Mars Science Laboratory (MSL) Entry, Descent and Landing Instrumentation (MEDLI) hardware was a first-of-its-kind sensor system that gathered temperature and pressure readings on the MSL heatshield during Mars entry on August 6, 2012. MEDLI began as a challenging instrumentation problem and has been a model of collaboration across multiple NASA organizations. After the culmination of almost 6 years of effort, the sensors performed extremely well, collecting data from before atmospheric interface through parachute deploy. This paper will summarize the history of the MEDLI project and hardware development, including key lessons learned that can apply to future instrumentation efforts. MEDLI returned an unprecedented amount of high-quality engineering data from a Mars entry vehicle. We will present the performance of the 3 sensor types: pressure, temperature, and isotherm tracking, as well as the performance of the custom-built sensor support electronics. A key component throughout the MEDLI project has been the ground testing and analysis effort required to understand the returned flight data. Although data analysis is ongoing through 2013, this paper will reveal some of the early findings on the aerothermodynamic environment that MSL encountered at Mars, the response of the heatshield material to that heating environment, and the aerodynamic performance of the entry vehicle. The MEDLI data results promise to challenge our engineering assumptions and revolutionize the way we account for margins in entry vehicle design.

  11. A Flexible Design for Optimization of Hardware Architecture in Distributed Arithmetic based FIR Filters

    OpenAIRE

    Fazel Sharifi; Saba Amanollahi; Mohammad Amin Taherkhani; Omid Hashemipour

    2012-01-01

    FIR filters are used in many performance/power critical applications such as mobile communication devices, analogue-to-digital converters and digital signal processing applications. Design of appropriate FIR filters usually causes the filter order to increase. Synthesis and tape-out of high-order FIR filters with reasonable delay, area and power have become an important challenge for hardware designers. In many cases the complexity of high-order filters causes the constraints of the tot...
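
    The distributed-arithmetic idea itself (replacing the multiply-accumulate with a look-up table indexed by one bit-slice of all inputs per step) can be sketched in software as follows; word lengths and coefficient values are arbitrary illustration values:

      def da_fir_output(coeffs, samples, nbits=8):
          """One FIR output y = sum(c_k * x_k) computed distributed-arithmetic style:
          unsigned nbits-wide samples are processed one bit-plane at a time, and the
          inner sum over taps is read from a precomputed LUT indexed by that bit-slice."""
          taps = len(coeffs)
          # LUT[addr] = sum of coefficients whose tap bit is set in addr
          lut = [sum(c for k, c in enumerate(coeffs) if addr >> k & 1)
                 for addr in range(1 << taps)]
          y = 0
          for b in range(nbits):
              addr = 0
              for k, x in enumerate(samples):
                  addr |= ((x >> b) & 1) << k      # bit b of every tap forms the address
              y += lut[addr] << b                  # shift-accumulate, no multipliers
          return y

      coeffs = [3, 1, 4, 1, 5]
      samples = [10, 20, 30, 40, 50]
      assert da_fir_output(coeffs, samples) == sum(c * x for c, x in zip(coeffs, samples))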

  12. Hardware in the loop testing and evaluation of seaborne search radars

    CSIR Research Space (South Africa)

    Strydom, JJ

    2012-09-01

    Full Text Available for independent testing and evaluation of radar systems. The CSIR digital radio frequency memory (DRFM) hardware technology is used as the basis of these test systems. DRFMs are traditionally used for EW applications, but processing power of field programmable...-band radar. In the figures of the paper the red areas indicate very “spiky” clutter, whereas the blue areas indicate Rayleigh (Gaussian) clutter, and Table 1 gives the measurement parameters for the light-clutter and heavy-clutter scenarios.

  13. Enforcing Hardware-Assisted Integrity for Secure Transactions from Commodity Operating Systems

    Science.gov (United States)

    2015-08-17

    Advanced Configuration and Power Interface (ACPI) to control the switching between OSes. We employ the Trusted Platform Module (TPM) during boot-up to ... ensure the integrity of the BIOS. There is no reliance on the TPM after the system boot-up process is complete. The combination of BIOS and TPM provides ... party device drivers). Moreover, the BIOS code can be set to read-only at boot-up using the TPM or another hardware lock and thus protected from being

  14. Hardware format pattern banks for the Associative memory boards in the ATLAS Fast Tracker Trigger System

    CERN Document Server

    Grewcoe, Clay James

    2014-01-01

    The aim of this project is to streamline and update the process of encoding the pattern bank to hardware format in the Associative Memory board (AM) of the Fast Tracker (FTK) for the ATLAS detector. The encoding is also adapted to Gray code to eliminate possible misreadings in high-frequency devices such as this one. ROOT files are used to store the pattern banks because of the compression utilized in ROOT.

  15. Speed test results and hardware/software study of computational speed problem, appendix D

    Science.gov (United States)

    1984-01-01

    The HP9845C is a desktop computer which is tested and evaluated for processing speed. A study was made to determine the availability and approximate cost of computers and/or hardware accessories necessary to meet the 20 ms sample period speed requirements. Additional requirements were that the control algorithm could be programmed in a high-level language and that the machine have sufficient storage to store the data from a complete experiment.

  16. James Webb Space Telescope Core 2 Test - Cryogenic Thermal Balance Test of the Observatorys Core Area Thermal Control Hardware

    Science.gov (United States)

    Cleveland, Paul; Parrish, Keith; Thomson, Shaun; Marsh, James; Comber, Brian

    2016-01-01

    The James Webb Space Telescope (JWST), successor to the Hubble Space Telescope, will be the largest astronomical telescope ever sent into space. To observe the very first light of the early universe, JWST requires a large deployed 6.5-meter primary mirror cryogenically cooled to less than 50 Kelvin. Three scientific instruments are further cooled via a large radiator system to less than 40 Kelvin. A fourth scientific instrument is cooled to less than 7 Kelvin using a combination pulse-tube Joule-Thomson mechanical cooler. Passive cryogenic cooling enables the large scale of the telescope which must be highly folded for launch on an Ariane 5 launch vehicle and deployed once on orbit during its journey to the second Earth-Sun Lagrange point. Passive cooling of the observatory is enabled by the deployment of a large tennis court sized five layer Sunshield combined with the use of a network of high efficiency radiators. A high purity aluminum heat strap system connects the three instrument's detector systems to the radiator systems to dissipate less than a single watt of parasitic and instrument dissipated heat. JWST's large scale features, while enabling passive cooling, also prevent the typical flight configuration fully-deployed thermal balance test that is the keystone of most space missions' thermal verification plans. This paper describes the JWST Core 2 Test, which is a cryogenic thermal balance test of a full size, high fidelity engineering model of the Observatory's 'Core' area thermal control hardware. The 'Core' area is the key mechanical and cryogenic interface area between all Observatory elements. The 'Core' area thermal control hardware allows for temperature transition of 300K to approximately 50 K by attenuating heat from the room temperature IEC (instrument electronics) and the Spacecraft Bus. Since the flight hardware is not available for test, the Core 2 test uses high fidelity and flight-like reproductions.

  17. Implementing the lattice Boltzmann model on commodity graphics hardware

    International Nuclear Information System (INIS)

    Kaufman, Arie; Fan, Zhe; Petkov, Kaloian

    2009-01-01

    Modern graphics processing units (GPUs) can perform general-purpose computations in addition to the native specialized graphics operations. Due to the highly parallel nature of graphics processing, the GPU has evolved into a many-core coprocessor that supports high data parallelism. Its performance has been growing at a rate of squared Moore's law, and its peak floating point performance exceeds that of the CPU by an order of magnitude. Therefore, it is a viable platform for time-sensitive and computationally intensive applications. The lattice Boltzmann model (LBM) computations are carried out via linear operations at discrete lattice sites, which can be implemented efficiently using a GPU-based architecture. Our simulations produce results comparable to the CPU version while improving performance by an order of magnitude. We have demonstrated that the GPU is well suited for interactive simulations in many applications, including simulating fire, smoke, lightweight objects in wind, jellyfish swimming in water, and heat shimmering and mirage (using the hybrid thermal LBM). We further advocate the use of a GPU cluster for large scale LBM simulations and for high performance computing. The Stony Brook Visual Computing Cluster has been the platform for several applications, including simulations of real-time plume dispersion in complex urban environments and thermal fluid dynamics in a pressurized water reactor. Major GPU vendors have been targeting the high performance computing market with GPU hardware implementations. Software toolkits such as NVIDIA CUDA provide a convenient development platform that abstracts the GPU and allows access to its underlying stream computing architecture. However, software programming for a GPU cluster remains a challenging task. We have therefore developed the Zippy framework to simplify GPU cluster programming. Zippy is based on global arrays combined with the stream programming model and it hides the low-level details of the
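
    A minimal NumPy sketch of the local D2Q9 lattice Boltzmann update (BGK collision followed by streaming), i.e. the per-site work that the GPU parallelizes, is given below; boundary conditions and forcing are omitted:

      import numpy as np

      # D2Q9 lattice: discrete velocities and weights
      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)

      def equilibrium(rho, u):
          cu = np.einsum('qd,xyd->xyq', c, u)               # c_q . u at every site
          usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
          return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

      def lbm_step(f, tau=0.6):
          """One BGK collision + streaming step on a periodic grid; f has shape (nx, ny, 9)."""
          rho = f.sum(axis=-1)
          u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
          f = f + (equilibrium(rho, u) - f) / tau           # collision (relaxation)
          for q in range(9):                                # streaming: shift along c_q
              f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
          return f

      f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))   # start at rest
      f = lbm_step(f)
      print(f.sum())   # total mass is conserved (about 32*32)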

  18. Obtaining Valid Safety Data for Software Safety Measurement and Process Improvement

    Science.gov (United States)

    Basili, Victor r.; Zelkowitz, Marvin V.; Layman, Lucas; Dangle, Kathleen; Diep, Madeline

    2010-01-01

    We report on a preliminary case study to examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Our goal is to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. Our purpose was two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to identify potential risks due to incorrect application of the safety process, deficiencies in the safety process, or the lack of a defined process. One early outcome of this work was to show that there are structural deficiencies in collecting valid safety data that make software safety different from hardware safety. In our conclusions we present some of these deficiencies.

  19. Ranking different factors influencing flight delay

    Directory of Open Access Journals (Sweden)

    Meysam Kazemi Asfe

    2014-07-01

    Full Text Available Flight interruption is one of the most important issues in today’s airline industry. Every year, most airlines spend significant amount of money to compensate flight delays. Therefore, it is important to detect important factors influencing on flight delays. This paper presents an empirical investigation to determine important factors on this issue. The study also asks some decision makers to make pairwise comparison and ranks various factors using the art of analytical hierarchy process. The study determines that technical defects and delayed entry were among the most important factors to blame for flight delays. In addition, announcing the postponement, replacement aircraft and path replacement are among the most important decisions facing managers in the aviation industry during the disruption of the flight.
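
    The ranking in the study is produced with the analytic hierarchy process, i.e. pairwise comparisons aggregated into priority weights. The sketch below shows the standard eigenvector weighting step for a hypothetical pairwise matrix; the factor names and comparison values are illustrative assumptions, not the study's data.

```python
# Illustrative AHP weighting step for ranking flight delay factors. The pairwise
# comparison values and factor names below are hypothetical, not the study's data.
import numpy as np

factors = ["technical defects", "delayed entry", "weather", "crew scheduling"]

# Hypothetical pairwise comparison matrix on the Saaty scale (reciprocal by construction).
A = np.array([
    [1.0, 2.0, 4.0, 5.0],
    [1/2, 1.0, 3.0, 4.0],
    [1/4, 1/3, 1.0, 2.0],
    [1/5, 1/4, 1/2, 1.0],
])

# Priority vector: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio check (random index 0.90 for a 4x4 matrix).
n = len(factors)
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90

for name, wt in sorted(zip(factors, weights), key=lambda t: -t[1]):
    print(f"{name:20s} {wt:.3f}")
print(f"consistency ratio: {cr:.3f}  (acceptable if < 0.10)")
```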

  20. Oxygen Generation System Laptop Bus Controller Flight Software

    Science.gov (United States)

    Rowe, Chad; Panter, Donna

    2009-01-01

    The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.

  1. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support the high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results can largely satisfy real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using video at CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  2. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints. ... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designed to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel...

  3. OS friendly microprocessor architecture: Hardware level computer security

    Science.gov (United States)

    Jungwirth, Patrick; La Fratta, Patrick

    2016-05-01

    We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline provides for background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.
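
    The abstract's central security idea is attaching Unix-style file permission bits to each cache memory bank and memory address. The Python sketch below only models that idea at a software level to make the access check concrete; the bank layout, bit encoding and ownership model are assumptions for illustration, and the OSFA itself performs these checks in hardware.

```python
# Software-level illustration of attaching Unix-style permission bits to individual
# cache banks, as described in the record. The bank layout, bit encoding and process
# model are assumptions; the OSFA implements the equivalent checks in hardware.
from dataclasses import dataclass

READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

@dataclass
class CacheBank:
    bank_id: int
    owner: int      # owning process/thread id (assumed model)
    perms: int      # rwx bits granted to the owner

class PermissionFault(Exception):
    pass

def check_access(bank: CacheBank, pid: int, requested: int) -> None:
    """Raise unless `pid` owns the bank and holds every requested permission bit."""
    if pid != bank.owner or (bank.perms & requested) != requested:
        raise PermissionFault(
            f"pid {pid} denied {requested:03b} on bank {bank.bank_id}")

banks = [CacheBank(0, owner=1, perms=READ | WRITE),
         CacheBank(1, owner=2, perms=READ)]

check_access(banks[0], pid=1, requested=READ | WRITE)    # allowed
try:
    check_access(banks[1], pid=1, requested=WRITE)       # different owner -> fault
except PermissionFault as fault:
    print(fault)
```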

  4. GOSH! A roadmap for open-source science hardware

    CERN Document Server

    Stefania Pandolfi

    2016-01-01

    The goal of the Gathering for Open Science Hardware (GOSH! 2016), held from 2 to 5 March 2016 at IdeaSquare, was to lay the foundations of the open-source hardware for science movement.   The participants in the GOSH! 2016 meeting gathered in IdeaSquare. (Image: GOSH Community) “Despite advances in technology, many scientific innovations are held back because of a lack of affordable and customisable hardware,” says François Grey, a professor at the University of Geneva and coordinator of Citizen Cyberlab – a partnership between CERN, the UN Institute for Training and Research and the University of Geneva – which co-organised the GOSH! 2016 workshop. “This scarcity of accessible science hardware is particularly obstructive for citizen science groups and humanitarian organisations that don’t have the same economic means as a well-funded institution.” Instead, open sourcing science hardware co...

  5. A teaching experience using a flight simulator: Educational Simulation in practice

    Directory of Open Access Journals (Sweden)

    Sergio Ruiz

    2014-09-01

    Full Text Available The use of appropriate Educational Simulation systems (software and hardware) for learning purposes may contribute to the application of the “Learning by Doing” (LbD) paradigm in the classroom, thus helping students to assimilate the theoretical concepts of a subject and acquire certain pre-defined competencies in a more didactic way. The main objective of this work is to conduct a teaching experience using a flight simulation environment so that students of the Aeronautical Management degree can assume the role of an aircraft pilot, allowing them to understand the basic processes of air navigation and to observe how new technologies can transform and improve these processes. This is especially helpful in the classroom for teaching the contents of the Single European Sky ATM Research (SESAR) programme, a European project that introduces a new Air Traffic Management (ATM) paradigm based on several relevant technological and procedural changes that will affect the entire air transportation system in the short and medium term. After the execution of several activities with a flight simulator in the classroom, a short test and a satisfaction survey were given to the students in order to assess the teaching experience.

  6. Ultra-high-performance supercritical fluid chromatography with quadrupole-time-of-flight mass spectrometry (UHPSFC/QTOF-MS) for analysis of lignin-derived monomeric compounds in processed lignin samples.

    Science.gov (United States)

    Prothmann, Jens; Sun, Mingzhe; Spégel, Peter; Sandahl, Margareta; Turner, Charlotta

    2017-12-01

    The conversion of lignin to potentially high-value low-molecular-weight compounds often results in complex mixtures of monomeric and oligomeric compounds. In this study, a method for the quantitative and qualitative analysis of 40 lignin-derived compounds using ultra-high-performance supercritical fluid chromatography coupled to quadrupole-time-of-flight mass spectrometry (UHPSFC/QTOF-MS) has been developed. Seven different columns were explored for maximum selectivity. Makeup solvent composition and ion source settings were optimised using a D-optimal design of experiments (DoE). Differently processed lignin samples were analysed and used for the method validation. The new UHPSFC/QTOF-MS method showed good separation of the 40 compounds within a retention time of only 6 min, and out of these, 36 showed high ionisation efficiency in negative electrospray ionisation mode. Graphical abstract: A rapid and selective method for the quantitative and qualitative analysis of 40 lignin-derived compounds using ultra-high-performance supercritical fluid chromatography coupled to quadrupole-time-of-flight mass spectrometry (UHPSFC/QTOF-MS).

  7. Exploring flight crew behaviour

    Science.gov (United States)

    Helmreich, R. L.

    1987-01-01

    A programme of research into the determinants of flight crew performance in commercial and military aviation is described, along with limitations and advantages associated with the conduct of research in such settings. Preliminary results indicate significant relationships among personality factors, attitudes regarding flight operations, and crew performance. The potential theoretical and applied utility of the research and directions for further research are discussed.

  8. Adaptive Hardware Cryptography Engine Based on FPGA

    International Nuclear Information System (INIS)

    Afify, M.A.A.

    2011-01-01

    In the last two decades, with the spread of real-time applications over public networks and communications, the need for information security has become more important, together with very high-speed data processing to keep up with real-time requirements; this is the reason for using an FPGA as the implementation platform for the proposed cryptography engine. In this thesis a new S-Box design is demonstrated and implemented. The simulation results of the proposed S-Box are compared with different S-Box designs in the DES, Twofish and Rijndael algorithms, and proposed S-Boxes of different sizes are also compared with one another. The proposed S-Box, implemented with 32-bit input data lines and compared with designs in the encryption algorithms having the same input width, achieves a maximum frequency of 120 MHz, whereas the DES S-Box achieves 34 MHz and Rijndael 71 MHz. The proposed design also gives the best implementation area, occupying 50 configurable logic blocks (CLBs) compared with 88 CLBs for DES. The proposed S-Box is further implemented in sizes of 64, 128, and 256 bits for the input data lines. The implementation was carried out using a UniDAq PCI card with an XCV800 FPGA chip; synthesis for all designs was carried out with Leonardo Spectrum and simulation with the ModelSim simulator from the FPGA Advantage package. Finally, results evaluation and verification were carried out using the UniDAq FPGA PCI card with the XCV800 chip. Several case studies have been implemented: data encryption, image encryption, voice encryption, and video encryption. A prototype of a remote monitoring control system has also been implemented. The proposed S-Box design achieves significant improvements in maximum frequency, implementation area, and encryption strength.
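
    The thesis's proposed 32-bit S-Box design is not reproduced in the record, so the sketch below only illustrates the generic role of an S-Box as an invertible nonlinear lookup applied nibble-by-nibble to a word; the toy 4-bit table (the first row of the DES S1 box) is used purely for illustration.

```python
# Generic illustration of an S-Box as a nonlinear substitution table. The toy 4-bit
# table below (first row of DES S1) is for illustration only; the thesis's proposed
# 32-bit-input S-Box design is not reproduced here.
TOY_SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
INV_SBOX = [TOY_SBOX.index(v) for v in range(16)]   # inverse lookup table

def substitute(word: int, sbox) -> int:
    """Apply the 4-bit S-Box to each nibble of a 32-bit word."""
    out = 0
    for shift in range(0, 32, 4):
        nibble = (word >> shift) & 0xF
        out |= sbox[nibble] << shift
    return out

x = 0xDEADBEEF
y = substitute(x, TOY_SBOX)
assert substitute(y, INV_SBOX) == x                  # substitution is invertible
print(f"{x:08X} -> {y:08X}")
```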

  9. An application of the Multi-Purpose System Simulation /MPSS/ model to the Monitor and Control Display System /MACDS/ at the National Aeronautics and Space Administration /NASA/ Goddard Space Flight Center /GSFC/

    Science.gov (United States)

    Mill, F. W.; Krebs, G. N.; Strauss, E. S.

    1976-01-01

    The Multi-Purpose System Simulator (MPSS) model was used to investigate the current and projected performance of the Monitor and Control Display System (MACDS) at the Goddard Space Flight Center in processing and displaying launch data adequately. MACDS consists of two interconnected mini-computers with associated terminal input and display output equipment and a disk-stored data base. Three configurations of MACDS were evaluated via MPSS and their performances ascertained. First, the current version of MACDS was found inadequate to handle projected launch data loads because of unacceptable data backlogging. Second, the current MACDS hardware with enhanced software was capable of handling two times the anticipated data loads. Third, an up-graded hardware ensemble combined with the enhanced software was capable of handling four times the anticipated data loads.

  10. Plutonium Protection System (PPS). Volume 2. Hardware description. Final report

    International Nuclear Information System (INIS)

    Miyoshi, D.S.

    1979-05-01

    The Plutonium Protection System (PPS) is an integrated safeguards system developed by Sandia Laboratories for the Department of Energy, Office of Safeguards and Security. The system is designed to demonstrate and test concepts for the improved safeguarding of plutonium. Volume 2 of the PPS final report describes the hardware elements of the system. The major areas containing hardware elements are the vault, where plutonium is stored, the packaging room, where plutonium is packaged into Container Modules, the Security Operations Center, which controls movement of personnel, the Material Accountability Center, which maintains the system data base, and the Material Operations Center, which monitors the operating procedures in the system. References are made to documents in which details of the hardware items can be found

  11. DAQ Hardware and software development for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    In 2014, the Pixel Detector of the ATLAS experiment was extended by about 12 million pixels with the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented by employing newly designed read-out hardware, which supports the full detector bandwidth even for calibration. The hardware is supported by an embedded software stack running on the read-out boards. The same boards will be used to upgrade the read-out bandwidth for the two outermost layers of the ATLAS Pixel Barrel (54 million pixels). We present the IBL read-out hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  12. Asymmetric Hardware Distortions in Receive Diversity Systems: Outage Performance Analysis

    KAUST Repository

    Javed, Sidrah

    2017-02-22

    This paper studies the impact of asymmetric hardware distortion (HWD) on the performance of receive diversity systems using linear and switched combining receivers. The asymmetric attribute of the proposed model motivates the employment of an improper Gaussian signaling (IGS) scheme rather than the traditional proper Gaussian signaling (PGS) scheme. The achievable rate performance is analyzed for the ideal and non-ideal hardware scenarios using PGS and IGS transmission schemes for different combining receivers. In addition, the IGS statistical characteristics are optimized to maximize the achievable rate. Moreover, the outage probability of the receive diversity systems is analyzed, yielding closed-form expressions for both PGS- and IGS-based transmission schemes. HWD systems that employ IGS are shown to efficiently combat the self-interference caused by the HWD. Furthermore, the obtained analytic expressions are validated through Monte-Carlo simulations. Finally, the degradation caused by non-ideal hardware transceivers and the compensation achieved by the IGS scheme are quantified through suitable numerical results.
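
    The paper derives closed-form outage expressions and validates them by Monte-Carlo simulation. As a hedged illustration of that validation step only, the sketch below estimates outage probability for a two-branch maximal-ratio-combining link with a crude signal-dependent distortion term; the channel model, distortion coefficient and rate threshold are assumptions, not the paper's model.

```python
# Monte-Carlo estimate of outage probability for a 2-branch receive-diversity link
# with a simple additive hardware-distortion term. The channel, distortion and
# threshold parameters are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N, BRANCHES = 200_000, 2
snr_db, kappa, rate_threshold = 10.0, 0.1, 1.0   # assumed link parameters

snr = 10 ** (snr_db / 10)
# Rayleigh fading power gains per branch, combined by maximal-ratio combining.
h2 = rng.exponential(scale=1.0, size=(N, BRANCHES)).sum(axis=1)

# Crude distortion model: distortion power scales with the received signal power.
sndr = (h2 * snr) / (1.0 + kappa**2 * h2 * snr)
rate = np.log2(1.0 + sndr)

outage = np.mean(rate < rate_threshold)
print(f"estimated outage probability: {outage:.4f}")
```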

  13. Hardware Realization of Chaos-based Symmetric Video Encryption

    KAUST Repository

    Ibrahim, Mohamad A.

    2013-05-01

    This thesis reports original work on the hardware realization of symmetric video encryption using chaos-based continuous systems as pseudo-random number generators. The thesis also presents some of the serious degradations caused by digitally implementing chaotic systems. Subsequently, some techniques to eliminate such defects, including the ultimately adopted scheme, are listed and explained in detail. Moreover, the thesis describes original work on the design of an encryption system to encrypt MPEG-2 video streams. Information about the MPEG-2 standard that fits this design context is presented. Then, the security of the proposed system is exhaustively analyzed and the performance is compared with other reported systems, showing superiority in performance and security. The thesis focuses mainly on the hardware and circuit aspects of the system's design. The system is realized on a Xilinx Virtex-4 FPGA with hardware parameters and throughput performance surpassing conventional encryption systems.

  14. Description and Flight Test Results of the NASA F-8 Digital Fly-by-Wire Control System

    Science.gov (United States)

    1975-01-01

    A NASA program to develop digital fly-by-wire (DFBW) technology for aircraft applications is discussed. Phase I of the program demonstrated the feasibility of using a digital fly-by-wire system for aircraft control through developing and flight testing a single channel system, which used Apollo hardware, in an F-8C airplane. The objective of Phase II of the program is to establish a technology base for designing practical DFBW systems. It will involve developing and flight testing a triplex digital fly-by-wire system using state-of-the-art airborne computers, system hardware, software, and redundancy concepts. The papers included in this report describe the Phase I system and its development and present results from the flight program. Man-rated flight software and the effects of lightning on digital flight control systems are also discussed.

  15. A sociotechnical model of the flight crew task.

    Science.gov (United States)

    Cahill, Joan; McDonald, Nick; Losa, Gabriel

    2014-12-01

    The objective of this research was to advance an improved model of Flight Crew task performance. Existing task models present a "local" description of Flight Crew task performance. Process mapping workshops, interviews, and observations were conducted with both pilots and flight operations personnel from five airlines, as part of the Human Integration into the Lifecycle of Aviation Systems (HILAS) project. The functional logic of the process dictates Flight Crew task requirements and specific task workflows. The Flight Crew task involves managing different levels of operational and environmental complexity, associated with the particular flight context. In so doing, the Flight Crew act as a coordinating interface between different human agents involved in the Active Flight Operations process and other processes that interface with this process. This article presents a new sociotechnical model of the Flight Crew task. The proposed model reflects a shift from a local explanation of Flight Crew task activity to a broader process-centric explanation. In so doing, it illuminates the complex role of procedures in commercial operations. The task model suggests specific requirements for pilot task support tools, procedures design, performance evaluation and crew resource management (CRM) training. Also, this model might be used to assess future operational concepts and associated technology requirements. Lastly, this model provides the basis for the operational validation of both existing and future cockpit technologies.

  16. Long migration flights of birds

    International Nuclear Information System (INIS)

    Denny, Mark

    2014-01-01

    The extremely long migration flights of some birds are carried out in one hop, necessitating a substantial prior build-up of fat fuel. We summarize the basic elements of bird flight physics with a simple model, and show how the fat reserves influence flight distance, flight speed and the power expended by the bird during flight. (paper)

  17. Long migration flights of birds

    Science.gov (United States)

    Denny, Mark

    2014-05-01

    The extremely long migration flights of some birds are carried out in one hop, necessitating a substantial prior build-up of fat fuel. We summarize the basic elements of bird flight physics with a simple model, and show how the fat reserves influence flight distance, flight speed and the power expended by the bird during flight.

  18. Analog-to-Digital Cognitive Radio: Sampling, Detection, and Hardware

    Science.gov (United States)

    Cohen, Deborah; Tsiper, Shahar; Eldar, Yonina C.

    2018-01-01

    The proliferation of wireless communications has recently created a bottleneck in terms of spectrum availability. Motivated by the observation that the root of the spectrum scarcity is not a lack of resources but inefficient management that can be remedied, dynamic opportunistic exploitation of spectral bands has been considered under the name of Cognitive Radio (CR). This technology allows secondary users to access currently idle spectral bands by detecting and tracking the spectrum occupancy. The CR application revisits this traditional task with specific and severe requirements in terms of spectrum sensing and detection performance, real-time processing, robustness to noise, and more. Unfortunately, conventional methods do not satisfy these demands for typical signals, which often have very high Nyquist rates. Recently, several sampling methods have been proposed that exploit signals' a priori known structure to sample them below the Nyquist rate. Here, we review some of these techniques and tie them to the task of spectrum sensing in the context of CR. We then show how issues related to spectrum sensing can be tackled in the sub-Nyquist regime. First, to cope with low signal-to-noise ratios, we propose to recover second-order statistics from the low-rate samples rather than the signal itself. In particular, we consider cyclostationarity-based detection and investigate CR networks that perform collaborative spectrum sensing to overcome channel effects. To enhance the efficiency of detecting the available spectral bands, we present joint spectrum sensing and direction-of-arrival estimation methods. Throughout this work, we highlight the relation between theoretical algorithms and their practical implementation. We show hardware simulations performed on a prototype we built, demonstrating the feasibility of sub-Nyquist spectrum sensing in the context of CR.
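
    The paper's detectors operate on sub-Nyquist samples and second-order statistics, which are not reproduced here. As a simpler point of reference, the sketch below implements the baseline Nyquist-rate energy detector for a single band; the signal model, SNR and false-alarm target are assumptions chosen only to make the spectrum-sensing decision concrete.

```python
# Baseline energy-detector spectrum sensing for one band: report "occupied" when the
# average sample power exceeds a noise-calibrated threshold. Signal model, SNR and
# false-alarm target are assumptions; the paper itself develops sub-Nyquist and
# cyclostationarity-based detectors rather than this Nyquist-rate baseline.
import numpy as np

rng = np.random.default_rng(1)
n_samples, noise_var, snr_db, z_alpha = 1024, 1.0, -5.0, 2.326   # z_alpha ~ 1% false alarm

# Threshold from a Gaussian approximation of the noise-only test statistic.
threshold = noise_var * (1.0 + z_alpha / np.sqrt(n_samples))

def occupied(samples: np.ndarray) -> bool:
    """Energy detection: compare the average sample power against the threshold."""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

# Complex white noise plus a weak complex-exponential "primary user" signal.
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_samples)
                                  + 1j * rng.standard_normal(n_samples))
tone = np.sqrt(noise_var * 10 ** (snr_db / 10)) * np.exp(
    2j * np.pi * 0.1 * np.arange(n_samples))

print("noise only  :", occupied(noise))          # usually False (band reported idle)
print("signal+noise:", occupied(tone + noise))   # usually True  (band reported busy)
```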

  19. Rodent growth, behavior, and physiology resulting from flight on the Space Life Sciences-1 mission

    Science.gov (United States)

    Jahns, G.; Meylor, J.; Fast, T.; Hawes, N.; Zarow, G.

    1992-01-01

    A rodent-based spaceflight study was conducted to investigate physiological changes in rats versus humans and the effects of changes in the design of the Research Animal Holding Facility (RAHF) and the Animal Enclosure Module (AEM). Rats were housed in the AEM and the RAHF, and controls were kept in identical flight hardware on Earth and subjected to the same flight-environmental profile. Biosamples and organ weights were taken to compare the rats before and after flight, and food and water intake were also compared. Weight gain, body weight, and food consumption in the flight rats were significantly lower than the corresponding values for the control subjects. Flight rats tended to have smaller post-experiment spleens and hearts, and flight rats consumed more water in the AEM than in the RAHF. The rodents' behavior is analogous to that of humans with respect to physiological and reconditioning effects, showing that the rat is a good model for basic research into the effects of spaceflight on humans.

  20. FLASH fly-by-light flight control demonstration results overview

    Science.gov (United States)

    Halski, Don J.

    1996-10-01

    The Fly-By-Light Advanced Systems Hardware (FLASH) program developed Fly-By-Light (FBL) and Power-By-Wire (PBW) technologies for military and commercial aircraft. FLASH consists of three tasks. Task 1 developed the fiber optic cable, connectors, testers, and installation and maintenance procedures. Task 3 developed advanced smart, rotary thin-wing and electro-hydrostatic (EHA) actuators. Task 2, which is the subject of this paper, focused on the integration of fiber optic sensors and data buses with cable plant components from Task 1 and actuators from Task 3 into centralized and distributed flight control systems. Both open-loop and piloted hardware-in-the-loop demonstrations were conducted with centralized and distributed flight control architectures incorporating the AS-1773A optical bus, active hand controllers, optical sensors, optimal flight control laws in high-speed 32-bit processors, and neural networks for EHA monitoring and fault diagnosis. This paper overviews the system-level testing conducted under the FLASH Flight Control task. Preliminary results are summarized. Companion papers provide additional information.