WorldWideScience

Sample records for onboard image processor

  1. On-board landmark navigation and attitude reference parallel processor system

    Science.gov (United States)

    Gilbert, L. E.; Mahajan, D. T.

    1978-01-01

    An approach to autonomous navigation and attitude reference for earth-observing spacecraft is described, along with a landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments, undertaken to determine whether better-than-one-pixel registration accuracy can be achieved within onboard processor timing and capacity constraints, are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and a chip library. The data are processed in parallel stages, effectively reducing the time needed to match the small known image within the larger image seen by the onboard imaging system. Shared memory is incorporated in the system to communicate intermediate results among microprocessors. The functions include finding mean values and summing absolute differences over the image search area. The hardware is a low-power, compact unit suitable for onboard application, with the flexibility to accommodate different parameters depending upon the environment.
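
    The record above centers on template registration by a sequential similarity detection algorithm. As a point of reference, the Python sketch below shows the core SSDA idea on a single processor: accumulate the sum of absolute differences row by row and abandon a candidate offset as soon as the running error exceeds a threshold. The function name, threshold handling and single-threaded loops are illustrative assumptions, not the flight implementation, which distributed these sums across multiple microprocessors.

      import numpy as np

      def ssda_match(search, template, threshold):
          # Locate `template` inside `search` by sequential similarity detection.
          # The absolute-difference sum is accumulated row by row, and a candidate
          # offset is dropped as soon as the running sum exceeds `threshold`,
          # which is what makes the method "sequential".
          search = search.astype(float)
          template = template.astype(float)
          sh, sw = search.shape
          th, tw = template.shape
          best_pos, best_err = None, np.inf
          for i in range(sh - th + 1):
              for j in range(sw - tw + 1):
                  err = 0.0
                  for r in range(th):
                      err += np.abs(search[i + r, j:j + tw] - template[r]).sum()
                      if err > threshold:      # early rejection of this offset
                          break
                  if err < best_err:
                      best_err, best_pos = err, (i, j)
          return best_pos, best_err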

  2. Satellite on-board real-time SAR processor prototype

    Science.gov (United States)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates, this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real time. Originally, the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance, i.e. without a processing bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and
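
    The optronic processor performs SAR compression as a Fourier-plane multiplication. A minimal digital equivalent of one such step, range compression by matched filtering, is sketched below in Python; the chirp replica and array shapes are assumptions for illustration, and a full focuser would repeat the operation along azimuth after range-cell migration correction.

      import numpy as np

      def range_compress(raw_line, chirp_replica):
          # Matched filtering of one range line: multiply the signal spectrum by
          # the conjugate spectrum of the transmitted chirp, then inverse transform.
          # This is the digital counterpart of the Fourier-plane multiplication the
          # optical processor realizes with spatial light modulators.
          n = len(raw_line) + len(chirp_replica) - 1
          spectrum = np.fft.fft(raw_line, n) * np.conj(np.fft.fft(chirp_replica, n))
          return np.fft.ifft(spectrum)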

  3. Onboard spectral imager data processor

    Science.gov (United States)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
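
    Since the payload is a Fourier-transform imaging spectrometer, the central on-board step is turning each pixel's interferogram into a spectrum. The Python sketch below illustrates only that step; the apodization window, the array layout and the omission of phase correction and compression are simplifying assumptions.

      import numpy as np

      def interferograms_to_spectra(cube):
          # `cube` is assumed shaped (rows, cols, opd_samples): one interferogram
          # per spatial pixel.  Apply a simple apodization window and take the
          # magnitude of the real FFT along the optical-path-difference axis.
          window = np.hanning(cube.shape[-1])
          return np.abs(np.fft.rfft(cube * window, axis=-1))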

  4. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    Science.gov (United States)

    Flatley, Thomas P.

    2015-01-01

    SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground-based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  5. Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars

    Science.gov (United States)

    Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.

    2011-12-01

    Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS; Jordan et al., 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity. Through the development
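
    The first architecture is an unfocused (MARSIS-style) processor whose data-volume reduction comes largely from coherent presumming along track. A Python sketch of that core operation is given below; the presum factor and array layout are assumptions, and the flight design adds Doppler filtering and fixed-point arithmetic not shown here.

      import numpy as np

      def coherent_presum(echoes, n_presum):
          # `echoes` is assumed shaped (range_lines, range_bins) with complex samples.
          # Coherently averaging groups of consecutive lines narrows the Doppler band
          # and reduces data volume by roughly the presum factor.
          usable = (echoes.shape[0] // n_presum) * n_presum
          grouped = echoes[:usable].reshape(-1, n_presum, echoes.shape[1])
          return grouped.mean(axis=1)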

  6. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    Science.gov (United States)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  7. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use, it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation.

  8. A UNIX-based prototype biomedical virtual image processor

    International Nuclear Information System (INIS)

    Fahy, J.B.; Kim, Y.

    1987-01-01

    The authors have developed a multiprocess virtual image processor for the IBM PC/AT, in order to maximize image processing software portability for biomedical applications. An interprocess communication scheme, based on two-way metacode exchange, has been developed and verified for this purpose. Application programs call a device-independent image processing library, which transfers commands over a shared data bridge to one or more Autonomous Virtual Image Processors (AVIP). Each AVIP runs as a separate process in the UNIX operating system, and implements the device-independent functions on the image processor to which it corresponds. Application programs can control multiple image processors at a time, change the image processor configuration used at any time, and are completely portable among image processors for which an AVIP has been implemented. Run-time speeds have been found to be acceptable for higher level functions, although rather slow for lower level functions, owing to the overhead associated with sending commands and data over the shared data bridge

  9. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    Science.gov (United States)

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.
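
    One of the two preprocessing steps mapped to hardware is relative radiation correction, i.e. equalizing the response of the individual detectors in a push-broom line. A minimal Python sketch of that normalization is shown below; the per-detector gain and offset arrays are hypothetical calibration inputs, and the geometric-correction stage is not shown.

      import numpy as np

      def relative_radiometric_correction(image, gains, offsets):
          # `image` is assumed shaped (lines, detectors); `gains` and `offsets`
          # are per-detector calibration coefficients.  Each column (detector)
          # is corrected independently, which is why the step maps well to a
          # streaming FPGA implementation.
          return (image - offsets[np.newaxis, :]) * gains[np.newaxis, :]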

  10. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    Science.gov (United States)

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  11. Integrated fuel processor development

    International Nuclear Information System (INIS)

    Ahmed, S.; Pereira, C.; Lee, S. H. D.; Krumpelt, M.

    2001-01-01

    The Department of Energy's Office of Advanced Automotive Technologies has been supporting the development of fuel-flexible fuel processors at Argonne National Laboratory. These fuel processors will enable fuel cell vehicles to operate on fuels available through the existing infrastructure. The constraints of on-board space and weight require that these fuel processors be designed to be compact and lightweight, while meeting the performance targets for efficiency and gas quality needed for the fuel cell. This paper discusses the performance of a prototype fuel processor that has been designed and fabricated to operate with liquid fuels, such as gasoline, ethanol, methanol, etc. Rated for a capacity of 10 kWe (one-fifth of that needed for a car), the prototype fuel processor integrates the unit operations (vaporization, heat exchange, etc.) and processes (reforming, water-gas shift, preferential oxidation reactions, etc.) necessary to produce the hydrogen-rich gas (reformate) that will fuel the polymer electrolyte fuel cell stacks. The fuel processor work is being complemented by analytical and fundamental research. With the ultimate objective of meeting on-board fuel processor goals, these studies include: modeling fuel cell systems to identify design and operating features; evaluating alternative fuel processing options; and developing appropriate catalysts and materials. Issues and outstanding challenges that need to be overcome in order to develop practical, on-board devices are discussed

  12. The Danish real-time SAR processor: first results

    DEFF Research Database (Denmark)

    Dall, Jørgen; Jørgensen, Jørn Hjelm; Netterstrøm, Anders

    1993-01-01

    A real-time processor (RTP) for the Danish airborne Synthetic Aperture Radar (SAR) has been designed and constructed at the Electromagnetics Institute. The implementation was completed in mid 1992, and since then the RTP has been operated successfully on several test and demonstration flights. ... The processor is capable of focusing the entire swath of the raw SAR data into full resolution, and depending on the choice made by the on-board operator, either a high resolution one-look zoom image or a spatially multilooked overview image is displayed. After a brief design review, the paper addresses various...

  13. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

    In this paper we present an architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. Data parallelism found in array processors has been included into the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with other types of processor architectures. (author)

  14. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024 × 1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  15. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  16. Processor breadboard for on-board RFI detection and mitigation in MetOp-SG radiometers

    DEFF Research Database (Denmark)

    Skou, Niels; Kristensen, Steen S.; Kovanen, Arhippa

    2015-01-01

    Radio Frequency Interference (RFI) is an increasing threat to proper operation of space-borne Earth viewing microwave radiometer systems. There is a steady growth in active services, and tougher requirements to sensitivity and fidelity of future radiometer systems. Thus it has been decided ... that the next generation MetOp satellites must include some kind of RFI detection and mitigation system at Ku band. This paper describes a breadboard processor that detects and mitigates RFI on-board the satellite. Thus cleaned data can be generated in real time, and following suitable integration, downloaded ... to ground at the modest data rate usually associated with radiometer systems...
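
    The record does not detail the detection algorithms, so the Python sketch below shows only a generic amplitude-threshold detector of the kind commonly used for radiometer RFI screening: samples deviating from the local mean by more than a chosen number of standard deviations are excised before integration. The threshold value and the returned quantities are illustrative assumptions, not the breadboard's method.

      import numpy as np

      def excise_rfi(power_samples, n_sigma=4.0):
          # Flag and drop samples whose power deviates from the mean by more than
          # `n_sigma` standard deviations, then integrate what remains.
          mean, std = power_samples.mean(), power_samples.std()
          keep = np.abs(power_samples - mean) < n_sigma * std
          clean = power_samples[keep]
          return clean.mean(), keep.mean()   # integrated power, fraction of data kept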

  17. Emergency product generation for disaster management using RISAT and DMSAR quick look SAR processors

    Science.gov (United States)

    Desai, Nilesh; Sharma, Ritesh; Kumar, Saravana; Misra, Tapan; Gujraty, Virendra; Rana, SurinderSingh

    2006-12-01

    Over the last few years, ISRO has embarked upon the development of two complex Synthetic Aperture Radar (SAR) missions, viz. the Spaceborne Radar Imaging Satellite (RISAT) and the Airborne SAR for Disaster Management (DMSAR), as a capacity-building measure under the country's Disaster Management Support (DMS) Program, for estimating the extent of damage over large areas (~75 km) and also assessing the effectiveness of the relief measures undertaken during natural disasters such as cyclones, epidemics, earthquakes, floods and landslides, forest fires, crop diseases etc. Synthetic Aperture Radar has a unique role to play in mapping and monitoring of large areas affected by natural disasters, especially floods, owing to its capability to see through clouds and image in all weather. The generation of SAR images with quick turnaround time is essential to meet the above DMS objectives. Thus the development of SAR processors for these two SAR systems poses considerable challenges and design efforts. Considering the growing user demand and the inevitable necessity for a full-fledged high-throughput processor to process SAR data and generate images in real or near-real time, the design and development of a generic SAR Processor has been taken up and evolved, which will meet the SAR processing requirements for both airborne and spaceborne SAR systems. This hardware SAR processor is being built, to the extent possible, using only Commercial-Off-The-Shelf (COTS) DSP and other hardware plug-in modules on a Compact PCI (cPCI) platform. Thus, the major thrust has been on working out a multi-processor Digital Signal Processor (DSP) architecture and on algorithm development and optimization rather than hardware design and fabrication. For DMSAR, this generic SAR Processor operates as a Quick Look SAR Processor (QLP) on-board the aircraft to produce real-time full-swath DMSAR images and as a ground-based Near-Real Time high-precision full-swath Processor (NRTP). It will

  18. Digital tomosynthesis with an on-board kilovoltage imaging device

    International Nuclear Information System (INIS)

    Godfrey, Devon J.; Yin, F.-F.; Oldham, Mark; Yoo, Sua; Willett, Christopher

    2006-01-01

    Purpose: To generate on-board digital tomosynthesis (DTS) and reference DTS images for three-dimensional image-guided radiation therapy (IGRT) as an alternative to conventional portal imaging or on-board cone-beam computed tomography (CBCT). Methods and Materials: Three clinical cases (prostate, head-and-neck, and liver) were selected to illustrate the capabilities of on-board DTS for IGRT. Corresponding reference DTS images were reconstructed from digitally reconstructed radiographs computed from planning CT image sets. The effect of scan angle on DTS slice thickness was examined by computing the mutual information between coincident CBCT and DTS images, as the DTS scan angle was varied from 0° to 165°. A breath-hold DTS acquisition strategy was implemented to remove respiratory motion artifacts. Results: Digital tomosynthesis slices appeared similar to coincident CBCT planes and yielded substantially more anatomic information than either kilovoltage or megavoltage radiographs. Breath-hold DTS acquisition improved soft-tissue visibility by suppressing respiratory motion. Conclusions: Improved bony and soft-tissue visibility in DTS images is likely to improve target localization compared with radiographic verification techniques and might allow for daily localization of a soft-tissue target. Breath-hold DTS is a potential alternative to on-board CBCT for sites prone to respiratory motion.
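
    The scan-angle study quantifies similarity between coincident CBCT and DTS planes with mutual information. A straightforward histogram-based estimate of that metric is sketched in Python below; the bin count is an arbitrary assumption.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          # Joint histogram of the two coincident images, normalized to a joint
          # probability table, then MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ).
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))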

  19. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display.

  20. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  1. A VLSI image processor via pseudo-Mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden on image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 x 5 image windows, are described.
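
    The chip evaluates cyclic convolutions through the Pseudo-Mersenne Number Transform, in which all arithmetic is integer and modular. The Python sketch below computes the same cyclic convolution in direct form modulo a Mersenne modulus, purely to make the integer, modular-arithmetic idea concrete; the modulus and sequence length are illustrative, and the transform-domain shortcut (twiddle factors that are powers of two, so multiplications become shifts) is not implemented here.

      # Mersenne modulus 2^31 - 1, standing in for the pseudo-Mersenne
      # modulus of the form 2^k - c used by the actual transform.
      P = 2**31 - 1

      def cyclic_convolution_mod(x, h, p=P):
          # Direct-form cyclic convolution with pure integer arithmetic modulo p.
          # The PMNT obtains the same result via forward/inverse transforms whose
          # twiddle factors reduce to shifts and additions.
          n = len(x)
          out = []
          for k in range(n):
              acc = 0
              for m in range(n):
                  acc = (acc + x[m] * h[(k - m) % n]) % p
              out.append(acc)
          return out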

  2. Median and Morphological Specialized Processors for a Real-Time Image Data Processing

    Directory of Open Access Journals (Sweden)

    Kazimierz Wiatr

    2002-01-01

    This paper presents considerations on selecting a multiprocessor MISD architecture for fast implementation of vision image processing. Drawing on the author's earlier experience with real-time systems, the implementation of specialized hardware processors based on programmable FPGA devices is proposed in a pipeline architecture. In particular, the following processors are presented: a median filter and a morphological processor. The structure of a universal reconfigurable processor that was developed is proposed as well. Experimental results are presented as LCA-level implementation delays for the median filter, morphological processor, convolution processor, look-up-table processor, logic processor and histogram processor. These times are compared with the delays of a general-purpose processor and a DSP processor.
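
    As a software reference for the two specialized processors named in the record, the Python sketch below gives straightforward 3 x 3 median filtering and erosion; border handling and data widths are simplifying assumptions, and the paper's point is that such windowed operations pipeline naturally into FPGA hardware.

      import numpy as np

      def median_3x3(img):
          # Replace each interior pixel by the median of its 3 x 3 neighbourhood.
          out = img.copy()
          for i in range(1, img.shape[0] - 1):
              for j in range(1, img.shape[1] - 1):
                  out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
          return out

      def erode_3x3(img):
          # Grayscale/binary erosion with a 3 x 3 structuring element.
          out = img.copy()
          for i in range(1, img.shape[0] - 1):
              for j in range(1, img.shape[1] - 1):
                  out[i, j] = img[i - 1:i + 2, j - 1:j + 2].min()
          return out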

  3. A new on-board imaging treatment technique for palliative and emergency treatments in radiation oncology

    International Nuclear Information System (INIS)

    Held, Mareike

    2016-01-01

    This dissertation focuses on the use of on-board imaging systems as the basis for treatment planning, presenting an additional application for on-board images. A clinical workflow is developed to simulate, plan, and deliver a simple radiation oncology treatment rapidly, using 3D patient scans. The work focuses on an on-line dose planning and delivery process based on on-board images entirely performed with the patient set up on the treatment couch of the linear accelerator. This potentially reduces the time between patient simulation and treatment to about 30 minutes. The basis for correct dose calculation is the accurate image gray scale to tissue density calibration. The gray scale, which is defined in CT Numbers, is dependent on the energy spectrum of the beam. Therefore, an understanding of the physics characteristics of each on-board system is required to evaluate the impact on image quality, especially regarding the underlying cause of image noise, contrast, and non-uniformity. Modern on-board imaging systems, including kV and megavoltage (MV) cone beam (CB) CT as well as MV CT, are characterized in terms of image quality and stability. A library of phantom and patient CT images is used to evaluate the dose calculation accuracy for the on-board images. The dose calculation objective is to stay within 5% local dose differences compared to standard kV CT dose planning. The objective is met in many treatment cases. However, dose calculation accuracy depends on the anatomical treatment site. While on-board CT-based treatments of the head and extremities are predictable within 5% on all systems, lung tissue and air cavities may create local dose discrepancies of more than 5%. The image quality varies between the tested units. Consequently, the CT number-to-density calibration is defined independently for each system. In case of some imaging systems, the CT numbers of the images are dependent on the protocol used for on-board imaging, which defines the imaging dose
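
    Dose calculation on on-board images hinges on the CT-number-to-density calibration that the dissertation derives separately for each imaging system. The Python sketch below shows the usual piecewise-linear lookup; the calibration points are hypothetical placeholders, not values from the work.

      import numpy as np

      # Hypothetical calibration points (CT number, relative density); in practice
      # a separate table is measured for every on-board imaging system and protocol.
      CT_NUMBERS = np.array([-1000.0, -500.0, 0.0, 500.0, 1500.0])
      DENSITIES  = np.array([  0.001,   0.5, 1.0,  1.35,   1.9])

      def ct_to_density(ct_image):
          # Piecewise-linear interpolation of the calibration curve,
          # applied element-wise to the on-board CT image.
          return np.interp(ct_image, CT_NUMBERS, DENSITIES)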

  4. A new on-board imaging treatment technique for palliative and emergency treatments in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Held, Mareike

    2016-03-23

    This dissertation focuses on the use of on-board imaging systems as the basis for treatment planning, presenting an additional application for on-board images. A clinical workflow is developed to simulate, plan, and deliver a simple radiation oncology treatment rapidly, using 3D patient scans. The work focuses on an on-line dose planning and delivery process based on on-board images entirely performed with the patient set up on the treatment couch of the linear accelerator. This potentially reduces the time between patient simulation and treatment to about 30 minutes. The basis for correct dose calculation is the accurate image gray scale to tissue density calibration. The gray scale, which is defined in CT Numbers, is dependent on the energy spectrum of the beam. Therefore, an understanding of the physics characteristics of each on-board system is required to evaluate the impact on image quality, especially regarding the underlying cause of image noise, contrast, and non-uniformity. Modern on-board imaging systems, including kV and megavoltage (MV) cone beam (CB) CT as well as MV CT, are characterized in terms of image quality and stability. A library of phantom and patient CT images is used to evaluate the dose calculation accuracy for the on-board images. The dose calculation objective is to stay within 5% local dose differences compared to standard kV CT dose planning. The objective is met in many treatment cases. However, dose calculation accuracy depends on the anatomical treatment site. While on-board CT-based treatments of the head and extremities are predictable within 5% on all systems, lung tissue and air cavities may create local dose discrepancies of more than 5%. The image quality varies between the tested units. Consequently, the CT number-to-density calibration is defined independently for each system. In case of some imaging systems, the CT numbers of the images are dependent on the protocol used for on-board imaging, which defines the imaging dose

  5. Autonomous onboard optical processor for driving aid

    Science.gov (United States)

    Attia, Mondher; Servel, Alain; Guibert, Laurent

    1995-01-01

    We take advantage of recent technological advances in the field of ferroelectric liquid crystal silicon back plane optoelectronic devices. These are well suited to perform massively parallel processing tasks. That choice enables the design of low cost vision systems and allows the implementation of an on-board system. We focus on transport applications such as road sign recognition. Preliminary in-car experimental results are presented.

  6. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    Science.gov (United States)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  7. The European project Merlin on multi-gigabit, energy-efficient, ruggedized lightwave engines for advanced on-board digital processors

    Science.gov (United States)

    Stampoulidis, L.; Kehayas, E.; Karppinen, M.; Tanskanen, A.; Heikkinen, V.; Westbergh, P.; Gustavsson, J.; Larsson, A.; Grüner-Nielsen, L.; Sotom, M.; Venet, N.; Ko, M.; Micusik, D.; Kissinger, D.; Ulusoy, A. C.; King, R.; Safaisini, R.

    2017-11-01

    Modern broadband communication networks rely on satellites to complement the terrestrial telecommunication infrastructure. Satellites accommodate global reach and enable world-wide direct broadcasting by facilitating wide access to the backbone network from remote sites or areas where the installation of ground segment infrastructure is not economically viable. At the same time the new broadband applications increase the bandwidth demands in every part of the network - and satellites are no exception. Modern telecom satellites incorporate On-Board Processors (OBP) having analogue-to-digital (ADC) and digital-to-analogue converters (DAC) at their inputs/outputs and making use of digital processing to handle hundreds of signals; as the amount of information exchanged increases, so do the physical size, mass and power consumption of the interconnects required to transfer massive amounts of data through bulk electric wires.

  8. Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT

    International Nuclear Information System (INIS)

    Bowsher, James; Giles, William; Yin, Fang-Fang; Yan, Susu; Roper, Justin

    2014-01-01

    Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. The system could provide onboard functional and molecular imaging with 4-min

  9. Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Bowsher, James, E-mail: james.bowsher@duke.edu; Giles, William; Yin, Fang-Fang [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Yan, Susu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Roper, Justin [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2014-01-15

    Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. The system could provide onboard functional and molecular imaging with 4-min

  10. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

    The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes optimized image algorithms and a parallel processing method based on the parallel DSP. Real-time performance can be achieved for many image processing tasks in this way.

  11. Advanced Hybrid On-Board Data Processor - SpaceCube 2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop advanced on-board processing to meet the requirements of the Decadal Survey missions: advanced instruments (hyper-spectral, SAR, etc) require advanced...

  12. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  13. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  14. High performance deformable image registration algorithms for manycore processors

    CERN Document Server

    Shackleford, James; Sharp, Gregory

    2013-01-01

    High Performance Deformable Image Registration Algorithms for Manycore Processors develops highly data-parallel image registration algorithms suitable for use on modern multi-core architectures, including graphics processing units (GPUs). Focusing on deformable registration, we show how to develop data-parallel versions of the registration algorithm suitable for execution on the GPU. Image registration is the process of aligning two or more images into a common coordinate frame and is a fundamental step to be able to compare or fuse data obtained from different sensor measurements. E

  15. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
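
    The 32:1 compression came from scan-line skipping combined with adaptive run-length coding, which works well because each scan line is mostly uniform sky crossed by thin antenna booms. The Python sketch below shows plain run-length coding of one line; the adaptivity and the convolutional error-protection coding are not reproduced.

      def run_length_encode(scan_line):
          # Encode one scan line as (value, run_length) pairs.
          runs = []
          current, length = scan_line[0], 1
          for pixel in scan_line[1:]:
              if pixel == current:
                  length += 1
              else:
                  runs.append((current, length))
                  current, length = pixel, 1
          runs.append((current, length))
          return runs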

  16. Hydrogen production by onboard gasoline processing – Process simulation and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bisaria, Vega; Smith, R.J. Byron,

    2013-12-15

    Highlights: • A process flow sheet for an onboard fuel processor for 100 kW fuel cell output was simulated. • The gasoline fuel requirement was found to be 30.55 kg/hr. • The fuel processor efficiency was found to be 95.98%. • A heat-integrated optimum flow sheet was developed. - Abstract: Fuel cell vehicles have reached the commercialization stage and hybrid vehicles are already on the road. While hydrogen storage and infrastructure remain critical issues in stand-alone commercialization of the technology, researchers are developing onboard fuel processors, which can convert a variety of fuels into hydrogen to power these fuel cell vehicles. The feasibility study of a 100 kW on-board fuel processor based on gasoline fuel is carried out using process simulation. The steady-state model has been developed with the help of Aspen HYSYS to analyze the fuel processor and total system performance. The components of the fuel processor are the fuel reforming unit, the CO clean-up unit and auxiliary units. Optimization studies were carried out by analyzing the influence of various operating parameters such as oxygen-to-carbon ratio, steam-to-carbon ratio, temperature and pressure on the process equipment. From the steady-state model optimization using Aspen HYSYS, the optimized reaction composition in terms of hydrogen production and carbon monoxide concentration corresponds to an oxygen-to-carbon ratio of 0.5 and a steam-to-carbon ratio of 0.5. The fuel processor efficiency of 95.98% is obtained under these optimized conditions. The heat integration of the system was studied using the composite curve, grand composite curve and utility composite curve. The most appropriate heat exchanger network from those generated was chosen and incorporated into the optimized flow sheet of the 100 kW fuel processor. A completely heat-integrated 100 kW fuel processor flow sheet using gasoline as fuel was thus successfully simulated and optimized.

  17. Hydrogen production by onboard gasoline processing – Process simulation and optimization

    International Nuclear Information System (INIS)

    Bisaria, Vega; Smith, R.J. Byron

    2013-01-01

    Highlights: • A process flow sheet for an onboard fuel processor for 100 kW fuel cell output was simulated. • The gasoline fuel requirement was found to be 30.55 kg/hr. • The fuel processor efficiency was found to be 95.98%. • A heat-integrated optimum flow sheet was developed. - Abstract: Fuel cell vehicles have reached the commercialization stage and hybrid vehicles are already on the road. While hydrogen storage and infrastructure remain critical issues in stand-alone commercialization of the technology, researchers are developing onboard fuel processors, which can convert a variety of fuels into hydrogen to power these fuel cell vehicles. The feasibility study of a 100 kW on-board fuel processor based on gasoline fuel is carried out using process simulation. The steady-state model has been developed with the help of Aspen HYSYS to analyze the fuel processor and total system performance. The components of the fuel processor are the fuel reforming unit, the CO clean-up unit and auxiliary units. Optimization studies were carried out by analyzing the influence of various operating parameters such as oxygen-to-carbon ratio, steam-to-carbon ratio, temperature and pressure on the process equipment. From the steady-state model optimization using Aspen HYSYS, the optimized reaction composition in terms of hydrogen production and carbon monoxide concentration corresponds to an oxygen-to-carbon ratio of 0.5 and a steam-to-carbon ratio of 0.5. The fuel processor efficiency of 95.98% is obtained under these optimized conditions. The heat integration of the system was studied using the composite curve, grand composite curve and utility composite curve. The most appropriate heat exchanger network from those generated was chosen and incorporated into the optimized flow sheet of the 100 kW fuel processor. A completely heat-integrated 100 kW fuel processor flow sheet using gasoline as fuel was thus successfully simulated and optimized.

  18. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, G. Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Browne, Jolyon [Advanced Research & Applications Corporation, Sunnyvale, CA (United States)

    2018-01-22

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  19. IBIS: the imager on-board integral

    International Nuclear Information System (INIS)

    Ubertini, P.; Bazzano, A.; Lebrun, F.; Goldwurm, A.; Laurent, P.; Mirabel, I.F.; Vigroux, L.; Di Cocco, G.; Labanti, C.; Bird, A.J.; Broenstad, K.; La Rosa, G.; Sacco, B.; Quadrini, E.M.; Ramsey, B.; Weisskopf, M.C.; Reglero, V.; Sabau, L.; Staubert, R.; Zdziarski, A.A.

    2003-01-01

    The IBIS telescope is the high angular resolution gamma-ray imager on-board the INTEGRAL Observatory, successfully launched from Baikonur (Kazakhstan) in October 2002. This medium-size ESA project, planned for a 2-year mission with a possible extension to 5 years, is devoted to the observation of the gamma-ray sky in the energy range from 3 keV to 10 MeV (Winkler 2001). The IBIS imaging system is based on two independent solid-state detector arrays, optimised for low (15-1000 keV) and high (0.175-10.0 MeV) energies, surrounded by an active VETO system. This high-efficiency shield is essential to minimise the background induced by high-energy particles in the highly eccentric orbit outside the van Allen belts. A tungsten coded-aperture mask, 16 mm thick and ∼1 square meter in area, is the imaging device. The IBIS telescope will serve the scientific community at large, providing a unique combination of unprecedented high-energy wide-field imaging capability coupled with broad-band spectroscopy and high-resolution timing over the energy range from X to gamma rays. To date, the IBIS telescope has been working nominally in orbit for more than 9 months. (authors)

  20. Unified and Modular Modeling and Functional Verification Framework of Real-Time Image Signal Processors

    Directory of Open Access Journals (Sweden)

    Abhishek Jain

    2016-01-01

    In the VLSI industry, image signal processing algorithms are developed and evaluated using software models before implementation of RTL and firmware. After the finalization of the algorithm, software models are used as a golden reference model for the image signal processor (ISP) RTL and firmware development. In this paper, we describe the unified and modular modeling framework of image signal processing algorithms used for different applications such as ISP algorithm development, reference for hardware (HW) implementation, reference for firmware (FW) implementation, and bit-true certification. The universal verification methodology (UVM) based functional verification framework of image signal processors using software reference models is described. Further, IP-XACT based tools for automatic generation of functional verification environment files and model map files are described. The proposed framework is developed both with a host interface and with a core using the virtual register interface (VRI) approach. This modeling and functional verification framework is used in real-time image signal processing applications including cell phones, smart cameras, and image compression. The main motivation behind this work is to propose the most efficient, reusable, and automated framework for modeling and verification of image signal processor (ISP) designs. The proposed framework shows better results, and significant improvement is observed in product verification time, verification cost, and quality of the designs.

  1. MOBS - A modular on-board switching system

    Science.gov (United States)

    Berner, W.; Grassmann, W.; Piontek, M.

    The authors describe a multibeam satellite system that is designed for business services and for communications at a high bit rate. The repeater is regenerative with a modular onboard switching system. It acts not only as baseband switch but also as the central node of the network, performing network control and protocol evaluation. The hardware is based on a modular bus/memory architecture with associated processors.

  2. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    Science.gov (United States)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase prices of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging
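
    For context, the Python sketch below shows one list-mode MLEM image update of the general kind being parallelized in the study; the event representation (precomputed non-zero system-matrix entries per coincidence) and the single-threaded loop are simplifying assumptions, whereas the paper's implementations thread this loop with pthreads or OpenMP on the Xeon Phi and the host server.

      import numpy as np

      def listmode_mlem_update(image, events, sensitivity):
          # `events` is a list of (voxel_indices, weights) pairs: the non-zero
          # system-matrix entries obtained by projecting each detected (and
          # motion-corrected) coincidence line through the image grid.
          correction = np.zeros_like(image)
          for voxels, weights in events:
              forward = np.dot(weights, image[voxels])     # forward projection a_i . x
              if forward > 0:
                  correction[voxels] += weights / forward  # back-project the ratio
          return image * correction / np.maximum(sensitivity, 1e-12)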

  3. Optimization of an on-board imaging system for extremely rapid radiation therapy

    International Nuclear Information System (INIS)

    Cherry Kemmerling, Erica M.; Wu, Meng; Yang, He; Fahrig, Rebecca; Maxim, Peter G.; Loo, Billy W.

    2015-01-01

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  4. Optimization of an on-board imaging system for extremely rapid radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Maxim, Peter G.; Loo, Billy W. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2015-11-15

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  5. Sensitometric control of roentgen film processors

    International Nuclear Information System (INIS)

    Forsberg, H.; Karolinska Sjukhuset, Stockholm

    1987-01-01

    Monitoring of film processor performance is essential, since image quality, patient dose, and costs are all influenced by it. A system for sensitometric constancy control of film processors and their associated components is described, together with three years of experience with the system implemented on 17 film processors. Modern high-quality film processors have a stability that makes a test frequency of once a week sufficient to maintain adequate image quality. The test system is so sensitive that corrective actions have almost invariably been taken before any technical problem degraded the image quality to a visible degree. (orig.)

  6. Block iterative restoration of astronomical images with the massively parallel processor

    International Nuclear Information System (INIS)

    Heap, S.R.; Lindler, D.J.

    1987-01-01

    A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 121 x 121 linear systems. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images

  7. Real Time Corner Detection for Miniaturized Electro-Optical Sensors Onboard Small Unmanned Aerial Systems

    Directory of Open Access Journals (Sweden)

    Antonio Moccia

    2012-01-01

    Full Text Available This paper describes the target detection algorithm for the image processor of a vision-based system that is installed onboard an unmanned helicopter. It has been developed in the framework of a project of the French national aerospace research center Office National d’Etudes et de Recherches Aérospatiales (ONERA), which aims at developing an air-to-ground target tracking mission in an unknown urban environment. In particular, the image processor must detect targets and estimate ground motion in the proximity of the detected target position. Concerning the target detection function, the analysis has dealt with realizing a corner detection algorithm and selecting the best choices in terms of edge detection method, filter size and type, and the most suitable criterion for detecting the points of interest, in order to obtain a very fast algorithm which fulfills the computational load requirements. The compared criteria are the Harris-Stephens and Shi-Tomasi ones, which are the most widely used intensity-based criteria in the literature. Experimental results which illustrate the performance of the developed algorithm, and which demonstrate that the detection time is fully compliant with the requirements of the real-time system, are discussed.
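
    The two intensity-based criteria compared in the paper can be written down compactly from the same structure tensor; the sketch below is a generic NumPy/SciPy illustration (window size and the Harris constant k are arbitrary choices, not the values used by the authors):

        import numpy as np
        from scipy.signal import convolve2d

        def corner_responses(img, win=3, k=0.04):
            """Harris-Stephens and Shi-Tomasi corner responses of a grayscale image."""
            img = img.astype(np.float64)
            iy, ix = np.gradient(img)                       # simple image gradients
            box = np.ones((win, win)) / (win * win)         # averaging window
            sxx = convolve2d(ix * ix, box, mode="same", boundary="symm")
            syy = convolve2d(iy * iy, box, mode="same", boundary="symm")
            sxy = convolve2d(ix * iy, box, mode="same", boundary="symm")
            det = sxx * syy - sxy ** 2
            trace = sxx + syy
            harris = det - k * trace ** 2                   # Harris-Stephens: det(M) - k*trace(M)^2
            # Shi-Tomasi: the smaller eigenvalue of the 2x2 structure tensor M.
            shi_tomasi = 0.5 * (trace - np.sqrt(np.maximum(trace ** 2 - 4.0 * det, 0.0)))
            return harris, shi_tomasi

        # Usage: threshold either response map and keep local maxima as corner points.
        harris, shi_tomasi = corner_responses(np.random.rand(64, 64))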

  8. Comparison of onboard low-field magnetic resonance imaging versus onboard computed tomography for anatomy visualization in radiotherapy.

    Science.gov (United States)

    Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R

    2015-01-01

    Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-(60)Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10% of structures, and equivalent for 14% of structures. 5% of structures were not visible on either. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MR provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.

  9. Formulation of consumables management models: Mission planning processor payload interface definition

    Science.gov (United States)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. An analytical model consisting of an orbiter planning processor with a consumables data base is developed. A method of recognizing potential constraint violations in both the planning and flight operations functions is presented, together with a flight data file for storage and retrieval of information over an extended period, which interfaces with a flight operations processor for monitoring of the actual flights.

  10. Fiducial-Based Translational Localization Accuracy of Electromagnetic Tracking System and On-Board Kilovoltage Imaging System

    International Nuclear Information System (INIS)

    Santanam, Lakshmi; Malinowski, Kathleen; Hubenshmidt, James; Dimmer, Steve; Mayse, Martin L.; Bradley, Jeffrey; Chaudhari, Amir; Lechleiter, Kirsten; Goddu, Sree Krishna Murty; Esthappan, Jacqueline; Mutic, Sasa; Low, Daniel A.; Parikh, Parag

    2008-01-01

    Purpose: The Calypso medical four-dimensional localization system uses AC electromagnetics, which do not require ionizing radiation, for accurate, real-time tumor tracking. This investigation compared the static and dynamic tracking accuracy of this system to that of an on-board kilovoltage X-ray imaging system for concurrent use of the two systems. Methods and Materials: The localization accuracies of a kilovoltage imaging system and a continuous electromagnetic tracking system were compared. Using an in-house developed four-dimensional stage, a quality-assurance fixture containing three radiofrequency transponders was positioned at a series of static locations and then moved through ellipsoidal and nonuniform continuous paths. The transponder positions were tracked concurrently by the Calypso system. For static localization, the transponders were localized from portal images and digitally reconstructed radiographs by commercial matching software. For dynamic localization, the transponders were fluoroscopically imaged, and their positions were determined retrospectively using custom-written image processing programs. The localization data sets were synchronized with and compared to the known quality-assurance fixture positions. The experiment was repeated to retrospectively track three transponders implanted in a canine lung. Results: The root mean square error of the on-board imaging and Calypso systems was 0.1 cm and 0.0 cm, respectively, for static localization, 0.22 mm and 0.33 mm for dynamic phantom positioning, and 0.42 mm for the canine study. Conclusion: The results showed that both localization systems provide submillimeter accuracy. The Calypso and on-board imaging tracking systems offer distinct sets of advantages and, given their compatibility, patients could benefit from the complementary nature of the two systems when used concurrently.

  11. The hard x-ray imager onboard IXO

    Science.gov (United States)

    Nakazawa, Kazuhiro; Takahashi, Tadayuki; Limousin, Olivier; Kokubun, Motohide; Watanabe, Shin; Laurent, Philippe; Arnaud, Monique; Tajima, Hiroyasu

    2010-07-01

    The Hard X-ray Imager (HXI) is one of the instruments onboard the International X-ray Observatory (IXO), to be launched into orbit in the 2020s. It covers the energy band of 10-40 keV, providing imaging spectroscopy with a field of view of 8 x 8 arcmin². The HXI is attached beneath the Wide Field Imager (WFI), which covers 0.1-15 keV. Combined with the super-mirror coating on the mirror assembly, this configuration provides simultaneous observation of X-ray sources over a wide energy band (0.1-40.0 keV), which is especially important for varying sources. The HXI sensor consists of a semiconductor imaging spectrometer, using Si for the medium-energy detector and CdTe for the high-energy detector, and an active shield covering its back to reduce background in orbit. The HXI technology is based on that of the Japanese-led new-generation X-ray observatory ASTRO-H, and partly on that developed for Simbol-X; the technological development is therefore in good progress. In the IXO mission, the HXI will be a major asset for identifying the nature of objects by penetrating thick absorbing material and determining the inherent spectral shape in the energy band well above the structures around the Fe-K lines and edges.

  12. On-board processing for telecommunications satellites

    Science.gov (United States)

    Nuspl, P. P.; Dong, G.

    1991-01-01

    In this decade, communications satellite systems will probably face dramatic challenges from alternative transmission means. To balance and overcome such competition, and to prepare for new requirements, INTELSAT has developed several on-board processing techniques, including Satellite-Switched TDMA (SS-TDMA), Satellite-Switched FDMA (SS-FDMA), several Modulators/Demodulators (Modems), a Multicarrier Multiplexer and Demodulator (MCDD), an International Business Service (IBS)/Intermediate Data Rate (IDR) BaseBand Processor (BBP), etc. Some proof-of-concept hardware and software were developed and tested recently in the INTELSAT Technical Laboratories. These techniques and some test results are discussed.

  13. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not free of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on-board satellites and need to be transferred to the earth’s surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors makes the need for higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the performance of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  14. A quality assurance program for the On-Board Imager®

    International Nuclear Information System (INIS)

    Yoo, Sua; Kim, Gwe-Ya; Hammoud, Rabih

    2006-01-01

    To develop a quality assurance (QA) program for the On-Board Imager (OBI) system and to summarize the results of these QA tests over extended periods from multiple institutions. Both the radiographic and cone-beam computed tomography (CBCT) mode of operation have been evaluated. The QA programs from four institutions have been combined to generate a series of tests for evaluating the performance of the On-Board Imager. The combined QA program consists of three parts: (1) safety and functionality (2) geometry, and (3) image quality. Safety and functionality tests evaluate the functionality of safety features and the clinical operation of the entire system during the tube warm-up. Geometry QA verifies the geometric accuracy and stability of the OBI/CBCT hardware/software. Image quality QA monitors spatial resolution and contrast sensitivity of the radiographic images. Image quality QA for CBCT includes tests for Hounsfield Unit (HU) linearity, HU uniformity, spatial linearity, and scan slice geometry, in addition. All safety and functionality tests passed on a daily basis. The average accuracy of the OBI isocenter was better than 1.5 mm with a range of variation of less than 1 mm over 8 months. The average accuracy of arm positions in the mechanical geometry QA was better than 1 mm, with a range of variation of less than 1 mm over 8 months. Measurements of other geometry QA tests showed stable results within tolerance throughout the test periods. Radiographic contrast sensitivity ranged between 2.2% and 3.2% and spatial resolution ranged between 1.25 and 1.6 lp/mm. Over four months the CBCT images showed stable spatial linearity, scan slice geometry, contrast resolution (1%; 6 lp/cm). The HU linearity was within ±40 HU for all measurements. By combining test methods from multiple institutions, we have developed a comprehensive, yet practical, set of QA tests for the OBI system. Use of the tests over extended periods show that the OBI system has reliable mechanical

  15. DESIGN AND IMPLEMENTATION OF A VHDL PROCESSOR FOR DCT BASED IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    Md. Shabiul Islam

    2017-11-01

    Full Text Available This paper describes the design and implementation of a VHDL processor meant for performing the 2D Discrete Cosine Transform (DCT) for use in image compression applications. The design flow starts from the system specification and proceeds to implementation on silicon, and the entire process is carried out using an advanced workstation-based design environment for digital signal processing. The software allows bit-true analysis to ensure that the designed VLSI processor satisfies the required specifications. The bit-true analysis is performed at all levels of abstraction (behavior, VHDL, etc.). The motivations behind the work are a smaller chip area, faster processing, and a reduced chip cost.
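
    To make the role of the 2D-DCT concrete, the sketch below compresses a single 8x8 block by keeping only the largest-magnitude coefficients; this is a generic NumPy/SciPy illustration of the transform, not the VHDL processor's fixed-point datapath:

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_compress_block(block, keep=10):
            """Illustrative 2D-DCT compression of one 8x8 image block.

            Keeps roughly the `keep` largest-magnitude DCT coefficients (a stand-in
            for the quantization/coding a real codec would apply) and reconstructs
            the block with the inverse transform."""
            coeffs = dctn(block.astype(np.float64), norm="ortho")
            flat = np.abs(coeffs).ravel()
            threshold = np.sort(flat)[-keep]              # magnitude of the keep-th largest coeff
            coeffs[np.abs(coeffs) < threshold] = 0.0      # discard the small coefficients
            return idctn(coeffs, norm="ortho")

        # Usage on a synthetic 8x8 block.
        block = np.outer(np.arange(8), np.arange(8)).astype(float)
        approx = dct_compress_block(block, keep=6)
        print(np.abs(block - approx).max())               # reconstruction error after compression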

  16. Optical Associative Processors For Visual Perception

    Science.gov (United States)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required, and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given the most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor, with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space, and symbolic data, as well as adaptive associative processors.
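
    The autoassociative/heteroassociative distinction drawn above can be illustrated with a minimal linear associative memory: a pseudoinverse-trained matrix maps key vectors to class vectors instead of back to themselves. This is a generic numerical sketch, not the authors' optical implementation; the dimensions and one-hot labels are arbitrary assumptions:

        import numpy as np

        # Stored key vectors (e.g., feature vectors of training images), one per column.
        rng = np.random.default_rng(0)
        keys = rng.standard_normal((16, 4))          # 4 stored patterns of dimension 16

        # Heteroassociative recall: keys map to *class* vectors (one-hot labels here),
        # unlike an autoassociative memory, where the recollections equal the keys.
        recollections = np.eye(4)

        # Pseudoinverse (least-squares) synthesis of the memory matrix W, with W @ key ≈ label.
        W = recollections @ np.linalg.pinv(keys)

        # Recall with a noisy probe of pattern 2.
        probe = keys[:, 2] + 0.1 * rng.standard_normal(16)
        print(np.argmax(W @ probe))                  # -> 2, the class of the closest stored key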

  17. FPGA-based reconfigurable processor for ultrafast interlaced ultrasound and photoacoustic imaging.

    Science.gov (United States)

    Alqasemi, Umar; Li, Hai; Aguirre, Andrés; Zhu, Quing

    2012-07-01

    In this paper, we report, to the best of our knowledge, a unique field-programmable gate array (FPGA)-based reconfigurable processor for real-time interlaced co-registered ultrasound and photoacoustic imaging and its application in imaging tumor dynamic response. The FPGA is used to control, acquire, store, delay-and-sum, and transfer the data for real-time co-registered imaging. The FPGA controls the ultrasound transmission and ultrasound and photoacoustic data acquisition process of a customized 16-channel module that contains all of the necessary analog and digital circuits. The 16-channel module is one of multiple modules plugged into a motherboard; their beamformed outputs are made available for a digital signal processor (DSP) to access using an external memory interface (EMIF). The FPGA performs a key role through ultrafast reconfiguration and adaptation of its structure to allow real-time switching between the two imaging modes, including transmission control, laser synchronization, internal memory structure, beamforming, and EMIF structure and memory size. It performs another role by parallel accessing of internal memories and multi-thread processing to reduce the transfer of data and the processing load on the DSP. Furthermore, because the laser will be pulsing even during ultrasound pulse-echo acquisition, the FPGA ensures that the laser pulses are far enough from the pulse-echo acquisitions by appropriate time-division multiplexing (TDM). A co-registered ultrasound and photoacoustic imaging system consisting of four FPGA modules (64-channels) is constructed, and its performance is demonstrated using phantom targets and in vivo mouse tumor models.
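
    The delay-and-sum step performed by the FPGA can be summarized for a single image point as below; the array geometry, sampling rate, and one-way (photoacoustic) delay model are illustrative assumptions, not the parameters of the authors' 16-channel hardware:

        import numpy as np

        def delay_and_sum(rf, elem_x, point, fs=40e6, c=1540.0):
            """Delay-and-sum beamforming of one image point from photoacoustic RF data.

            rf      : (n_channels, n_samples) received RF data, t=0 at the laser pulse
            elem_x  : lateral positions of the transducer elements in meters
            point   : (x, z) image point in meters"""
            x, z = point
            # One-way propagation delay from the point to each element (photoacoustic case).
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            idx = np.clip(idx, 0, rf.shape[1] - 1)
            # Pick the delayed sample on every channel and sum coherently.
            return float(rf[np.arange(rf.shape[0]), idx].sum())

        # Usage with synthetic data: 16 elements at 0.3 mm pitch, 2048 samples per channel.
        elem_x = (np.arange(16) - 7.5) * 0.3e-3
        rf = np.random.randn(16, 2048)
        print(delay_and_sum(rf, elem_x, point=(0.0, 20e-3)))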

  18. Soft x-ray imager (SXI) onboard the NeXT satellite

    Science.gov (United States)

    Tsuru, Takeshi Go; Takagi, Shin-Ichiro; Matsumoto, Hironori; Inui, Tatsuya; Ozawa, Midori; Koyama, Katsuji; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Miyata, Emi; Ozawa, Hideki; Touhiguchi, Masakuni; Matsuura, Daisuke; Dotani, Tadayasu; Ozaki, Masanobu; Murakami, Hiroshi; Kohmura, Takayoshi; Kitamoto, Shunji; Awaki, Hisamitsu

    2006-06-01

    We give an overview and the current status of the development of the Soft X-ray Imager (SXI) onboard the NeXT satellite. The SXI is an X-ray CCD camera placed at the focal plane of the Soft X-ray Telescopes for Imaging (SXT-I) onboard NeXT. The pixel size and format of the CCD are 24 x 24 μm (IA) and 2048 x 2048 x 2 (IA+FS). Currently, we have been developing two types of CCD as candidates for the SXI in parallel. One is a front-illuminated CCD with a moderate depletion-layer thickness (70 ~ 100 μm) as the baseline plan. The other is the goal plan, in which we develop a back-illuminated CCD with a thick depletion layer (200 ~ 300 μm). For the baseline plan, we successfully developed the proto model 'CCD-NeXT1' with a pixel size of 12 μm x 12 μm and a CCD size of 24 mm x 48 mm. The depletion layer of the CCD has reached 75 ~ 85 μm. The goal plan is realized by introducing a new type of CCD, the 'P-channel CCD', which collects holes instead of the electrons collected in the common 'N-channel CCD'. By processing a test model of the P-channel CCD we have confirmed high quantum efficiency above 10 keV with an equivalent depletion layer of 300 μm. A back-illuminated P-channel CCD with a depletion layer of 200 μm and an aluminum coating for optical blocking has also been successfully developed. We have also been developing a thermo-electric cooler (TEC) that mechanically supports the CCD wafer without standoff insulators, in order to reduce the thermal input to the CCD through the standoff insulators. We have been considering the sensor housing and the onboard electronics for the CCD clocking, readout, and digital processing of the frame data.

  19. Real time image synthesis on a SIMD linear array processor: algorithms and architectures

    International Nuclear Information System (INIS)

    Letellier, Laurent

    1993-01-01

    Nowadays, image synthesis has become a widely used technique. The impressive computing power required for real-time applications necessitates the use of parallel architectures. In this context, we evaluate an SIMD linear parallel architecture, SYMPATI2, dedicated to image processing. The objective of this study is to propose a cost-effective graphics accelerator relying on SYMPATI2's modular and programmable structure. The parallelization of basic image synthesis algorithms on SYMPATI2 enables us to determine its limits in this application field. These limits lead us to evaluate a new structure with a fast intercommunication network between processors; however, the processors then have to maintain message consistency, which brings about a strong decrease in performance. To solve this problem, we suggest a simple network whose access priorities are represented by tokens. The simulations of this new architecture indicate that the SIMD mode causes a drastic cut in parallelism. To cope with this drawback, we propose a context-switching procedure which reduces the SIMD rigidity and increases the parallelism rate significantly. Then, the graphics accelerator we propose is compared with existing graphics workstations. This comparison indicates that our structure, which is able to accelerate both image synthesis and image processing, is competitive and well-suited for multimedia applications. (author)

  20. WE-G-17A-01: Improving Tracking Image Spatial Resolution for Onboard MR Image Guided Radiation Therapy Using the WHISKEE Technique

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y; Mutic, S; Du, D; Green, O [Washington University School of Medicine, Saint Louis, MO (United States); Zeng, Q; Nana, R; Patrick, J; Shvartsman, S; Dempsey, J [ViewRay Incorporated, Oakwood Village, OH (United States)

    2014-06-15

    Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm x 2.0 mm x 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select the k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated as the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it is possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems and to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. Green

  1. WE-G-17A-01: Improving Tracking Image Spatial Resolution for Onboard MR Image Guided Radiation Therapy Using the WHISKEE Technique

    International Nuclear Information System (INIS)

    Hu, Y; Mutic, S; Du, D; Green, O; Zeng, Q; Nana, R; Patrick, J; Shvartsman, S; Dempsey, J

    2014-01-01

    Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm x 2.0 mm x 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select the k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated as the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it is possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems and to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. Green
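
    The undersampling synthesis and the error metric described in the Methods can be reproduced in a few NumPy lines; the sketch below uses a random sampling mask in place of the spiral trajectory and shows only the zero-padding baseline (the WHISKEE reconstruction itself is not public):

        import numpy as np

        def zero_fill_recon(image, mask):
            """Synthesize undersampled k-space from a fully sampled image (via FFT),
            then reconstruct it by simple zero-filling of the missing samples."""
            kspace = np.fft.fftshift(np.fft.fft2(image))          # fully sampled k-space
            undersampled = kspace * mask                          # keep only sampled points
            return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

        def relative_error(recon, original):
            """Relative reconstruction error: ||recon - original|| / ||original||."""
            return np.linalg.norm(recon - original) / np.linalg.norm(original)

        # Usage: acceleration factor 3 is mimicked by randomly retaining 1/3 of k-space
        # (the paper used an undersampled spiral trajectory instead).
        rng = np.random.default_rng(1)
        image = rng.random((128, 128))
        mask = rng.random((128, 128)) < 1.0 / 3.0
        print(relative_error(zero_fill_recon(image, mask), image))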

  2. High-Speed On-Board Data Processing for Science Instruments: HOPS

    Science.gov (United States)

    Beyon, Jeffrey

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program during April 2012 – April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making the real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.

  3. Design Through Integration of On-Board Calibration Device with Imaging Spectroscopy Instruments

    Science.gov (United States)

    Stange, Michael

    2012-01-01

    The main purpose of the Airborne Visible and Infrared Imaging Spectroscopy (AVIRIS) project is to "identify, measure, and monitor constituents of the Earth's surface and atmosphere based on molecular absorption and particle scattering signatures." The project designs, builds, and tests various imaging spectroscopy instruments that use On-Board Calibration devices (OBC) to check the accuracy of the data collected by the spectrometers. The imaging instrument records the spectral signatures of light collected during flight. To verify the data is correct, the OBC shines light which is collected by the imaging spectrometer and compared against previous calibration data to track spectral response changes in the instrument. The spectral data has the calibration applied to it based on the readings from the OBC data in order to ensure accuracy.

  4. Technology Readiness Level (TRL) Advancement of the MSPI On-Board Processing Platform for the ACE Decadal Survey Mission

    Science.gov (United States)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.; Wilson, Thor O.

    2011-01-01

    The Xilinx Virtex-5QV is a new Single-event Immune Reconfigurable FPGA (SIRF) device that is targeted as the spaceborne processor for the NASA Decadal Survey Aerosol-Cloud-Ecosystem (ACE) mission's Multiangle SpectroPolarimetric Imager (MSPI) instrument, currently under development at JPL. A key technology needed for MSPI is on-board processing (OBP) to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's ESTO AIST Program, JPL is demonstrating how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multi-angle cameras can be reduced to 0.45 Mbytes/sec, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information. This is done via a least-squares fitting algorithm implemented on the Virtex-5 FPGA operating in real-time on the raw video data stream.
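
    The data-reduction principle, fitting each pixel's raw samples to a few model coefficients and downlinking only those, can be sketched as below; the sinusoid-plus-offset model, sample counts, and rates are illustrative assumptions, not the actual MSPI polarimetric fitting equations:

        import numpy as np

        def fit_pixel_samples(samples, t, f):
            """Least-squares fit of each pixel's time samples to offset + sine + cosine terms.

            samples : (n_pixels, n_samples) raw samples for one channel
            Returns (n_pixels, 3) coefficients, i.e. the downlinked data after reduction."""
            # Design matrix for the assumed model a0 + a1*cos(2*pi*f*t) + a2*sin(2*pi*f*t).
            A = np.column_stack([np.ones_like(t),
                                 np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            coeffs, *_ = np.linalg.lstsq(A, samples.T, rcond=None)
            return coeffs.T

        # Usage: 1000 pixels x 64 samples reduced to 1000 x 3 coefficients.
        t = np.arange(64) / 64.0
        raw = 100 + 5 * np.cos(2 * np.pi * 4 * t) + np.random.randn(1000, 64)
        print(fit_pixel_samples(raw, t, f=4.0).shape)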

  5. Implementation of the quality control program for the Varian On-Board Imager system: initial assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ortega Martin, I.; Ruiz Morales, C.; Lopez Sanchez, F.; Tobarra Gonzalez, B. M.

    2013-07-01

    This work presents the tests that form part of our quality control program for the Varian On-Board Imager system, developed from national and international recommendations and protocols, together with a first assessment of the results obtained to date. (Author)

  6. Memory-Efficient Onboard Rock Segmentation

    Science.gov (United States)

    Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.

    2013-01-01

    Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels into hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping," (NPO-44417) Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint. Local filtering

  7. Natrium: Use of FPGA embedded processors for real-time data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R; Salamon, A; Salina, G [INFN Sezione di Roma Tor Vergata, Rome (Italy); Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Sezione di Roma, Rome (Italy)

    2011-12-15

    We present test results and characterization of a data compression system for the readout of the NA62 liquid krypton calorimeter trigger processor. The Level-0 electromagnetic calorimeter trigger processor of the NA62 experiment at CERN receives digitized data from the calorimeter main readout board. These data are stored in an on-board DDR2 RAM and read out upon reception of a Level-0 accept signal. The maximum raw data throughput from the trigger front-end cards is 2.6 Gbps. To read out these data over two Gbit Ethernet interfaces, we investigated different implementations of a data compression system based on Rice-Golomb coding: one is implemented in the FPGA as a custom block and one is implemented on the FPGA embedded processor running C code. The two implementations are tested on a set of sample events and compared with respect to achievable readout bandwidth.

  8. Natrium: Use of FPGA embedded processors for real-time data compression

    International Nuclear Information System (INIS)

    Ammendola, R; Salamon, A; Salina, G; Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2011-01-01

    We present test results and characterization of a data compression system for the readout of the NA62 liquid krypton calorimeter trigger processor. The Level-0 electromagnetic calorimeter trigger processor of the NA62 experiment at CERN receives digitized data from the calorimeter main readout board. These data are stored in an on-board DDR2 RAM and read out upon reception of a Level-0 accept signal. The maximum raw data throughput from the trigger front-end cards is 2.6 Gbps. To read out these data over two Gbit Ethernet interfaces, we investigated different implementations of a data compression system based on Rice-Golomb coding: one is implemented in the FPGA as a custom block and one is implemented on the FPGA embedded processor running C code. The two implementations are tested on a set of sample events and compared with respect to achievable readout bandwidth.
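
    A minimal software illustration of the Rice-Golomb coding used in both implementations is given below (non-negative integer inputs and a fixed parameter k are assumed; the NA62 systems operate on the calorimeter data format and run in the FPGA fabric or on its embedded processor):

        def rice_encode(values, k):
            """Rice-Golomb encode non-negative integers with divisor 2**k.

            Each value v is split into a quotient q = v >> k, written in unary
            (q ones followed by a zero), and a remainder written as k binary bits."""
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.append("1" * q + "0" + format(r, f"0{k}b"))
            return "".join(bits)

        def rice_decode(bitstream, k, count):
            """Inverse of rice_encode, recovering `count` values."""
            out, pos = [], 0
            for _ in range(count):
                q = 0
                while bitstream[pos] == "1":
                    q, pos = q + 1, pos + 1
                pos += 1                                  # skip the terminating zero
                r = int(bitstream[pos:pos + k], 2)
                pos += k
                out.append((q << k) | r)
            return out

        # Usage: small residuals compress well when k matches their typical magnitude.
        data = [3, 0, 7, 2, 12, 1]
        encoded = rice_encode(data, k=2)
        assert rice_decode(encoded, k=2, count=len(data)) == data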

  9. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  10. Imaging Sensor Flight and Test Equipment Software

    Science.gov (United States)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  11. Characterization of the onboard imaging unit for the first clinical magnetic resonance image guided radiation therapy system

    International Nuclear Information System (INIS)

    Hu, Yanle; Rankine, Leith; Green, Olga L.; Kashani, Rojano; Li, H. Harold; Li, Hua; Rodriguez, Vivian; Santanam, Lakshmi; Wooten, H. Omar; Mutic, Sasa; Nana, Roger; Shvartsman, Shmaryu; Victoria, James; Dempsey, James F.

    2015-01-01

    Purpose: To characterize the performance of the onboard imaging unit for the first clinical magnetic resonance image guided radiation therapy (MR-IGRT) system. Methods: The imaging performance characterization included four components: ACR (the American College of Radiology) phantom test, spatial integrity, coil signal to noise ratio (SNR) and uniformity, and magnetic field homogeneity. The ACR phantom test was performed in accordance with the ACR phantom test guidance. The spatial integrity test was evaluated using a 40.8 × 40.8 × 40.8 cm³ spatial integrity phantom. MR and computed tomography (CT) images of the phantom were acquired and coregistered. Objects were identified around the surfaces of 20 and 35 cm diameters of spherical volume (DSVs) on both the MR and CT images. Geometric distortion was quantified using deviation in object location between the MR and CT images. The coil SNR test was performed according to the national electrical manufacturers association (NEMA) standards MS-1 and MS-9. The magnetic field homogeneity test was measured using field camera and spectral peak methods. Results: For the ACR tests, the slice position error was less than 0.10 cm, the slice thickness error was less than 0.05 cm, the resolved high-contrast spatial resolution was 0.09 cm, the resolved low-contrast spokes were more than 25, the image intensity uniformity was above 93%, and the percentage ghosting was less than 0.22%. All were within the ACR recommended specifications. The maximum geometric distortions within the 20 and 35 cm DSVs were 0.10 and 0.18 cm for high spatial resolution three-dimensional images and 0.08 and 0.20 cm for high temporal resolution two dimensional cine images based on the distance-to-phantom-center method. The average SNR was 12.0 for the body coil, 42.9 for the combined torso coil, and 44.0 for the combined head and neck coil. Magnetic field homogeneities at gantry angles of 0°, 30°, 60°, 90°, and 120° were 23.55, 20.43, 18.76, 19

  12. New system applying image processor to automatically separate cation exchange resin and anion exchange resin for condensate demineralizer

    International Nuclear Information System (INIS)

    Adachi, Tsuneyasu; Nagao, Nobuaki; Yoshimori, Yasuhide; Inoue, Takashi; Yoda, Shuji

    2014-01-01

    In a PWR plant, a condensate demineralizer is installed to remove corrosive ions from the condensate water. A mixed bed packed with cation exchange resin (CER) and anion exchange resin (AER) is generally applied, and the two resins are periodically regenerated after being separated into individual layers. Since the AER particles are slightly lighter than the CER particles, the AER layer is brought up onto the CER layer by feeding water upward from the bottom of the column (backwashing). The separation performance is affected by the flow rate and temperature of the backwashing water, so operators normally set the proper separation parameters manually for each regeneration. The authors have developed a new separation system applying a CCD camera and an image processor. The system comprises a CCD camera, an LED lamp, an image processor, a controller, flow control valves, and a background color panel. The blue color of the panel, which is complementary to both the ivory color of the AER and the brown color of the CER, is key to securing the precision of the system. First, the color image of the CER seen by the CCD camera is digitized and stored by the image processor. The color of the CER in the camera's field of view is then scanned by the image processor, and the position where the maximum difference of the digitized color index occurs is judged to be the interface. The detected interface can be brought into accordance with the set point by adjusting the backwashing flow rate. By adopting the blue background panel, it is also possible to draw the AER out of the column, since the interface with the CER is detected clearly. The system has reduced the instability associated with resin separation during the regeneration process. It has been adopted in two PWR plants in Japan, where it has demonstrated stable and precise performance. (author)
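
    The interface-detection idea, scanning the column image and taking the row where a digitized color index changes most, can be sketched as below; the particular color index, the synthetic colors, and the single-interface assumption are illustrative, not the calibrated values of the deployed system:

        import numpy as np

        def find_resin_interface(rgb):
            """Locate the AER/CER interface row in an RGB image of the resin column.

            A simple color index (red minus blue, averaged over each row) separates
            the ivory AER from the brown CER; the row with the largest change of
            that index is taken as the interface."""
            rgb = rgb.astype(np.float64)
            row_index = (rgb[:, :, 0] - rgb[:, :, 2]).mean(axis=1)   # per-row color index
            return int(np.argmax(np.abs(np.diff(row_index))))        # row of maximum change

        # Usage with a synthetic column: ivory AER on top of brown CER.
        img = np.zeros((100, 40, 3))
        img[:55] = (230, 220, 200)        # ivory-ish AER layer
        img[55:] = (120, 70, 40)          # brown CER layer
        print(find_resin_interface(img))  # -> 54, the last AER row before the interface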

  13. The TMS34010 graphic processor - an architecture for image visualization in NMR tomography

    International Nuclear Information System (INIS)

    Slaets, Jan Frans Willem; Paiva, Maria Stela Veludo de; Almeida, Lirio O.B.

    1989-01-01

    This abstract presents a description of the minimum system implemented with the graphic processor TMS34010, which will be used in the reconstruction, treatment, and interpretation of images obtained by NMR tomography. The project is being developed in the LIE (Electronic Instrumentation Laboratory) of the Sao Carlos Chemistry and Physics Institute, SP, Brazil, and is already in operation.

  14. Neurovision processor for designing intelligent sensors

    Science.gov (United States)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi- functional vision sensor that performs a variety of information processing operations on time- varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  15. Real time processor for array speckle interferometry

    Science.gov (United States)

    Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos

    1989-02-01

    The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.
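
    The per-frame pipeline described above (array flat-fielding, a 64 x 64 two-dimensional complex FFT, and power-spectrum averaging) is written out below as an offline NumPy reference for what the real-time hardware computes within the speckle coherence time; the frame count and flat field are placeholders:

        import numpy as np

        def average_power_spectrum(frames, flat):
            """Flat-field each 64x64 speckle frame, take its 2-D FFT, and average |FFT|^2.

            frames : (n_frames, 64, 64) raw detector frames
            flat   : (64, 64) flat-field response of the array"""
            accum = np.zeros((64, 64))
            for frame in frames:
                corrected = frame / flat                    # array flat-fielding
                spectrum = np.fft.fft2(corrected)           # 64x64 complex FFT
                accum += np.abs(spectrum) ** 2              # accumulate the power spectrum
            return accum / len(frames)

        # Usage with synthetic data: 100 frames, uniform flat field.
        frames = np.random.rand(100, 64, 64)
        flat = np.ones((64, 64))
        print(average_power_spectrum(frames, flat).shape)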

  16. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are "number crunchers" that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation

  17. Characterization of the onboard imaging unit for the first clinical magnetic resonance image guided radiation therapy system

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Yanle, E-mail: Hu.Yanle@mayo.edu [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 and Department of Radiation Oncology, Mayo Clinic in Arizona, Phoenix, Arizona 85054 (United States); Rankine, Leith; Green, Olga L.; Kashani, Rojano; Li, H. Harold; Li, Hua; Rodriguez, Vivian; Santanam, Lakshmi; Wooten, H. Omar; Mutic, Sasa [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States); Nana, Roger; Shvartsman, Shmaryu; Victoria, James; Dempsey, James F. [ViewRay, Inc., Oakwood Village, Ohio 44146 (United States)

    2015-10-15

    Purpose: To characterize the performance of the onboard imaging unit for the first clinical magnetic resonance image guided radiation therapy (MR-IGRT) system. Methods: The imaging performance characterization included four components: ACR (the American College of Radiology) phantom test, spatial integrity, coil signal to noise ratio (SNR) and uniformity, and magnetic field homogeneity. The ACR phantom test was performed in accordance with the ACR phantom test guidance. The spatial integrity test was evaluated using a 40.8 × 40.8 × 40.8 cm³ spatial integrity phantom. MR and computed tomography (CT) images of the phantom were acquired and coregistered. Objects were identified around the surfaces of 20 and 35 cm diameters of spherical volume (DSVs) on both the MR and CT images. Geometric distortion was quantified using deviation in object location between the MR and CT images. The coil SNR test was performed according to the national electrical manufacturers association (NEMA) standards MS-1 and MS-9. The magnetic field homogeneity test was measured using field camera and spectral peak methods. Results: For the ACR tests, the slice position error was less than 0.10 cm, the slice thickness error was less than 0.05 cm, the resolved high-contrast spatial resolution was 0.09 cm, the resolved low-contrast spokes were more than 25, the image intensity uniformity was above 93%, and the percentage ghosting was less than 0.22%. All were within the ACR recommended specifications. The maximum geometric distortions within the 20 and 35 cm DSVs were 0.10 and 0.18 cm for high spatial resolution three-dimensional images and 0.08 and 0.20 cm for high temporal resolution two dimensional cine images based on the distance-to-phantom-center method. The average SNR was 12.0 for the body coil, 42.9 for the combined torso coil, and 44.0 for the combined head and neck coil. Magnetic field homogeneities at gantry angles of 0°, 30°, 60°, 90°, and 120° were 23.55, 20.43, 18.76, 19

  18. New On-board Microprocessors

    Science.gov (United States)

    Weigand, R.

    (for SW development on PC etc.), or to consider using it as a PCI master controller in an on-board system. Advanced SEU fault tolerance is introduced by design, using triple modular redundancy (TMR) flip-flops for all registers and EDAC protection for all memories. The device will be manufactured in a radiation-hard Atmel 0.25 um technology, targeting a 100 MHz processor clock frequency. The non-fault-tolerant LEON processor VHDL model is available as free source code, and the SPARC architecture is a well-known industry standard. Therefore, know-how, software tools, and operating systems are widely available.

  19. Real-time image processing of TOF range images using a reconfigurable processor system

    Science.gov (United States)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    During the last years, Time-of-Flight sensors have had a significant impact on research fields in machine vision. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, and of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
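
    The 4-phase-shift range calculation, including the arctangent that dominates the per-pixel cost and is accelerated in hardware here, is sketched below; the sampling convention, modulation frequency, and example target are illustrative assumptions:

        import numpy as np

        def tof_range(a0, a1, a2, a3, f_mod=20e6, c=3e8):
            """Range from the four phase samples of a continuous-wave TOF pixel.

            a0..a3 are the correlation samples at 0, 90, 180 and 270 degrees.
            The phase is the arctangent that dominates the per-pixel processing cost."""
            phase = np.arctan2(a3 - a1, a0 - a2)      # in (-pi, pi]
            phase = np.mod(phase, 2 * np.pi)          # map to [0, 2*pi)
            return c * phase / (4 * np.pi * f_mod)    # unambiguous range: c / (2*f_mod)

        # Usage: a target at 2.5 m with a 20 MHz modulation (unambiguous range 7.5 m).
        true_phase = 4 * np.pi * 20e6 * 2.5 / 3e8
        samples = [np.cos(true_phase + i * np.pi / 2) for i in range(4)]
        print(tof_range(*samples))                    # ~2.5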

  20. Implementation of a quality control program for the Varian On-Board Imager system: initial assessment

    International Nuclear Information System (INIS)

    Ortega Martin, I.; Ruiz Morales, C.; Lopez Sanchez, F.; Tobarra Gonzalez, B. M.

    2013-01-01

    This work presents the tests that form part of our quality control program for the Varian On-Board Imager, developed from national and international recommendations and protocols, together with a first assessment of the results obtained to date. (Author)

  1. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    This paper presents an analysis of the accuracies, and the accuracy requirements, of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. OLAP's are much faster than digital electronic processors and eliminate some residual distortion. The question is whether the errors introduced by the analog processing of an OLAP overcome the advantage of its greater speed. The paper addresses this issue by presenting an estimate of the accuracy required in a general OLAP for it to yield a smaller average residual aberration of the wave front than a digital electronic processor computing at a given speed.

  2. Four dimensional digital tomosynthesis using on-board imager for the verification of respiratory motion.

    Directory of Open Access Journals (Sweden)

    Justin C Park

    Full Text Available PURPOSE: To evaluate respiratory motion of a patient by generating four-dimensional digital tomosynthesis (4D DTS, extracting respiratory signal from patients' on-board projection data, and ensuring the feasibility of 4D DTS as a localization tool for the targets which have respiratory movement. METHODS AND MATERIALS: Four patients with lung and liver cancer were included to verify the feasibility of 4D-DTS with an on-board imager. CBCT acquisition (650-670 projections was used to reconstruct 4D DTS images and the breath signal of the patients was generated by extracting the motion of diaphragm during data acquisition. Based on the extracted signal, the projection data was divided into four phases: peak-exhale phase, mid-inhale phase, peak-inhale phase, and mid-exhale phase. The binned projection data was then used to generate 4D DTS, where the total scan angle was assigned as ±22.5° from rotation center, centered on 0° and 180° for coronal "half-fan" 4D DTS, and 90° and 270° for sagittal "half-fan" 4D DTS. The result was then compared with 4D CBCT which we have also generated with the same phase distribution. RESULTS: The motion of the diaphragm was evident from the 4D DTS results for peak-exhale, mid-inhale, peak-inhale and mid-exhale phase assignment which was absent in 3D DTS. Compared to the result of 4D CBCT, the view aliasing effect due to arbitrary angle reconstruction was less severe. In addition, the severity of metal artifacts, the image distortion due to presence of metal, was less than that of the 4D CBCT results. CONCLUSION: We have implemented on-board 4D DTS on patients data to visualize the movement of anatomy due to respiratory motion. The results indicate that 4D-DTS could be a promising alternative to 4D CBCT for acquiring the respiratory motion of internal organs just prior to radiotherapy treatment.

  3. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument captures video images in CCIR format. Two memory planes, each with a capacity of 512 × 512 × 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  4. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  5. Implementation of an EPICS IOC on an Embedded Soft Core Processor Using Field Programmable Gate Arrays

    International Nuclear Information System (INIS)

    Douglas Curry; Alicia Hofler; Hai Dong; Trent Allison; J. Hovater; Kelly Mahoney

    2005-01-01

    At Jefferson Lab, we have been evaluating soft core processors running an EPICS IOC over μClinux on our custom hardware. A soft core processor is a flexible CPU architecture that is configured in the FPGA as opposed to a hard core processor which is fixed in silicon. Combined with an on-board Ethernet port, the technology incorporates the IOC and digital control hardware within a single FPGA. By eliminating the general purpose computer IOC, the designer is no longer tied to a specific platform, e.g. PC, VME, or VXI, to serve as the intermediary between the high level controls and the field hardware. This paper will discuss the design and development process as well as specific applications for JLab's next generation low-level RF controls and Machine Protection Systems

  6. A 1,000 Frames/s Programmable Vision Chip with Variable Resolution and Row-Pixel-Mixed Parallel Image Processors

    Directory of Open Access Journals (Sweden)

    Nanjian Wu

    2009-07-01

    Full Text Available A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD Processing Element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out features of the image for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray-scale output is fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm. Each pixel size is 9.5 μm × 9.5 μm and each processing element size is 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications, such as high-speed target tracking.
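
    To make the low-level and mid-level operations mentioned above concrete, the sketch below implements flat gray-scale erosion, dilation and a morphological gradient in plain NumPy; it is only an illustrative software analogue of what the PE array does in hardware, and the 3 x 3 window size is an assumption.

      import numpy as np

      def _window_reduce(img, size, reduce_fn):
          # Slide a flat size x size structuring element over the image
          # (edge padding) and reduce each window with reduce_fn.
          pad = size // 2
          img = np.asarray(img, float)
          padded = np.pad(img, pad, mode="edge")
          h, w = img.shape
          windows = [padded[dy:dy + h, dx:dx + w]
                     for dy in range(size) for dx in range(size)]
          return reduce_fn(np.stack(windows), axis=0)

      def gray_dilate(img, size=3):
          """Gray-scale dilation: local maximum over the structuring element."""
          return _window_reduce(img, size, np.max)

      def gray_erode(img, size=3):
          """Gray-scale erosion: local minimum over the structuring element."""
          return _window_reduce(img, size, np.min)

      def morph_gradient(img, size=3):
          """Dilation minus erosion, a simple edge-strength feature."""
          return gray_dilate(img, size) - gray_erode(img, size)

      # Binary morphology follows by thresholding the image first.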

  7. MERTIS: the thermal infrared imaging spectrometer onboard of the Mercury Planetary Orbiter

    Science.gov (United States)

    Zeh, T.; Peter, G.; Walter, I.; Kopp, E.; Knollenberg, J.; Helbert, J.; Gebhardt, A.; Weber, I.; Hiesinger, Harry

    2017-11-01

    The MERTIS instrument is a thermal infrared imaging spectrometer onboard ESA's cornerstone mission BepiColombo to Mercury. MERTIS has four goals: the study of Mercury's surface composition, identification of rock-forming minerals, mapping of the surface mineralogy, and the study of surface temperature variations and thermal inertia. MERTIS will provide detailed information about the mineralogical composition of Mercury's surface layer by measuring the spectral emittance in the spectral range from 7-14 μm at high spatial and spectral resolution. Furthermore, MERTIS will obtain radiometric measurements in the spectral range from 7-40 μm to study the thermo-physical properties of the surface material. The MERTIS detector is based on an uncooled micro-bolometer array providing spectral separation and spatial resolution according to its 2-dimensional shape. The operation principle is characterized by intermediate scanning of the planet surface and three different calibration targets: a free-space view and two on-board black-body sources. In the current project phase, the MERTIS Qualification Model (QM) is undergoing a rigorous testing program. Besides a general overview of the instrument principles, the paper addresses major aspects of the instrument design, manufacturing and verification.

  8. SU-E-P-41: Imaging Coordination of Cone Beam CT, On-Board Image Conjunction with Optical Image Guidance for SBRT Treatment with Respiratory Motion Management

    International Nuclear Information System (INIS)

    Liu, Y; Campbell, J

    2015-01-01

    Purpose: To spare normal tissue for SBRT lung/liver patients, especially for patients with significant tumor motion, image-guided respiratory motion management has been widely implemented in clinical practice. The purpose of this study was to evaluate the imaging coordination of cone beam CT and on-board X-ray imaging in conjunction with optical image guidance for SBRT treatment with motion management. Methods: Currently in our clinic a Varian Novalis Tx is utilized for treating SBRT patients using CBCT. A BrainLAB ExacTrac X-ray imaging system in conjunction with optical guidance is primarily used for SRS patients. The CBCT and X-ray imaging systems were independently calibrated with a 1.0 mm tolerance. For SBRT lung/liver patients, the magnitude of tumor motion was measured based on 4DCT, and the measurement was analyzed to determine whether patients would benefit from respiratory motion management. For patients eligible for motion management, an additional CT with breath holding was scanned and used as the primary planning CT and as the reference images for cone beam CT. During the SBRT treatment, a CBCT with pause-and-continue technology was performed with the patient holding breath, which may require 3–4 partially scanned CBCTs to be combined into a whole CBCT depending on how long the patient is capable of holding breath. After patients were set up using CBCT images, the ExacTrac X-ray imaging system was used, with patients' on-board X-ray images compared to the breath-hold CT-based DRRs. Results: For breath-hold SBRT treatments, after initially localizing patients with CBCT, we then positioned patients with the ExacTrac X-ray and optical imaging system. The observed deviations of the real-time optically guided position averaged 3.0, 2.5 and 1.5 mm in the longitudinal, vertical and lateral directions, respectively, based on 35 treatments. Conclusion: The respiratory motion management clinical practice improved our physicians' confidence in using tighter tumor margins for sparing normal

  9. A programmable systolic array correlator as a trigger processor for electron pairs in rich (ring image Cherenkov) counters

    Science.gov (United States)

    Männer, R.

    1989-12-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128 x 128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows circles of closed electron pairs to be rejected. The trigger decision is generated if a pseudo adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8 x 8 PEs is being developed using a VENUS design system and will be produced in 2μ CMOS technology.

  10. A programmable systolic array correlator as a trigger processor for electron pairs in RICH (ring image Cherenkov) counters

    International Nuclear Information System (INIS)

    Maenner, R.

    1989-01-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128x128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows circles of closed electron pairs to be rejected. The trigger decision is generated if a pseudo adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8x8 PEs is being developed using a VENUS design system and will be produced in 2μ CMOS technology. (orig.)
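
    The trigger principle described in the two records above (correlate the hit image with a circle circumference, find peaks, require at least two circles) can be sketched in software as follows; the FFT-based correlation, the crude threshold-based peak count and all parameter values are illustrative assumptions, not the systolic-array implementation.

      import numpy as np

      def ring_kernel(radius, thickness=1.0, size=None):
          """Binary mask of a circle circumference with the given pixel radius."""
          size = size or int(2 * radius + 5)
          y, x = np.mgrid[:size, :size] - size // 2
          return (np.abs(np.hypot(x, y) - radius) <= thickness).astype(float)

      def correlate_same(image, kernel):
          """'Same'-size correlation via FFT; since the ring kernel is symmetric,
          correlation and convolution coincide.  Stands in for the PE array."""
          s = (image.shape[0] + kernel.shape[0] - 1,
               image.shape[1] + kernel.shape[1] - 1)
          full = np.fft.irfft2(np.fft.rfft2(image, s) * np.fft.rfft2(kernel, s), s)
          k0, k1 = kernel.shape[0] // 2, kernel.shape[1] // 2
          return full[k0:k0 + image.shape[0], k1:k1 + image.shape[1]]

      def electron_pair_trigger(hit_image, radius, threshold):
          """Fire if at least two circle centres exceed the correlation threshold."""
          score = correlate_same(hit_image, ring_kernel(radius))
          # Very crude peak detection by thresholding; a second correlation with
          # a filled disc could veto overlapping (too-close) pairs, as in the paper.
          return int((score > threshold).sum()) >= 2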

  11. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    Science.gov (United States)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program during April, 2012 - April, 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making the real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.

  12. The development of a specialized processor for a space-based multispectral earth imager

    Science.gov (United States)

    Khedr, Mostafa E.

    2008-10-01

    This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite MultiSpectral earth imager. This computer system is intended for satellites with resolution in the range of one meter and 12-bit precision. The design is based mostly on general off-the-shelf components such as field-programmable gate arrays (FPGAs), plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.

  13. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    International Nuclear Information System (INIS)

    Yan, Susu; Tough, MengHeng; Bowsher, James; Yin, Fang-Fang; Cheng, Lin

    2014-01-01

    Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT Phantom TM ), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for parallel-hole system was centered at the phantom which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole

  14. The architecture of a video image processor for the space station

    Science.gov (United States)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high level language programmability, modularity, extensibility and can meet the required performance goals.

  15. The AGILE on-board Kalman filter

    International Nuclear Information System (INIS)

    Giuliani, A.; Cocco, V.; Mereghetti, S.; Pittori, C.; Tavani, M.

    2006-01-01

    On-board reduction of particle background is one of the main challenges of space instruments dedicated to gamma-ray astrophysics. We present in this paper a discussion of the method and main simulation results of the on-board background filter of the Gamma-Ray Imaging Detector (GRID) of the AGILE mission. The GRID is capable of detecting and imaging gamma-ray photons in the range 30 MeV-30 GeV with an optimal point spread function. The AGILE planned orbit is equatorial, with an altitude of 550 km. This is an optimal orbit from the point of view of the expected particle background. For this orbit, electrons and positrons of kinetic energies between 20 MeV and hundreds of MeV dominate the particle background, with significant contributions from high-energy (primary) and low-energy protons, and gamma-ray albedo photons. We present here the main results obtained by extensive simulations of the on-board AGILE-GRID particle/photon background rejection algorithms based on a special application of Kalman filter techniques. This filter is applied (Level-2) sequentially after other data processing techniques characterizing the Level-1 processing. We show that, in conjunction with the Level-1 processing, the adopted Kalman filtering is expected to reduce the total particle/albedo-photon background rate to a value (≤10-30 Hz) that is compatible with the AGILE telemetry. The AGILE on-board Kalman filter is also effective in reducing the Earth-albedo-photon background rate, and therefore contributes to substantially increasing the AGILE exposure for celestial gamma-ray sources
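
    The record does not give the filter equations, so the sketch below is only a generic linear Kalman filter for a straight track (position and slope, measured plane by plane), illustrating the kind of track fit whose chi-square an on-board filter can threshold to separate photon conversions from particle background; the state model, noise values and plane spacing are assumptions.

      import numpy as np

      def kalman_track(hits, dz=1.0, meas_var=0.01, process_var=1e-4):
          """Filter hit positions in successive tracker planes (spacing dz).

          Returns the filtered (position, slope) states and the accumulated
          chi-square of the fit, which a background filter could cut on.
          """
          F = np.array([[1.0, dz], [0.0, 1.0]])   # straight-line propagation
          H = np.array([[1.0, 0.0]])              # only position is measured
          Q = process_var * np.eye(2)             # process noise (e.g. scattering)
          R = np.array([[meas_var]])              # measurement noise
          x, P = np.array([hits[0], 0.0]), np.eye(2)
          chi2, states = 0.0, []
          for z in hits[1:]:
              x, P = F @ x, F @ P @ F.T + Q                  # predict
              resid = np.atleast_1d(z - H @ x)
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ resid                              # update state
              P = (np.eye(2) - K @ H) @ P                    # update covariance
              chi2 += float(resid @ np.linalg.inv(S) @ resid)
              states.append(x.copy())
          return np.array(states), chi2

      # A track whose chi2 is far above its degrees of freedom would be a
      # candidate for rejection as background.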

  16. New technologies for supporting real-time on-board software development

    Science.gov (United States)

    Kerridge, D.

    1995-03-01

    The next generation of on-board data management systems will be significantly more complex than current designs, and will be required to perform more complex and demanding tasks in software. Improved hardware technology, in the form of the MA31750 radiation hard processor, is one key component in addressing the needs of future embedded systems. However, to complement these hardware advances, improved support for the design and implementation of real-time data management software is now needed. This will help to control the cost and risk associated with developing data management software as it becomes an increasingly significant element within embedded systems. One particular problem with developing embedded software is managing the non-functional requirements in a systematic way. This paper identifies how Logica has exploited recent developments in hard real-time theory to address this problem through the use of new hard real-time analysis and design methods which can be supported by specialized tools. The first stage in transferring this technology from the research domain to industrial application has already been completed. The MA31750 Hard Real-Time Embedded Software Support Environment (HESSE) is a loosely integrated set of hardware and software tools which directly support the process of hard real-time analysis for software targeting the MA31750 processor. With further development, this HESSE promises to provide embedded system developers with software tools which can reduce the risks associated with developing complex hard real-time software. Supported in this way by more sophisticated software methods and tools, it is foreseen that MA31750-based embedded systems can meet the processing needs for the next generation of on-board data management systems.

  17. Optimization of Planck-LFI on-board data handling

    Energy Technology Data Exchange (ETDEWEB)

    Maris, M; Galeotta, S; Frailis, M; Zacchei, A; Fogliani, S; Gasparo, F [INAF-OATs, Via G.B. Tiepolo 11, 34131 Trieste (Italy); Tomasi, M; Bersanelli, M [Universita di Milano, Dipartimento di Fisica, Via G. Celoria 16, 20133 Milano (Italy); Miccolis, M [Thales Alenia Space Italia S.p.A., S.S. Padana Superiore 290, 20090 Vimodrone (Italy); Hildebrandt, S; Chulani, H; Gomez, F [Instituto de Astrofisica de Canarias (IAC), C/o Via Lactea, s/n E38205 - La Laguna, Tenerife (Spain); Rohlfs, R; Morisset, N; Binko, P [ISDC Data Centre for Astrophysics, University of Geneva, ch. d' Ecogia 16, 1290 Versoix (Switzerland); Burigana, C; Butler, R C; Cuttaia, F; Franceschi, E [INAF-IASF Bologna, Via P. Gobetti, 101, 40129 Bologna (Italy); D' Arcangelo, O, E-mail: maris@oats.inaf.i [IFP-CNR, via Cozzi 53, 20125 Milano (Italy)

    2009-12-15

    statistics and the processing parameters to be tuned. This model will be of interest for the instrument data analysis to assess the level of signal distortion introduced in the data by the on-board processing. The method was applied during ground tests when the instrument was operating in conditions representative of flight. Optimized parameters were obtained and inserted into the on-board processor, and the performance has been verified against the requirements, with the result that the required data rate of 35.5 Kbps has been achieved while keeping the processing error at a level of 3.8% of the instrumental white noise and well below the target 10% level.

  18. Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems

    Science.gov (United States)

    Trumbauer, Eric Michael

    This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
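
    The database-plus-graph-search idea described above can be illustrated with a toy Dijkstra search over a handful of pre-computed arcs; the node names, delta-v costs and the use of plain Dijkstra (rather than the dissertation's full machinery with differential correction and convex optimization) are assumptions made only for this sketch.

      import heapq

      def cheapest_path(arc_graph, start, goal):
          """Dijkstra search over a database of pre-computed transfer arcs.

          arc_graph maps a node (an orbit or state in the database) to a list
          of (neighbour, delta_v) edges.  Returns (total delta_v, node path),
          or (inf, []) if the goal cannot be reached.
          """
          frontier = [(0.0, start, [start])]
          visited = set()
          while frontier:
              cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return cost, path
              if node in visited:
                  continue
              visited.add(node)
              for nxt, dv in arc_graph.get(node, []):
                  if nxt not in visited:
                      heapq.heappush(frontier, (cost + dv, nxt, path + [nxt]))
          return float("inf"), []

      # Toy database: three labelled orbits plus a target.
      arcs = {
          "parking":  [("dro_1", 0.12), ("lyapunov", 0.08)],
          "lyapunov": [("dro_1", 0.05), ("target", 0.20)],
          "dro_1":    [("target", 0.09)],
      }
      print(cheapest_path(arcs, "parking", "target"))   # (0.21, ['parking', 'dro_1', 'target'])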

  19. TU-AB-303-10: KVCBCT, MVCBCT and MVCT On-Board Imaging Suitability for An Urgent Radiotherapy Treatment Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Held, M; Morin, O; Pouliot, J [UC San Francisco, San Francisco, CA (United States)

    2015-06-15

    Purpose: A comparison of image quality and dose calculation accuracy to study the suitability of available on-board imaging systems for a new treatment workflow in emergency radiotherapy situations. Methods: Water and anthropomorphic phantom images were acquired on four different Linac on-board imaging systems, including kVCBCT (Varian TrueBeam and Elekta VersaHD), MVCBCT (Siemens Artiste) and MVCT (Accuray Tomotherapy). Simple treatments of single or opposed beams were planned on the respective kVCT images and copied to all CT images. Image suitability for dose planning was based on the overall mean dose differences and 3D gamma index with 3%/3mm criteria for a prescription of 100 monitor units (MU) and differences in calculated MUs per plan for dose prescriptions to mid-plane. Results: TrueBeam kVCBCT and Tomotherapy MVCT images produced most accurate dose calculation for all tested cases (γ-index >95%). MVCBCT and VersaHD kVCBCT images resulted in minimum γ-passing rates of 94% and 87%, respectively. MUs calculated from treatment plans prescribed to mid-plane were within differences of 5% relative to kVCT-based plans in all cases. However, VersaHD images showed considerable local image artifacts in the pelvis water and anthropomorphic neck phantom that complicated accurate Hounsfield unit (HU) to electron density conversion, thus causing local dose differences of more than 10% relative to kVCT-based dose distributions. Conclusion: TrueBeam kVCBCT, MVCBCT and MVCT systems provide image quality that allows for accurate simple treatment plan calculation. Prescription points should be placed away from areas found to cause local dose discrepancies, such as air cavities. Improved image filter settings and HU-to-electron density calibration adjustments may be required for the VersaHD system to obtain an overall accurate dose distribution. To evaluate a system’s overall suitability, its clinical features also require consideration, such as imaging field of view

  20. Clinical Experiences With Onboard Imager KV Images for Linear Accelerator-Based Stereotactic Radiosurgery and Radiotherapy Setup

    International Nuclear Information System (INIS)

    Hong, Linda X.; Chen, Chin C.; Garg, Madhur; Yaparpalvi, Ravindra; Mah, Dennis

    2009-01-01

    Purpose: To report our clinical experiences with on-board imager (OBI) kV image verification for cranial stereotactic radiosurgery (SRS) and radiotherapy (SRT) treatments. Methods and Materials: Between January 2007 and May 2008, 42 patients (57 lesions) were treated with SRS with head frame immobilization and 13 patients (14 lesions) were treated with SRT with face mask immobilization at our institution. No margin was added to the gross tumor for SRS patients, and a 3-mm three-dimensional margin was added to the gross tumor to create the planning target volume for SRT patients. After localizing the patient with stereotactic target positioner (TaPo), orthogonal kV images using OBI were taken and fused to planning digital reconstructed radiographs. Suggested couch shifts in vertical, longitudinal, and lateral directions were recorded. kV images were also taken immediately after treatment for 21 SRS patients and on a weekly basis for 6 SRT patients to assess any intrafraction changes. Results: For SRS patients, 57 pretreatment kV images were evaluated and the suggested shifts were all within 1 mm in any direction (i.e., within the accuracy of image fusion). For SRT patients, the suggested shifts were out of the 3-mm tolerance for 31 of 309 setups. Intrafraction motions were detected in 3 SRT patients. Conclusions: kV imaging provided a useful tool for SRS or SRT setups. For SRS setup with head frame, it provides radiographic confirmation of localization using the stereotactic target positioner. For SRT with mask, a 3-mm margin is adequate and feasible for routine setup when TaPo is combined with kV imaging

  1. 3D Seismic Imaging through Reverse-Time Migration on Homogeneous and Heterogeneous Multi-Core Processors

    Directory of Open Access Journals (Sweden)

    Mauricio Araya-Polo

    2009-01-01

    Full Text Available Reverse-Time Migration (RTM) is a state-of-the-art technique in seismic acoustic imaging, because of the quality and integrity of the images it provides. Oil and gas companies trust RTM with crucial decisions on multi-million-dollar drilling investments. But RTM requires vastly more computational power than its predecessor techniques, and this has somewhat hindered its practical success. On the other hand, although multi-core architectures promise to deliver unprecedented computational power, little attention has been devoted to efficiently mapping RTM to multi-cores. In this paper, we present a mapping of the RTM computational kernel to the IBM Cell/B.E. processor that reaches close-to-optimal performance. The kernel proves to be memory-bound and it achieves a 98% utilization of the peak memory bandwidth. Our Cell/B.E. implementation outperforms a traditional processor (PowerPC 970MP) in terms of performance (with a 15.0× speedup) and energy efficiency (with a 10.0× increase in the GFlops/W delivered). Also, it is the fastest RTM implementation available to the best of our knowledge. These results increase the practical usability of RTM. Also, the RTM-Cell/B.E. combination proves to be a strong competitor in the seismic arena.

  2. Multi-gigabit optical interconnects for next-generation on-board digital equipment

    Science.gov (United States)

    Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques

    2017-11-01

    Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required by next-generation on-board digital processing units. An optical backplane interconnect was breadboarded, on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed so as to route board-to-board connections. It was plugged into the optical transmitter and receiver modules through 12-fibre MPO connectors. Bit error rates below 10^-14 and optical link budgets in excess of 12 dB were measured, which would make it possible to integrate broadcasting. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.

  3. Band-to-Band Misregistration of the Images of MODIS Onboard Calibrators and Its Impact on Calibration

    Science.gov (United States)

    Wang, Zhipeng; Xiong, Xiaoxiong

    2017-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard Terra and Aqua satellites are radiometrically calibrated on-orbit with a set of onboard calibrators (OBCs), including a solar diffuser, a blackbody, and a space view port through which the detectors can view the dark space. As a whisk-broom scanning spectroradiometer, thirty-six MODIS spectral bands are assembled in the along-scan direction on four focal plane assemblies (FPAs). These bands capture images of the same target sequentially with the motion of a scan mirror. Then the images are coregistered onboard by delaying the appropriate band-dependent amount of time, depending on the band locations on the FPA. While this coregistration mechanism is functioning well for the far-field remote targets such as earth view scenes or the moon, noticeable band-to-band misregistration in the along-scan direction has been observed for near field targets, particularly in OBCs. In this paper, the misregistration phenomenon is presented and analyzed. It is concluded that the root cause of the misregistration is that the rotating element of the instrument, the scan mirror, is displaced from the focus of the telescope primary mirror. The amount of the misregistration is proportional to the band location on the FPA and is inversely proportional to the distance between the target and the scan mirror. The impact of this misregistration on the calibration of MODIS bands is discussed. In particular, the calculation of the detector gain coefficient m1 of bands 8-16 (412 nm - 870 nm) is improved by up to 1.5% for Aqua MODIS.

  4. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.
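
    The two figures of merit named above can be written down directly; the sketch below computes the root-mean-squared error and one plausible reading of the "root-mean-squared correlation coefficient error" (the RMS of 1 - r over spectral bands), which is an assumed interpretation rather than the paper's exact definition.

      import numpy as np

      def rmse(original, reconstructed):
          """Root-mean-squared error between original and decompressed data."""
          diff = np.asarray(original, float) - np.asarray(reconstructed, float)
          return float(np.sqrt(np.mean(diff ** 2)))

      def rms_corr_error(original, reconstructed, band_axis=0):
          """RMS of the per-band correlation-coefficient error (1 - r)."""
          orig = np.moveaxis(np.asarray(original, float), band_axis, 0)
          reco = np.moveaxis(np.asarray(reconstructed, float), band_axis, 0)
          errs = [1.0 - np.corrcoef(o.ravel(), r.ravel())[0, 1]
                  for o, r in zip(orig, reco)]
          return float(np.sqrt(np.mean(np.square(errs))))

      # Example on a small synthetic cube (bands x rows x columns)
      cube = np.random.rand(4, 8, 8)
      noisy = cube + 0.01 * np.random.randn(*cube.shape)
      print(rmse(cube, noisy), rms_corr_error(cube, noisy))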

  5. Evaluation of regional pulmonary ventilation by videodensitometry using a new X-ray image processor

    International Nuclear Information System (INIS)

    Fujii, Tadashige; Kanai, Hisakata; Handa, Kenjiro; Takizawa, Masaomi

    1988-01-01

    A new video image processing device has been produced in order to assess regional pulmonary ventilation. This device consists of a microcomputer, digital frame memory, digitizer, videomonitor, joystick and videotape recorder. The changing radiographic density of the lungs during deep respiration and forced expiration is recorded by the videotape recorder, which is connected to an image intensifier television system. This device allows the examining physician to place 6 rectangular windows of variable size over any portion of the video image using the joystick, and to measure the brightness level within these windows simultaneously. A characteristic feature is that the videodensitometric curve and the window markers are superimposed on the frozen final frame of the sampled images. By this procedure, fair videodensigrams were obtained in various respiratory diseases, and a reduction of ventilatory amplitude was shown in the hypoventilatory regions. The joint use of videodensitometry and perfusion lung scintigraphy provided helpful information concerning the regional ventilation/perfusion relationship. Videodensitometry of the lung with the new X-ray image processor offers routine screening evaluation of regional pulmonary ventilation abnormalities over the entire video image of the lungs without additional effort required of the patients. (author)

  6. Low-Latency Embedded Vision Processor (LLEVS)

    Science.gov (United States)

    2016-03-01

    Keywords: low-latency video processing algorithms, embedded image processor, wearable electronics, helmet-mounted systems, alternative night/day imaging. ...external subsystems and data sources with the device. The establishment of data interfaces in terms of data transfer rates, formats and types are... ...video signals from the Near-visible Infrared (NVIR) sensor, Shortwave IR (SWIR) and Longwave IR (LWIR) is the main processing for the Night Vision (NI) system.

  7. Performance of a Processor for On-Board RFI Detection and Mitigation in MetOpSG Radiometers

    DEFF Research Database (Denmark)

    Skou, Niels; Kristensen, Steen Savstrup; Lahtinen, J.

    2016-01-01

    An RFI processor breadboard has been designed and developed for the second-generation MetOp satellites. RFI detection is based on the anomalous amplitude, kurtosis, and cross-frequency algorithms. These are implemented in VHDL code in an FPGA. Thus algorithm performance can very well be assessed by proper code simulation. Such simulations show that the kurtosis algorithm as implemented works according to theory when subjected to pulsed sinusoidal and QPSK signals.
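
    For context, the kurtosis test mentioned above exploits the fact that the normalised fourth moment of Gaussian (thermal) noise equals 3, while pulsed or sinusoidal RFI pushes it away from that value; the block length, threshold and test signal in the sketch below are illustrative assumptions, not the breadboard's VHDL implementation.

      import numpy as np

      def kurtosis_flags(samples, block=1024, nsigma=3.0):
          """Flag blocks whose sample kurtosis deviates from the Gaussian value 3.

          The detection threshold scales with the standard deviation of the
          kurtosis estimator, roughly sqrt(24/N) for N Gaussian samples.
          """
          x = np.asarray(samples, float)
          n_blocks = x.size // block
          x = x[:n_blocks * block].reshape(n_blocks, block)
          centred = x - x.mean(axis=1, keepdims=True)
          kurt = np.mean(centred ** 4, axis=1) / np.var(x, axis=1) ** 2
          return np.abs(kurt - 3.0) > nsigma * np.sqrt(24.0 / block)

      # Example: a clean Gaussian block and a block with a strong CW tone added.
      rng = np.random.default_rng(0)
      clean = rng.normal(size=2048)
      rfi = clean.copy()
      rfi[1024:] += 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(1024))
      print(kurtosis_flags(rfi))   # the second block is expected to be flagged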

  8. Review of the first 50 cases completed by the RACR mammography QA programme; Phantom image quality, processor control and dose considerations

    International Nuclear Information System (INIS)

    McLean, D.; Chan, W.; Eckert, M.; Heard, R.

    1997-01-01

    The Mammography Quality Assurance Programme, recently established by the Royal Australasian College of Radiologists, has processed the first 50 applications. This programme, which closely follows the programme of the American College of Radiology (ACR), utilizes phantom film images, thermoluminescent dosimetry measurement of mean glandular dose, processor control charts, clinical images, equipment reports and required survey information to establish that a centre conforms to a minimum standard in mammography. The present paper describes the initial results of the first phantom images, dose measurements, processor control and survey information. Fifty films have been evaluated up to the present time, with a failure rate of 26%. The major causes of failure were unacceptable film artefacts and poor contrast (as indicated by reduced fibre and mass visibility). A surprising result was the high failure rate in processing, where 23% of units reviewed had significant problems, including failure to keep the processor within the required control limits. Only one centre recorded a mean glandular dose above 2 mGy, with no centre over the 3 mGy limit. A review of the frequency of the quality control testing shows that the acceptance of quality assurance in mammography, while greater than in the initial stages of the ACR programme, is less than in current US practice. These initial results for the accreditation process probably reflect an initial period of adjustment, as seen by the high pass rate achieved by centres that have re-submitted material to gain accreditation. (authors)

  9. Metal membrane-type 25-kW methanol fuel processor for fuel-cell hybrid vehicle

    Science.gov (United States)

    Han, Jaesung; Lee, Seok-Min; Chang, Hyuksang

    A 25-kW on-board methanol fuel processor has been developed. It consists of a methanol steam reformer, which converts methanol to a hydrogen-rich gas mixture, and two metal membrane modules, which clean up the gas mixture to high-purity hydrogen. It produces hydrogen at rates up to 25 N m³/h, and the purity of the product hydrogen is over 99.9995% with a CO content of less than 1 ppm. In this fuel processor, the operating conditions of the reformer and the metal membrane modules are nearly the same, so operation is simple and the overall system construction is compact, since extensive temperature control of the intermediate gas streams is eliminated. The recovery of hydrogen in the metal membrane units is maintained at 70-75% by control of the pressure in the system, and the remaining 25-30% hydrogen is recycled to a catalytic combustion zone to supply heat for the methanol steam-reforming reaction. The thermal efficiency of the fuel processor is about 75% and the inlet air pressure is as low as 4 psi. The fuel processor is currently being integrated with a 25-kW polymer electrolyte membrane fuel-cell (PEMFC) stack developed by the Hyundai Motor Company. The stack exhibits the same performance as with pure hydrogen, which proves that maximum power output as well as minimum stack degradation is possible with this fuel processor. This fuel-cell 'engine' is to be installed in a hybrid passenger vehicle for road testing.

  10. QERx- A Faster than Real-Time Emulator for Space Processors

    Science.gov (United States)

    Carvalho, B.; Pidgeon, A.; Robinson, P.

    2012-08-01

    Developing software for space systems is challenging, especially because it needs to be tested exhaustively in order to be sure it can cope with the harshness of the environment and the imperative requirements and constraints imposed by the platform where it will run. Software Validation Facilities (SVF) are known to the industry and developers, and provide the means to run the On-Board Software (OBSW) in a realistic environment, allowing the development team to debug and test the software. But the challenge is to be able to keep up with the performance of the new processors (LEON2 and LEON3), which need to be emulated within the SVF. Such processor emulators are also used in Operational Simulators, used to support mission preparation and train mission operators. These simulators mimic the satellite and its behaviour as realistically as possible. For test/operational efficiency reasons, and because they will need to interact with external systems, both these use cases require the processor emulators to provide real-time, or faster, performance. It is known to the industry that the performance of previously available emulators is not enough to cope with the performance of the new processors available in the market. SciSys approached this problem with dynamic translation technology, trying to keep costs down by avoiding a hardware solution and keeping the integration flexibility of full software emulation. SciSys presented "QERx: A High Performance Emulator for Software Validation and Simulations" [1] at a previous DASIA event. Since then that idea has evolved and QERx has been successfully validated. SciSys is now presenting QERx as a product that can be tailored to fit different emulation needs. This paper will present QERx's latest developments and current status.

  11. Radiation Tolerant Software Defined Video Processor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — MaXentric is proposing a radiation-tolerant Software Defined Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment....

  12. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    Science.gov (United States)

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  13. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    Directory of Open Access Journals (Sweden)

    Chen Yang

    2017-06-01

    Full Text Available With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
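
    To make the word-length question concrete, the sketch below quantises the FFT input and output to a signed fixed-point grid and reports the error relative to a double-precision reference; this is only a crude stand-in for the error-propagation model described above (which would track quantisation through every butterfly stage), and the test signal and bit widths are arbitrary assumptions.

      import numpy as np

      def to_fixed(x, frac_bits):
          """Round to a fixed-point grid with frac_bits fractional bits."""
          scale = 2.0 ** frac_bits
          return np.round(np.asarray(x, float) * scale) / scale

      def fixed_point_fft_snr_db(signal, frac_bits):
          """Rough SNR of a crudely quantised FFT versus the float reference."""
          ref = np.fft.fft(signal)
          q = np.fft.fft(to_fixed(signal, frac_bits))
          q = to_fixed(q.real, frac_bits) + 1j * to_fixed(q.imag, frac_bits)
          err = q - ref
          return 10.0 * np.log10(np.sum(np.abs(ref) ** 2) /
                                 (np.sum(np.abs(err) ** 2) + 1e-30))

      tone = np.cos(2 * np.pi * 0.07 * np.arange(1024))
      for bits in (8, 12, 16):
          print(bits, "fractional bits ->", round(fixed_point_fft_snr_db(tone, bits), 1), "dB")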

  14. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    Science.gov (United States)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high-speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
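
    As a reminder of what one QMF stage computes, the sketch below runs a one-level Haar analysis/synthesis pair on a 1-D signal (rows and columns would be processed in turn for an image); the choice of the Haar pair is an assumption made for brevity, not the filter pair used in the paper.

      import numpy as np

      def haar_analysis(x):
          """One QMF level: low-pass and high-pass branches, decimated by 2."""
          x = np.asarray(x, float)
          even, odd = x[0::2], x[1::2]
          lo = (even + odd) / np.sqrt(2.0)   # low-pass (smoothing) branch
          hi = (even - odd) / np.sqrt(2.0)   # high-pass (difference) branch
          return lo, hi

      def haar_synthesis(lo, hi):
          """Perfect-reconstruction synthesis: invert the analysis butterflies."""
          even = (lo + hi) / np.sqrt(2.0)
          odd = (lo - hi) / np.sqrt(2.0)
          out = np.empty(2 * lo.size)
          out[0::2], out[1::2] = even, odd
          return out

      x = np.arange(8, dtype=float)
      lo, hi = haar_analysis(x)
      print(np.allclose(haar_synthesis(lo, hi), x))   # True: perfect reconstruction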

  15. Sensitometric Control of Automatic Processors in a Hospital Center : Retrospective Study

    International Nuclear Information System (INIS)

    Lobato Busto, R.; Pombar Camean, M.

    1992-01-01

    This paper analyses the results, obtained between February 1990 and July 1991, of the sensitometric control of the seven automatic processors in the Hospital General de Galicia-Clinico Universitario (Santiago de Compostela). The deviations from the reference values of each processor, which permit early detection of disturbances before they become visible in the image, were analysed. The analysis showed that the proportion of days on which the automatic processors were out of tolerance ranged only between 2.3% and 5% of the days checked. (author)

  16. Compact gasoline fuel processor for passenger vehicle APU

    Science.gov (United States)

    Severin, Christopher; Pischinger, Stefan; Ogrzewalla, Jürgen

    Due to the increasing demand for electrical power in today's passenger vehicles, and with the requirements regarding fuel consumption and environmental sustainability tightening, a fuel cell-based auxiliary power unit (APU) becomes a promising alternative to the conventional generation of electrical energy via internal combustion engine, generator and battery. It is obvious that the on-board stored fuel has to be used for the fuel cell system, thus, gasoline or diesel has to be reformed on board. This makes the auxiliary power unit a complex integrated system of stack, air supply, fuel processor, electrics as well as heat and water management. Aside from proving the technical feasibility of such a system, the development has to address three major barriers: start-up time, costs, and size/weight of the systems. In this paper a packaging concept for an auxiliary power unit is presented. The main emphasis is placed on the fuel processor, as good packaging of this large subsystem has the strongest impact on overall size. The fuel processor system consists of an autothermal reformer in combination with water-gas shift and selective oxidation stages, based on adiabatic reactors with inter-cooling. The configuration was realized in a laboratory set-up and experimentally investigated. The results gained from this confirm a general suitability for mobile applications. A start-up time of 30 min was measured, while a potential reduction to 10 min seems feasible. An overall fuel processor efficiency of about 77% was measured. On the basis of the know-how gained by the experimental investigation of the laboratory set-up, a packaging concept was developed. Using state-of-the-art catalyst and heat exchanger technology, the volumes of these components are fixed. However, the overall volume is higher mainly due to mixing zones and flow ducts, which do not contribute to the chemical or thermal function of the system. Thus, the concept developed mainly focuses on minimization of those

  17. A Geometric Algebra Co-Processor for Color Edge Detection

    Directory of Open Access Journals (Sweden)

    Biswajit Mishra

    2015-01-01

    Full Text Available This paper describes an advancement in color edge detection, using a dedicated Geometric Algebra (GA) co-processor implemented on an Application Specific Integrated Circuit (ASIC). GA provides a rich set of geometric operations, giving the advantage that many signal and image processing operations become straightforward and the algorithms intuitive to design. The use of GA allows images to be represented with the three R, G, B color channels defined as a single entity, rather than separate quantities. A novel custom ASIC is proposed and fabricated that directly targets GA operations and results in significant performance improvement for color edge detection. Use of the hardware described in this paper also shows that the convolution operation with the rotor masks within GA belongs to a class of linear vector filters and can be applied to image or speech signals. The contribution of the proposed approach has been demonstrated by implementing three different types of edge detection schemes on the proposed hardware. The overall performance gains using the proposed GA co-processor over existing software approaches are more than 3.2× faster than GAIGEN and more than 2800× faster than GABLE. The performance of the fabricated GA co-processor is approximately an order of magnitude faster than previously published results for hardware implementations.

  18. Microlens array processor with programmable weight mask and direct optical input

    Science.gov (United States)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as a summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time allowing adaptive optical signal processing.
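
    The masked-summation principle described above (one coefficient per microlens image, each weighted by one mask pattern) can be mimicked in software as below; the 4 x 4 Walsh-Hadamard basis, the tile replication and the use of signed +1/-1 weights (which an intensity mask would have to realise with two exposures or an offset) are assumptions for this sketch, not the system's actual mask design.

      import numpy as np

      def hadamard(n):
          """Naturally ordered Hadamard matrix of size n x n (n a power of two)."""
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def masked_walsh_transform(image, order=4):
          """Walsh coefficients obtained as mask-weighted sums over the image."""
          H = hadamard(order)
          img = np.asarray(image, float)
          ry, rx = img.shape[0] // order, img.shape[1] // order
          coeffs = np.empty((order, order))
          for u in range(order):
              for v in range(order):
                  # 2-D Walsh basis mask = outer product of two Hadamard rows,
                  # replicated to image size, then summed against the image.
                  mask = np.kron(np.outer(H[u], H[v]), np.ones((ry, rx)))
                  coeffs[u, v] = np.sum(img * mask)
          return coeffs

      patch = np.random.rand(16, 16)
      print(masked_walsh_transform(patch).round(2))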

  19. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
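
    A conventional software baseline for the lane-segment step described above is sketched here with OpenCV (assumed available as cv2); Canny edge detection plus the standard probabilistic Hough transform stand in for the paper's gradient-direction edge detector and improved Hough fitting, and all thresholds are illustrative.

      import cv2
      import numpy as np

      def detect_lane_segments(frame_bgr):
          """Return candidate lane-marking segments (x1, y1, x2, y2) in one frame."""
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 80, 160)
          segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                     threshold=40, minLineLength=40, maxLineGap=20)
          lanes = []
          if segments is not None:
              for x1, y1, x2, y2 in segments[:, 0]:
                  slope = (y2 - y1) / (x2 - x1 + 1e-6)
                  if 0.3 < abs(slope) < 3.0:       # drop near-horizontal clutter
                      lanes.append((int(x1), int(y1), int(x2), int(y2)))
          return lanes

      # The segment endpoints can then feed a vanishing-point estimate and the
      # decision-tree classification of lane semantics described in the paper.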

  20. WE-E-18A-11: Fluoro-Tomographic Images From Projections of On-Board Imager (OBI) While Gantry Is Moving

    Energy Technology Data Exchange (ETDEWEB)

    Yi, B; Hu, E; Yu, C; Lasio, G [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)

    2014-06-15

    Purpose: A method to generate a series of fluoro-tomographic images (FTI) of the slice of interest (SOI) from the projection images of the On-board imager (OBI) while the gantry is moving is developed and tested. Methods: Tomographic imaging via background subtraction (TIBS) has been published by our group. TIBS uses a priori anatomical information from a previous CT scan to isolate an SOI from a planar kV image by factoring out the attenuations by tissues outside the SOI (background). We extended the idea to 4D TIBS, which enables SOI images to be generated from projections at different gantry angles. A set of background images for different angles is prepared. A background image at a given gantry angle is subtracted from the projection image at the same angle to generate a TIBS image. Then the TIBS image is converted to a reference angle. The 4D TIBS is the set of TIBS that originated from gantry angles other than the reference angle. Projection images of lung patients for CBCT acquisition are used to test the 4D TIBS. Results: Fluoroscopic images of a coronal plane of lung patients are acquired from the CBCT projections at different gantry angles and times. Changes in the morphology of hilar vessels due to breathing and heartbeat are visible in the coronal plane, which are generated from the set of the projection images at gantry angles other than antero-posterior. No breathing surrogate or sorting process is needed. Unlike tomosynthesis, FTI from 4D TIBS maintains the independence of each projection, thereby revealing temporal variations within the SOI. Conclusion: FTI, fluoroscopic imaging of an SOI generated directly from x-ray projection images at different gantry angles, is tested with a lung case and proven feasible. This technique can be used for on-line imaging of moving targets. NIH Grant R01CA133539.
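
    The core arithmetic of a TIBS frame is a subtraction in the line-integral (log) domain; the sketch below shows only that step, with the conversion of each frame to the reference gantry angle (a geometric resampling) omitted. Array names are illustrative assumptions.

```python
import numpy as np

def tibs_frame(projection_log, background_log):
    """One TIBS frame (sketch).
    projection_log -- measured projection at one gantry angle, as -log(I/I0)
    background_log -- forward projection of the prior CT with the slice of
                      interest removed, computed for the same gantry angle
    Subtracting the two isolates the attenuation contributed by the SOI."""
    return np.clip(projection_log - background_log, 0.0, None)
```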

  1. Automotive Fuel Processor Development and Demonstration with Fuel Cell Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nuvera Fuel Cells

    2005-04-15

    The potential for fuel cell systems to improve energy efficiency and reduce emissions over conventional power systems has generated significant interest in fuel cell technologies. While fuel cells are being investigated for use in many applications such as stationary power generation and small portable devices, transportation applications present some unique challenges for fuel cell technology. Due to their lower operating temperature and non-brittle materials, most transportation work is focusing on fuel cells using proton exchange membrane (PEM) technology. Since PEM fuel cells are fueled by hydrogen, major obstacles to their widespread use are the lack of an available hydrogen fueling infrastructure and hydrogen's relatively low energy storage density, which leads to a much lower driving range than conventional vehicles. One potential solution to the hydrogen infrastructure and storage density issues is to convert a conventional fuel such as gasoline into hydrogen onboard the vehicle using a fuel processor. Figure 2 shows that gasoline stores roughly 7 times more energy per volume than pressurized hydrogen gas at 700 bar and 4 times more than liquid hydrogen. If integrated properly, the fuel processor/fuel cell system would also be more efficient than traditional engines and would give a fuel economy benefit while hydrogen storage and distribution issues are being investigated. Widespread implementation of fuel processor/fuel cell systems requires improvements in several aspects of the technology, including size, startup time, transient response time, and cost. In addition, the ability to operate on a number of hydrocarbon fuels that are available through the existing infrastructure is a key enabler for commercializing these systems. In this program, Nuvera Fuel Cells collaborated with the Department of Energy (DOE) to develop efficient, low-emission, multi-fuel processors for transportation applications. Nuvera's focus was on (1) developing fuel

  2. Toward an Ultralow-Power Onboard Processor for Tongue Drive System.

    Science.gov (United States)

    Viseh, Sina; Ghovanloo, Maysam; Mohsenin, Tinoosh

    2015-02-01

    The Tongue Drive System (TDS) is a new unobtrusive, wireless, and wearable assistive device that allows for real-time tracking of the voluntary tongue motion in the oral space for communication, control, and navigation applications. The latest TDS prototype appears as a wireless headphone and has been tested in human subject trials. However, the robustness of the external TDS (eTDS) in real-life outdoor conditions may not meet safety regulations because of the limited mechanical stability of the headset. The intraoral TDS (iTDS), which is in the shape of a dental retainer, firmly clasps to the upper teeth and resists sensor misplacement. However, the iTDS has more restrictions on its dimensions, limiting the battery size and consequently requiring a considerable reduction in its power consumption to operate over an extended period of two days on a single charge. In this brief, we propose an ultralow-power local processor for the TDS that performs all signal processing on the transmitter side, following the sensors. Assuming the TDS user issues on average one command/s, implementing the computational engine reduces the data volume that needs to be wirelessly transmitted to a PC or smartphone by a factor of 1500×, from 12 kb/s to ~8 b/s. The proposed design is implemented on an ultralow-power IGLOO nano field-programmable gate array (FPGA) and tested on an AGLN250 prototype board. According to our post-place-and-route results, implementing the engine on the FPGA significantly drops the required data transmission, while an application-specific integrated circuit (ASIC) implementation in a 65-nm CMOS results in a 15× power saving compared to the FPGA solution and occupies a 0.02-mm² footprint. As a result, the power consumption and size of the iTDS will be significantly reduced through the use of a much smaller rechargeable battery. Moreover, the system can operate longer following every recharge, improving the iTDS usability.

  3. WE-G-BRD-06: Volumetric Cine MRI (VC-MRI) Estimated Based On Prior Knowledge for On-Board Target Localization

    International Nuclear Information System (INIS)

    Harris, W; Yin, F; Cai, J; Zhang, Y; Ren, L

    2015-01-01

    Purpose: To develop a technique to generate on-board VC-MRI using patient prior 4D-MRI, motion modeling and on-board 2D-cine MRI for real-time 3D target verification of liver and lung radiotherapy. Methods: The end-expiration phase images of a 4D-MRI acquired during patient simulation are used as patient prior images. Principal component analysis (PCA) is used to extract 3 major respiratory deformation patterns from the Deformation Field Maps (DFMs) generated between end-expiration phase and all other phases. On-board 2D-cine MRI images are acquired in the axial view. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI at the end-expiration phase. The DFM is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by matching the corresponding 2D slice of the estimated VC-MRI with the acquired single 2D-cine MRI. The method was evaluated using both XCAT (a computerized patient model) simulation of lung cancer patients and MRI data from a real liver cancer patient. The 3D-MRI at every phase except end-expiration phase was used to simulate the ground-truth on-board VC-MRI at different instances, and the center-tumor slice was selected to simulate the on-board 2D-cine images. Results: Image subtraction of ground truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground truth with prior image. Excellent agreement between profiles was achieved. The normalized cross correlation coefficients between the estimated and ground-truth in the axial, coronal and sagittal views for each time step were >= 0.982, 0.905, 0.961 for XCAT data and >= 0.998, 0.911, 0.9541 for patient data. For XCAT data, the maximum-Volume-Percent-Difference between ground-truth and estimated tumor volumes was 1.6% and the maximum-Center-of-Mass-Shift was 0.9 mm. Conclusion: Preliminary studies demonstrated the feasibility to estimate real-time VC-MRI for on-board

  4. WE-G-BRD-06: Volumetric Cine MRI (VC-MRI) Estimated Based On Prior Knowledge for On-Board Target Localization

    Energy Technology Data Exchange (ETDEWEB)

    Harris, W; Yin, F; Cai, J; Zhang, Y; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To develop a technique to generate on-board VC-MRI using patient prior 4D-MRI, motion modeling and on-board 2D-cine MRI for real-time 3D target verification of liver and lung radiotherapy. Methods: The end-expiration phase images of a 4D-MRI acquired during patient simulation are used as patient prior images. Principal component analysis (PCA) is used to extract 3 major respiratory deformation patterns from the Deformation Field Maps (DFMs) generated between end-expiration phase and all other phases. On-board 2D-cine MRI images are acquired in the axial view. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI at the end-expiration phase. The DFM is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by matching the corresponding 2D slice of the estimated VC-MRI with the acquired single 2D-cine MRI. The method was evaluated using both XCAT (a computerized patient model) simulation of lung cancer patients and MRI data from a real liver cancer patient. The 3D-MRI at every phase except end-expiration phase was used to simulate the ground-truth on-board VC-MRI at different instances, and the center-tumor slice was selected to simulate the on-board 2D-cine images. Results: Image subtraction of ground truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground truth with prior image. Excellent agreement between profiles was achieved. The normalized cross correlation coefficients between the estimated and ground-truth in the axial, coronal and sagittal views for each time step were >= 0.982, 0.905, 0.961 for XCAT data and >= 0.998, 0.911, 0.9541 for patient data. For XCAT data, the maximum-Volume-Percent-Difference between ground-truth and estimated tumor volumes was 1.6% and the maximum-Center-of-Mass-Shift was 0.9 mm. Conclusion: Preliminary studies demonstrated the feasibility to estimate real-time VC-MRI for on-board
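
    A minimal numerical sketch of the two main ingredients of the VC-MRI estimation described above, PCA of the deformation field maps and a least-squares fit of the mode coefficients to the acquired 2D-cine slice, is given below. It linearizes the slice-matching step, whereas the actual method deforms the prior volume and matches iteratively; all array names and shapes are assumptions.

```python
import numpy as np

def principal_motion_modes(dfm_stack, n_modes=3):
    """Extract the dominant respiratory deformation patterns by PCA (via SVD).
    dfm_stack -- (n_phases, n_voxels*3) deformation fields between the
                 end-expiration phase and every other 4D-MRI phase."""
    mean_dfm = dfm_stack.mean(axis=0)
    _, _, vt = np.linalg.svd(dfm_stack - mean_dfm, full_matrices=False)
    return mean_dfm, vt[:n_modes]            # each row is one motion mode

def fit_mode_weights(slice_basis, cine_slice, prior_slice):
    """Solve for the mode coefficients by matching the acquired 2D-cine slice.
    slice_basis -- (n_modes, n_pixels) linearized effect of each mode on the
                   slice intensities (an assumption: the published method
                   deforms the prior volume and matches iteratively)."""
    residual = (cine_slice - prior_slice).ravel()
    w, *_ = np.linalg.lstsq(slice_basis.T, residual, rcond=None)
    return w
```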

  5. Air-Lubricated Thermal Processor For Dry Silver Film

    Science.gov (United States)

    Siryj, B. W.

    1980-09-01

    Since dry silver film is processed by heat, it may be viewed on a light table only seconds after exposure. On the other hand, wet films require both bulky chemicals and substantial time before an image can be analyzed. Processing of dry silver film, although simple in concept, is not so simple when reduced to practice. The main concern is the effect of film temperature gradients on uniformity of optical film density. RCA has developed two thermal processors, different in implementation but based on the same philosophy. Pressurized air is directed to both sides of the film to support the film and to conduct the heat to the film. Porous graphite is used as the medium through which heat and air are introduced. The initial thermal processor was designed to process 9.5-inch-wide film moving at speeds ranging from 0.0034 to 0.008 inch per second. The processor configuration was curved to match the plane generated by the laser recording beam. The second thermal processor was configured to process 5-inch-wide film moving at a continuously variable rate ranging from 0.15 to 3.5 inches per second. Due to field flattening optics used in this laser recorder, the required film processing area was planar. In addition, this processor was sectioned in the direction of film motion, giving the processor the capability of varying both temperature and effective processing area.

  6. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas

    2017-09-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of the signal-to-noise ratio as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  7. A single chip pulse processor for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Hilsenrath, F.; Bakke, J.C.; Voss, H.D.

    1985-01-01

    A high performance digital pulse processor, integrated into a single gate array microcircuit, has been developed for spaceflight applications. The new approach takes advantage of the latest CMOS high speed A/D flash converters and low-power gated logic arrays. The pulse processor measures pulse height, pulse area and the required timing information (e.g. multi detector coincidence and pulse pile-up detection). The pulse processor features high throughput rate (e.g., 0.5 MHz for 2 µs Gaussian pulses) and improved differential linearity (e.g., ±0.2 LSB for a ±1 LSB A/D). Because of the parallel digital architecture of the device, the interface is microprocessor bus compatible. A satellite flight application of this module is presented for use in the X-ray imager and high energy particle spectrometers of the PEM experiment on the Upper Atmospheric Research Satellite.

  8. Real-Time Adaptive Lossless Hyperspectral Image Compression using CCSDS on Parallel GPGPU and Multicore Processor Systems

    Science.gov (United States)

    Hopson, Ben; Benkrid, Khaled; Keymeulen, Didier; Aranki, Nazeeh; Klimesh, Matt; Kiely, Aaron

    2012-01-01

    The proposed CCSDS (Consultative Committee for Space Data Systems) Lossless Hyperspectral Image Compression Algorithm was designed to facilitate a fast hardware implementation. This paper analyses that algorithm with regard to available parallelism and describes fast parallel implementations in software for GPGPU and Multicore CPU architectures. We show that careful software implementation, using hardware acceleration in the form of GPGPUs or even just multicore processors, can exceed the performance of existing hardware and software implementations by up to 11x and break the real-time barrier for the first time for a typical test application.
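
    The sketch below is not the CCSDS predictor (which uses an adaptive weighted sum over a local spatial/spectral neighbourhood); it only illustrates the underlying idea that inter-band prediction leaves low-entropy residuals, which is what makes the lossless compression, and its per-sample parallelization, effective. Function and variable names are illustrative.

```python
import numpy as np

def interband_residuals(cube):
    """Toy spectral predictor: predict each band from the previous band and
    return the residuals for a (bands, rows, cols) integer cube."""
    residuals = np.empty_like(cube, dtype=np.int32)
    residuals[0] = cube[0]
    residuals[1:] = cube[1:].astype(np.int32) - cube[:-1].astype(np.int32)
    return residuals

def empirical_entropy_bits(residuals):
    """Bits/sample of the residual histogram, a rough proxy for coded size."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```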

  9. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  10. TH-C-17A-06: A Hardware Implementation and Evaluation of Robotic SPECT: Toward Molecular Imaging Onboard Radiation Therapy Machines

    International Nuclear Information System (INIS)

    Yan, S; Touch, M; Bowsher, J; Yin, F; Cheng, L

    2014-01-01

    Purpose: To construct a robotic SPECT system and demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch. The system has potential for on-board functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was developed utilizing a Digirad 2020tc detector and a KUKA KR150-L110 robot. An imaging study was performed with the PET CT Phantom, which includes 5 spheres: 10, 13, 17, 22 and 28 mm in diameter. The Tc-99m sphere-to-background concentration ratio was 6:1. The phantom was placed on a flat-top couch. SPECT projections were acquired with a parallel-hole collimator and a single pinhole collimator. The robotic system navigated the detector tracing the flat-top table to maintain the closest possible proximity to the phantom. For image reconstruction, detector trajectories were described by six parameters: radius-of-rotation, x and z detector shifts, and detector rotation θ, tilt ϕ and twist γ. These six parameters were obtained from the robotic system by calibrating the robot base and tool coordinates. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on detector-to-COR (center-of-rotation) distance. In acquisitions with background at 1/6th sphere activity concentration, photopeak contamination was heavy, yet the 17, 22, and 28 mm diameter spheres were readily observed with the parallel hole imaging, and the single, targeted sphere (28 mm diameter) was readily observed in the pinhole region-of-interest (ROI) imaging. Conclusion: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot inherent coordinate frame could be an effective means to estimate detector pose for use in SPECT image reconstruction. PHS/NIH/NCI grant R21-CA156390-01A1

  11. Green Secure Processors: Towards Power-Efficient Secure Processor Design

    Science.gov (United States)

    Chhabra, Siddhartha; Solihin, Yan

    With the increasing wealth of digital information stored on computer systems today, security issues have become increasingly important. In addition to attacks targeting the software stack of a system, hardware attacks have become equally likely. Researchers have proposed Secure Processor Architectures which utilize hardware mechanisms for memory encryption and integrity verification to protect the confidentiality and integrity of data and computation, even from sophisticated hardware attacks. While there have been many works addressing performance and other system level issues in secure processor design, power issues have largely been ignored. In this paper, we first analyze the sources of power (energy) increase in different secure processor architectures. We then present a power analysis of various secure processor architectures in terms of their increase in power consumption over a base system with no protection and then provide recommendations for designs that offer the best balance between performance and power without compromising security. We extend our study to the embedded domain as well. We also outline the design of a novel hybrid cryptographic engine that can be used to minimize the power consumption for a secure processor. We believe that if secure processors are to be adopted in future systems (general purpose or embedded), it is critically important that power issues are considered in addition to performance and other system level issues. To the best of our knowledge, this is the first work to examine the power implications of providing hardware mechanisms for security.

  12. Comparison of the Spectral Properties of Pansharpened Images Generated from AVNIR-2 and Prism Onboard Alos

    Science.gov (United States)

    Matsuoka, M.

    2012-07-01

    A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
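
    Two of the quantitative indices named above can be computed directly; the sketch below gives standard formulations of ERGAS and the Q (universal image quality) index, with the resolution ratio left as a parameter (2.5 m PRISM versus 10 m AVNIR-2 would give 0.25). These are generic implementations, not the study's own code.

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS spectral-quality score (lower is better).
    reference, fused -- (bands, rows, cols) arrays; ratio -- pan/MS pixel size."""
    n_bands = reference.shape[0]
    acc = 0.0
    for k in range(n_bands):
        rmse = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))
        acc += (rmse / reference[k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)

def q_index(x, y):
    """Universal image quality index of Wang and Bovik for one band (1 is best)."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```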

  13. Probabilistic programmable quantum processors

    International Nuclear Information System (INIS)

    Buzek, V.; Ziman, M.; Hillery, M.

    2004-01-01

    We analyze how to improve the performance of probabilistic programmable quantum processors. We show how the probability of success of the probabilistic processor can be enhanced by using the processor in loops. In addition, we show that an arbitrary SU(2) transformation of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can also be generalized to qudits. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  14. Computations on the massively parallel processor at the Goddard Space Flight Center

    Science.gov (United States)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.

  15. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Ram Asaray SINGH

    2012-01-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper describes the comparison of performance in two main Intel Xeon series processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many ke...

  16. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  17. Event processing in X-IFU detector onboard Athena.

    Science.gov (United States)

    Ceballos, M. T.; Cobos, B.; van der Kuurs, J.; Fraga-Encinas, R.

    2015-05-01

    The X-ray Observatory ATHENA was proposed in April 2014 as the mission to implement the science theme "The Hot and Energetic Universe" selected by ESA for L2 (the second Large-class mission in ESA's Cosmic Vision science programme). One of the two X-ray detectors designed to be onboard ATHENA is X-IFU, a cryogenic microcalorimeter based on Transition Edge Sensor (TES) technology that will provide spatially resolved high-resolution spectroscopy. X-IFU will be developed by a consortium of European research institutions currently from France (leadership), Italy, The Netherlands, Belgium, UK, Germany and Spain. From Spain, IFCA (CSIC-UC) is involved in the Digital Readout Electronics (DRE) unit of the X-IFU detector, in particular in the Event Processor Subsystem. We at IFCA are in charge of the development and implementation in the DRE unit of the Event Processing algorithms, designed to recognize, from a noisy signal, the intensity pulses generated by the absorption of the X-ray photons, and then extract their main parameters (coordinates, energy, arrival time, grade, etc.). Here we will present the design and performance of the algorithms developed for the event recognition (adjusted derivative), and pulse grading/qualification as well as the progress in the algorithms designed to extract the energy content of the pulses (pulse optimal filtering). IFCA will finally have the responsibility of the implementation on board in the (TBD) FPGAs or micro-processors of the DRE unit, where this Event Processing part will take place, to fit into the limited telemetry of the instrument.
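
    As a rough illustration of the event-recognition step, the sketch below triggers on threshold crossings of the signal derivative; the flight "adjusted derivative" algorithm, pile-up handling, grading and optimal filtering are considerably more involved, and the parameter names here are placeholders.

```python
import numpy as np

def detect_pulses(stream, threshold, min_separation=32):
    """Toy event recognition by thresholding the signal derivative.
    stream -- 1-D sampled detector signal; returns indices of candidate pulses."""
    deriv = np.diff(stream)
    above = np.flatnonzero(deriv > threshold)
    triggers, last = [], -min_separation
    for i in above:
        if i - last >= min_separation:      # ignore samples belonging to the same rise
            triggers.append(i)
            last = i
    return np.array(triggers, dtype=int)
```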

  18. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    Full Text Available High performance is a critical requirement for all microprocessor manufacturers. The present paper describes the comparison of performance in two main Intel Xeon series processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many key application areas in modern generation. The scaling of performance in two major series of Intel Xeon processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310) has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand the performance scaling in modern generation processors.

  19. Simulation of a parallel processor on a serial processor: The neutron diffusion equation

    International Nuclear Information System (INIS)

    Honeck, H.C.

    1981-01-01

    Parallel processors could provide the nuclear industry with very high computing power at a very moderate cost. Will we be able to make effective use of this power? This paper explores the use of a very simple parallel processor for solving the neutron diffusion equation to predict power distributions in a nuclear reactor. We first describe a simple parallel processor and estimate its theoretical performance based on the current hardware technology. Next, we show how the parallel processor could be used to solve the neutron diffusion equation. We then present the results of some simulations of a parallel processor run on a serial processor and measure some of the expected inefficiencies. Finally, we extrapolate the results to estimate how actual design codes would perform. We find that the standard numerical methods for solving the neutron diffusion equation are still applicable when used on a parallel processor. However, some simple modifications to these methods will be necessary if we are to achieve the full power of these new computers. (orig.) [de
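
    A Jacobi sweep is the kind of update such a simple parallel processor maps onto naturally, since every mesh point is computed from the previous iterate only. The sketch below solves a one-group 2-D diffusion problem with the flux fixed to zero on the mesh boundary; the coefficients, mesh spacing and sweep count are illustrative defaults, not values from the paper.

```python
import numpy as np

def jacobi_diffusion(source, D=1.0, sigma_a=0.1, h=1.0, sweeps=200):
    """Jacobi sweeps for -D*laplacian(phi) + sigma_a*phi = source on a 2-D mesh.
    Each interior point depends only on the previous iterate, so one sweep can
    be distributed as one mesh point (or block) per processing element."""
    phi = np.zeros_like(source, dtype=float)
    denom = 4.0 * D / h**2 + sigma_a
    for _ in range(sweeps):
        new = phi.copy()
        new[1:-1, 1:-1] = (D / h**2 *
                           (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                            phi[1:-1, :-2] + phi[1:-1, 2:]) +
                           source[1:-1, 1:-1]) / denom
        phi = new
    return phi
```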

  20. Limb clouds and dust on Mars from images obtained by the Visual Monitoring Camera (VMC) onboard Mars Express

    Science.gov (United States)

    Sánchez-Lavega, A.; Chen-Chen, H.; Ordoñez-Etxeberria, I.; Hueso, R.; del Río-Gaztelurrutia, T.; Garro, A.; Cardesín-Moinelo, A.; Titov, D.; Wood, S.

    2018-01-01

    The Visual Monitoring Camera (VMC) onboard the Mars Express (MEx) spacecraft is a simple camera intended to monitor the release of the Beagle-2 lander from Mars Express, later used for public outreach. Here, we employ VMC as a scientific instrument to study and characterize high-altitude aerosol events (dust and condensates) observed at the Martian limb. More than 21,000 images taken between 2007 and 2016 have been examined to detect and characterize elevated layers of dust in the limb, dust storms and clouds. We report a total of 18 events for which we give their main properties (areographic location, maximum altitude, limb projected size, Martian solar longitude and local time of occurrence). The top altitudes of these phenomena ranged from 40 to 85 km and their horizontal extent at the limb ranged from 120 to 2000 km. They mostly occurred at Equatorial and Tropical latitudes (between ∼30°N and 30°S) at morning and afternoon local times in the southern fall and northern winter seasons. None of them are related to the orographic clouds that typically form around volcanoes. Three of these events have been studied in detail using simultaneous images taken by the MARCI instrument onboard Mars Reconnaissance Orbiter (MRO) and studying the properties of the atmosphere using the predictions from the Mars Climate Database (MCD) General Circulation Model. This has allowed us to determine the three-dimensional structure and nature of these events, with one of them being a regional dust storm and the two others water ice clouds. Analyses based on MCD and/or MARCI images for the other cases studied indicate that the rest of the events correspond most probably to water ice clouds.

  1. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

  2. Imaging design of the wide field x-ray monitor onboard the HETE satellite

    International Nuclear Information System (INIS)

    Zand, J.J.M. In'T; Fenimore, E.E.; Kawai, N.; Yoshida, A.; Matsuoka, M.; Yamauchi, M.

    1994-01-01

    The High Energy Transient Experiment (HETE), to be launched in 1995, will study Gamma-Ray Bursts in an unprecedentedly wide wavelength range from Gamma- and X-ray to UV wavelengths. The X-ray range (2 to 25 keV) will be covered by 2 perpendicularly oriented 1-dimensional coded aperture cameras. These instruments cover a wide field of view of 2 sr and thus have a relatively large potential to locate GRBs to a fraction of a degree, which is an order of magnitude better than BATSE. The imaging design of these coded aperture cameras relates to the design of the coded apertures and the decoding algorithm. The aperture pattern is to a large extent determined by the high background in this wide field application and the low number of pattern elements (∼100) in each direction. The result is a random pattern with an open fraction of 33%. The onboard decoding algorithm is dedicated to the localization of a single point source.

  3. Real-time underwater image enhancement: An improved approach ...

    Indian Academy of Sciences (India)

    1School of Mechatronics, CSIR-Central Mechanical Engineering Research Institute, Durgapur 713209, India. 2Robotics and ...... a general purpose computer with Intel core i3 processor, frequency 2.20 ... Adobe Photoshop CS4 software. Table 6. .... 1In general, vision (e.g. camera) aided navigation requires on-board real-.

  4. Functional unit for a processor

    NARCIS (Netherlands)

    Rohani, A.; Kerkhoff, Hans G.

    2013-01-01

    The invention relates to a functional unit for a processor, such as a Very Large Instruction Word Processor. The invention further relates to a processor comprising at least one such functional unit. The invention further relates to a functional unit and processor capable of mitigating the effect of

  5. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μs. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.

  6. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed
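
    A sign-sign ("clipped") LMS update, one common quantized variant of the Widrow-Hoff algorithm, can be sketched as below; whether it matches the exact quantization of Moschner's algorithm used in the hardware is an assumption, and the step size is a placeholder (the report's 64 weight channels are used as the default filter length).

```python
import numpy as np

def clipped_lms(x, d, n_weights=64, mu=2**-8):
    """Sign-sign (clipped) LMS sketch: weights are nudged by +/-mu according to
    the signs of the error and of the delayed input samples, so the update
    needs no multiplier -- the property that makes a hardware realization cheap.
    x -- input samples, d -- desired signal; returns final weights and errors."""
    w = np.zeros(n_weights)
    errors = np.zeros(len(x))
    for n in range(n_weights, len(x)):
        u = x[n - n_weights:n][::-1]        # tapped delay line
        e = d[n] - w @ u
        w += mu * np.sign(e) * np.sign(u)
        errors[n] = e
    return w, errors
```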

  7. Multithreading in vector processors

    Science.gov (United States)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    2018-01-16

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.

  8. A 600-µW ultra-low-power associative processor for image pattern recognition employing magnetic tunnel junction-based nonvolatile memories with autonomic intelligent power-gating scheme

    Science.gov (United States)

    Ma, Yitao; Miura, Sadahiko; Honjo, Hiroaki; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo

    2016-04-01

    A novel associative processor using magnetic tunnel junction (MTJ)-based nonvolatile memories has been proposed and fabricated under a 90 nm CMOS/70 nm perpendicular-MTJ (p-MTJ) hybrid process to achieve exceptionally low-power image pattern recognition. A four-transistor 2-MTJ (4T-2MTJ) spin transfer torque magnetoresistive random access memory was adopted to completely eliminate the standby power. A self-directed intelligent power-gating (IPG) scheme specialized for this associative processor is employed to optimize the operation power by autonomously activating only the currently accessed memory cells. The operations of a prototype chip at 20 MHz are demonstrated by measurement. The proposed processor can successfully carry out single texture pattern matching within 6.5 µs using 128-dimension bag-of-feature patterns, and the measured average operation power of the entire processor core is only 600 µW. Compared with the twin chip designed with 6T static random access memory, 91.2% power reductions are achieved. More than 88.0% power reductions are obtained compared with the latest associative memories. A further power-performance analysis is discussed in detail, verifying the superiority of the proposed processor in power consumption for large-capacity memory-based VLSI systems.

  9. Dual-core Itanium Processor

    CERN Multimedia

    2006-01-01

    Intel’s first dual-core Itanium processor, code-named "Montecito", is a major release of Intel's Itanium 2 Processor Family, which implements the Intel Itanium architecture on a dual-core processor with two cores per die (integrated circuit). Itanium 2 is much more powerful than its predecessor. It has lower power consumption and thermal dissipation.

  10. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future

  11. Real time processor for array speckle interferometry

    International Nuclear Information System (INIS)

    Chin, G.; Florez, J.; Borelli, R.; Fong, W.; Miko, J.; Trujillo, C.

    1989-01-01

    With the construction of several new large-aperture telescopes and the development of large-format array detectors in the near IR, the ability to obtain diffraction-limited imaging via IR array speckle interferometry offers a powerful tool. We are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 × 64 element 2D complex FFT, and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR wavelengths. The processor is a compact unit controlled by a PC with real-time display and data storage capability. It provides the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with off-line methods.
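
    The per-frame pipeline described (flat-fielding, a 64 × 64 complex FFT, and power-spectrum accumulation) is easy to state in software, which also makes clear what must complete inside the ~25 ms coherence time; the sketch below is a plain NumPy version with illustrative array names, not the real-time hardware implementation.

```python
import numpy as np

def average_power_spectrum(frames, flat, dark):
    """Average speckle power spectrum over short-exposure frames (sketch).
    Each frame is dark-subtracted, flat-fielded, Fourier transformed, and its
    squared modulus accumulated; frames, flat and dark share the same shape."""
    acc = np.zeros(frames[0].shape)
    for f in frames:
        corrected = (f - dark) / flat
        acc += np.abs(np.fft.fft2(corrected)) ** 2
    return acc / len(frames)
```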

  12. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, Jacob [ORNL; Kerekes, Ryan A [ORNL; ST Charles, Jesse Lee [ORNL; Buckner, Mark A [ORNL

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high-parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5 × 5 cm²) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to a precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core

  13. MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY

    International Nuclear Information System (INIS)

    Barhen, Jacob; Kerekes, Ryan A.; St Charles, Jesse Lee; Buckner, Mark A.

    2008-01-01

    High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper will highlight recent experience with four different parallel processors applied to signal processing tasks that are directly relevant to signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM CELL Broadband Engine applied to 2-D discrete Fourier transform (DFT) kernel for image processing and frequency domain processing. And the third is the NVIDIA graphical processor applied to document feature clustering. EnLight Optical Core Processor. Optical processing is inherently capable of high-parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5 × 5 cm²) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to a precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and on applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision. This is approximately 1000 times faster than the fastest DSP available today. The optical core

  14. 'Iconic' tracking algorithms for high energy physics using the TRAX-I massively parallel processor

    International Nuclear Information System (INIS)

    Vesztergombi, G.

    1989-01-01

    TRAX-I, a cost-effective parallel microcomputer, applying associative string processor (ASP) architecture with 16 K parallel processing elements, is being built by Aspex Microsystems Ltd. (UK). When applied to the tracking problem of very complex events with several hundred tracks, the large number of processors allows one to dedicate one or more processors to each wire (in MWPC), each pixel (in digitized images from streamer chambers or other visual detectors), or each pad (in TPC) to perform very efficient pattern recognition. Some linear tracking algorithms based on this 'iconic' representation are presented. (orig.)

  15. 'Iconic' tracking algorithms for high energy physics using the TRAX-I massively parallel processor

    International Nuclear Information System (INIS)

    Vestergombi, G.

    1989-11-01

    TRAX-I, a cost-effective parallel microcomputer, applying Associative String Processor (ASP) architecture with 16 K parallel processing elements, is being built by Aspex Microsystems Ltd. (UK). When applied to the tracking problem of very complex events with several hundred tracks, the large number of processors allows one to dedicate one or more processors to each wire (in MWPC), each pixel (in digitized images from streamer chambers or other visual detectors), or each pad (in TPC) to perform very efficient pattern recognition. Some linear tracking algorithms based on this 'iconic' representation are presented. (orig.)

  16. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    Science.gov (United States)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
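
    Algorithm-based fault tolerance for matrix multiplication works by carrying checksums through the computation; the sketch below shows the classic row/column-checksum construction, which detects (though does not locate or correct) an error in the product. It is a generic illustration of ABFT, not the BITFLIPS code or the flight implementation.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-6):
    """Algorithm-based fault tolerance for C = A @ B (sketch).
    A checksum row is appended to A and a checksum column to B; after the
    multiply, the checksums of the result must still equal its row and column
    sums, so a radiation-induced bit flip in the product shows up as a
    mismatch without re-running the whole computation."""
    Ac = np.vstack([A, A.sum(axis=0)])                    # extra row of column sums
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])     # extra column of row sums
    Cf = Ac @ Br
    C = Cf[:-1, :-1]
    row_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
    col_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
    return C, (row_ok and col_ok)
```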

  17. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organizing map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design functions correctly.

  18. Analysis of image factors of x-ray films : study for the intelligent replenishment system of automatic film processor

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sung Tae; Yoon, Chong Hyun; Park, Kwang BO; Auh, Yong Ho; Lee, Hyoung Jin; In, Kyung Hwan; Kim, Keon Chung [Asan Medical Center, Ulsan Univ. College of Medicine, Ulsan (Korea, Republic of)

    1998-06-01

    We analyzed image factors to determine the characteristic factors needed for an intelligent replenishment system for the automatic film processor. We processed a series of 300 sheets of radiographic film of a chest phantom without replenishing the developer and fixer. We acquired digital data using a film digitizer, which scanned the films and automatically summed the pixel values of each film. We analyzed the characteristic curves, average gradients and relative speeds of individual films using a densitometer and step densitometry. We also evaluated the pH of the developer, fixer, and washer fluid with a digital pH meter. Fixer residual rate and washing effect were measured with a densitometer using reagent methods. There was no significant reduction in the digital density numbers of the serial films without replenishment of developer and fixer. The average gradients gradually decreased by 0.02 and the relative speeds gradually decreased by 6.96% relative to the initial standard step-densitometric measurement. The pHs of the developer and fixer reflected the inactivation of each fluid. The fixer residual rates and washing effects measured after every 25 sheets of film were in the normal range. We suggest that the digital data are not reliable due to limitations of the hardware and software of the film digitizer. We conclude that average gradient and relative speed, which represent the film's contrast and sensitivity respectively, are reliable factors for determining the need for replenishment of the automatic film processor. Further study of simpler equations and programming is needed for a more intelligent replenishment system of the automatic film processor.

  19. SU-D-18A-02: Towards Real-Time On-Board Volumetric Image Reconstruction for Intrafraction Target Verification in Radiation Therapy

    International Nuclear Information System (INIS)

    Xu, X; Iliopoulos, A; Zhang, Y; Pitsianis, N; Sun, X; Yin, F; Ren, L

    2014-01-01

    Purpose: To expedite on-board volumetric image reconstruction from limited-angle kV—MV projections for intrafraction verification. Methods: A limited-angle intrafraction verification (LIVE) system has recently been developed for real-time volumetric verification of moving targets, using limited-angle kV—MV projections. Currently, it is challenged by the intensive computational load of the prior-knowledge-based reconstruction method. To accelerate LIVE, we restructure the software pipeline to make it adaptable to model and algorithm parameter changes, while enabling efficient utilization of rapidly advancing, modern computer architectures. In particular, an innovative two-level parallelization scheme has been designed: At the macroscopic level, data and operations are adaptively partitioned, taking into account algorithmic parameters and the processing capacity or constraints of underlying hardware. The control and data flows of the pipeline are scheduled in such a way as to maximize operation concurrency and minimize total processing time. At the microscopic level, the partitioned functions act as independent modules, operating on data partitions in parallel. Each module is pre-parallelized and optimized for multi-core processors (CPUs) and graphics processing units (GPUs). Results: We present results from a parallel prototype, where most of the controls and module parallelization are carried out via Matlab and its Parallel Computing Toolbox. The reconstruction is 5 times faster on a data-set of twice the size, compared to recently reported results, without compromising on algorithmic optimization control. Conclusion: The prototype implementation and its results have served to assess the efficacy of our system concept. While a production implementation will yield much higher processing rates by approaching full-capacity utilization of CPUs and GPUs, some mutual constraints between algorithmic flow and architecture specifics remain. Based on a careful analysis

  20. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.

  1. The Molen Polymorphic Media Processor

    NARCIS (Netherlands)

    Kuzmanov, G.K.

    2004-01-01

    In this dissertation, we address high performance media processing based on a tightly coupled co-processor architectural paradigm. More specifically, we introduce a reconfigurable media augmentation of a general purpose processor and implement it into a fully operational processor prototype. The

  2. Spectrally and Radiometrically Stable, Wideband, Onboard Calibration Source

    Science.gov (United States)

    Coles, James B.; Richardson, Brandon S.; Eastwood, Michael L.; Sarture, Charles M.; Quetin, Gregory R.; Porter, Michael D.; Green, Robert O.; Nolte, Scott H.; Hernandez, Marco A.; Knoll, Linley A.

    2013-01-01

    The Onboard Calibration (OBC) source incorporates a medical/scientific-grade halogen source with a precisely designed fiber coupling system, and a fiber-based intensity-monitoring feedback loop that results in radiometric and spectral stabilities to within less than 0.3 percent over a 15-hour period. The airborne imaging spectrometer systems developed at the Jet Propulsion Laboratory incorporate OBC sources to provide auxiliary in-use system calibration data. The use of the OBC source will provide a significant increase in the quantitative accuracy, reliability, and resulting utility of the spectral data collected from current and future imaging spectrometer instruments.

  3. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
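
    The following is a hedged, single-threaded sketch of time-domain backprojection, the operation the paper accelerates; the geometry, sampling parameters, and names are simplifying assumptions, not the authors' optimized implementation.

```python
# Minimal time-domain SAR backprojection sketch: for each pulse, compute the
# pixel-to-antenna range, look up the corresponding range-compressed sample,
# apply the matched phase, and accumulate into the image grid.
import numpy as np

def backproject(pulses, platform_pos, grid_x, grid_y, r0, dr, wavelength):
    """pulses: (n_pulses, n_range_bins) complex range-compressed data;
    platform_pos: (n_pulses, 3) antenna positions; r0, dr: first-bin range and spacing."""
    img = np.zeros((grid_y.size, grid_x.size), dtype=np.complex128)
    X, Y = np.meshgrid(grid_x, grid_y)
    for p in range(pulses.shape[0]):
        dx = X - platform_pos[p, 0]
        dy = Y - platform_pos[p, 1]
        dz = -platform_pos[p, 2]
        rng = np.sqrt(dx * dx + dy * dy + dz * dz)          # pixel-to-antenna range
        bins = np.clip(((rng - r0) / dr).astype(int), 0, pulses.shape[1] - 1)
        phase = np.exp(4j * np.pi * rng / wavelength)        # two-way phase correction
        img += pulses[p, bins] * phase                       # accumulate contribution
    return img
```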

  4. Acceleration of iterative tomographic reconstruction using graphics processors

    International Nuclear Information System (INIS)

    Belzunce, M.A.; Osorio, A.; Verrastro, C.A.

    2009-01-01

    Using iterative algorithms for image reconstruction in 3D Positron Emission Tomography has been shown to produce images with better quality than analytical methods. However, these algorithms are computationally expensive. New Graphics Processing Units (GPUs) provide high performance at low cost, along with programming tools that make it possible to execute parallel algorithms easily in scientific applications. In this work, we aim to accelerate image reconstruction algorithms in 3D PET by using a GPU. A parallel implementation of the ML-EM 3D algorithm was developed using the Siddon algorithm as projector and back-projector. Results show that accelerations of more than one order of magnitude can be achieved while keeping similar image quality. (author)
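
    Below is a minimal sketch of the ML-EM update rule referred to above, assuming a generic (dense) system matrix in place of the Siddon ray-tracing projector; it illustrates the algorithm only, not the GPU implementation.

```python
# ML-EM iteration sketch with an explicit system matrix A.
import numpy as np

def mlem(A, measured, n_iters=20, eps=1e-12):
    """A: (n_lors, n_voxels) system matrix; measured: (n_lors,) coincidence counts."""
    x = np.ones(A.shape[1])                 # uniform initial image
    sensitivity = A.sum(axis=0) + eps       # backprojection of ones
    for _ in range(n_iters):
        forward = A @ x + eps               # forward projection of current image
        ratio = measured / forward          # measured-to-estimated ratio per LOR
        x *= (A.T @ ratio) / sensitivity    # multiplicative EM update
    return x
```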

  5. Update on the Status of the Space Telescope Imaging Spectrograph onboard the Hubble Space Telescope

    Science.gov (United States)

    Hernandez, Svea; Aloisi, A.; Bostroem, K. A.; Cox, C.; Debes, J. H.; DiFelice, A.; Roman-Duval, J.; Hodge, P.; Holland, S.; Lindsay, K.; Lockwood, S. A.; Mason, E.; Oliveira, C. M.; Penton, S. V.; Proffitt, C. R.; Sonnentrucker, P.; Taylor, J. M.; Wheeler, T.

    2013-06-01

    The Space Telescope Imaging Spectrograph (STIS) has been on orbit for approximately 16 years as one of the 2nd generation instruments on the Hubble Space Telescope (HST). Its operations were interrupted by an electronics failure in 2004, but STIS was successfully repaired in May 2009 during Service Mission 4 (SM4) allowing it to resume science observations. The Instrument team continues to monitor its performance and work towards improving the quality of its products. Here we present updated information on the status of the FUV and NUV MAMA and the CCD detectors onboard STIS and describe recent changes to the STIS calibration pipeline. We also discuss the status of efforts to apply a pixel-based correction for charge transfer inefficiency (CTI) effects to STIS CCD data. These techniques show promise for ameliorating the effects of ongoing radiation damage on the quality of STIS CCD data.

  6. A digital-signal-processor-based optical tomographic system for dynamic imaging of joint diseases

    Science.gov (United States)

    Lasker, Joseph M.

    Over the last decade, optical tomography (OT) has emerged as a viable biomedical imaging modality. Various imaging systems have been developed that are employed in preclinical as well as clinical studies, mostly targeting breast imaging, brain imaging, and cancer-related studies. Of particular interest are so-called dynamic imaging studies, where one attempts to image changes in optical properties and/or physiological parameters as they occur during a system perturbation. To successfully perform dynamic imaging studies, great effort is put towards system development that offers increasingly enhanced signal-to-noise performance at ever shorter data acquisition times, thus capturing high-fidelity tomographic data within narrower time periods. Towards this goal, I have developed in this thesis a dynamic optical tomography system that is, unlike currently available analog instrumentation, based on digital data acquisition and filtering techniques. At the core of this instrument is a digital signal processor (DSP) that collects, collates, and processes the digitized data set. Complementary protocols between the DSP and a complex programmable logic device synchronize the sampling process and organize data flow. Instrument control is implemented through a comprehensive graphical user interface which integrates automated calibration, data acquisition, and signal post-processing. Real-time data is generated at frame rates as high as 140 Hz. An extensive dynamic range (˜190 dB) accommodates a wide scope of measurement geometries and tissue types. Performance analysis demonstrates very low system noise (˜1 pW rms noise equivalent power), excellent signal precision (˜0.04%–0.2%) and long-term system stability (˜1% over 40 min). Experiments on tissue phantoms validate the spatial and temporal accuracy of the system. As a potential new application of dynamic optical imaging I present the first application of this method to use vascular hemodynamics as a means of characterizing

  7. JPP: A Java Pre-Processor

    OpenAIRE

    Kiniry, Joseph R.; Cheong, Elaine

    1998-01-01

    The Java Pre-Processor, or JPP for short, is a parsing pre-processor for the Java programming language. Unlike its namesake (the C/C++ Pre-Processor, cpp), JPP provides functionality above and beyond simple textual substitution. JPP's capabilities include code beautification, code standard conformance checking, class and interface specification and testing, and documentation generation.

  8. Ellerman bombs observed with the new vacuum solar telescope and the atmospheric imaging assembly onboard the solar dynamics observatory

    Science.gov (United States)

    Chen, Yajie; Tian, Hui; Xu, Zhi; Xiang, Yongyuan; Fang, Yuliang; Yang, Zihao

    2017-12-01

    Ellerman bombs (EBs) are believed to be small-scale reconnection events occurring around the temperature minimum region in the solar atmosphere. They are often identified as significant enhancements in the extended Hα wings without obvious signatures in the Hα core. Here we explore the possibility of using the 1700 Å images taken by the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) to study EBs. From the Hα wing images obtained with the New Vacuum Solar Telescope (NVST) on 2015 May 2, we have identified 145 EBs and 51% of them clearly correspond to the bright points (BPs) in the AIA 1700 Å images. If we resize the NVST images using a linear interpolation to make the pixel sizes of the AIA and NVST images the same, some previously identified EBs disappear and about 71% of the remaining EBs are associated with BPs. Meanwhile, 66% of the compact brightenings in the AIA 1700 Å images can be identified as EBs in the Hα wings. The intensity enhancements of the EBs in the Hα wing images reveal a linear correlation with those of the BPs in the AIA 1700 Å images. Our study suggests that a significant fraction of EBs can be observed with the AIA 1700 Å filter, which is promising for large-sample statistical study of EBs as the seeing-free and full-disk SDO/AIA data are routinely available.

  9. A Ground-Based Validation System of Teleoperation for a Space Robot

    Directory of Open Access Journals (Sweden)

    Xueqian Wang

    2012-10-01

    Full Text Available Teleoperation of space robots is very important for future on-orbit servicing. In order to ensure that tasks are accomplished successfully, ground experiments are required to verify the function and validity of the teleoperation system before a space robot is launched. In this paper, a ground-based validation subsystem is developed as part of a teleoperation system. The subsystem is mainly composed of four parts: the input verification module, the onboard verification module, the dynamic and image workstation, and the communication simulator. The input verification module, consisting of the hardware and software of the master, is used to verify the input ability. The onboard verification module, consisting of the same hardware and software as the onboard processor, is used to verify the processor's computing ability and execution schedule. In addition, the dynamic and image workstation calculates the dynamic response of the space robot and target, and generates emulated camera images, including the hand-eye cameras, global-vision camera and rendezvous camera. The communication simulator provides realistic communication conditions, i.e., time delays and communication bandwidth. Lastly, we integrated the teleoperation system and conducted many experiments on it. Experimental results show that the ground system is very useful for verifying teleoperation technology.

  10. Supertracker: A Programmable Parallel Pipeline Arithmetic Processor For Auto-Cueing Target Processing

    Science.gov (United States)

    Mack, Harold; Reddi, S. S.

    1980-04-01

    Supertracker represents a programmable parallel pipeline computer architecture that has been designed to meet the real-time image processing requirements of auto-cueing target data processing. The prototype breadboard currently under development is designed to perform input video preprocessing and processing for 525-line and 875-line TV format FLIR video, automatic display gain and contrast control, and automatic target cueing, classification, and tracking. The video preprocessor is capable of performing operations on full frames of video data in real time, e.g., frame integration, storage, 3 x 3 convolution, and neighborhood processing. The processor architecture is being implemented using bit-slice microprogrammable arithmetic processors operating in parallel. Each processor is capable of up to 20 million operations per second. Multiple frame memories are used for additional flexibility.

  11. Producing chopped firewood with firewood processors

    International Nuclear Information System (INIS)

    Kaerhae, K.; Jouhiaho, A.

    2009-01-01

    The TTS Institute's research and development project studied both the productivity of new, chopped firewood processors (cross-cutting and splitting machines) suitable for professional and independent small-scale production, and the costs of the chopped firewood produced. Seven chopped firewood processors were tested in the research, six of which were sawing processors and one a shearing processor. The chopping work was carried out using wood feeding racks and a wood lifter. The work was also carried out without any feeding appliances. Altogether 132.5 solid m³ of wood were chopped in the time studies. The firewood processor used had the most significant impact on chopping work productivity. In addition to the firewood processor, the stem mid-diameter, the length of the raw material, and the length of the firewood were also found to affect productivity. The wood feeding systems also affected productivity. If there is a feeding rack and hydraulic grapple loader available for use in chopping firewood, then it is worth using the wood feeding rack. A wood lifter is only worth using with the largest stems (over 20 cm mid-diameter) if a feeding rack cannot be used. When producing chopped firewood from small-diameter wood, i.e. with a mid-diameter less than 10 cm, the costs of chopping work were over 10 EUR solid m⁻³ with sawing firewood processors. The shearing firewood processor with a guillotine blade achieved a cost level of 5 EUR solid m⁻³ when the mid-diameter of the chopped stem was 10 cm. In addition to the raw material, cost-efficient chopping work also requires several hundred annual operating hours with a firewood processor, which is difficult for individual firewood entrepreneurs to achieve. The operating hours of firewood processors can be increased to the required level by the joint use of the processors by a number of firewood entrepreneurs. (author)

  12. SmartScan: a robust pushbroom imaging concept for moderate spacecraft attitude stability

    Science.gov (United States)

    Janschek, K.; Tchernykh, V.; Dyblenko, S.; Harnisch, B.

    2017-11-01

    Pushbroom scan cameras with linear image sensors, commonly used for Earth observation from satellites, require high attitude stability during the image acquisition. Especially noticeable are the effects of high frequency attitude variations originating from micro shocks and vibrations, produced by momentum and reaction wheels, mechanically activated coolers, steering and deployment mechanics and other reasons. The SMARTSCAN imaging concept offers high quality imaging even with moderate satellite attitude stability on a sole opto-electronic basis without any moving parts. It uses real-time recording of the actual image motion in the focal plane of the remote sensing camera during the frame acquisition and a posteriori correction of the obtained image distortions on base of the image motion record. Exceptional real-time performances with subpixel accuracy image motion measurement are provided by an innovative high-speed onboard optoelectronic correlation processor. SMARTSCAN allows therefore using smart pushbroom cameras for hyper-spectral imagers on satellites and platforms which are not specially intended for imaging missions, e.g. micro satellites. The paper gives an overview on the system concept and main technologies used (advanced optical correlator for ultra high-speed image motion tracking), it discusses the conceptual design for a smart compact space camera and it reports on airborne test results of a functional breadboard model.
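
    As a rough illustration of the correlation-based image-motion measurement that the onboard optical correlator performs, the sketch below estimates the shift between two successive focal-plane frames with FFT-based phase correlation; it returns only integer-pixel shifts, whereas the real processor reaches subpixel accuracy, and all names are assumptions.

```python
# Phase-correlation shift estimate between two focal-plane frames.
import numpy as np

def measure_shift(frame_a, frame_b):
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12      # normalize to phase only
    corr = np.fft.ifft2(cross_power).real           # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak indices to signed shifts (FFT wrap-around convention).
    shifts = [int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)                             # estimated (row, column) translation
```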

  13. Evaluation of Algorithms for Compressing Hyperspectral Data

    Science.gov (United States)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and input from NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.

  14. AMD's 64-bit Opteron processor

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included.BiographiesDavid RichDavid directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  15. Thermal Imaging Performance of TIR Onboard the Hayabusa2 Spacecraft

    Science.gov (United States)

    Arai, Takehiko; Nakamura, Tomoki; Tanaka, Satoshi; Demura, Hirohide; Ogawa, Yoshiko; Sakatani, Naoya; Horikawa, Yamato; Senshu, Hiroki; Fukuhara, Tetsuya; Okada, Tatsuaki

    2017-07-01

    The thermal infrared imager (TIR) is a thermal infrared camera onboard the Hayabusa2 spacecraft. TIR will perform thermography of a C-type asteroid, 162173 Ryugu (1999 JU3), and estimate its surface physical properties, such as surface thermal emissivity ε, surface roughness, and thermal inertia Γ, through remote in-situ observations in 2018 and 2019. In prelaunch tests of TIR, detector calibrations and evaluations, along with imaging demonstrations, were performed. The present paper introduces the experimental results of a prelaunch test conducted using a large-aperture collimator in conjunction with TIR under atmospheric conditions. A blackbody source, controlled at constant temperature, was measured using TIR in order to construct a calibration curve for obtaining temperatures from the observed digital data. As a known thermal emissivity target, a sandblasted black almite plate warmed from the back using a flexible heater was measured by TIR in order to evaluate the accuracy of the calibration curve. As an analog target of a C-type asteroid, carbonaceous chondrites (50 mm × 2 mm in thickness) were also warmed from the back and measured using TIR in order to clarify the imaging performance of TIR. The calibration curve, which was fitted by a specific model of the Planck function, allowed for conversion to the target temperature within an error of 1°C (3σ standard deviation) for the temperature range of 30 to 100°C. The observed temperature of the black almite plate was consistent with the temperature measured using K-type thermocouples, within the accuracy of temperature conversion using the calibration curve, where the temperature variation exhibited a random error of 0.3°C (1σ) for each pixel at a target temperature of 50°C. TIR can resolve the fine surface structure of meteorites, including cracks and pits, with the specified field of view of 0.051° (328 × 248 pixels). There were spatial distributions with a temperature variation of 3°C at the setting
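
    A hedged sketch of how a Planck-function-based calibration curve of this kind might be fitted and inverted: blackbody counts are modeled with a single effective-wavelength Planck term plus an offset. The constants, effective wavelength, and synthetic data are assumptions for illustration, not TIR's actual calibration model.

```python
# Fit detector counts vs. blackbody temperature with a Planck-like model,
# then invert the fitted curve numerically to recover target temperature.
import numpy as np
from scipy.optimize import curve_fit

C2 = 1.4388e-2          # second radiation constant [m K]
LAMBDA_EFF = 10e-6      # assumed effective wavelength of the thermal band [m]

def planck_model(T_kelvin, gain, offset):
    return gain / (np.exp(C2 / (LAMBDA_EFF * T_kelvin)) - 1.0) + offset

# Synthetic blackbody calibration points (temperatures 30-100 degC).
T_bb = np.linspace(303.0, 373.0, 15)
counts = planck_model(T_bb, 3.2e5, 850.0) + np.random.normal(0.0, 2.0, T_bb.size)

popt, _ = curve_fit(planck_model, T_bb, counts, p0=(1e5, 0.0))

def counts_to_temperature(dn, lo=250.0, hi=420.0):
    """Invert the fitted calibration curve by a fine grid search [K]."""
    grid = np.linspace(lo, hi, 20000)
    return grid[np.argmin(np.abs(planck_model(grid, *popt) - dn))]

print(counts_to_temperature(planck_model(323.15, *popt)))  # ~323 K expected
```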

  16. Simulated and measured performance of a real-time processor for RFI detection and mitigation on-board spaceborne microwave radiometers

    DEFF Research Database (Denmark)

    Skou, Niels; Kristensen, Steen Savstrup; Søbjærg, Sten Schmidl

    2017-01-01

    An RFI processor breadboard has been designed and developed for future spaceborne microwave radiometer systems. RFI detection is based on the anomalous amplitude, kurtosis, and cross-frequency algorithms. These are implemented in VHDL code in an FPGA. Thus algorithm performance can be assessed by proper code simulation. The breadboard has been integrated with a Ku band radiometer subjected to RFI-like signals from a laboratory generator. Simulations show that the algorithms as implemented work according to theory when subjected to pulsed sinusoidal and QPSK signals. The laboratory measurements
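
    A small sketch of the kurtosis test mentioned above: for Gaussian (RFI-free) radiometer samples the kurtosis is 3, and pulsed or modulated interference drives it away from that value. The block length and threshold are illustrative assumptions, not the values implemented in the breadboard.

```python
# Kurtosis-based RFI flagging on blocks of radiometer samples.
import numpy as np

def kurtosis(x):
    x = x - x.mean()
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return m4 / (m2 ** 2)

def flag_rfi(samples, block=1024, tol=0.5):
    """Return one boolean per block: True where kurtosis deviates from 3."""
    n_blocks = samples.size // block
    blocks = samples[: n_blocks * block].reshape(n_blocks, block)
    k = np.array([kurtosis(b) for b in blocks])
    return np.abs(k - 3.0) > tol

clean = np.random.normal(size=1 << 16)
rfi = clean.copy()
rfi[8000:8200] += 4.0 * np.sin(2 * np.pi * 0.05 * np.arange(200))  # pulsed sinusoid
print(flag_rfi(clean).sum(), flag_rfi(rfi).sum())  # expect ~0 vs. >=1 flagged blocks
```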

  17. Composable processor virtualization for embedded systems

    NARCIS (Netherlands)

    Molnos, A.M.; Milutinovic, A.; She, D.; Goossens, K.G.W.

    2010-01-01

    Processor virtualization divides a physical processor's time among a set of virtual machines, enabling efficient hardware utilization, application security and allowing co-existence of different operating systems on the same processor. Though initially intended for the server domain, virtualization

  18. EUV high resolution imager on-board solar orbiter: optical design and detector performances

    Science.gov (United States)

    Halain, J. P.; Mazzoli, A.; Rochus, P.; Renotte, E.; Stockman, Y.; Berghmans, D.; BenMoussa, A.; Auchère, F.

    2017-11-01

    The EUV high resolution imager (HRI) channel of the Extreme Ultraviolet Imager (EUI) on-board Solar Orbiter will observe the solar atmospheric layers at 17.4 nm wavelength with a 200 km resolution. The HRI channel is based on a compact two mirrors off-axis design. The spectral selection is obtained by a multilayer coating deposited on the mirrors and by redundant Aluminum filters rejecting the visible and infrared light. The detector is a 2k x 2k array back-thinned silicon CMOS-APS with 10 μm pixel pitch, sensitive in the EUV wavelength range. Due to the instrument compactness and the constraints on the optical design, the channel performance is very sensitive to the manufacturing, alignments and settling errors. A trade-off between two optical layouts was therefore performed to select the final optical design and to improve the mirror mounts. The effect of diffraction by the filter mesh support and by the mirror diffusion has been included in the overall error budget. Manufacturing of mirror and mounts has started and will result in thermo-mechanical validation on the EUI instrument structural and thermal model (STM). Because of the limited channel entrance aperture and consequently the low input flux, the channel performance also relies on the detector EUV sensitivity, readout noise and dynamic range. Based on the characterization of a CMOS-APS back-side detector prototype, showing promising results, the EUI detector has been specified and is under development. These detectors will undergo a qualification program before being tested and integrated on the EUI instrument.

  19. Quality assurance through constancy control for X-ray film processors

    International Nuclear Information System (INIS)

    Weberling, R.

    1982-01-01

    A control method to check the reproducibility of X-ray film processors, and the necessary instruments, is presented. The application of a light sensitometer allows the production of test films daily, independent of X-ray exposures, X-ray film cassettes and X-ray intensifying screens. The optical densities on the test films are read by means of a densitometer and the results are plotted on a special control chart. Limits of ±0.15 in optical density for the Speed Index and ±0.20 for the Contrast Index determine the tolerance range for X-ray film processors. The targets of this control method are uniform image quality, dose reduction and cost savings. (orig.)

  20. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  1. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults and in real-time reallocating resources as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and interspacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As
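
    A toy sketch of the reactive goal decomposition idea: a goal either maps to a primitive action or to alternative decompositions into subgoals, and a failed decomposition triggers a retry with the next alternative. The classes and goal names are invented for illustration and do not reflect the flight architecture.

```python
# Minimal reactive goal decomposition hierarchy.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Goal:
    name: str
    action: Optional[Callable[[], bool]] = None          # primitive achievement check
    decompositions: List[List["Goal"]] = field(default_factory=list)

def achieve(goal: Goal) -> bool:
    """Try each decomposition in order; fall back to the next one on failure."""
    if goal.action is not None:                           # leaf goal: execute primitive
        return goal.action()
    for option in goal.decompositions:                    # alternative decompositions
        if all(achieve(sub) for sub in option):
            return True
    return False                                          # no decomposition succeeded

warm_up = Goal("heater_warmup", action=lambda: True)
point = Goal("point_instrument", action=lambda: True)
observe = Goal("acquire_image", decompositions=[[warm_up, point]])
print(achieve(observe))  # True
```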

  2. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging.

    Science.gov (United States)

    Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning

    2018-04-10

    Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using
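
    The PCA step described above can be sketched as follows: the 3D target trajectory is decomposed into principal components, and the two imaging-plane normals are taken as the second and third components, so that the dominant motion stays in-plane. The trajectory data here are synthetic.

```python
# PCA of a 3D respiratory trajectory and selection of imaging-plane normals.
import numpy as np

def pca_imaging_planes(trajectory):
    """trajectory: (n_phases, 3) target positions from 4DCT registration."""
    centered = trajectory - trajectory.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]                # sort by descending variance
    pcs = eigvecs[:, order]                          # columns: PC1, PC2, PC3
    # Imaging-plane normals: the 2nd and 3rd principal components.
    return pcs[:, 1], pcs[:, 2]

# Synthetic trajectory: dominantly superior-inferior motion plus small noise.
phases = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
traj = np.stack([1.5 * np.sin(phases + 0.6),         # left-right [mm]
                 3.0 * np.sin(phases + 0.3),         # anterior-posterior [mm]
                 10.0 * np.sin(phases)], axis=1)     # superior-inferior [mm]
traj += np.random.normal(0.0, 0.2, traj.shape)
n1, n2 = pca_imaging_planes(traj)
print(n1, n2)
```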

  3. Implementation Of Vision-Based Landing Target Detection For VTOL UAV Using Raspberry Pi

    Directory of Open Access Journals (Sweden)

    Ei Ei Nyein

    2017-04-01

    Full Text Available This paper presents the development and implementation of a real-time vision-based landing system for a VTOL UAV. Vision is used for precise target detection and recognition. The UAV is equipped with an onboard Raspberry Pi camera to capture images and a Raspberry Pi platform to run the image processing techniques. Image processing is used here for landing target extraction, and the vision system also supports the take-off and landing functions of the VTOL UAV. The landing target is designed as the helipad "H" shape. First, an image is captured by the onboard camera to detect the target. Next, the captured image is processed on the onboard processor. Finally, an alert sound signal is sent to the remote control (RC) for landing the VTOL UAV. The information obtained from the vision system is used to navigate a safe landing. Experimental results from real tests are presented.
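
    An illustrative OpenCV sketch of the capture-threshold-detect sequence for an H-shaped landing target, matching candidate contours against a synthetic "H" template; the thresholds, the minimum area, and the OpenCV 4 contour API are assumptions, and this is not the authors' exact pipeline.

```python
# H-shaped helipad detection by contour shape matching (OpenCV >= 4 API).
import cv2
import numpy as np

def make_h_template(size=200, stroke=40):
    img = np.zeros((size, size), np.uint8)
    img[:, :stroke] = 255                                           # left bar
    img[:, size - stroke:] = 255                                    # right bar
    img[size // 2 - stroke // 2: size // 2 + stroke // 2, :] = 255  # cross bar
    return img

def find_h_target(frame_bgr, match_thresh=0.2, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    tmpl_contours, _ = cv2.findContours(make_h_template(), cv2.RETR_EXTERNAL,
                                        cv2.CHAIN_APPROX_SIMPLE)
    template = max(tmpl_contours, key=cv2.contourArea)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        if cv2.contourArea(c) < min_area:
            break
        if cv2.matchShapes(c, template, cv2.CONTOURS_MATCH_I1, 0.0) < match_thresh:
            m = cv2.moments(c)
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # target centre
    return None
```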

  4. A Failure Detection Strategy for Intrafraction Prostate Motion Monitoring With On-Board Imagers for Fixed-Gantry IMRT

    International Nuclear Information System (INIS)

    Liu Wu; Luxton, Gary; Xing Lei

    2010-01-01

    Purpose: To develop methods to monitor prostate intrafraction motion during fixed-gantry intensity-modulated radiotherapy using MV treatment beam imaging together with minimal kV imaging for a failure detection strategy that ensures prompt detection when target displacement exceeds a preset threshold. Methods and Materials: Real-time two-dimensional (2D) marker position in the MV image plane was obtained by analyzing cine-MV images. The marker's in-line movement, and thus its time-varying three-dimensional (3D) position, was estimated by combining the 2D projection data with a previously established correlative relationship between the directional components of prostate motion. A confirmation request for more accurate localization using MV-kV triangulation was triggered when the estimated prostate displacement based on the cine-MV data was greater than 3 mm. An interventional action alert followed on positive MV-kV confirmation. To demonstrate the feasibility and accuracy of the proposed method, simulation studies of conventional-fraction intensity-modulated radiotherapy sessions were done using 536 Calypso-measured prostate trajectories from 17 radiotherapy patients. Results: A technique for intrafraction prostate motion management has been developed. The technique, using 'freely available' cine-MV images and minimum on-board kV imaging (on average 2.5 images/fraction), successfully limited 3D prostate movement to within a range of 3 mm relative to the MV beam for 99.4% of the total treatment time. On average, only approximately one intervention/fraction was needed to achieve this level of accuracy. Conclusion: Instead of seeking to accurately and continuously localize the prostate target as existing motion tracking systems do, the present technique effectively uses cine-MV data to provide a clinically valuable way to minimize kV usage, while maintaining high targeting accuracy.

  5. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections.

    Science.gov (United States)

    Zhang, You; Yin, Fang-Fang; Ren, Lei

    2015-08-01

    Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1

  6. VIRTUS: a multi-processor system in FASTBUS

    International Nuclear Information System (INIS)

    Ellett, J.; Jackson, R.; Ritter, R.; Schlein, P.; Yaeger, D.; Zweizig, J.

    1986-01-01

    VIRTUS is a system of parallel MC68000-based processors interconnected by FASTBUS that is used either on-line as an intelligent trigger component or off-line for full event processing. Each processor receives the complete set of data from one event. The host computer, a VAX 11/780, down-line loads all software to the processors, controls and monitors the functioning of all processors, and writes processed data to tape. Instructions, programs, and data are transferred among the processors and the host in the form of fixed format, variable length data blocks. (Auth.)

  7. Onboard calibration and monitoring for the SWIFT instrument

    International Nuclear Information System (INIS)

    Rahnama, P; McDade, I; Shepherd, G; Gault, W

    2012-01-01

    The SWIFT (Stratospheric Wind Interferometer for Transport studies) instrument is a proposed space-based field-widened Doppler Michelson interferometer designed to measure stratospheric winds and ozone densities using a passive optical technique called Doppler Michelson imaging interferometry. The onboard calibration and monitoring procedures for the SWIFT instrument are described in this paper. Sample results of the simulations of onboard calibration measurements are presented and discussed. This paper also discusses the results of the derivation of the calibrations and monitoring requirements for the SWIFT instrument. SWIFT's measurement technique and viewing geometry are briefly described. The reference phase calibration and filter monitoring for the SWIFT instrument are two of the main critical design issues. In this paper it is shown that in order to meet SWIFT's science requirements, Michelson interferometer optical path difference monitoring corresponding to a phase calibration accuracy of ∼10⁻³ radians, filter passband monitoring corresponding to a phase accuracy of ∼5 × 10⁻³ radians, and a thermal stability of 10⁻³ K s⁻¹ are required. (paper)

  8. A lock circuit for a multi-core processor

    DEFF Research Database (Denmark)

    2015-01-01

    An integrated circuit comprising multiple processor cores and a lock circuit that comprises a queue register with respective bits set or reset via respective connections dedicated to respective processor cores, whereby the queue register identifies those among the multiple processor cores ... that are enqueued in the queue register. Furthermore, the integrated circuit comprises a current register and a selector circuit configured to select a processor core and identify that processor core by a value in the current register. A selected processor core is a prioritized processor core among the cores ... configured with an integrated circuit; and a silicon die configured with an integrated circuit ...

  9. Multi-processor system for real-time flow estimation in medical ultrasound imaging

    DEFF Research Database (Denmark)

    Stetson, Paul F.; Jensen, Jesper Lomborg; Antonius, Peter

    1997-01-01

    the processed data. The generous bandwidth of the links makes it easy to balance the computational load among the processors.In order to manage the shared system memory and to make use of the parallel processing capabilities of the system, a real-time multitasking kernel has been developed. The kernel uses...

  10. Special purpose processors for high energy physics applications

    International Nuclear Information System (INIS)

    Verkerk, C.

    1978-01-01

    A review is given of hardware processors, ranging from very fast decision logic for the split-field magnet facility at CERN to a point-finding processor used to relieve the data-acquisition minicomputer of the task of monitoring the SPS experiment. Block diagrams of a decision-making processor, a point-finding processor, a coplanarity and opening-angle processor, and a programmable track selector module are presented and discussed. The applications of fully programmable but slower processors on the one hand, and of very fast programmable decision logic on the other, are also covered in this review

  11. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other sources of trigger. It makes the Level-1 decision (L1A) based on a trigger menu.

  12. EXPERIENCE WITH FPGA-BASED PROCESSOR CORE AS FRONT-END COMPUTER

    International Nuclear Information System (INIS)

    HOFF, L.T.

    2005-01-01

    The RHIC control system architecture follows the familiar "standard model". LINUX workstations are used as operator consoles. Front-end computers are distributed around the accelerator, close to equipment being controlled or monitored. These computers are generally based on VMEbus CPU modules running the VxWorks operating system. I/O is typically performed via the VMEbus, or via PMC daughter cards (via an internal PCI bus), or via on-board I/O interfaces (Ethernet or serial). Advances in FPGA size and sophistication now permit running virtual processor "cores" within the FPGA logic, including "cores" with advanced features such as memory management. Such systems offer certain advantages over traditional VMEbus front-end computers. Advantages include tighter coupling with FPGA logic, and therefore higher I/O bandwidth, and flexibility in packaging, possibly resulting in a lower noise environment and/or lower cost. This paper presents the experience acquired while porting the RHIC control system to a PowerPC 405 core within a Xilinx FPGA for use in low-level RF control

  13. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Pentium Processor have modified the processor architecture to exploit parallelism in a program. ... The type of operation itself is encoded using 14 bits. ... text of designing simple architectures with low power consumption and execute x86 ...

  14. Experimental testing of the noise-canceling processor.

    Science.gov (United States)

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America

  15. System and method for three-dimensional image reconstruction using an absolute orientation sensor

    KAUST Repository

    Giancola, Silvio

    2018-01-18

    A three-dimensional image reconstruction system includes an image capture device, an inertial measurement unit (IMU), and an image processor. The image capture device captures image data. The inertial measurement unit (IMU) is affixed to the image capture device and records IMU data associated with the image data. The image processor includes one or more processing units and memory for storing instructions that are executed by the one or more processing units, wherein the image processor receives the image data and the IMU data as inputs and utilizes the IMU data to pre-align the first image and the second image, and wherein the image processor utilizes a registration algorithm to register the pre-aligned first and second images.

  16. SHARPENING OF THE VNIR AND SWIR BANDS OF THE WIDE BAND SPECTRAL IMAGER ONBOARD TIANGONG-II IMAGERY USING THE SELECTED BANDS

    Directory of Open Access Journals (Sweden)

    Q. Liu

    2018-04-01

    Full Text Available The Tiangong-II space lab was launched at the Jiuquan Satellite Launch Center of China on September 15, 2016. The Wide Band Spectral Imager (WBSI) onboard the Tiangong-II has 14 visible and near-infrared (VNIR) spectral bands covering the range from 403–990 nm and two shortwave infrared (SWIR) bands covering the ranges 1230–1250 nm and 1628–1652 nm, respectively. In this paper, a band-selection approach is proposed that considers the closest spectral similarity between the VNIR bands, with 100 m spatial resolution, and the SWIR bands, with 200 m spatial resolution. An evaluation of the Gram-Schmidt (GS) sharpening technique embedded in the ENVI software is presented, based on four different choices of the low-resolution pan-like band. The experimental results indicate that when the VNIR band with the highest correlation coefficient (CC) with the raw SWIR band is selected, more texture information is injected into the corresponding sharpened SWIR band image, while the other sharpened SWIR band image preserves spectral and texture characteristics similar to those of the raw SWIR band image.
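
    A sketch of the band-selection idea, under the assumption of even image dimensions and a 2:1 resolution ratio: each VNIR band is degraded to the SWIR resolution, the correlation coefficient with the raw SWIR band is computed, and a simple detail-injection step stands in for the ENVI Gram-Schmidt sharpening used in the paper.

```python
# VNIR band selection by correlation coefficient, plus simple detail injection.
import numpy as np

def degrade(band, factor=2):
    """Block-average a band to a coarser grid (assumes dimensions divisible by factor)."""
    h, w = band.shape
    return band[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def select_band(vnir_stack, swir):
    """vnir_stack: (n_bands, H, W) at 100 m; swir: (H//2, W//2) at 200 m."""
    ccs = [np.corrcoef(degrade(b).ravel(), swir.ravel())[0, 1] for b in vnir_stack]
    return int(np.argmax(ccs)), ccs

def inject_details(swir, pan):
    """Upsample the SWIR band and inject the high-frequency content of the selected band."""
    swir_up = np.kron(swir, np.ones((2, 2)))          # nearest-neighbour upsampling
    pan_low = np.kron(degrade(pan), np.ones((2, 2)))  # low-pass version of the pan-like band
    gain = swir_up.std() / (pan_low.std() + 1e-12)
    return swir_up + gain * (pan - pan_low)
```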

  17. Event Pre Processor for the CZT Detector on MIRAX

    International Nuclear Information System (INIS)

    Kendziorra, Eckhard; Schanz, Thomas; Distratis, Giuseppe; Suchy, Slawomir

    2006-01-01

    We describe the Event Pre Processor (EPP) for the Hard X-ray Imager (HXI) on MIRAX. The EPP provides on-board data reduction and event filtering for the HXI Cadmium Zinc Telluride strip detector. Emphasis is placed upon the EPP requirements, its implementation as a VHDL design in a Field Programmable Gate Array (FPGA), and the description of a test environment for both the VHDL code and the FPGA hardware

  18. Development of a highly reliable CRT processor

    International Nuclear Information System (INIS)

    Shimizu, Tomoya; Saiki, Akira; Hirai, Kenji; Jota, Masayoshi; Fujii, Mikiya

    1996-01-01

    Although CRT processors have been employed by the main control board to reduce the operator's workload during monitoring, the control systems are still operated by hardware switches. For further advancement, direct controller operation through a display device is expected. A CRT processor providing direct controller operation must be as reliable as the hardware switches are. The authors are developing a new type of highly reliable CRT processor that enables direct controller operations. In this paper, we discuss the design principles behind a highly reliable CRT processor. The principles are defined by studies of software reliability and of the functional reliability of the monitoring and operation systems. The functional configuration of an advanced CRT processor is also addressed. (author)

  19. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In-Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verifications have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999

  20. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Trang, Q.; Fucci, A.; Jacobs, D.; Martin, B.; Storr, K.

    1983-03-01

    Since the introduction of the 168/E, emulating processors have been successful over an amazingly wide range of applications. This paper will describe a second generation processor, the 3081/E. This new processor, which is being developed as a collaboration between SLAC and CERN, goes beyond just fixing the obvious faults of the 168/E. Not only will the 3081/E have much more memory space, incorporate many more IBM instructions, and have full double precision floating point arithmetic, but it will also have faster execution times and be much simpler to build, debug, and maintain. The simple interface and reasonable cost of the 168/E will be maintained for the 3081/E

  1. SU-E-J-11: Measurement of Eye Lens Dose for Varian On-Board Imaging with Different CBCT Acquisition Techniques

    International Nuclear Information System (INIS)

    Deshpande, S; Dhote, D; Kumar, R; Thakur, K

    2015-01-01

    Purpose: To measure actual patient eye lens dose for different cone beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using Optically Stimulated Luminescence (OSL) dosimeters, and to study the eye lens dose as a function of patient geometry and the distance from the isocenter to the eye lens. Methods: OSL dosimeters were used to measure the eye lens dose of patients. An OSL dosimeter was placed at the center of the patient's forehead during CBCT image acquisition to measure the eye lens dose. Eye lens doses were measured for three different cone beam acquisition protocols (standard dose head, low dose head and high quality head) of the Varian On-Board Imaging system. Measured doses were correlated with patient geometry and the distance from the isocenter to the eye lens. Results: The measured eye lens dose was in the range of 1.8 mGy to 3.2 mGy for the standard dose head protocol, 4.5 mGy to 9.9 mGy for the high quality head protocol, and 0.3 mGy to 0.7 mGy for the low dose head protocol. The dose to the eye lens depends on the position of the isocenter; for posteriorly located tumors the eye lens dose is lower. Conclusion: From the measured doses it can be concluded that, by proper selection of the imaging protocol and the frequency of imaging, it is possible to restrict the eye lens dose below the new limit set by the ICRP. However, the undoubted advantages of the imaging system should be counterbalanced by careful consideration of the imaging protocol, especially for very intense imaging sequences used in Adaptive Radiotherapy or IMRT

  2. Multimode power processor

    Science.gov (United States)

    O'Sullivan, George A.; O'Sullivan, Joseph A.

    1999-01-01

    In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.

  3. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  4. Interference control by best-effort process duty-cycling in chip multi-processor systems for real-time medical image processing

    NARCIS (Netherlands)

    Westmijze, M.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Systems with chip multi-processors are currently used for several applications that have real-time requirements. In chip multi-processor architectures, many hardware resources such as parts of the cache hierarchy are shared between cores and by using such resources, applications can significantly

  5. Special processor for in-core control systems

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Duma, V.R.; Levin, G.L.; Mel'nikov, A.V.; Polikanin, A.V.; Filatov, V.P.

    1978-01-01

    The BUTs-20 special processor is discussed, designed to control the units of the in-core control equipment which are incorporated into the VECTOR communication channel, and to provide preliminary data processing prior to computer calculations. A set of instructions and flowsheet of the processor, organization of its communication with memories and other units of the system are given. The processor components: a control unit and an arithmetic logical unit are discussed. It is noted that the special processor permits more effective utilization of the computer time

  6. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    Science.gov (United States)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and a small system SWAP estimate; system performance is modeled using LadarSIM, a MATLAB® and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  7. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    Science.gov (United States)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle to real-time onboard monitoring.

  8. Functional Verification of Enhanced RISC Processor

    OpenAIRE

    SHANKER NILANGI; SOWMYA L

    2013-01-01

    This paper presents the design and verification of a 32-bit enhanced RISC processor core with floating-point computations integrated within the core, designed to reduce cost and complexity. The 3-stage pipelined 32-bit RISC processor is based on the ARM7 processor architecture, with a single-precision floating-point multiplier and a floating-point adder/subtractor for floating-point operations, and a 32 × 32 Booth multiplier added to the integer core of ARM7. The binary representati...

  9. Effect of processor temperature on film dosimetry

    International Nuclear Information System (INIS)

    Srivastava, Shiv P.; Das, Indra J.

    2012-01-01

    Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, and it depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air-film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing the films. The temperature of the processor was adjusted manually in increasing steps. At each temperature, a set of films was processed to evaluate the OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4–40.6°C (85–105°F) for the various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the change in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
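
    A small numerical illustration of the reported behaviour, with synthetic data: OD is fitted as a linear function of processor temperature, and the fit is used to refer a measured OD back to a reference temperature.

```python
# Linear OD-vs-temperature fit and a simple temperature correction.
import numpy as np

temps_c = np.array([29.4, 32.2, 35.0, 37.8, 40.6])   # processor temperatures (85-105 degF)
od = np.array([1.02, 1.08, 1.15, 1.21, 1.28])        # net optical densities (synthetic)

slope, intercept = np.polyfit(temps_c, od, 1)         # first-order fit OD = a*T + b

def od_at_reference(od_measured, temp_measured, temp_ref=35.0):
    """Shift a measured OD to the value expected at the reference temperature."""
    return od_measured - slope * (temp_measured - temp_ref)

print(round(slope, 4), round(od_at_reference(1.28, 40.6), 3))
```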

  10. The design of a graphics processor

    International Nuclear Information System (INIS)

    Holmes, M.; Thorne, A.R.

    1975-12-01

    The design of a graphics processor is described which takes into account known and anticipated user requirements, the availability of cheap minicomputers, the state of integrated circuit technology, and the overall need to minimise cost for a given performance. The main user needs are the ability to display large high resolution pictures, and to dynamically change the user's view in real time by means of fast coordinate processing hardware. The transformations that can be applied to 2D or 3D coordinates either singly or in combination are: translation, scaling, mirror imaging, rotation, and the ability to map the transformation origin on to any point on the screen. (author)

  11. LAT Onboard Science: Gamma-Ray Burst Identification

    International Nuclear Information System (INIS)

    Kuehn, Frederick; Hughes, Richard; Smith, Patrick; Winer, Brian; Bonnell, Jerry; Norris, Jay; Ritz, Steven; Russell, James

    2007-01-01

    The main goal of the Large Area Telescope (LAT) onboard science program is to provide quick identification and localization of Gamma Ray Bursts (GRB) onboard the LAT for follow-up observations by other observatories. The GRB identification and localization algorithm will provide celestial coordinates with an error region that will be distributed via the Gamma-ray burst Coordinate Network (GCN). We present results that show our sensitivity to bursts as characterized using Monte Carlo simulations of the GLAST observatory. We describe and characterize the method of onboard track determination and the GRB identification and localization algorithm. Onboard track determination is considerably different from the on-ground case, resulting in a substantially altered point spread function. The algorithm contains tunable parameters which may be adjusted after launch once the characteristics of real bursts at very high energies have been identified.

  12. VON WISPR Family Processors: Volume 1

    National Research Council Canada - National Science Library

    Wagstaff, Ronald

    1997-01-01

    ...) and the background noise they are embedded in. Processors utilizing those fluctuations, such as the von WISPR Family Processors discussed herein, are methods or algorithms that preferentially attenuate the fluctuating signals and noise...

  13. Low Latency DESDynI Data Products for Disaster Response, Resource Management and Other Applications

    Science.gov (United States)

    Doubleday, Joshua R.; Chien, Steve A.; Lou, Yunling

    2011-01-01

    We are developing onboard processor technology targeted at the L-band SAR instrument onboard the planned DESDynI mission to enable formation of SAR images onboard, opening possibilities for near-real-time data products to augment the full data streams. Several image processing and/or interpretation techniques are being explored as possible direct-broadcast products for use by agencies in need of low-latency data and responsible for disaster mitigation and assessment, resource management, agricultural development, shipping, etc. Data collected with UAVSAR (L-band) serves as a surrogate for the future DESDynI instrument. We have explored surface water extent as a tool for flooding response, and disturbance images based on the polarimetric backscatter of repeat-pass imagery, potentially useful for structural collapse (earthquake), mud/land/debris slides, etc. We have also explored building vegetation and snow/ice classifiers via support vector machines utilizing quad-pol backscatter, cross-pol phase, and a number of derivatives (radar vegetation index, dielectric estimates, etc.). We share our qualitative and quantitative results thus far.
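
    For reference, one commonly used definition of the radar vegetation index mentioned among the derivatives (an editorial addition; the record does not specify which form was used) is, in terms of the linear-scale polarimetric backscatter coefficients,

      \mathrm{RVI} = \frac{8\,\sigma^{0}_{HV}}{\sigma^{0}_{HH} + \sigma^{0}_{VV} + 2\,\sigma^{0}_{HV}}.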

  14. Implementing An Image Understanding System Architecture Using Pipe

    Science.gov (United States)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low-level vision and high-level vision. Low-level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High-level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory-mapped into the high-level processor. Thus it forms the high-speed link between the low- and high-level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.

  15. Image processor for high resolution video

    International Nuclear Information System (INIS)

    Pessoa, P.P.; Assis, J.T.; Cardoso, S.B.; Lopes, R.T.

    1989-01-01

    In this paper, we discuss an image presentation and processing system developed in the Turbo Pascal 5.0 language. Our system allows the visualization and processing of images in 16 different colors, taken at a time from a set of 64 possible ones. Digital filters of the mean, median, Laplacian, and gradient type, as well as histogram equalization, have been implemented so as to allow a better image quality. Possible applications of our system are also discussed, e.g., satellites, computerized tomography, medicine, and microscopes. (author) [pt
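
    As a small illustration of one of the filters mentioned (an editorial sketch, not the original Turbo Pascal code), a 3x3 median filter can be written as:

      import numpy as np

      def median_filter_3x3(img):
          """Replace each pixel by the median of its 3x3 neighbourhood."""
          padded = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
          out = np.empty_like(padded[1:-1, 1:-1])
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.median(padded[i:i + 3, j:j + 3])
          return out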

  16. Many-body simulations using an array processor

    International Nuclear Information System (INIS)

    Rapaport, D.C.

    1985-01-01

    Simulations of microscopic models of water and polypeptides using molecular dynamics and Monte Carlo techniques have been carried out with the aid of an FPS array processor. The computational techniques are discussed, with emphasis on the development and optimization of the software to take account of the special features of the processor. The computing requirements of these simulations exceed what could be reasonably carried out on a normal 'scientific' computer. While the FPS processor is highly suited to the kinds of models described, several other computationally intensive problems in statistical mechanics are outlined for which alternative processor architectures are more appropriate

  17. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate or high-level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend the capabilities of the SIMD computer to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer; these limits can, however, be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL - Hardware Description Language - has been elaborated. With this model, the performance of the new computer has been estimated by running simulations of image processing algorithms. The VHDL modeling approach allows top-down electronic design of the system, giving an easy coupling between architectural modifications and their electronic cost. The results obtained show SYMPATIX to be an efficient computer for low- and intermediate-level image processing. It can be connected to a high-level computer, opening up the development of new computer vision applications. The thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr

  18. Multi-processor network implementations in Multibus II and VME

    International Nuclear Information System (INIS)

    Briegel, C.

    1992-01-01

    ACNET (Fermilab Accelerator Controls Network), a proprietary network protocol, is implemented in a multi-processor configuration for both Multibus II and VME. The implementations are contrasted by the bus protocol and software design goals. The Multibus II implementation provides for multiple processors running a duplicate set of tasks on each processor. For a network connected task, messages are distributed by a network round-robin scheduler. Further, messages can be stopped, continued, or re-routed for each task by user-callable commands. The VME implementation provides for multiple processors running one task across all processors. The process can either be fixed to a particular processor or dynamically allocated to an available processor depending on the scheduling algorithm of the multi-processing operating system. (author)

  19. An end-to-end examination of geometric accuracy of IGRT using a new digital accelerator equipped with onboard imaging system.

    Science.gov (United States)

    Wang, Lei; Kielar, Kayla N; Mok, Ed; Hsu, Annie; Dieterich, Sonja; Xing, Lei

    2012-02-07

    Varian's new digital linear accelerator (LINAC), TrueBeam STx, is equipped with a high-dose-rate flattening-filter-free (FFF) mode (6 MV and 10 MV), a high-definition multileaf collimator (2.5 mm leaf width), as well as onboard imaging capabilities. A series of end-to-end phantom tests of TrueBeam-based image-guided radiation therapy (IGRT) was performed to determine the geometric accuracy of the image-guided setup and dose delivery process for all beam modalities delivered using intensity modulated radiation therapy (IMRT) and RapidArc. In these tests, an anthropomorphic phantom with a Ball Cube II insert and the analysis software FilmQA (3cognition) were used to evaluate the accuracy of TrueBeam image-guided setup and dose delivery. Laser-cut EBT2 films with 0.15 mm accuracy were embedded into the phantom. The phantom with the film inserted was first scanned with a GE Discovery-ST CT scanner, and the images were then imported into the planning system. Plans with steep dose fall-off surrounding hypothetical targets of different sizes were created using RapidArc and IMRT with FFF and WFF (with flattening filter) beams. Four RapidArc plans (6 MV and 10 MV FFF) and five IMRT plans (6 MV and 10 MV FFF; 6 MV, 10 MV and 15 MV WFF) were studied. The RapidArc plans with 6 MV FFF were planned with target diameters of 1 cm (0.52 cc), 2 cm (4.2 cc) and 3 cm (14.1 cc), and all other plans with a target diameter of 3 cm. Both onboard planar and volumetric imaging procedures were used for phantom setup and target localization. The IMRT and RapidArc plans were then delivered, and the film measurements were compared with the original treatment plans using gamma criteria of 3%/1 mm and 3%/2 mm. The shift required to align the film-measured dose with the calculated dose distribution was taken as the targeting error. The targeting accuracy of image-guided treatment using TrueBeam was found to be within 1 mm. For irradiation of the 3 cm target, the gammas (3%, 1
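
    The 3%/1 mm figure refers to the standard gamma-index comparison between measured and calculated dose; in our own notation (not reproduced from the record), a measured point r_m passes when

      \gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c}\sqrt{\frac{\lVert\mathbf{r}_c-\mathbf{r}_m\rVert^{2}}{\Delta d^{2}} + \frac{\bigl(D_c(\mathbf{r}_c)-D_m(\mathbf{r}_m)\bigr)^{2}}{\Delta D^{2}}}\ \le 1, \qquad \Delta d = 1\,\mathrm{mm},\ \Delta D = 3\%.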

  20. Evaluation of the Intel Sandy Bridge-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2012-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing an 8-core “Sandy Bridge-EP” processor with Intel’s previous microarchitecture, the “Westmere-EP”. The Intel marketing names for these processors are “Xeon E5-2600 processor series” and “Xeon 5600 processor series”, respectively. Both processors are produced in a 32nm process, and both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores ...

  1. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
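
    For readers unfamiliar with the method, the sketch below shows the one-step fraction-free (Bareiss) recurrence on an integer augmented matrix; it is a plain software illustration under the assumption of nonzero leading minors and no pivoting, and does not reproduce the systolic array mapping described in the record.

      from fractions import Fraction

      def bareiss_solve(A, b):
          """Solve A x = b for square integer A using fraction-free elimination."""
          n = len(A)
          M = [list(A[i]) + [b[i]] for i in range(n)]   # augmented matrix [A | b]
          prev = 1
          for k in range(n - 1):
              for i in range(k + 1, n):
                  for j in range(k + 1, n + 1):
                      # Exact integer division: divisibility is guaranteed by the recurrence.
                      M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
                  M[i][k] = 0
              prev = M[k][k]
          # Back-substitution on the resulting triangular system (exact rationals).
          x = [Fraction(0)] * n
          for i in range(n - 1, -1, -1):
              s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
              x[i] = Fraction(s) / M[i][i]
          return x

      # Example: bareiss_solve([[2, 1], [1, 3]], [3, 5]) -> [Fraction(4, 5), Fraction(7, 5)]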

  2. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  3. Online track processor for the CDF upgrade

    International Nuclear Information System (INIS)

    Thomson, E. J.

    2002-01-01

    A trigger track processor, called the eXtremely Fast Tracker (XFT), has been designed for the CDF upgrade. This processor identifies high transverse momentum (> 1.5 GeV/c) charged particles in the new central outer tracking chamber for CDF II. The XFT design is highly parallel to handle the input rate of 183 Gbits/s and output rate of 44 Gbits/s. The processor is pipelined and reports the result for a new event every 132 ns. The processor uses three stages: hit classification, segment finding, and segment linking. The pattern recognition algorithms for the three stages are implemented in programmable logic devices (PLDs) which allow in-situ modification of the algorithm at any time. The PLDs reside on three different types of modules. The complete system has been installed and commissioned at CDF II. An overview of the track processor and performance in CDF Run II are presented

  4. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput...

  5. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

    A data register and a processor for receiving and processing data from the drift chambers of a device for investigating relativistic positroniums are described. The data are delivered to the register input in the form of an 8-bit Gray code, stored, and transformed into a position code. The register information is delivered to the CAMAC trunk and to the front-panel plug. The processor selects particle tracks in the horizontal plane of the facility. The maximum coordinate divergence ΔY and the minimum number of points on a track are set from the processor front panel. The processor solution time is 16 μs; the maximum number of simultaneously analyzed coordinates is 16.
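
    The Gray-to-position-code step mentioned here is a standard conversion; a minimal software illustration (an editorial sketch, not the instrument's firmware) is:

      def gray_to_binary(g):
          """Convert an 8-bit (or wider) Gray-code value to a plain binary value."""
          mask = g >> 1
          while mask:
              g ^= mask
              mask >>= 1
          return g

      # Example: the Gray code 0b1101 (13) decodes to binary 0b1001 (9).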

  6. A CNN-Specific Integrated Processor

    Directory of Open Access Journals (Sweden)

    Suleyman Malki

    2009-01-01

    Integrated Processors (IP) are algorithm-specific cores that, either by programming or by configuration, can be re-used within many microelectronic systems. This paper looks at Cellular Neural Networks (CNN) to be realized as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  7. The TMS34010 graphic processor - an architecture for image visualization in NMR tomography; O processador grafico TMS34010 - uma arquitetura para visualizacao de imagem em tomografia por RMN

    Energy Technology Data Exchange (ETDEWEB)

    Slaets, Jan Frans Willem; Paiva, Maria Stela Veludo de; Almeida, Lirio O B

    1990-12-31

    This abstract presents a description of the minimum system implemented with the graphic processor TMS34010, which will be used in the reconstruction, treatment and interpretation of images obtained by NMR tomography. The project is being developed in the LIE (Electronic Instrumentation Laboratory) of the Sao Carlos Chemistry and Physics Institute, SP, Brazil, and is already in operation. 4 refs., 7 figs.

  8. The TMS34010 graphic processor - an architecture for image visualization in NMR tomography; O processador grafico TMS34010 - uma arquitetura para visualizacao de imagem em tomografia por RMN

    Energy Technology Data Exchange (ETDEWEB)

    Slaets, Jan Frans Willem; Paiva, Maria Stela Veludo de; Almeida, Lirio O.B

    1989-12-31

    This abstract presents a description of the minimum system implemented with the graphic processor TMS34010, which will be used in the reconstruction, treatment and interpretation of images obtained by NMR tomography. The project is being developed in the LIE (Electronic Instrumentation Laboratory) of the Sao Carlos Chemistry and Physics Institute, SP, Brazil, and is already in operation. 4 refs., 7 figs.

  9. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32-bit-wide data path. The result of this first year's work is the concept of a general-purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory, and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  10. The performance of an LSI-11/23 with a SKYMNK-Q array processor as a high speed front end processor

    International Nuclear Information System (INIS)

    Clark, D.L.

    1983-01-01

    The NSRL has recently installed a VAX-11/750-based data acquisition system which is networked to two LSI-11/23 satellite processors. Each of the LSIs is connected to CAMAC branch drivers. The LSIs have small array processors installed for use in preprocessing data. The objective is to provide an easy-to-use high-speed processor that will relieve the VAX of some of the real-time data analysis tasks. The basic operation of the array processor and some of the results of performance tests are described.

  11. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  12. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  13. On-board Payload Data Processing from Earth to Space Segment

    Science.gov (United States)

    Tragni, M.; Abbattista, C.; Amoruso, L.; Cinquepalmi, L.; Bgongiari, F.; Errico, W.

    2013-09-01

    GS algorithms to approach the problem in the space scenario, i.e. for Synthetic Aperture Radar (SAR) applications the typical focalization of the raw image needs to be improved to be effective in this context. Many works are available on this topic; the authors have developed a specific one based on neural network algorithms. With the information directly "acquired" (i.e., computed) on-board, without the intervention of the typical ground system facilities, the spacecraft can autonomously take decisions regarding a re-planning of acquisitions for itself (in high-performance modalities) or for other platforms in the constellation or affiliated to it, reducing the elapsed time compared with the present approach. For non-EO missions it is a big advantage to reduce the long round-trip transmission. In general the saving of resources extends to memory and RF transmission band resources, reaction time (as in civil protection applications), etc., enlarging the flexibility of missions and improving the final results. The main HW and SW characteristics of the SpacePDP are:
    • Compactness: the size and weight of each module fit in a Eurocard 3U 8HP format with «Inter-Board» connection through a cPCI peripheral bus.
    • Modularity: the payload is usually composed of several sub-systems.
    • Flexibility: the coprocessor FPGA, on-board memory and supported avionic protocols are flexible, allowing customization of the modules according to mission needs.
    • Completeness: the two core boards (CPU and Companion) are enough to obtain a first complete payload data processing system in a basic configuration.
    • Integrability: the payload data processing system is open to accept custom modules connected to its open peripheral bus.
    • CPU HW module (one or more) based on a RISC processor (LEON2FT, a SPARC V8 architecture, 80 MIPS @ 100 MHz on an ASIC ATMEL AT697F).
    • DSP HW module (optional, with more instances) based on an FPGA dedicated architecture to ensure an effective multitasking control and to offer high numerical

  14. Analytical Bounds on the Threads in IXP1200 Network Processor

    OpenAIRE

    Ramakrishna, STGS; Jamadagni, HS

    2003-01-01

    Increasing link speeds have placed an enormous burden on processing requirements, and the processors are expected to carry out a variety of tasks. Network Processor (NP) [1][2] is the blanket name given to processors that trade off flexibility and performance. Network Processors are offered by a number of vendors to take the main burden of processing network-related operations away from conventional processors. The Network Processors cover a spectrum of design trad...

  15. THOR Fields and Wave Processor - FWP

    Science.gov (United States)

    Soucek, Jan; Rothkaehl, Hanna; Ahlen, Lennart; Balikhin, Michael; Carr, Christopher; Dekkali, Moustapha; Khotyaintsev, Yuri; Lan, Radek; Magnes, Werner; Morawski, Marek; Nakamura, Rumi; Uhlir, Ludek; Yearby, Keith; Winkler, Marek; Zaslavsky, Arnaud

    2017-04-01

    If selected, Turbulence Heating ObserveR (THOR) will become the first spacecraft mission dedicated to the study of plasma turbulence. The Fields and Waves Processor (FWP) is an integrated electronics unit for all electromagnetic field measurements performed by THOR. FWP will interface with all THOR fields sensors: the electric field antennas of the EFI instrument, the MAG fluxgate magnetometer, and the search-coil magnetometer (SCM), and will perform signal digitization and on-board data processing. The FWP box will house multiple data acquisition sub-units and signal analyzers, all sharing a common power supply and data processing unit and thus a single data and power interface to the spacecraft. Integrating all the electromagnetic field measurements in a single unit will improve the consistency of field measurements and the accuracy of time synchronization. The scientific value of highly sensitive electric and magnetic field measurements in space has been demonstrated by Cluster (among other spacecraft), and THOR instrumentation will further improve on this heritage. The large dynamic range of the instruments will be complemented by a thorough electromagnetic cleanliness program, which will prevent perturbation of field measurements by interference from payload and platform subsystems. Taking advantage of the capabilities of modern electronics and the large telemetry bandwidth of THOR, FWP will provide multi-component electromagnetic field waveforms and spectral data products at a high time resolution. Fully synchronized sampling of many signals will make it possible to resolve wave phase information and to estimate wavelength via interferometric correlations between EFI probes. FWP will also implement a plasma resonance sounder and a digital plasma quasi-thermal noise analyzer designed to provide high-cadence measurements of plasma density and temperature complementary to data from particle instruments. FWP will rapidly transmit information about the magnetic field vector and spacecraft potential to the

  16. Onboard software of Plasma Wave Experiment aboard Arase: instrument management and signal processing of Waveform Capture/Onboard Frequency Analyzer

    Science.gov (United States)

    Matsuda, Shoya; Kasahara, Yoshiya; Kojima, Hirotsugu; Kasaba, Yasumasa; Yagitani, Satoshi; Ozaki, Mitsunori; Imachi, Tomohiko; Ishisaka, Keigo; Kumamoto, Atsushi; Tsuchiya, Fuminori; Ota, Mamoru; Kurita, Satoshi; Miyoshi, Yoshizumi; Hikishima, Mitsuru; Matsuoka, Ayako; Shinohara, Iku

    2018-05-01

    We developed the onboard processing software for the Plasma Wave Experiment (PWE) onboard the Exploration of energization and Radiation in Geospace, Arase satellite. The PWE instrument has three receivers: Electric Field Detector, Waveform Capture/Onboard Frequency Analyzer (WFC/OFA), and the High-Frequency Analyzer. We designed a pseudo-parallel processing scheme with a time-sharing system and achieved simultaneous signal processing for each receiver. Since electric and magnetic field signals are processed by the different CPUs, we developed a synchronized observation system by using shared packets on the mission network. The OFA continuously measures the power spectra, spectral matrices, and complex spectra. The OFA obtains not only the entire ELF/VLF plasma waves' activity but also the detailed properties (e.g., propagation direction and polarization) of the observed plasma waves. We performed simultaneous observation of electric and magnetic field data and successfully obtained clear wave properties of whistler-mode chorus waves using these data. In order to measure raw waveforms, we developed two modes for the WFC, `chorus burst mode' (65,536 samples/s) and `EMIC burst mode' (1024 samples/s), for the purpose of the measurement of the whistler-mode chorus waves (typically in a frequency range from several hundred Hz to several kHz) and the EMIC waves (typically in a frequency range from a few Hz to several hundred Hz), respectively. We successfully obtained the waveforms of electric and magnetic fields of whistler-mode chorus waves and ion cyclotron mode waves along the Arase's orbit. We also designed the software-type wave-particle interaction analyzer mode. In this mode, we measure electric and magnetic field waveforms continuously and transfer them to the mission data recorder onboard the Arase satellite. We also installed an onboard signal calibration function (onboard SoftWare CALibration; SWCAL). We performed onboard electric circuit diagnostics and

  17. The communication processor of TUMULT-64

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Jansen, P.G.

    1988-01-01

    Tumult (Twente University MULTi-processor system) is a modular extendible multi-processor system designed and implemented at the Twente University of Technology in co-operation with Oce Nederland B.V. and the Dr. Neher Laboratories (Dutch PTT). Characteristics of the hardware are: MIMD type,

  18. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously.
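
    For orientation, the steady-state loop that such a hardware GAP accelerates can be sketched in software roughly as below; the fitness and feasibility functions, bit width, rates, and replacement policy are illustrative assumptions, not details taken from the paper.

      import random

      def steady_state_ga(fitness, feasible, n_bits=32, pop_size=64, iters=10_000):
          """Maximize 'fitness' over n_bits-wide bit strings, discarding infeasible children."""
          pop = [random.getrandbits(n_bits) for _ in range(pop_size)]
          for _ in range(iters):
              a, b = random.sample(pop, 2)                  # parent selection
              cut = random.randint(1, n_bits - 1)           # single-point crossover
              mask = (1 << cut) - 1
              child = (a & mask) | (b & ~mask)
              child ^= 1 << random.randrange(n_bits)        # single-bit mutation
              if not feasible(child):                       # built-in discard operator
                  continue
              worst = min(range(pop_size), key=lambda k: fitness(pop[k]))
              if fitness(child) > fitness(pop[worst]):      # steady-state replacement
                  pop[worst] = child
          return max(pop, key=fitness)

      # Example: maximize the number of set bits subject to the value being even.
      best = steady_state_ga(fitness=lambda x: bin(x).count("1"),
                             feasible=lambda x: x % 2 == 0)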

  19. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt the CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of the CSB+-tree on Itanium 2. Finally, we propose a systematic method for adapting the CSB+-tree to new platforms. This work is a first step towards integrating the CSB+-tree in MySQL's heap storage manager.

  20. Daily Orthogonal Kilovoltage Imaging Using a Gantry-Mounted On-Board Imaging System Results in a Reduction in Radiation Therapy Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Russo, Gregory A., E-mail: gregory.russo@bmc.org [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States); Qureshi, Muhammad M.; Truong, Minh-Tam; Hirsch, Ariel E.; Orlina, Lawrence; Bohrs, Harry; Clancy, Pauline; Willins, John; Kachnic, Lisa A. [Department of Radiation Oncology, Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts (United States)

    2012-11-01

    Purpose: To determine whether the use of routine image guided radiation therapy (IGRT) using pretreatment on-board imaging (OBI) with orthogonal kilovoltage X-rays reduces treatment delivery errors. Methods and Materials: A retrospective review of documented treatment delivery errors from 2003 to 2009 was performed. Following implementation of IGRT in 2007, patients received daily OBI with orthogonal kV X-rays prior to treatment. The frequency of errors in the pre- and post-IGRT time frames was compared. Treatment errors (TEs) were classified as IGRT-preventable or non-IGRT-preventable. Results: A total of 71,260 treatment fractions were delivered to 2764 patients. A total of 135 (0.19%) TEs occurred in 39 (1.4%) patients (3.2% in 2003, 1.1% in 2004, 2.5% in 2005, 2% in 2006, 0.86% in 2007, 0.24% in 2008, and 0.22% in 2009). In 2007, the TE rate decreased by >50% and has remained low (P = .00007, compared to before 2007). Errors were classified as being potentially preventable with IGRT (e.g., incorrect site, patient, or isocenter) vs. not. No patients had any IGRT-preventable TEs from 2007 to 2009, whereas there were 9 from 2003 to 2006 (1 in 2003, 2 in 2004, 2 in 2005, and 4 in 2006; P = .0058) before the implementation of IGRT. Conclusions: IGRT implementation has a patient safety benefit with a significant reduction in treatment delivery errors. As such, we recommend the use of IGRT in routine practice to complement existing quality assurance measures.

  1. Lipsi: Probably the Smallest Processor in the World

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2018-01-01

    While research on high-performance processors is important, it is also interesting to explore processor architectures at the other end of the spectrum: tiny processor cores for auxiliary functions. While it is common to implement small circuits for such functions, such as a serial port, in dedica...... at a minimal cost....

  2. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing the equipment of the Musashi reactor, and improving its performance became indispensable for research; the sampling rate was to be increased by introducing arithmetic units based on a multi-Digital Signal Processor (DSP) system (cluster). In order to realize heterogeneous cluster-type multi-processor computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clusters if needed. The faster computing performance of this system was well evaluated by simulation results for the simultaneous execution of plural jobs and also for pipeline processing between clusters, which showed that the system led to effective use of the existing system and enhancement of the cost performance. (T. Tanaka)

  3. SU-E-I-72: First Experimental Study of On-Board CBCT Imaging Using 2.5MV Beam On a Radiotherapy Linac

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q [Department of Radiation Oncology, Stanford University, Stanford, CA (United States); Institute of Image Processing and Pattern Recognition, Xi' an Jiaotong University, Xi' an (China); Li, R; Yang, Y; Xing, L [Department of Radiation Oncology, Stanford University, Stanford, CA (United States)

    2014-06-01

    Purpose: Varian TrueBeam version 2.0 comes with a new inline 2.5MV beam modality for image-guided patient setup. In this work we develop an iterative volumetric image reconstruction technique specific to the beam and investigate the possibility of obtaining metal-artifact-free CBCT images using the new imaging modality. Methods: An iterative reconstruction algorithm with a sparse representation constraint based on dictionary learning is developed, in which both sparse projection and low dose rate (10 MU/min) are considered. Two CBCT experiments were conducted using the newly available 2.5MV beam on a Varian TrueBeam linac. First, a Rando anthropomorphic head phantom with and without a copper bar inserted in the center was scanned using both 2.5MV and kV (100kVp) beams. In a second experiment, an MRI phantom with many coils was scanned using 2.5MV, 6MV, and kV (100kVp) beams. Imaging dose and the resultant image quality are studied. Results: Qualitative assessment suggests that there were no visually detectable metal artifacts in MV CBCT images, compared with significant metal artifacts in kV CBCT images, especially in the MRI phantom. For a region near the metal object in the head phantom, the 2.5MV CBCT gave a more accurate quantification of the electron density compared with kV CBCT, with a ∼50% reduction in mean HU error. As expected, the contrast between bone and soft tissue in 2.5MV CBCT decreased compared with kV CBCT. Conclusion: On-board CBCT imaging with the new 2.5MV beam can effectively reduce metal artifacts, although with a reduced soft-tissue contrast. Combination of kV and MV scanning may lead to metal-artifact-free CBCT images with uncompromised soft-tissue contrast.

  4. First level trigger processor for the ZEUS calorimeter

    International Nuclear Information System (INIS)

    Dawson, J.W.; Talaga, R.L.; Burr, G.W.; Laird, R.J.; Smith, W.; Lackey, J.

    1990-01-01

    This paper discusses the design of the first level trigger processor for the ZEUS calorimeter. This processor accepts data from the 13,000 photomultipliers of the calorimeter which is topologically divided into 16 regions, and after regional preprocessing, performs logical and numerical operations which cross regional boundaries. Because the crossing period at the HERA collider is 96 ns, it is necessary that first-level trigger decisions be made in pipelined hardware. One microsecond is allowed for the processor to perform the required logical and numerical operations, during which time the data from ten crossings would be resident in the processor while being clocked through the pipelined hardware. The circuitry is implemented in 100K ECL, Advanced CMOS discrete devices, and programmable gate arrays, and operates in a VME environment. All tables and registers are written/read from VME, and all diagnostic codes are executed from VME. Preprocessed data flows into the processor at a rate of 5.2GB/s, and processed data flows from the processor to the Global First-Level Trigger at a rate of 700MB/s. The system allows for subsets of the logic to be configured by software and for various important variables to be histogrammed as they flow through the processor. 2 refs., 3 figs

  5. First-level trigger processor for the ZEUS calorimeter

    International Nuclear Information System (INIS)

    Dawson, J.W.; Talaga, R.L.; Burr, G.W.; Laird, R.J.; Smith, W.; Lackey, J.

    1990-01-01

    The design of the first-level trigger processor for the Zeus calorimeter is discussed. This processor accepts data from the 13,000 photomultipliers of the calorimeter, which is topologically divided into 16 regions, and after regional preprocessing performs logical and numerical operations that cross regional boundaries. Because the crossing period at the HERA collider is 96 ns, it is necessary that first-level trigger decisions be made in pipelined hardware. One microsecond is allowed for the processor to perform the required logical and numerical operations, during which time the data from ten crossings would be resident in the processor while being clocked through the pipelined hardware. The circuitry is implemented in 100K emitter-coupled logic (ECL), advanced CMOS discrete devices and programmable gate arrays, and operates in a VME environment. All tables and registers are written/read from VME, and all diagnostic codes are executed from VME. Preprocessed data flows into the processor at a rate of 5.2 Gbyte/s, and processed data flows from the processor to the global first-level trigger at a rate of 70 Mbyte/s. The system allows for subsets of the logic to be configured by software and for various important variables to be histogrammed as they flow through the processor

  6. ON-BOARD COMPUTER SYSTEM FOR KITSAT-1 AND 2

    Directory of Open Access Journals (Sweden)

    H. S. Kim

    1996-06-01

    KITSAT-1 and 2 are microsatellites weighing 50 kg, and all the on-board data are processed by the on-board computer system. Hence, these on-board computers are required to be highly reliable and to be designed under tight power consumption, mass and size constraints. The on-board computer (OBC) systems for KITSAT-1 and 2 are therefore designed with simple, flexible hardware for reliability, and software takes more responsibility than hardware. The KITSAT-1 and 2 on-board computer system consists of the OBC186 as the primary OBC and the OBC80 as its backup. The OBC186 runs the spacecraft operating system (SCOS), which has real-time multi-tasking capability. Since their launch, OBC186 and OBC80 have been operating successfully until today. In this paper, we describe the development of the OBC186 hardware and software and analyze its in-orbit operation performance.

  7. A digital retina-like low-level vision processor.

    Science.gov (United States)

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used on visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m hexagonal identical autonomous cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the simulation at the transistor level of the processing elements (PEs) of the retina-like processor and its simulated functionality with illustrative examples are provided in this paper.

  8. On-Board Mining in the Sensor Web

    Science.gov (United States)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. These plans
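
    As a concrete example of the single-pass constraint described above, a streaming statistic such as Welford's running mean and variance uses constant memory regardless of stream length; this sketch is an editorial illustration, not code from EVE or AODP.

      class RunningStats:
          """Single-pass (streaming) mean and variance with O(1) memory."""

          def __init__(self):
              self.n, self.mean, self.m2 = 0, 0.0, 0.0

          def update(self, x):
              self.n += 1
              delta = x - self.mean
              self.mean += delta / self.n
              self.m2 += delta * (x - self.mean)

          @property
          def variance(self):
              return self.m2 / (self.n - 1) if self.n > 1 else 0.0

      # Example: feed sensor samples one at a time, e.g. to flag anomalous readings.
      stats = RunningStats()
      for sample in (10.1, 9.8, 10.3, 55.0):
          stats.update(sample)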

  9. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Due to the preeminent work on the real-time specification for Java (RTSJ), Java is increasingly expected to become the leading programming language in real-time systems. To provide a Java platform suitable for real-time applications, a Java processor which can directly execute Java bytecode is proposed in this paper. It provides efficient hardware support for some mechanisms specified in the RTSJ and offers a simpler programming model through ameliorating the scoped memory of the RTSJ. The worst-case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing that interferes with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  10. A Forest Fire Sensor Web Concept with UAVSAR

    Science.gov (United States)

    Lou, Y.; Chien, S.; Clark, D.; Doubleday, J.; Muellerschoen, R.; Zheng, Y.

    2008-12-01

    We developed a forest fire sensor web concept with a UAVSAR-based smart sensor and onboard automated response capability that will allow us to monitor fire progression based on coarse initial information provided by an external source. This autonomous disturbance detection and monitoring system combines the unique capabilities of imaging radar with high-throughput onboard processing technology and onboard automated response capability based on specific science algorithms. In this forest fire sensor web scenario, a fire is initially located by MODIS/RapidFire or a ground-based fire observer. This information is transmitted to the UAVSAR onboard automated response system (CASPER). CASPER generates a flight plan to cover the alerted fire area and executes the flight plan. The onboard processor generates the fuel load map from raw radar data, which, used with wind and elevation information, predicts the likely fire progression. CASPER then autonomously alters the flight plan to track the fire progression, providing this information to the fire fighting team on the ground. We can also relay the precise fire location to other remote sensing assets with autonomous response capability, such as Earth Observation-1 (EO-1)'s hyper-spectral imager, to acquire the fire data.

  11. Orienting and Onboarding Clinical Nurse Specialists: A Process Improvement Project.

    Science.gov (United States)

    Garcia, Mayra G; Watt, Jennifer L; Falder-Saeed, Karie; Lewis, Brennan; Patton, Lindsey

    Clinical nurse specialists (CNSs) have a unique advanced practice role. This article describes a process useful in establishing a comprehensive orientation and onboarding program for a newly hired CNS. The project team used the National Association of Clinical Nurse Specialists core competencies as a guide to construct a process for effectively onboarding and orienting newly hired CNSs. Standardized documents were created for the orientation process including a competency checklist, needs assessment template, and professional evaluation goals. In addition, other documents were revised to streamline the orientation process. Standardizing the onboarding and orientation process has demonstrated favorable results. As of 2016, 3 CNSs have successfully been oriented and onboarded using the new process. Unique healthcare roles require special focus when onboarding and orienting into a healthcare system. The use of the National Association of Clinical Nurse Specialists core competencies guided the project in establishing a successful orientation and onboarding process for newly hired CNSs.

  12. Data collection from FASTBUS to a DEC UNIBUS processor through the UNIBUS-Processor Interface

    International Nuclear Information System (INIS)

    Larwill, M.; Barsotti, E.; Lesny, D.; Pordes, R.

    1983-01-01

    This paper describes the use of the UNIBUS Processor Interface, an interface between FASTBUS and the Digital Equipment Corporation UNIBUS. The UPI was developed by Fermilab and the University of Illinois. Details of the use of this interface in a high energy physics experiment at Fermilab are given. The paper includes a discussion of the operation of the UPI on the UNIBUS of a VAX-11, and plans for using the UPI to perform data acquisition from FASTBUS to a VAX-11 Processor

  13. New development for low energy electron beam processor

    International Nuclear Information System (INIS)

    Takei, Taro; Goto, Hitoshi; Oizumi, Matsutoshi; Hirakawa, Tetsuya; Ochi, Masafumi

    2003-01-01

    Newly developed low-energy electron beam (EB) processors that have unique designs and configurations compared to conventional ones enable electron-beam treatment of small three-dimensional objects, such as grain-like agricultural products and small plastic parts. Because the EB processor can irradiate the products from all angles, uniform EB treatment can be achieved in one pass regardless of the complex shapes of the products. Two new EB processors are presented here: the first system has a cylindrical process zone, which allows three-dimensional objects to be irradiated with a one-pass treatment. The second is a tube-type small EB processor that achieves not only a more compact design, but also higher beam extraction efficiency and flexible installation of the irradiation heads. The basic design of each processor and potential applications are presented in this paper. (author)

  14. Simulation of continuously logical base cells (CL BC) with advanced functions for analog-to-digital converters and image processors

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-10-01

    The paper considers the results of the design and modeling of continuously logical base cells (CL BC) based on current mirrors (CM), with functions of preliminary analogue and subsequent analogue-digital processing, for creating sensor multichannel analog-to-digital converters (SMC ADCs) and image processors (IP). Such IPs and SMC ADCs with vector or matrix parallel inputs-outputs require active basic photosensitive cells with an extended electronic circuit, which are considered in this paper. Such basic cells and the ADCs based on them have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level for linear and matrix structures. We show the design of the CL BC and photocurrent ADCs, their various possible implementations, and their simulations. We consider CL BCs for selection and rank preprocessing methods and a linear array of ADCs with conversion to binary codes and Gray codes. In contrast to our previous works, here we dwell more on analogue preprocessing schemes for the signals of neighboring cells. We show how the introduction of simple nodes based on current mirrors extends the range of functions performed by the image processor. Each channel of the structure consists of several digital-analog cells (DC) of 15-35 CMOS transistors. The number of DCs does not exceed the number of digits of the formed code, and for the iteration type only one DC, complemented by a selection-and-holding device (SHD), is required. One channel of the ADC with iteration is based on one DC-(G) and an SHD, and it has only 35 CMOS transistors. In such ADCs a parallel code, and also a serial-parallel output code, can easily be realized. The circuits and the simulation results of their design with OrCAD are shown. The supply voltage of the DC is 1.8÷3.3 V, the range of the input photocurrent is 0.1÷24 μA, and the conversion time is 20÷30 ns for 6-8 bit binary or Gray codes. The general power consumption of the ADC with iteration is only 50÷100 μW, if the

  15. A data base processor semantics specification package

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    A Semantics Specification Package (DBPSSP) for the Intel Data Base Processor (DBP) is defined. DBPSSP serves as a collection of cross assembly tools that allow the analyst to assemble request blocks on the host computer for passage to the DBP. The assembly tools discussed in this report may be effectively used in conjunction with a DBP compatible data communications protocol to form a query processor, precompiler, or file management system for the database processor. The source modules representing the components of DBPSSP are fully commented and included.

  16. Pulses processor modeling of the AR-PET tomograph

    International Nuclear Information System (INIS)

    Martinez Garbino, Lucio J.; Venialgo, E.; Estryk, Daniel S.; Verrastro, Claudio A.

    2009-01-01

    The detection of two gamma photons in time coincidence is the main process in Positron Emission Tomography. The front-end processor estimates the energy and the time stamp of each incident gamma photon, and the accuracy of this estimation improves the contrast and resolution of the final images. In this work a modeling tool for the full detection chain is described. Starting from the stochastic generation of light photons, followed by the photoelectron transit-time spread inside the photomultiplier, the preamplifier response and the digitization process were modeled, and finally several algorithms for energy and time-stamp estimation were evaluated and compared. (author)

  17. Globe hosts launch of new processor

    CERN Multimedia

    2006-01-01

    Launch of the quadcore processor chip at the Globe. On 14 November, in a series of major media events around the world, the chip-maker Intel launched its new 'quadcore' processor. For the regions of Europe, the Middle East and Africa, the day-long launch event took place in CERN's Globe of Science and Innovation, with over 30 journalists in attendance, coming from as far away as Johannesburg and Dubai. CERN was a significant choice for the event: the first tests of this new generation of processor in Europe had been made at CERN over the preceding months, as part of CERN openlab, a research partnership with leading IT companies such as Intel, HP and Oracle. The event also provided the opportunity for the journalists to visit ATLAS and the CERN Computer Centre. The strategy of putting multiple processor cores on the same chip, which has been pursued by Intel and other chip-makers in the last few years, represents an important departure from the more traditional improvements in the sheer speed of such chips. ...

  18. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Iliopoulos, AS; Sun, X [Duke University, Durham, NC (United States); Pitsianis, N [Aristotle University of Thessaloniki (Greece); Duke University, Durham, NC (United States); Yin, FF; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub

  19. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    International Nuclear Information System (INIS)

    Iliopoulos, AS; Sun, X; Pitsianis, N; Yin, FF; Ren, L

    2015-01-01

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub
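
    A minimal sketch of the idea described in the two records above, namely splitting projection operators into a static, pre-computable part and a dynamic part. It assumes the static geometry can be stored as a sparse system matrix; this is an illustration only, not the LIVE/GPU implementation, and the function names and sizes are hypothetical.

    # Illustration only: pre-compute the geometry-dependent projector once, then
    # reuse it every registration iteration so only the dynamic volume data moves.
    import numpy as np
    from scipy.sparse import random as sparse_random

    def precompute_projector(n_pixels, n_voxels, seed=0):
        # Stand-in for the ray/voxel weight matrix built from the treatment-plan
        # geometry; in practice this would come from ray tracing, not random data.
        return sparse_random(n_pixels, n_voxels, density=0.001,
                             random_state=seed, format="csr")

    A = precompute_projector(n_pixels=128 * 128, n_voxels=32 ** 3)  # static part
    volume = np.zeros(32 ** 3, dtype=np.float32)                    # dynamic part

    # Each iteration, the forward projection reduces to one sparse mat-vec.
    projection = A @ volume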

  20. Analysis of DuPont and Kodak duplicating films and chemistries in a Fultron spray processor

    Science.gov (United States)

    Weinstein, M. S.

    1972-01-01

    A test program was conducted with DuPont duplicating film type SR 112 and SCOLOR developer and Kodak duplicating film types 2430, 2422, and FE 2628 (SO-467) and MX-641 developer to determine the sensitometric and image quality characteristics of these materials when used with a Fultron spray processor. The test results show that the SCOLOR developer foams excessively in the Fultron processor whether or not an antifoaming agent is added. The Kodak type FE 2628 film with MX-641 chemistry had the longest linear Log E range at a gamma of 1.0. Sensitometric curves and granularity traces for all film/process combinations tested are included.

  1. Identifying Onboarding Heuristics for Free-to-Play Mobile Games

    DEFF Research Database (Denmark)

    Thomsen, Line Ebdrup; Weigert Petersen, Falko; Drachen, Anders

    2016-01-01

    A set of heuristics for the design of onboarding phases in mobile games is presented. The heuristics are identified through a lab-based mixed-methods experiment, utilizing lightweight psycho-physiological measures together with self-reported player responses, across three titles that cross the genres...... of puzzle games, base builders and arcade games, and utilize different onboarding phase design approaches. Results showcase how heuristics can be used to design engaging onboarding phases in mobile games....

  2. XL-100S microprogrammable processor

    International Nuclear Information System (INIS)

    Gorbunov, N.V.; Guzik, Z.; Sutulin, V.A.; Forytski, A.

    1983-01-01

    The XL-100S microprogrammable processor providing the multiprocessor operation mode in the XL system crate is described. The processor meets the EUR 6500 CAMAC standards, addresses up to 4 Mbyte of memory, and interacts with 7 CAMAC branches. Eight external requests initiate operations preset by a sequence of microcommands held in a memory of up to 64 kwords of 32 bits. The microprocessor architecture allows one to emulate the instruction sets of most mini- or micro-computers, including floating-point operations. The XL-100S processor may be used in various branches of experimental physics: for control of physics experiment apparatus, fast selection of useful physical events, organization of input/output operations, organization of direct memory access, etc. The Am2900 microprocessor set is used as the elementary base. The device is made in the form of a single-width CAMAC module

  3. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    Science.gov (United States)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices of special and of general structure are considered, together with architectures realizing matrix-vector, matrix-matrix, and triple-matrix products. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  4. Simulation of a processor switching circuit with APLSV

    International Nuclear Information System (INIS)

    Dilcher, H.

    1979-01-01

    The report describes the simulation of a processor switching circuit with APL. Furthermore, an APL function is presented that simulates a processor in an assembly-like language. Together they serve as a tool for studying processor properties. By means of the programming function it is also possible to program other simulated processors. The processor is to be used for real-time processing of data from high energy physics experiments. The data are already offered to the computer in digitized form. A typical data rate is 10 kB/s. The data are structured in blocks; each block is 1 kB wide and independent of the others. A processor has to decide whether the block data belong to an event that is part of the background noise and can therefore be discarded, or whether the data should be saved for a later evaluation. (orig./WB) [de

  5. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  6. The Heidelberg POLYP - a flexible and fault-tolerant poly-processor

    International Nuclear Information System (INIS)

    Maenner, R.; Deluigi, B.

    1981-01-01

    The Heidelberg poly-processor system POLYP is described. It is intended to be used in nuclear physics for reprocessing of experimental data, in high energy physics as a second-stage trigger processor, and generally in other applications requiring high computing power. The POLYP system consists of any number of I/O-processors, processor modules (possibly of different types), global memory segments, and a host processor. All modules (up to several hundred) are connected by a multiple common-data-bus system; all processors, additionally, by a multiple sync-bus system for processor/task scheduling. All hardware and software is designed to be decentralized and free of bottlenecks. Most hardware faults, such as single-bit errors in memory or multi-bit errors during transfers, are automatically corrected. Defective modules, buses, etc., can be removed with only a graceful degradation of the system throughput. (orig.)

  7. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  8. Video rate morphological processor based on a redundant number representation

    Science.gov (United States)

    Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.

    1992-03-01

    This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology, the umbra transform and threshold decomposition, has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with base 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional twos complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit-level systolic array. Individual processing units and small memory elements form a pipeline; the memory elements store the current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays from Xilinx. The paper also justifies a new approach to logic design: decomposition of Boolean functions instead of Boolean minimization.
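
    For readers unfamiliar with the operation being accelerated, a plain software sketch of grayscale dilation with a flat structuring element is given below; the processor described above implements this with digit-level systolic arrays and signed-digit arithmetic, none of which is reproduced here.

    # Reference (slow) software version of grayscale dilation: max of image + SE
    # over each window. The hardware does this at video rate; this is clarity only.
    import numpy as np

    def gray_dilate(image, se):
        h, w = se.shape
        py, px = h // 2, w // 2
        padded = np.pad(image, ((py, py), (px, px)), mode="edge").astype(np.int32)
        out = np.empty(image.shape, dtype=np.int32)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                out[y, x] = np.max(padded[y:y + h, x:x + w] + se)
        return out

    img = np.random.randint(0, 256, size=(16, 16))
    flat_se = np.zeros((3, 3), dtype=np.int32)
    dilated = gray_dilate(img, flat_se)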

  9. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods were of the same order and agreed to within 50%. The integrated interconnect was investigated in an optoelectronic processing architecture for a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
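
    The Gaussian-noise BER estimate mentioned above is commonly expressed through the Q-factor of the eye diagram; the short sketch below uses that textbook formulation, which may differ in detail from the authors' exact method, and the numbers are invented.

    # Textbook Q-factor estimate of BER from eye-diagram rail statistics.
    import math

    def ber_from_eye(mu1, sigma1, mu0, sigma0):
        # mu1/sigma1: mean and std of the logical-one rail; mu0/sigma0: zero rail.
        q = (mu1 - mu0) / (sigma1 + sigma0)
        return 0.5 * math.erfc(q / math.sqrt(2.0))

    print(ber_from_eye(mu1=1.0, sigma1=0.08, mu0=0.0, sigma0=0.07))  # ~1e-11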

  10. A dedicated line-processor as used at the SHF

    International Nuclear Information System (INIS)

    Bevan, A.V.; Hatley, R.W.; Price, D.R.; Rankin, P.

    1985-01-01

    A hardwired trigger processor was used at the SLAC Hybrid Facility to find evidence for charged tracks originating from the fiducial volume of a 40'' rapid-cycling bubble chamber. Straight-line projections of these tracks in the plane perpendicular to the applied magnetic field were searched for using data from three sets of proportional wire chambers (PWC). This information was made directly available to the processor by means of a special digitizing card. The results memory of the processor simulated read-only memory in a 168/E processor and was accessible by it. The 168/E controlled the issuing of a trigger command to the bubble chamber flash tubes. The same design of digitizer card used by the line processor was incorporated into the 168/E, again as read-only memory, which allowed it access to the raw data for continual monitoring of trigger integrity. The design logic of the trigger processor was verified by running real PWC data through a FORTRAN simulation of the hardware. This enabled the debugging to become highly automated, since a step-by-step, computer-controlled comparison of processor registers to simulation predictions could be made

  11. Defense Threat Reduction Agency > Careers > Onboarding > Special Programs

    Science.gov (United States)

    Overview of the agency's onboarding resources (Before You Report, Sponsor Program, Getting Here) and of the family-friendly Work/Life programs and policies designed to help employees and Service members balance work and family responsibilities.

  12. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    OpenAIRE

    Rached Tourki; M. Machhout; B. Bouallegue; M. Atri; M. Zeghid; D. Dia

    2010-01-01

    In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with DWT and encryption with AES are each well known; however, linking these two designs to achieve secure video coding is novel. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, which is implemented using Huffm...

  13. Plans for Selection and In-Situ Investigation of Return Samples by the Supercam Instrument Onboard the Mars 2020 Rover

    Science.gov (United States)

    Wiens, R. C.; Maurice, S.; Mangold, N.; Anderson, R.; Beyssac, O.; Bonal, L.; Clegg, S.; Cousin, A.; DeFlores, L.; Dromart, G.; Fisher, W.; Forni, O.; Fouchet, T.; Gasnault, O.; Grotzinger, J.; Johnson, J.; Martinez-Frias, J.; McLennan, S.; Meslin, P.-Y.; Montmessin, F.; Poulet, F.; Rull, F.; Sharma, S.

    2018-04-01

    The SuperCam instrument onboard the Mars 2020 rover will provide a complementary set of analyses, with IR reflectance and Raman spectroscopy for mineralogy, LIBS for chemistry, and a color imager, in order to investigate in situ the samples to be selected for return.

  14. Aerial Logistics Management for Carrier Onboard Delivery

    Science.gov (United States)

    2016-09-01

    Naval Postgraduate School thesis by Samuel L. Chen, September 2016. Carrier onboard delivery (COD) is the use of aircraft to transport people and cargo from a forward logistics site (FLS) to a carrier strike group (CSG). The goal of

  15. Sojourn time tails in processor-sharing systems

    NARCIS (Netherlands)

    Egorova, R.R.

    2009-01-01

    The processor-sharing discipline was originally introduced as a modeling abstraction for the design and performance analysis of the processing unit of a computer system. Under the processor-sharing discipline, all active tasks are assumed to be processed simultaneously, receiving an equal share of

  16. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

    A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computer amplification linearly proportional to the number of executing processors

  17. Optical Array Processor: Laboratory Results

    Science.gov (United States)

    Casasent, David; Jackson, James; Vaerewyck, Gerard

    1987-01-01

    A Space Integrating (SI) Optical Linear Algebra Processor (OLAP) is described and laboratory results on its performance in several practical engineering problems are presented. The applications include its use in the solution of a nonlinear matrix equation for optimal control and a parabolic Partial Differential Equation (PDE), the transient diffusion equation with two spatial variables. Frequency-multiplexed, analog and high accuracy non-base-two data encoding are used and discussed. A multi-processor OLAP architecture is described and partitioning and data flow issues are addressed.

  18. Multiprocessor Real-Time Scheduling with Hierarchical Processor Affinities

    OpenAIRE

    Bonifaci , Vincenzo; Brandenburg , Björn; D'Angelo , Gianlorenzo; Marchetti-Spaccamela , Alberto

    2016-01-01

    Many multiprocessor real-time operating systems offer the possibility to restrict the migrations of any task to a specified subset of processors by setting affinity masks. A notion of "strong arbitrary processor affinity scheduling" (strong APA scheduling) has been proposed; this notion avoids schedulability losses due to overly simple implementations of processor affinities. Due to potential overheads, strong APA has not been implemented so far in a real-time operat...

  19. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

    We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  20. Towards a Process Algebra for Shared Processors

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Andersen, Jacob; Løvengreen, Hans Henrik

    2002-01-01

    We present initial work on a timed process algebra that models sharing of processor resources allowing preemption at arbitrary points in time. This enables us to model both the functional and the timely behaviour of concurrent processes executed on a single processor. We give a refinement relation...

  1. High-speed packet filtering utilizing stream processors

    Science.gov (United States)

    Hummel, Richard J.; Fulp, Errin W.

    2009-04-01

    Parallel firewalls offer a scalable architecture for the next generation of high-speed networks. While these parallel systems can be implemented using multiple firewalls, the latest generation of stream processors can provide similar benefits with a significantly reduced latency due to locality. This paper describes how the Cell Broadband Engine (CBE), a popular stream processor, can be used as a high-speed packet filter. Results show the CBE can potentially process packets arriving at a rate of 1 Gbps with a latency of less than 82 μs. Performance depends on how well the packet filtering process is translated to the unique stream processor architecture. For example, the method used for transmitting data and control messages among the pseudo-independent processor cores has a significant impact on performance. Experimental results will also show the current limitations of a CBE operating system when used to process packets. Possible solutions to these issues will be discussed.

  2. Fast processor for dilepton triggers

    International Nuclear Information System (INIS)

    Katsanevas, S.; Kostarakis, P.; Baltrusaitis, R.

    1983-01-01

    We describe a fast trigger processor, developed for and used in Fermilab experiment E-537, for selecting high-mass dimuon events produced by negative pions and anti-protons. The processor finds candidate tracks by matching hit information received from drift chambers and scintillation counters, and determines their momenta. Invariant masses are calculated for all possible pairs of tracks and an event is accepted if any invariant mass is greater than some preselectable minimum mass. The whole process, accomplished within 5 to 10 microseconds, achieves up to a ten-fold reduction in trigger rate

  3. Experiment in Onboard Synthetic Aperture Radar Data Processing

    Science.gov (United States)

    Holland, Matthew

    2011-01-01

    Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and require too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time and writes out processed data in a form that is useful for analysis of the radar observations.
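
    One common Radiation Hardening by Software idea is redundant execution with majority voting; the toy sketch below illustrates that idea only and is not taken from the flight C code described above, which runs across Tilera cores and also protects shared memory.

    # Toy triple-modular-redundancy in software: run a pure computation three
    # times and vote, masking a single transient upset in any one copy.
    def vote3(a, b, c):
        if a == b or a == c:
            return a
        if b == c:
            return b
        return a  # all three disagree; a real system would flag and retry

    def tmr(compute, *args):
        results = [compute(*args) for _ in range(3)]
        return vote3(*results)

    checksum = tmr(sum, range(1000))  # e.g. protect a checksum computation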

  4. Design of RISC Processor Using VHDL and Cadence

    Science.gov (United States)

    Moslehpour, Saeid; Puliroju, Chandrasekhar; Abu-Aisheh, Akram

    The project concerns the development of a basic RISC processor. The processor is designed with a basic architecture consisting of internal modules: clock generator, memory, program counter, instruction register, accumulator, arithmetic and logic unit, and decoder. This processor is mainly intended for simple general-purpose tasks such as arithmetic operations, and can be developed further into a general-purpose processor by increasing the size of the instruction register. The processor is designed in VHDL using Xilinx 8.1i. The present project also serves as an application of the knowledge gained from past studies of the PSPICE program. The study shows how PSPICE can be used to simplify massive complex circuits designed in VHDL synthesis. The purpose of the project is to explore the designed RISC model piece by piece, examine and understand the input/output pins, and show how the VHDL synthesis code can be converted to a simplified PSPICE model. The project also serves as a collection of various research materials about the pieces of the circuit.

  5. A technique for on-board CT reconstruction using both kilovoltage and megavoltage beam projections for 3D treatment verification

    International Nuclear Information System (INIS)

    Yin Fangfang; Guan Huaiqun; Lu Wenkai

    2005-01-01

    Technologies for kilovoltage (kV) and megavoltage (MV) imaging in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, developing strategies to implement these technologies efficiently and effectively for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using a combination of kV and MV beam projections to improve data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager(TM) kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac(TM) 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic-reconstruction technique was used to reconstruct CT images involving either full, truncated, or a combination of both full and truncated projections. An adaptive reconstruction method was also applied, based on limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of the reconstructed kV/MV CT images were also investigated
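
    As background for the reconstruction method mentioned above, a bare-bones algebraic reconstruction technique (Kaczmarz) sweep is sketched below; the paper's multilevel-scheme ART over mixed kV/MV and truncated projections is considerably more involved, and the matrix sizes here are toy values.

    # One ART/Kaczmarz sweep: project the current estimate onto each ray equation.
    import numpy as np

    def art_sweep(A, p, x, relax=0.5):
        # A: (n_rays, n_voxels) system matrix, p: measured projections, x: volume.
        for i in range(A.shape[0]):
            ai = A[i]
            norm = ai @ ai
            if norm > 0.0:
                x = x + relax * (p[i] - ai @ x) / norm * ai
        return x

    rng = np.random.default_rng(1)
    A = rng.random((50, 100))
    p = A @ rng.random(100)       # simulated projections of a toy volume
    x = np.zeros(100)
    for _ in range(20):
        x = art_sweep(A, p, x)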

  6. Automation of On-Board Flightpath Management

    Science.gov (United States)

    Erzberger, H.

    1981-01-01

    The status of concepts and techniques for the design of onboard flight path management systems is reviewed. Such systems are designed to increase flight efficiency and safety by automating the optimization of flight procedures onboard aircraft. After a brief review of the origins and functions of such systems, two complementary methods are described for attacking the key design problem, namely, the synthesis of efficient trajectories. One method optimizes en route, the other optimizes terminal area flight; both methods are rooted in optimal control theory. Simulation and flight test results are reviewed to illustrate the potential of these systems for fuel and cost savings.

  7. Band-to-Band Misregistration of the Images of MODIS On-Board Calibrators and Its Impact to Calibration

    Science.gov (United States)

    Wang, Zhipeng; Xiong, Xiaoxiong

    2017-01-01

    The MODIS instruments aboard the Terra and Aqua satellites are radiometrically calibrated on-orbit with a set of onboard calibrators (OBC) including a solar diffuser (SD), a blackbody (BB) and a space view (SV) port through which the detectors can view dark space. As a whisk-broom scanning spectroradiometer, the thirty-six MODIS spectral bands are assembled in the along-scan direction on four focal plane assemblies (FPA). These bands capture images of the same target sequentially with the motion of a scan mirror. The images are then co-registered on board by delaying an appropriate band-dependent amount of time depending on the band locations on the FPA. While this co-registration mechanism works well for "far field" remote targets such as Earth view (EV) scenes or the Moon, noticeable band-to-band misregistration in the along-scan direction has been observed for near-field targets, in particular the OBCs. In this paper, the misregistration phenomenon is presented and analyzed. It is concluded that the root cause of the misregistration is that the rotating element of the instrument, the scan mirror, is displaced from the focus of the telescope primary mirror. The amount of the misregistration is proportional to the band location on the FPA and is inversely proportional to the distance between the target and the scan mirror. The impact of this misregistration on the calibration of MODIS bands is discussed. In particular, the calculation of the detector gain coefficient m1 of bands 8-16 (412 nm to 870 nm) is improved by up to 1.5% for Aqua MODIS.

  8. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    These proceedings contain the articles presented at the named conference. These concern hardware and software for vector and parallel processors, numerical methods and algorithms for the computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

  9. Autonomous Onboard Science Data Analysis for Comet Missions

    Science.gov (United States)

    Thompson, David R.; Tran, Daniel Q.; McLaren, David; Chien, Steve A.; Bergman, Larry; Castano, Rebecca; Doyle, Richard; Estlin, Tara; Lenda, Matthew

    2012-01-01

    Coming years will bring several comet rendezvous missions. The Rosetta spacecraft arrives at Comet 67P/Churyumov-Gerasimenko in 2014. Subsequent rendezvous might include a mission such as the proposed Comet Hopper with multiple surface landings, as well as Comet Nucleus Sample Return (CNSR) and Coma Rendezvous and Sample Return (CRSR). These encounters will begin to shed light on a population that, despite several previous flybys, remains mysterious and poorly understood. Scientists still have little direct knowledge of interactions between the nucleus and coma, their variation across different comets or their evolution over time. Activity may change on short timescales, so it is challenging to characterize with scripted data acquisition. Here we investigate automatic onboard image analysis that could act faster than round-trip light time to capture unexpected outbursts and plume activity. We describe one edge-based method for detecting comet nuclei and plumes, and test the approach on an existing catalog of comet images. Finally, we quantify benefits to specific measurement objectives by simulating a basic plume monitoring campaign.

  10. MPC Related Computational Capabilities of ARMv7A Processors

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2015-01-01

    In recent years, the mass market of mobile devices has pushed the demand for increasingly fast but cheap processors. ARM, the world leader in this sector, has developed the Cortex-A series of processors with focus on computationally intensive applications. If properly programmed, these processors...... are powerful enough to solve the complex optimization problems arising in MPC in real-time, while keeping the traditional low-cost and low-power consumption. This makes these processors ideal candidates for use in embedded MPC. In this paper, we investigate the floating-point capabilities of Cortex A7, A9...... and A15 and show how to exploit the unique features of each processor to obtain the best performance, in the context of a novel implementation method for the linear-algebra routines used in MPC solvers. This method adapts high-performance computing techniques to the needs of embedded MPC. In particular...

  11. Using Onboard Telemetry for MAVEN Orbit Determination

    Science.gov (United States)

    Lam, Try; Trawny, Nikolas; Lee, Clifford

    2013-01-01

    Determination of the spacecraft state has traditionally been done using radiometric tracking data before and after the atmospheric drag pass. This paper describes our approach and results for including onboard telemetry measurements, in addition to radiometric observables, to refine the reconstructed trajectory estimate for the Mars Atmosphere and Volatile Evolution Mission (MAVEN). Uncertainties in the Mars atmosphere models, combined with non-continuous tracking, degrade navigation accuracy, making MAVEN a key candidate for using onboard telemetry data to help complement its orbit determination process.

  12. Evaluation of MERIS Chlorophyll-a Retrieval Processors in a Complex Turbid Lake Kasumigaura over a 10-Year Mission

    Directory of Open Access Journals (Sweden)

    Salem Ibrahim Salem

    2017-10-01

    Full Text Available The chlorophyll-a (Chla) products of seven processors developed for the Medium Resolution Imaging Spectrometer (MERIS) sensor were evaluated. The seven processors, based on a neural network and band height, were assessed over an optically complex water body with Chla concentrations of 8.10–187.40 mg∙m−3 using 10-year MERIS archival data. These processors were adopted for the Ocean and Land Color Instrument (OLCI) sensor. Results indicated that the four processors based on band height (i.e., the Maximum Chlorophyll Index (MCI_L1) and Fluorescence Line Height (FLH_L1)) and on a neural network (i.e., Eutrophic Lake (EUL) and Case 2 Regional (C2R)) possessed reasonable retrieval accuracy, with a coefficient of determination (R2) in the range of 0.42–0.65. However, these processors underestimated the retrieved Chla > 100 mg∙m−3, reflecting the limitation of the band-height processors in eliminating the influence of non-phytoplankton matter and highlighting the need to train the neural network for highly turbid waters. MCI_L1 outperformed the other processors during the calibration and validation stages (R2 = 0.65, root mean square error (RMSE) = 22.18 mg∙m−3, mean absolute relative error (MARE) = 36.88%). In contrast, the results from the Boreal Lake (BOL) and Free University of Berlin (FUB) processors demonstrated their inadequacy to accurately retrieve Chla concentrations > 50 mg∙m−3, mainly due to the limitation of the training datasets, which resulted in a high MARE for BOL (56.20%) and FUB (57.00%). Mapping the spatial distribution of Chla concentrations across Lake Kasumigaura using the seven processors showed that all processors except BOL and FUB were able to accurately capture the Chla distribution for moderate and high Chla concentrations. In addition, the MCI_L1 and C2R processors were evaluated over 10 years of monthly measured Chla, as they demonstrated the best retrieval accuracy of their respective groups (i.e., band height and neural network
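
    For reference, the accuracy statistics quoted above can be computed as in the short sketch below, using their usual definitions; the paper's exact formulas are not restated here and the sample values are invented.

    # Usual definitions of RMSE, MARE (%) and R2 for Chla retrieval validation.
    import numpy as np

    def rmse(obs, est):
        return float(np.sqrt(np.mean((est - obs) ** 2)))

    def mare(obs, est):
        return float(np.mean(np.abs(est - obs) / obs) * 100.0)

    def r_squared(obs, est):
        ss_res = np.sum((obs - est) ** 2)
        ss_tot = np.sum((obs - np.mean(obs)) ** 2)
        return float(1.0 - ss_res / ss_tot)

    obs = np.array([12.0, 35.0, 80.0, 150.0])   # in-situ Chla, mg m-3
    est = np.array([10.0, 40.0, 70.0, 120.0])   # retrieved Chla, mg m-3
    print(rmse(obs, est), mare(obs, est), r_squared(obs, est))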

  13. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    The Air Base Technologies Division of the Air Force Research Laboratory has developed a logistic fuel processor that removes the sulfur content of the fuel and in the process converts logistic fuel...

  14. MAP3D: a media processor approach for high-end 3D graphics

    Science.gov (United States)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted at digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline, allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved, for instance using real-time decoded imagery for texturing 3D objects. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  15. SuperAGILE onboard electronics and ground test instrumentation

    International Nuclear Information System (INIS)

    Pacciani, Luigi; Morelli, Ennio; Rubini, Alda; Mastropietro, Marcello; Porrovecchio, Geiland; Costa, Enrico; Del Monte, Ettore; Donnarumma, Immacolata; Evangelista, Yuri; Feroci, Marco; Lazzarotto, Francesco; Rapisarda, Massimo; Soffitta, Paolo

    2007-01-01

    In this paper we describe the electronics of the SuperAGILE X-ray imager on board the AGILE satellite and the instrumentation developed to test and improve the front-end and digital electronics of the flight model of the imager. Although the working principle of the instrument is very well established, and the conceptual scheme simple, the budget and mechanical constraints of the AGILE small mission made necessary the introduction of new elements in SuperAGILE, regarding both the mechanics and the electronics. In fact the instrument is contained in a ∼44x44x16 cm3 volume, but the required performance is quite ambitious, leading us to equip a sensitive area of ∼1350 cm2 with 6144 silicon μstrip detectors with a pitch of 121 μm and a total length of ∼18.2 cm. The result is a very light and low-power imager with good sensitivity (∼15 mCrab in 1 day in 15-45 keV), high angular resolution (6 arcmin) and gross spectral resolution. The test equipment is versatile, and can be easily modified to test front-end electronics based on self-triggered, data-driven and sparse-readout ASICs such as the XA family chips

  16. Real time monitoring of electron processors

    International Nuclear Information System (INIS)

    Nablo, S.V.; Kneeland, D.R.; McLaughlin, W.L.

    1995-01-01

    A real time radiation monitor (RTRM) has been developed for monitoring the dose rate (current density) of electron beam processors. The system provides continuous monitoring of processor output, electron beam uniformity, and an independent measure of operating voltage or electron energy. In view of the device's ability to replace labor-intensive dosimetry in verification of machine performance on a real-time basis, its application to providing archival performance data for in-line processing is discussed. (author)

  17. Lunar Landing Trajectory Design for Onboard Hazard Detection and Avoidance

    Science.gov (United States)

    Paschall, Steve; Brady, Tye; Sostaric, Ron

    2009-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) Project is developing the software and hardware technology needed to support a safe and precise landing for the next generation of lunar missions. ALHAT provides this capability through terrain-relative navigation measurements to enhance global-scale precision, an onboard hazard detection system to select safe landing locations, and an Autonomous Guidance, Navigation, and Control (AGNC) capability to process these measurements and safely direct the vehicle to a landing location. This paper focuses on the key trajectory design issues relevant to providing an onboard Hazard Detection and Avoidance (HDA) capability for the lander. Hazard detection can be accomplished by the crew visually scanning the terrain through a window, a sensor system imaging the terrain, or some combination of both. For ALHAT, this hazard detection activity is provided by a sensor system, which either augments the crew's perception or entirely replaces the crew in the case of a robotic landing. Detecting hazards influences the trajectory design by requiring the proper perspective, range to the landing site, and sufficient time to view the terrain. Following this, the trajectory design must provide additional time to process this information and make a decision about where to safely land. During the final part of the HDA process, the trajectory design must provide sufficient margin to enable a hazard avoidance maneuver. In order to demonstrate the effects of these constraints on the landing trajectory, a tradespace of trajectory designs was created for the initial ALHAT Design Analysis Cycle (ALDAC-1) and each case evaluated with these HDA constraints active. The ALHAT analysis process, described in this paper, narrows down this tradespace and subsequently better defines the trajectory design needed to support onboard HDA. Future ALDACs will enhance this trajectory design by balancing these issues and others in an overall system

  18. Fast track trigger processor for the OPAL detector at LEP

    Energy Technology Data Exchange (ETDEWEB)

    Carter, A A; Carter, J R; Ward, D R; Heuer, R D; Jaroslawski, S; Wagner, A

    1986-09-20

    A fast hardware track trigger processor being built for the OPAL experiment is described. The processor will analyse data from the central drift chambers of OPAL to determine whether any tracks come from the interaction region, and thereby eliminate background events. The processor will find tracks over a large angular range, |cos θ| ≲ 0.95. The design of the processor is described, together with a brief account of its hardware implementation for OPAL. The results of feasibility studies are also presented.

  19. Multi-processor data acquisition and monitoring systems for particle physics

    International Nuclear Information System (INIS)

    White, V.; Burch, B.; Eng, K.; Heinicke, P.; Pyatetsky, M.; Ritchie, D.

    1983-01-01

    A high speed distributed processing system, using PDP-11 and VAX processors, is being developed at Fermilab. The acquisition of data is done using one or more PDP-11s. Additional processors are connected to provide either data logging or extra data analysis capabilities. Within this framework, functional interchangeability of PDP-11 and VAX processors and of the PDP-11 operating systems, RT-11 and RSX-11M, has been maintained. Inter-processor connections have been implemented in a general way using the 5 megabit DR11-W hardware currently selected for the purpose. Using this approach the authors have been able to make use of several existing data acquisition and analysis packages, such as RT/MULTI, in a multi-processor system

  20. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    Science.gov (United States)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, there is an increase in the computational requirements for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large numbers of data elements are processed in parallel in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architectures on SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is a language for instruction set architecture description that enables rapid modelling at the architectural level. In order to evaluate the viability of our proposed processor, three common baseband functions, FFT, FIR digital filter and matrix multiplication, have been mapped onto the SDR platform. Analytical results showed that the SDR processor achieved up to a 47.1% performance boost relative to the comparison processor.

  1. Onboard Autonomous Corrections for Accurate IRF Pointing.

    Science.gov (United States)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments has decreased from tens of arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor technology with enhanced signal-to-noise ratios, and partly due to improved processing electronics which allow for more sophisticated and faster signal processing. However, the main reason for the increased precision is the application of onboard autonomy, which apart from simple outlier rejection also allows for removal of "false positive" answers and other "unexpected" noise sources that would otherwise degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and by ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely autonomous recovery from lost-in-space, where the attitude instrument, without a priori knowledge, derives the absolute attitude, i.e. in IRF coordinates, within a fraction of a second. Combined with precise orbital state or position data, the absolute attitude information opens multiple ways to improve mission performance, either by reducing operations costs, by increasing pointing accuracy, by reducing mission expendables, or by providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operations. The autonomy encompasses all direct steps from automatic health checkout at power-on, over fully automatic SEU and SEL handling and proton-induced sparkle removal, to recovery from "lost in space" and optical disturbance detection and handling. But apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud

  2. Optical backplane interconnect switch for data processors and computers

    Science.gov (United States)

    Hendricks, Herbert D.; Benz, Harry F.; Hammer, Jacob M.

    1989-01-01

    An optoelectronic integrated device design is reported which can be used to implement an all-optical backplane interconnect switch. The switch is sized to accommodate an array of processors and memories suitable for direct replacement into the basic avionic multiprocessor backplane. The optical backplane interconnect switch is also suitable for direct replacement of the PI bus traffic switch and at the same time, suitable for supporting pipelining of the processor and memory. The 32 bidirectional switchable interconnects are configured with broadcast capability for controls, reconfiguration, and messages. The approach described here can handle a serial interconnection of data processors or a line-to-link interconnection of data processors. An optical fiber demonstration of this approach is presented.

  3. 49 CFR 395.15 - Automatic on-board recording devices.

    Science.gov (United States)

    2010-10-01

    ... information concerning on-board system sensor failures and identification of edited data. Such support systems... driving today; (iv) Total hours on duty for the 7 consecutive day period, including today; (v) Total hours...-driver operation; (7) The on-board recording device/system identifies sensor failures and edited data...

  4. Accelerating molecular dynamic simulation on the cell processor and Playstation 3.

    Science.gov (United States)

    Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S

    2009-01-30

    Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase its power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.

  5. Fast digital processor for event selection according to particle number difference

    International Nuclear Information System (INIS)

    Basiladze, S.G.; Gus'kov, B.N.; Li Van Sun; Maksimov, A.N.; Parfenov, A.N.

    1978-01-01

    A fast digital processor for a magnetic spectrometer is described. It is used in experimental searches for charmed particles. The basic purpose of the processor is to discriminate events on the difference in the number of particles passing through two proportional chambers (PC). The processor consists of three units for detecting signals from the PCs, and a binary coder. The number of inputs of the processor is 32 for the first PC and 64 for the second. The difference in the number of particles discriminated is from 0 to 8. The resolution time is 180 ns. The processor is built to the CAMAC standard
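
    The trigger decision itself can be stated in a few lines; the sketch below is a software analogue of counting hit wires in the two chambers and comparing their difference against a preset window (the hardware does this within the 180 ns resolution time quoted above; the example values are made up).

    # Software analogue of the particle-number-difference trigger decision.
    def accept_event(pc1_hits, pc2_hits, max_diff=8):
        # pc1_hits / pc2_hits: per-wire booleans (32 and 64 wires in the hardware).
        n1 = sum(bool(h) for h in pc1_hits)
        n2 = sum(bool(h) for h in pc2_hits)
        return abs(n2 - n1) <= max_diff

    pc1 = [False] * 30 + [True] * 2   # 2 hits in the first chamber
    pc2 = [False] * 60 + [True] * 4   # 4 hits in the second chamber
    print(accept_event(pc1, pc2))     # True: difference of 2 is inside the window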

  6. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    ... to light gases then steam reform the light gases into hydrogen rich stream. This report documents the efforts in developing a fuel processor capable of providing hydrogen to a 3kW fuel cell stack...

  7. Possibilities of reduction of the on-board energy for an innovative subway

    OpenAIRE

    Allègre, A-L.; Barrade, P.; Delarue, P.; Bouscayrol, A.; Chattot, E.; El-Fassi, S.

    2009-01-01

    An innovative subway has been proposed that uses supercapacitors as the energy source. In this paper, different possibilities to reduce the on-board stored energy are presented, in order to downsize the on-board energy storage subsystem. Special attention is paid to the influence on the on-board stored energy of a feeding rail extension or a downward slope at the beginning of the interstation. A map is built to facilitate the selection of the solution that most reduces the on-board energy.

  8. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  9. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
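
    To give a flavour of the fine-grained, per-cell programs such arrays execute in lockstep, a toy synchronous neighbourhood update is sketched below in NumPy; APRON itself targets and emulates actual CPA hardware, which this sketch does not represent.

    # Toy cellular-processor-array step: every cell blends toward the mean of its
    # four neighbours simultaneously (a diffusion-like SIMD update).
    import numpy as np

    def cpa_step(grid, alpha=0.2):
        up, down = np.roll(grid, 1, axis=0), np.roll(grid, -1, axis=0)
        left, right = np.roll(grid, 1, axis=1), np.roll(grid, -1, axis=1)
        return (1 - alpha) * grid + alpha * (up + down + left + right) / 4.0

    state = np.zeros((64, 64))
    state[32, 32] = 1.0
    for _ in range(10):
        state = cpa_step(state)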

  10. The EGSE science software of the IBIS instrument on-board INTEGRAL satellite

    International Nuclear Information System (INIS)

    La Rosa, Giovanni; Fazio, Giacomo; Segreto, Alberto; Gianotti, Fulvio; Stephen, John; Trifoglio, Massimo

    2000-01-01

    IBIS (Imager on Board INTEGRAL Satellite) is one of the key instruments on board the INTEGRAL satellite, the follow-up mission to the high-energy missions CGRO and Granat. The EGSE of IBIS is composed of a Satellite Interface Simulator, a Control Station and a Science Station. The solutions adopted for the architectural design of the software running on the Science Station are described here. Some preliminary results are used to show the science functionality that made it possible to understand the instrument behaviour throughout the test and calibration campaigns of the IBIS Engineering Model

  11. Control structures for high speed processors

    Science.gov (United States)

    Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.

    1982-01-01

    A special processor was designed to function as a Reed-Solomon decoder with a throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented by the hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special-purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second. This data rate is achieved through the use of pipelined and distributed processing. It can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed-Solomon encoder.

  12. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    Science.gov (United States)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz, and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the onboard Digital Relief Maps (DRMs) will be used to determine the vertical width of the telemetry band about the signal. University of Texas-Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
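
    The histogram-based signal finding described above lends itself to a compact illustration. The following sketch is a simplified, hypothetical version of the idea (bin the photon heights, estimate the background level, and flag bins that exceed it by several Poisson standard deviations); the bin width, threshold and function names are illustrative assumptions, not the flight algorithm.

        import numpy as np

        def find_signal_band(heights_m, bin_width_m=5.0, n_sigma=5.0):
            """Locate the surface return in a column of photon heights.

            Histogram the heights, estimate the background from the median bin
            count, and flag bins whose counts exceed that background by n_sigma
            Poisson standard deviations.  Returns the (low, high) height limits
            of the band containing significant bins, or None if nothing is found.
            """
            edges = np.arange(heights_m.min(), heights_m.max() + bin_width_m, bin_width_m)
            counts, edges = np.histogram(heights_m, bins=edges)
            background = np.median(counts)                    # noise photons spread evenly
            threshold = background + n_sigma * np.sqrt(max(background, 1.0))
            significant = np.where(counts > threshold)[0]
            if significant.size == 0:
                return None
            return edges[significant.min()], edges[significant.max() + 1]

        # Toy column: uniform noise photons plus a surface echo near 1200 m.
        rng = np.random.default_rng(0)
        noise = rng.uniform(0.0, 3000.0, size=2000)
        surface = rng.normal(1200.0, 2.0, size=300)
        band = find_signal_band(np.concatenate([noise, surface]))
        print("telemetry band (m):", band)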

  13. Fuel options for the fuel cell vehicle: hydrogen, methanol or gasoline?

    International Nuclear Information System (INIS)

    Thomas, C.E.; James, B.D.; Lomax, F.D. Jr.; Kuhn, I.F. Jr.

    2000-01-01

    Fuel cell vehicles can be powered directly by hydrogen or, with an onboard chemical processor, other liquid fuels such as gasoline or methanol. Most analysts agree that hydrogen is the preferred fuel in terms of reducing vehicle complexity, but one common perception is that the cost of a hydrogen infrastructure would be excessive. According to this conventional wisdom, the automobile industry must therefore develop complex onboard fuel processors to convert methanol, ethanol or gasoline to hydrogen. We show here, however, that the total fuel infrastructure cost to society including onboard fuel processors may be less for hydrogen than for either gasoline or methanol, the primary initial candidates currently under consideration for fuel cell vehicles. We also present the local air pollution and greenhouse gas advantages of hydrogen fuel cell vehicles compared to those powered by gasoline or methanol. (Author)

  14. First light of CaSSIS: the stereo surface imaging system onboard the ExoMars TGO

    Science.gov (United States)

    Gambicorti, L.; Piazza, D.; Pommerol, A.; Roloff, V.; Gerber, M.; Ziethe, R.; El-Maarry, M. R.; Weigel, T.; Johnson, M.; Vernani, D.; Pelo, E.; Da Deppo, V.; Cremonese, G.; Ficai Veltroni, I.; Thomas, N.

    2017-09-01

    The Colour and Stereo Surface Imaging System (CaSSIS) camera was launched on 14 March 2016 onboard the ExoMars Trace Gas Orbiter (TGO) and it is currently in cruise to Mars. The CaSSIS high resolution optical system is based on a TMA telescope (Three-Mirror Anastigmat configuration) with a 4th powered folding mirror compacting the CFRP (Carbon Fiber Reinforced Polymer) structure. The camera EPD (Entrance Pupil Diameter) is 135 mm and the focal length is 880 mm, giving an F# 6.5 system; the wavelength range covered by the instrument is 400-1100 nm. The optical system is designed to have distortion of less than 2%, and a worst case Modulation Transfer Function (MTF) of 0.3 at the detector Nyquist spatial frequency (i.e. 50 lp/mm). The Focal Plane Assembly (FPA), including the detector, is a spare from the Simbio-Sys instrument of the Italian Space Agency (ASI). Simbio-Sys will fly on ESA's BepiColombo mission to Mercury in 2018. The detector, developed by Raytheon Vision Systems, is a 2k×2k hybrid Si-PIN array with 10 μm pixel pitch. The detector allows snapshot operation at a read-out rate of 5 Mpx/s with 14-bit resolution. CaSSIS will operate in a push-frame mode with a Filter Strip Assembly (FSA), placed directly above the detector sensitive area, selecting 4 colour bands. The scale from the nominal orbit is foreseen to be 4.6 m/px at a slant angle, producing frames of 9.4 km × 6.3 km on the Martian surface and covering a Field of View (FoV) of 1.33° cross-track × 0.88° along-track. The University of Bern was in charge of the full instrument integration as well as the characterisation of the focal plane of CaSSIS. The paper will present an overview of CaSSIS and the optical performance of the telescope and the FPA. The preliminary results of the on-ground calibration campaign and the first light obtained during the commissioning and pointing campaign (April 2016) will be described in detail. The instrument is acquiring images with an average Point Spread
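
    The quoted optical parameters can be cross-checked with a few lines of arithmetic. In the sketch below, the focal length, aperture, pixel pitch and detector format come from the abstract; the ~400 km nominal TGO orbit altitude is an assumption introduced here only for the ground-scale estimate.

        # Quick consistency check of the CaSSIS numbers quoted above.
        focal_length_mm = 880.0
        aperture_mm     = 135.0
        pixel_pitch_um  = 10.0
        altitude_km     = 400.0            # assumed nominal TGO altitude (not in the abstract)

        f_number    = focal_length_mm / aperture_mm
        ifov_urad   = pixel_pitch_um / (focal_length_mm * 1e3) * 1e6   # pixel angular size
        ground_px_m = ifov_urad * 1e-6 * altitude_km * 1e3             # nadir pixel scale

        print(f"F-number        : {f_number:.1f}   (quoted: 6.5)")
        print(f"IFOV            : {ifov_urad:.1f} urad/px")
        print(f"Ground scale    : {ground_px_m:.1f} m/px (quoted: ~4.6 m/px at slant)")
        print(f"Swath (2048 px) : {ground_px_m * 2048 / 1e3:.1f} km (quoted: 9.4 km cross-track)")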

  15. Evaluation of the Intel Westmere-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing the 6-core “Westmere-EP” processor with Intel’s previous generation of the same microarchitecture, the “Nehalem-EP”. The former is produced in a new 32nm process, the latter in 45nm. Both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores via Simultaneous Multi-Threading (SMT), the cache sizes available, the memory configuration installed, as well...

  16. Keystone Business Models for Network Security Processors

    OpenAIRE

    Arthur Low; Steven Muegge

    2013-01-01

    Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor...

  17. Development of level 2 processor for the readout of TMC

    International Nuclear Information System (INIS)

    Arai, Y.; Ikeno, M.; Murata, T.; Sudo, F.; Emura, T.

    1995-01-01

    We have developed a prototype 8-bit processor for level 2 data processing for the Time Memory Cell (TMC). The first prototype processor runs successfully with an 18 MHz clock. Operation at the same clock frequency as the TMC (30 MHz) can easily be achieved with simple modifications. Although the processor is a very primitive one, it shows powerful performance and flexibility. To realize a compact TMC/L2P (Level 2 Processor) system, it is better to include the microcode memory within the chip. In this case, encoding logic for the microcode must be included to reduce the microcode memory. (J.P.N.)

  18. Review of trigger and on-line processors at SLAC

    International Nuclear Information System (INIS)

    Lankford, A.J.

    1984-07-01

    The role of trigger and on-line processors in reducing data rates to manageable proportions in e+e− physics experiments is defined not by high physics or background rates, but by the large event sizes of the general-purpose detectors employed. The rate of e+e− annihilation is low, and backgrounds are not high; yet the number of physics processes which can be studied is vast and varied. This paper begins by briefly describing the role of trigger processors in the e+e− context. The usual flow of the trigger decision process is illustrated with selected examples of SLAC trigger processing. The features of triggering at the SLC are mentioned, along with the trigger processing plans of the two SLC detectors: the Mark II and the SLD. The most common on-line processors at SLAC, the BADC, the SLAC Scanner Processor, the SLAC FASTBUS Controller, and the VAX CAMAC Channel, are discussed. Uses of the 168/E, 3081/E, and FASTBUS VAX processors are mentioned. The manner in which these processors are interfaced and the functions they serve on line are described. Finally, the accelerator control system for the SLC is outlined. This paper is a survey in nature and hence relies heavily upon references to previous publications for detailed descriptions of the work mentioned here. 27 references, 9 figures, 1 table

  19. FY1995 study of design methodology and environment of high-performance processor architectures; 1995 nendo koseino processor architecture sekkeiho to sekkei kankyo no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The aim of our project is to develop high-performance processor architectures for both general-purpose and application-specific use. We also plan to develop basic software, such as compilers, and various design aid tools for those architectures. We are particularly interested in performance evaluation at the architecture design phase, design optimization, automatic generation of compilers from processor designs, and architecture design methodologies combined with circuit layout. We have investigated both microprocessor architectures and design methodologies / environments for the processors. Our goal is to establish design technologies for high-performance, low-power, low-cost and highly-reliable systems in the system-on-silicon era. We have proposed the PPRAM architecture for high-performance systems using DRAM and logic mixture technology, the Softcore processor architecture for special-purpose processors in embedded systems, and the Power-Pro architecture for low-power systems. We also developed design methodologies and design environments for the above architectures, as well as a new method for design verification of microprocessors. (NEDO)

  20. FAST: framework for heterogeneous medical image computing and visualization.

    Science.gov (United States)

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to 20 times speedup, on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  1. Software-defined reconfigurable microwave photonics processor.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Capmany, José

    2015-06-01

    We propose, for the first time to our knowledge, a software-defined reconfigurable microwave photonics signal processor architecture that can be integrated on a chip and is capable of performing all the main functionalities by suitable programming of its control signals. The basic configuration is presented and a thorough end-to-end design model is derived that accounts for the performance of the overall processor, taking into consideration the impact and interdependencies of both its photonic and RF parts. We demonstrate the model's versatility by applying it to several relevant application examples.

  2. Using a digital signal processor as a data stream controller for digital subtraction angiography

    International Nuclear Information System (INIS)

    Meng, J.D.; Katz, J.E.

    1991-10-01

    High speed, flexibility, and good arithmetic abilities make digital signal processors (DSPs) a good choice as input/output controllers for real-time applications. The DSP can be made to pre-process data in real time to reduce data volume, to open early windows on what is being acquired, and to implement local servo loops. We present an example of a DSP as an input/output controller for a digital subtraction angiographic imaging system. The DSP pre-processes the raw data, reducing data volume by a factor of two, and is potentially capable of producing real-time subtracted images for immediate display.
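
    The core pre-processing step such a DSP performs in digital subtraction angiography can be sketched in a few lines. The log-subtraction form and the toy frame sizes below are illustrative assumptions; the abstract does not state the exact arithmetic used.

        import numpy as np

        def dsa_subtract(mask, contrast, eps=1.0):
            """Digital subtraction angiography core step: log-subtract the
            pre-contrast mask frame from a contrast frame so that vessel
            opacification stands out against the (largely cancelled) anatomy."""
            return np.log(contrast.astype(np.float64) + eps) - np.log(mask.astype(np.float64) + eps)

        # Toy 512x512 frames: identical backgrounds, with a darker "vessel"
        # added to the contrast frame.  Subtraction removes the shared background.
        rng = np.random.default_rng(1)
        mask = rng.integers(800, 1000, size=(512, 512))
        contrast = mask.copy()
        contrast[200:220, :] -= 300            # simulated contrast-filled vessel
        diff = dsa_subtract(mask, contrast)
        print("background residual (~0):", float(np.abs(diff[0, 0])))
        print("vessel signal           :", float(diff[210, 256]))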

  3. A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging

    International Nuclear Information System (INIS)

    Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang

    2016-01-01

    Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against
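
    A minimal sketch of the estimation idea follows: the on-board volume is the prior volume warped by a deformation field written as a linear combination of three principal deformation modes, and the mode coefficients are fitted so that one slice of the warped volume matches the acquired 2-dimensional cine image. The first-order (linearized) warp and the plain least-squares fit are simplifying assumptions made here for brevity; the paper's data-fidelity optimization differs in detail.

        import numpy as np

        rng = np.random.default_rng(2)
        shape = (32, 32, 32)
        prior = rng.normal(size=shape)                        # prior MRI volume

        grad = np.stack(np.gradient(prior))                   # (3, z, y, x) spatial gradient
        modes = rng.normal(size=(3, 3) + shape) * 0.5         # 3 deformation modes, 3 vector comps

        def warp_linear(coeffs):
            """First-order approximation: I(x + u) ~ I(x) + grad(I) . u."""
            u = np.tensordot(coeffs, modes, axes=1)           # (3, z, y, x) displacement field
            return prior + np.sum(grad * u, axis=0)

        true_coeffs = np.array([0.8, -0.3, 0.5])
        cine = warp_linear(true_coeffs)[:, :, shape[2] // 2]  # "acquired" 2-D cine slice

        # Linear system: effect of each mode on the measured slice.
        A = np.stack([np.sum(grad * modes[k], axis=0)[:, :, shape[2] // 2].ravel()
                      for k in range(3)], axis=1)
        b = (cine - prior[:, :, shape[2] // 2]).ravel()
        fit, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("true coefficients  :", true_coeffs)
        print("fitted coefficients:", np.round(fit, 3))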

  4. A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Wendy [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Ren, Lei, E-mail: lei.ren@duke.edu [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Cai, Jing [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Zhang, You [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Chang, Zheng; Yin, Fang-Fang [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States)

    2016-06-01

    Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against

  5. Designing a dataflow processor using CλaSH

    NARCIS (Netherlands)

    Niedermeier, A.; Wester, Rinse; Wester, Rinse; Rovers, K.C.; Baaij, C.P.R.; Kuper, Jan; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we show how a simple dataflow processor can be fully implemented using CλaSH, a high-level HDL based on the functional programming language Haskell. The processor was described in Haskell; the CλaSH compiler was then used to translate the design into fully synthesisable VHDL code.

  6. Researching, building a soft-processor and Ethernet interface circuit using EDK

    International Nuclear Information System (INIS)

    Tuong Thi Thu Huong; Pham Ngoc Tuan; Truong Van Dat, Dang Lanh; Chau Thi Nhu Quynh

    2014-01-01

    The processor is an indispensable component in measurement and automatic control systems. This report describes the fabrication of a soft-processor (32-bit, 64K on-chip block RAM, 50 MHz clock, internal and peripheral buses) for receiving, sending and processing Ethernet data packets. The processor is fabricated using the XPS component of the EDK (Xilinx) software toolkit and then configured on a Spartan XC3S500E FPGA. Firmware controlling the interface between the processor and the Ethernet port is written in C; it can play the role of a host (station) with its own IP address on the Ethernet network. In addition, the following parts are needed: an Ethernet interface controller chip, a suitable cable providing speeds up to 100 Mb/s, and an application program written in LabVIEW running under Windows XP to communicate with the soft-processor. (author)

  7. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
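
    The multiplication-by-digital-convolution step can be made concrete with a small numerical sketch: each operand is encoded as a vector of digits, the digit vectors are convolved (the operation an optical correlator performs with limited dynamic range), and the mixed-radix result is re-weighted digitally to recover the exact product. The base and word width below are illustrative assumptions.

        import numpy as np

        def to_digits(n, base=4, width=8):
            """Least-significant-first digit vector of a non-negative integer."""
            return np.array([(n // base**i) % base for i in range(width)])

        def dmac_multiply(a, b, base=4, width=8):
            """Multiplication by digital convolution: convolve the digit vectors
            (the low-dynamic-range step suited to an optical correlator), then
            re-weight the mixed-radix result digitally."""
            mixed = np.convolve(to_digits(a, base, width), to_digits(b, base, width))
            return sum(int(d) * base**i for i, d in enumerate(mixed))

        a, b = 12345, 6789
        assert dmac_multiply(a, b) == a * b
        print(a, "*", b, "=", dmac_multiply(a, b))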

  8. Identifying Onboarding Heuristics for Free-to-Play Mobile Games: A Mixed Methods Approach

    DEFF Research Database (Denmark)

    Thomsen, Line Ebdrup; Weigert Petersen, Falko; Mirza-Babaei, Pejman

    2016-01-01

    The onboarding phase of Free-to-Play mobile games, covering the first few minutes of play, typically sees a substantial retention rate amongst players. It is therefore crucial to the success of these games that the onboarding phase promotes engagement to the widest degree possible. In this paper ...... of puzzle games, base builders and arcade games, and utilize different onboarding phase design approaches. Results showcase how heuristics can be used to design engaging onboarding phases in mobile games....

  9. Low voltage 80 KV to 125 KV electron processors

    International Nuclear Information System (INIS)

    Lauppi, U.V.

    1999-01-01

    The classic electron beam technology made use of accelerating energies in the voltage range of 300 to 800 kV. The first EB processors - built for the curing of coatings - operated at 300 kV. The products to be treated were thicker than a simple layer of coating, with thicknesses up to 100 g and more. It was only in the beginning of the 1970's that industrial EB processors with accelerating voltages below 300 kV appeared on the market. Our company developed the first commercial electron accelerator without a beam scanner. The new EB machine featured a linear cathode, emitting a shower or 'curtain' of electrons over the full width of the product. These units were much smaller than any previous EB processors and dedicated to the curing of coatings and other thin layers. ESI's first EB units operated with accelerating voltages between 150 and 200 kV. In 1993 ESI announced the introduction of a new generation of Electrocure EB processors operating at 120 kV, and in 1998, at the RadTech North America '98 Conference in Chicago, the introduction of an 80 kV electron beam processor under the designation Microbeam LV.

  10. A fast track trigger processor for the OPAL detector at LEP

    International Nuclear Information System (INIS)

    Carter, A.A.; Jaroslawski, S.; Wagner, A.

    1986-01-01

    A fast hardware track trigger processor being built for the OPAL experiment is described. The processor will analyse data from the central drift chambers of OPAL to determine whether any tracks come from the interaction region, and thereby eliminate background events. The processor will find tracks over a large angular range, |cos θ| ≲ 0.95. The design of the processor is described, together with a brief account of its hardware implementation for OPAL. The results of feasibility studies are also presented. (orig.)

  11. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  12. Particle simulation on a distributed memory highly parallel processor

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ikesaka, Morio

    1990-01-01

    This paper describes parallel molecular dynamics simulation of atoms governed by local force interaction. The space in the model is divided into cubic subspaces and mapped to the processor array of the CAP-256, a distributed memory, highly parallel processor developed at Fujitsu Labs. We developed a new technique to avoid redundant calculation of forces between atoms in different processors. Experiments showed the communication overhead was less than 5%, and the idle time due to load imbalance was less than 11% for two model problems which contain 11,532 and 46,128 argon atoms. From the software simulation, the CAP-II which is under development is estimated to be about 45 times faster than CAP-256 and will be able to run the same problem about 40 times faster than Fujitsu's M-380 mainframe when 256 processors are used. (author)

  13. Code compression for VLIW embedded processors

    Science.gov (United States)

    Piccinelli, Emiliano; Sannino, Roberto

    2004-04-01

    The implementation of processors for embedded systems implies various issues: the main constraints are cost, power dissipation and die area. On the other hand, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the dimensions of the program memory. It can run offline and work directly on the binary code the compiler generates, by compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the Memory Controller and the System bus of the processor, keeping the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM CodePack algorithm, along with its architectural implementation into the ST200 VLIW family core.

  14. Recursive Matrix Inverse Update On An Optical Processor

    Science.gov (United States)

    Casasent, David P.; Baranoski, Edward J.

    1988-02-01

    A high accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution of the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.
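
    The abstract does not spell out the update formula. One standard recursive inverse update is the Sherman-Morrison rank-one formula, sketched below as an assumed stand-in for the kind of recursion such a processor would evaluate; it is not necessarily the authors' exact formulation.

        import numpy as np

        def sherman_morrison_update(A_inv, u, v):
            """Rank-one inverse update: (A + u v^T)^-1 computed from A^-1
            without re-inverting the full matrix (O(n^2) instead of O(n^3))."""
            Au = A_inv @ u
            vA = v @ A_inv
            return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

        rng = np.random.default_rng(3)
        n = 6
        A = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned test matrix
        A_inv = np.linalg.inv(A)
        u, v = rng.normal(size=n), rng.normal(size=n)

        updated = sherman_morrison_update(A_inv, u, v)
        direct = np.linalg.inv(A + np.outer(u, v))
        print("max abs error vs direct inverse:", np.max(np.abs(updated - direct)))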

  15. Array processor: a new tool in nuclear medicine

    International Nuclear Information System (INIS)

    Brunol, J.; Nuta, V.

    1981-01-01

    Data or image processing already occupies a considerable place in clinical routine. But the requirements will no doubt increase in the years to come, for nuclear medicine is a discipline with a functional vocation. Thus we already know that clearance computations, in the study of the development of free fatty acids labelled with 123I, will require a considerable computational volume. Moreover, the introduction of tomography methods in diagnosis further multiplies these times by the number of points which are adopted in the third dimension. Consequently, the computing power required sometimes seems greater than that required for the reconstruction of a cross-section in X-ray tomodensitometry. Thus, for the same reason as the display unit and the management unit, the array processor becomes an indispensable element in nuclear data processing. Its own power furthermore opens interesting channels for progress in the near future, such as real-time processing, which consists of processing the image at the same time as it is being created. It will then be possible, for example, to filter the image (by Fourier transform) during acquisition and to stop the acquisition as soon as the processed image is satisfactory. This new methodology of course forms an important additional step with respect to the old one, which consisted of stopping the acquisition at a fixed time and then optionally processing the image. The ability to adapt the on-line processing time forms, in our opinion, one of the essential aspects of the introduction of APs at the level of nuclear medicine image acquisitions. (orig.) [de]

  16. Evaluation of the Intel Nehalem-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by the CERN openlab by comparing the 4-socket, 32-core Intel Xeon X7560 server with the previous generation 4-socket server, based on the Xeon X7460 processor. The Xeon X7560 processor represents a major change in many respects, especially the memory sub-system, so it was important to make multiple comparisons. In most benchmarks the two 4-socket servers were compared. It should be underlined that both servers represent the “top of the line” in terms of frequency. However, in some cases, it was important to compare systems that integrated the latest processor features, such as QPI links, Symmetric multithreading and over-clocking via Turbo mode, and in such situations the X7560 server was compared to a dual socket L5520 based system with an identical frequency of 2.26 GHz. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following ...

  17. Graphical user interface for TOUGH/TOUGH2 - development of database, pre-processor, and post-processor

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Tatsuya; Okabe, Takashi; Osato, Kazumi [Geothermal Energy Research and Development Co., Ltd., Tokyo (Japan)

    1995-03-01

    One of the advantages of TOUGH/TOUGH2 (Pruess, 1987 and 1991) is modeling using "free shape" polygonal blocks. However, the treatment of three-dimensional information, particularly for TOUGH/TOUGH2, is not easy because of the "free shape" polygonal blocks. Therefore, we have developed a database named "GEOBASE" and a pre/post-processor named "GEOGRAPH" for TOUGH/TOUGH2 on an engineering workstation (EWS). "GEOGRAPH" is based on the ORACLE relational database management system to access data sets of surface exploration (geology, geophysics, geochemistry, etc.), drilling (well trajectory, geological column, logging, etc.), well testing (production test, injection test, interference test, tracer test, etc.) and production/injection history. "GEOGRAPH" consists of a "Pre-processor" that can construct three-dimensional free-shape reservoir models by mouse operation on X-Window and a "Post-processor" that can display several kinds of two/three-dimensional maps and X-Y plots to compile data from "GEOBASE" and results of TOUGH/TOUGH2 calculations. This paper shows the concept of the systems and examples of their utilization.

  18. Biomass is beginning to threaten the wood-processors

    International Nuclear Information System (INIS)

    Beer, G.; Sobinkovic, B.

    2004-01-01

    In this issue, the exploitation of biomass in the Slovak Republic is analysed. Some new projects for the construction of stokeholds for biomass processing are published. Grants for biomass are pushing up the prices of wood raw material, which is thus becoming less accessible for the wood-processors. Excessive wood exports threaten the domestic processors.

  19. Reconfigurable signal processor designs for advanced digital array radar systems

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors in the system. The new FPGA devices can support much higher speed, more bandwidth and more processing capability for the needs of the digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. The field-reprogrammable RF devices from Analog Devices are used as analog front-end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) a new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression, will be introduced; (2) successful implementation of generation 2 Serial RapidIO data links on FPGA, which support the VITA-49 radio packet format for large distributed DAR processing; (3) demonstration of the feasibility and capabilities of the processor in a MicroTCA-based, SRIO-switching backplane to support multichannel beamforming in real time; (4) application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.
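
    The per-channel arithmetic of an adaptive beamformer can be illustrated with a minimal narrowband MVDR sketch for a uniform linear array. The array geometry, snapshot count and diagonal-loading constant below are illustrative assumptions, not the processor's actual implementation.

        import numpy as np

        rng = np.random.default_rng(4)
        n_elem, n_snap = 8, 512
        d_over_lambda = 0.5                                    # half-wavelength element spacing

        def steering(theta_deg):
            k = 2 * np.pi * d_over_lambda * np.sin(np.radians(theta_deg))
            return np.exp(1j * k * np.arange(n_elem))

        # Simulated snapshots: desired signal at 10 deg, interferer at -35 deg, noise.
        x = (np.outer(steering(10.0), rng.normal(size=n_snap)) +
             5.0 * np.outer(steering(-35.0), rng.normal(size=n_snap)) +
             0.1 * (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap))))

        R = x @ x.conj().T / n_snap + 1e-3 * np.eye(n_elem)    # covariance + diagonal loading
        a = steering(10.0)
        w = np.linalg.solve(R, a)
        w /= a.conj() @ w                                      # MVDR weights (unit gain at 10 deg)

        print("gain towards 10 deg :", abs(w.conj() @ steering(10.0)))
        print("gain towards -35 deg:", abs(w.conj() @ steering(-35.0)))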

  20. Parallel processor for fast event analysis

    International Nuclear Information System (INIS)

    Hensley, D.C.

    1983-01-01

    Current maximum data rates from the Spin Spectrometer of approx. 5000 events/s (up to 1.3 MBytes/s) and minimum analysis requiring at least 3000 operations/event require a CPU cycle time near 70 ns. In order to achieve an effective cycle time of 70 ns, a parallel processing device is proposed where up to 4 independent processors will be implemented in parallel. The individual processors are designed around the Am2910 Microsequencer, the AM29116 μP, and the Am29517 Multiplier. Satellite histogramming in a mass memory system will be managed by a commercial 16-bit μP system
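
    The quoted 70 ns figure follows directly from the stated rates, as the short check below shows; the assumption that the four processors simply divide the per-event work is introduced here for illustration.

        events_per_s = 5000          # Spin Spectrometer maximum event rate
        ops_per_event = 3000         # minimum analysis per event

        cycle_ns = 1e9 / (events_per_s * ops_per_event)
        print(f"required effective cycle time          : {cycle_ns:.0f} ns")   # ~67 ns, i.e. near 70 ns
        print(f"per-processor cycle with 4 in parallel : {4 * cycle_ns:.0f} ns")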

  1. Asymmetrical floating point array processors, their application to exploration and exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Geriepy, B L

    1983-01-01

    An asymmetrical floating point array processor is a special-purpose scientific computer which operates under the asymmetrical control of a host computer. Although an array processor can receive fixed point input and produce fixed point output, its primary mode of operation is floating point. The first generation of array processors was oriented towards time series information. The next generation of array processors has proved much more versatile, and their applicability ranges from petroleum reservoir simulation to speech synthesis. Array processors are becoming commonplace in mining, the primary usage being the construction of grids, by usual methods or by kriging. The Australian mining community is among the world's leaders in regard to computer-assisted exploration and exploitation systems. Part of this leadership role must be providing guidance to computer vendors in regard to current and future requirements.

  2. On the effective parallel programming of multi-core processors

    NARCIS (Netherlands)

    Varbanescu, A.L.

    2010-01-01

    Multi-core processors are considered now the only feasible alternative to the large single-core processors which have become limited by technological aspects such as power consumption and heat dissipation. However, due to their inherent parallel structure and their diversity, multi-cores are

  3. Gas monitoring onboard ISS using FTIR spectroscopy

    Science.gov (United States)

    Gisi, Michael; Stettner, Armin; Seurig, Roland; Honne, Atle; Witt, Johannes; Rebeyre, Pierre

    2017-06-01

    In the confined, enclosed environment of a spacecraft, the air quality must be monitored continuously in order to safeguard the crew's health. For this reason, OHB builds the ANITA2 (Analysing Interferometer for Ambient Air) technology demonstrator for trace gas monitoring onboard the International Space Station (ISS). The measurement principle of ANITA2 is based on the Fourier Transform Infrared (FTIR) technology with dedicated gas analysis software from the Norwegian partner SINTEF. This combination proved to provide high sensitivity, accuracy and precision for parallel measurements of 33 trace gases simultaneously onboard ISS by the precursor instrument ANITA1. The paper gives a technical overview of the opto-mechanical components of ANITA2, such as the interferometer, the reference laser, the infrared source and the gas cell design, and a quick overview of the gas analysis. ANITA2 is very well suited for measuring gas concentrations, specifically but not limited to usage onboard spacecraft, as no consumables are required and measurements are performed autonomously. ANITA2 is a programme under the contract of the European Space Agency, and the air quality monitoring system is a stepping stone into the future, as a precursor system for manned exploration missions.

  4. The UA1 upgrade calorimeter trigger processor

    International Nuclear Information System (INIS)

    Bains, N.; Baird, S.A.; Biddulph, P.

    1990-01-01

    The increased luminosity of the improved CERN Collider and the more subtle signals of second-generation collider physics demand increasingly sophisticated triggering. We have built a new first-level trigger processor designed to use the excellent granularity of the UA1 upgrade calorimeter. This device is entirely digital and handles events in 1.5 μs, thus introducing no deadtime. Its most novel feature is fast two-dimensional electromagnetic cluster-finding with the possibility of demanding an isolated shower of limited penetration. The processor allows multiple combinations of triggers on electromagnetic showers, hadronic jets and energy sums, including a total-energy veto of multiple interactions and a full vector sum of missing transverse energy. This hard-wired processor is about five times more powerful than its predecessor, and makes extensive use of pipelining techniques. It was used extensively in the 1988 and 1989 runs of the CERN Collider. (author)

  5. The UA1 upgrade calorimeter trigger processor

    International Nuclear Information System (INIS)

    Bains, M.; Charleton, D.; Ellis, N.; Garvey, J.; Gregory, J.; Jimack, M.P.; Jovanovic, P.; Kenyon, I.R.; Baird, S.A.; Campbell, D.; Cawthraw, M.; Coughlan, J.; Flynn, P.; Galagedera, S.; Grayer, G.; Halsall, R.; Shah, T.P.; Stephens, R.; Biddulph, P.; Eisenhandler, E.; Fensome, I.F.; Landon, M.; Robinson, D.; Oliver, J.; Sumorok, K.

    1990-01-01

    The increased luminosity of the improved CERN Collider and the more subtle signals of second-generation collider physics demand increasingly sophisticated triggering. We have built a new first-level trigger processor designed to use the excellent granularity of the UA1 upgrade calorimeter. This device is entirely digital and handles events in 1.5 μs, thus introducing no dead time. Its most novel feature is fast two-dimensional electromagnetic cluster-finding with the possibility of demanding an isolated shower of limited penetration. The processor allows multiple combinations of triggers on electromagnetic showers, hadronic jets and energy sums, including a total-energy veto of multiple interactions and a full vector sum of missing transverse energy. This hard-wired processor is about five times more powerful than its predecessor, and makes extensive use of pipelining techniques. It was used extensively in the 1988 and 1989 runs of the CERN Collider. (orig.)

  6. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  7. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  8. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Fife WadeS

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  9. MicroASC instrument onboard Juno spacecraft utilizing inertially controlled imaging

    DEFF Research Database (Denmark)

    Pedersen, David Arge Klevang; Jørgensen, Andreas Härstedt; Benn, Mathias

    2016-01-01

    This contribution describes the post-processing of the raw image data acquired by the microASC instrument during the Earth fly-by of the Juno spacecraft. The images show a unique view of the Earth and Moon system as seen from afar. The procedure utilizes attitude measurements and inter-calibration of the Camera Head Units of the microASC system to trigger the image capturing. The triggering is synchronized with the inertial attitude and rotational phase of the sensor acquiring the images. This essentially works as inertially controlled imaging, facilitating image acquisition from unexplored...

  10. ACP/R3000 processors in data acquisition systems

    International Nuclear Information System (INIS)

    Deppe, J.; Areti, H.; Atac, R.

    1989-02-01

    We describe ACP/R3000 processor based data acquisition systems for high energy physics. This VME bus compatible processor board, with a computational power equivalent to 15 VAX 11/780s or better, contains 8 Mb of memory for event buffering and has a high speed secondary bus that allows data gathering from front end electronics. 2 refs., 3 figs

  11. GA103: A microprogrammable processor for online filtering

    International Nuclear Information System (INIS)

    Calzas, A.; Danon, G.; Bouquet, B.

    1981-01-01

    GA 103 is a 16 bit microprogrammable processor which emulates the PDP 11 instruction set. It is based on the Am 2900 slices. It allows user-implemented microinstructions and addition of hardwired processors. It will perform on-line filtering tasks in the NA 14 experiment at CERN, based on the reconstruction of transverse momentum of photons detected in a lead glass calorimeter. (orig.)

  12. Real-time trajectory optimization on parallel processors

    Science.gov (United States)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable to do real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems: the Goddard problem, the acceleration-limited, planar minimum-time to the origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

  13. The ATLAS Level-1 Central Trigger Processor (CTP)

    CERN Document Server

    Spiwoks, Ralf; Ellis, Nick; Farthouat, P; Gällnö, P; Haller, J; Krasznahorkay, A; Maeno, T; Pauly, T; Pessoa-Lima, H; Resurreccion-Arcas, I; Schuler, G; De Seixas, J M; Torga-Teixeira, R; Wengler, T

    2005-01-01

    The ATLAS Level-1 Central Trigger Processor (CTP) combines information from calorimeter and muon trigger processors and makes the final Level-1 Accept (L1A) decision on the basis of lists of selection criteria (trigger menus). In addition to the event-selection decision, the CTP also provides trigger summary information to the Level-2 trigger and the data acquisition system. It further provides accumulated and bunch-by-bunch scaler data for monitoring of the trigger, detector and beam conditions. The CTP is presented and results are shown from tests with the calorimeter and muon trigger processors connected to detectors in a particle beam, as well as from stand-alone full-system tests in the laboratory which were used to validate the CTP.

  14. A Processor-Sharing Scheduling Strategy for NFV Nodes

    Directory of Open Access Journals (Sweden)

    Giuseppe Faraci

    2016-01-01

    Full Text Available The introduction of the two paradigms SDN and NFV to “softwarize” the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy, to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.
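
    The flavour of a queue-aware processor-sharing rule can be sketched in a few lines: in each scheduling round, the CPU slice given to each VNF is made proportional to the backlog on the output queue it feeds. The proportional rule and the numbers below are illustrative assumptions, not the paper's exact NARR algorithm.

        from collections import deque

        def schedule_round(queues, total_slices=100):
            """Assign processor slices to VNFs in proportion to the backlog on
            the output NIC queue each VNF feeds (equal shares when all idle)."""
            backlog = {vnf: len(q) for vnf, q in queues.items()}
            total = sum(backlog.values())
            if total == 0:
                return {vnf: total_slices // len(queues) for vnf in queues}
            return {vnf: round(total_slices * b / total) for vnf, b in backlog.items()}

        queues = {
            "firewall": deque(range(30)),    # 30 packets waiting on its output queue
            "nat":      deque(range(10)),
            "dpi":      deque(range(60)),
        }
        print(schedule_round(queues))        # e.g. {'firewall': 30, 'nat': 10, 'dpi': 60}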

  15. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of the stone blocks is much larger than the size of the mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level - the structure - and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
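
    The processor-farming pattern itself is simple: the master iterates over macro-scale points and farms the independent meso-scale (RVE) solves out to slave processes, gathering the homogenized properties back. The sketch below uses a process pool as a stand-in for the MPI farm and a dummy RVE solve; both are assumptions for illustration.

        from multiprocessing import Pool

        def solve_rve(macro_point):
            """Stand-in for the meso-scale solve on one representative volume
            element (RVE): returns an effective property for the macro point."""
            x, temperature, moisture = macro_point
            return 1.5 + 0.01 * temperature - 0.2 * moisture   # dummy homogenized value

        def master(macro_points, n_workers=4):
            """Master (macro-scale) loop: farm the independent RVE solves out to
            worker processes and gather the homogenized properties back."""
            with Pool(n_workers) as pool:
                return pool.map(solve_rve, macro_points)

        if __name__ == "__main__":
            points = [(i, 20.0 + i, 0.1) for i in range(8)]
            print(master(points))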

  16. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  17. Processing Chains for Desis and Enmap Imaging Spectroscopy Data: Similarities and Differences

    Science.gov (United States)

    Storch, T.; Müller, R.

    2017-10-01

    The Earth Observation Center (EOC) of the German Aerospace Center (DLR) realizes operational processors for the DESIS (DLR Earth Sensing Imaging Spectrometer) and EnMAP (Environmental Mapping and Analysis Program) high-resolution imaging spectroscopy remote sensing satellite missions. DESIS is planned to be launched in 2018 and EnMAP in 2020. The developmental (namely schedule, deployment, and team) and functional (namely processing levels, algorithms in processors, and archiving approaches) similarities and differences of the fully-automatic processors are analyzed. The processing chains generate high-quality standardized image products for users at different levels, taking characterization and calibration data into account. EOC has long-standing experience with the airborne and spaceborne acquisition, processing, and analysis of hyperspectral image data. It turns out that both activities strongly benefit from each other.

  18. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transformation (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with DWT and encryption using AES are each well known on their own; linking these two designs to achieve secure video coding is the aim of this work. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding for the JPEG part and DWT for the JPEG2000 part. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption-decryption effects are achieved by the AES processor, which is aimed at encrypting groups of LL bands. The prominent feature of this method is the encryption of LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has very limited adverse impact on the compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.
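
    The partial-encryption idea (cipher only the LL sub-band and leave the detail bands in the clear) can be sketched with off-the-shelf libraries. The use of PyWavelets for the DWT and AES-CTR from the cryptography package is an assumption made here for illustration; the paper targets a hardware AES processor and its own codec, not these libraries.

        import os
        import numpy as np
        import pywt
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def encrypt_ll_band(image, key):
            """One-level 2-D DWT, then AES-CTR encryption of only the LL sub-band.
            The detail bands (LH, HL, HH) are left in the clear."""
            LL, details = pywt.dwt2(image.astype(np.float32), "haar")
            LL = np.ascontiguousarray(LL, dtype=np.float32)
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            return enc.update(LL.tobytes()) + enc.finalize(), nonce, LL.shape, details

        def decrypt_ll_band(ll_cipher, nonce, ll_shape, details, key):
            dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
            LL = np.frombuffer(dec.update(ll_cipher) + dec.finalize(),
                               dtype=np.float32).reshape(ll_shape)
            return pywt.idwt2((LL, details), "haar")

        key = os.urandom(32)                                   # AES-256 key
        image = np.random.default_rng(5).integers(0, 255, size=(64, 64))
        cipher_ll, nonce, shape, details = encrypt_ll_band(image, key)
        restored = decrypt_ll_band(cipher_ll, nonce, shape, details, key)
        print("round-trip max error:", float(np.max(np.abs(restored - image))))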

  19. Hardware processor for tracking particles in an alternating-gradient synchrotron

    International Nuclear Information System (INIS)

    Johnson, M.; Avilez, C.

    1987-01-01

    We discuss the design and performance of special-purpose processors for tracking particles through an alternating-gradient synchrotron. We present block diagram designs for two hardware processors. Both processors use algorithms based on the 'kick' approximation, i.e., transport matrices are used for dipoles and quadrupoles, and the thin-lens approximation is used for all higher multipoles. The faster processor makes extensive use of memory look-up tables for evaluating functions. For the case of magnets with multipoles up to pole 30 and using one kick per magnet, this processor can track 19 particles through an accelerator at a rate only 220 times slower than the real particles travelling around the machine. For a model consisting of only thin lenses, it is only 150 times slower than the real particles. An additional factor of 2 can be obtained with chips now becoming available. The number of magnets in the accelerator is limited only by the amount of memory available for storing magnet parameters. (author) 20 refs., 7 figs., 2 tabs
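    As an illustration of the 'kick' approximation mentioned above, the sketch below tracks one particle through a hypothetical thin-lens cell: drifts and quadrupoles act through 2×2 transport matrices, while a higher-order multipole acts as a thin-lens kick on the angle only. All lattice values are invented for the example and do not come from the paper:

```python
# Sketch of the "kick" approximation: linear 2x2 transport for drifts and
# quadrupoles plus a thin-lens sextupole kick (illustrative values only).
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    # thin-lens quadrupole of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def sextupole_kick(x, xp, k2, L):
    # thin-lens sextupole: the angle receives a kick proportional to x**2
    return x, xp - 0.5 * k2 * L * x**2

# one hypothetical FODO-like cell: focus, drift, defocus, drift, sextupole kick
x, xp = 1e-3, 0.0                      # initial offset [m] and angle [rad]
for _ in range(1000):                  # track 1000 turns
    for M in (thin_quad(2.0), drift(1.0), thin_quad(-2.0), drift(1.0)):
        x, xp = M @ np.array([x, xp])
    x, xp = sextupole_kick(x, xp, k2=10.0, L=0.1)
print(f"after 1000 turns: x = {x:.6e} m, x' = {xp:.6e} rad")
```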

  20. High-speed special-purpose processor for event selection by number of direct tracks

    International Nuclear Information System (INIS)

    Kalinnikov, V.A.; Krastev, V.R.; Chudakov, E.A.

    1986-01-01

    A processor which uses data on events from five detector planes is described. To increase economy and speed in parallel processing, the processor converts the input data to superposition code and recognizes tracks by a generated search mask. The resolving time of the processor is ≤300 nsec. The processor is CAMAC-compatible and uses ECL integrated circuits

  1. Multibus-based parallel processor for simulation

    Science.gov (United States)

    Ogrady, E. P.; Wang, C.-H.

    1983-01-01

    A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.

  2. Monitoring the performance of off-site processors

    International Nuclear Information System (INIS)

    Miller, C.C.

    1995-01-01

    Commercial nuclear power plants have been able to utilize the latest technologies and achieve large volume reduction by obtaining off-site waste processor services. Although the use of such services reduces the burden of waste processing, it also reduces the utility's control over the process. Monitoring the performance of off-site processors is important so that the utility is cognizant of the waste disposition for required regulatory reporting. In addition to obtaining data for Reg Guide 1.21 reporting, performance monitoring is important for determining which vendor and which services to utilize. Off-site processor services were initially offered for the decontamination of metallic waste. Since that time the list of services has expanded to include supercompaction, survey for release, incineration and metal melting. The number of vendors offering off-site services has increased and the services they offer vary. Processing rates vary between vendors and have different charge bases. Determining which vendor to use for which service can be complicated and confusing.

  3. Vector and parallel processors in computational science. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I S; Reid, J K

    1985-01-01

    This volume contains papers from most of the invited talks and from several of the contributed talks and poster sessions presented at VAPP II. The contents present an extensive coverage of all important aspects of vector and parallel processors, including hardware, languages, numerical algorithms and applications. The topics covered include descriptions of new machines (both research and commercial machines), languages and software aids, and general discussions of whole classes of machines and their uses. Numerical methods papers include Monte Carlo algorithms, iterative and direct methods for solving large systems, finite elements, optimization, random number generation and mathematical software. The specific applications covered include neutron diffusion calculations, molecular dynamics, weather forecasting, lattice gauge calculations, fluid dynamics, flight simulation, cartography, image processing and cryptography. Most machines and architecture types are being used for these applications. many refs.

  4. Digital Signal Processor System for AC Power Drivers

    Directory of Open Access Journals (Sweden)

    Ovidiu Neamtu

    2009-10-01

    Full Text Available DSP (Digital Signal Processor) is the best solution for motor control systems to make possible the development of advanced motor drive systems. The motor control processor calculates the required motor winding voltage magnitude and frequency to operate the motor at the desired speed. A PWM (Pulse Width Modulation) circuit controls the on and off duty cycle of the power inverter switches to vary the magnitude of the motor voltages.

  5. Early experience with the cochlear ESPrit ear-level speech processor in children.

    Science.gov (United States)

    Totten, C; Cope, Y; McCormick, B

    2000-12-01

    The ESPrit ear-level speech processor has recently become available in the United Kingdom for use with the Nucleus CI24M multichannel cochlear implant. We report on the use of this ear-level processor with 6 children, ages 8 to 15 years. In this study, all patients were initially fitted with the SPrint body-worn processor, this being a prerequisite for programming the ESPrit. Five of the children were fitted successfully with the ESPrit and are using their devices consistently. The results show that patient experience with the ESPrit has been favorable, although there have been some device and programming difficulties. Aided threshold measures show that the ESPrit processor performs at least as well as the SPrint processor, with a trend toward improved aided thresholds for the ESPrit processor compared with the SPrint processor. Further study of the functional benefit of both of these devices may confirm these potential gains. The ESPrit device currently has a disadvantage for children in that it does not support FM radio hearing aid use. Finally, caution is advised in the fitting of the ESPrit in very young children or inexperienced listeners, because of difficulties in monitoring device function.

  6. Survey of cochlear implant user satisfaction with the Neptune™ waterproof sound processor

    Directory of Open Access Journals (Sweden)

    Jeroen J. Briaire

    2016-04-01

    Full Text Available A multi-center self-assessment survey was conducted to evaluate patient satisfaction with the Advanced Bionics Neptune™ waterproof sound processor used with the AquaMic™ totally submersible microphone. Subjective satisfaction with the different Neptune™ wearing options, comfort, ease of use, sound quality and use of the processor in a range of active and water-related situations was assessed for 23 adults and 73 children, using an online and paper-based questionnaire. Upgraded subjects compared their previous processor to the Neptune™. The Neptune™ was most popular for use in general sports and in the pool. Subjects were satisfied with the sound quality of the sound processor outside and under water and following submersion. Seventy-eight percent of subjects rated waterproofness as being very useful and 83% of the newly implanted subjects selected waterproofness as one of the reasons why they chose the Neptune™ processor. Providing a waterproof sound processor is considered by cochlear implant recipients to be useful and important and is a factor in their processor choice. Subjects reported that they were satisfied with the Neptune™ sound quality, ease of use and different wearing options.

  7. A word processor optimized for preparing journal articles and student papers.

    Science.gov (United States)

    Wolach, A H; McHale, M A

    2001-11-01

    A new Windows-based word processor for preparing journal articles and student papers is described. In addition to standard features found in word processors, the present word processor provides specific help in preparing manuscripts. Clicking on "Reference Help (APA Form)" in the "File" menu provides a detailed help system for entering the references in a journal article. Clicking on "Examples and Explanations of APA Form" provides a help system with examples of the various sections of a review article, journal article that has one experiment, or journal article that has two or more experiments. The word processor can automatically place the manuscript page header and page number at the top of each page using the form required by APA and Psychonomic Society journals. The "APA Form" submenu of the "Help" menu provides detailed information about how the word processor is optimized for preparing articles and papers.

  8. Extended performance electric propulsion power processor design study. Volume 2: Technical summary

    Science.gov (United States)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

    Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined.

  9. First Results of an “Artificial Retina” Processor Prototype

    International Nuclear Information System (INIS)

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; Morello, Michael J.; Ninci, Daniele; Piucci, Alessio; Punzi, Giovanni; Ristori, Luciano; Spinella, Franco; Stracka, Simone; Tonelli, Diego; Walsh, John

    2016-01-01

    We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the “artificial retina algorithm”, inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate

  10. Compact optical processor for Hough and frequency domain features

    Science.gov (United States)

    Ott, Peter

    1996-11-01

    Shape recognition is necessary in a broad band of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but also global interconnection among them. The Hough transform (HT) performs such a global operation and it is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented in software a neural-network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing allows the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications we map the original design to a shorter lens with a smaller number of well separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray-tracing shows diffraction-limited performance. Image rotation can be done
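    For reference, the software form of the Hough transform that the optical processor accelerates can be sketched in a few lines: each edge pixel votes into a (rho, theta) accumulator, and peaks in the accumulator correspond to lines. The grid sizes and the test image below are arbitrary:

```python
# Minimal line Hough transform sketch (standard algorithm, not the optical
# processor itself): every edge pixel votes for all (rho, theta) lines through it.
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    h, w = edges.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)           # one rho per theta
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1                        # cast votes
    return acc, rhos, thetas

edges = np.zeros((64, 64), dtype=bool)
edges[np.arange(64), np.arange(64)] = True          # a hypothetical diagonal edge
acc, rhos, thetas = hough_lines(edges)
r, t = np.unravel_index(np.argmax(acc), acc.shape)
print(f"strongest line: rho = {rhos[r]:.1f}, theta = {np.degrees(thetas[t]):.1f} deg")
```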

  11. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Science.gov (United States)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

    We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We make an OpenMP realization which exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two different energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
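    A minimal serial sketch of the kind of explicit update such a simulation performs, assuming a generic damped, driven 2D sine-Gordon equation on a periodic grid; the coefficients and grid are illustrative, and the paper's coupled equations and OpenMP decomposition are not reproduced:

```python
# NumPy sketch of an explicit leapfrog step for a generic damped, driven
# 2D sine-Gordon equation u_tt = Lap(u) - sin(u) - a*u_t + b (illustrative only).
import numpy as np

def laplacian(u, h):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2

def step(u, u_prev, dt, h, a=0.05, b=0.1):
    ut = (u - u_prev) / dt                       # backward-difference velocity
    acc = laplacian(u, h) - np.sin(u) - a * ut + b
    return 2.0 * u - u_prev + dt**2 * acc        # leapfrog update

n, h, dt = 256, 0.1, 0.02                        # dt respects the CFL limit h/sqrt(2)
u_prev = np.zeros((n, n))
u = 1e-3 * np.random.rand(n, n)                  # small random initial excitation
for _ in range(1000):
    u, u_prev = step(u, u_prev, dt, h), u
print("mean field after 1000 steps:", float(u.mean()))
```

    In the OpenMP realization described above, the outer grid loop of such an update is the natural place for thread parallelism, with the inner loop vectorized at the SIMD level.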

  12. M7--a high speed digital processor for second level trigger selections

    International Nuclear Information System (INIS)

    Droege, T.F.; Gaines, I.; Turner, K.J.

    1978-01-01

    A digital processor is described which reconstructs mass and momentum as a second-level trigger selection. The processor is a five-address, microprogrammed, pipelined, ECL machine with simultaneous memory access to four operands which load two parallel multipliers and an ALU. Source data modules are extensions of the processor

  13. TH-C-17A-12: Integrated CBCT and Optical Tomography System On-Board a Small Animal Radiation Research Platform (SARRP)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K; Zhang, B; Eslami, S; Iordachita, I; Wong, J [Johns Hopkins University, Baltimore, MD (United States); Patterson, M [Hamilton Regional Cancer Ctr., Hamilton, ON (Canada)

    2014-06-15

    Purpose: We present a newly developed on-board optical tomography system for SARRP. Innovative features include the compact design and fast acquisition optical method to perform 3D soft tissue radiation guidance. Because of the on-board feature and the combination of the CBCT, diffusive optical tomography (DOT), bioluminescence and fluorescence tomography (BLT and FT), this integrated system is expected to provide more accurate soft tissue guidance than an off-line system as well as highly sensitive functional imaging in preclinical research. Methods: Images are acquired in the order of CBCT, DOT and then BLT/FT, where the SARRP CBCT and DOT are used to provide the anatomical and optical properties information to enhance the subsequent BLT/FT optical reconstruction. The SARRP stage is redesigned to include 9 embedded optical fibers in contact with the animal's skin. These fibers, connected to a white light lamp or laser, serve as the light sources for the DOT or FT, respectively. A CCD camera with an f/1.4 lens and multi-spectral filter set is used as the optical detector and is mounted on a portable cart ready to dock into the SARRP. No radiation is delivered during optical image acquisition. A 3-way mirror system capable of 180 degree rotation around the animal reflects the optical signal to the camera at multiple projection angles. A special black-painted dome covers the stage and provides the light shielding. Results: Spontaneous metastatic bioluminescent liver and lung tumor models will be used to validate the 3D BLT reconstruction. To demonstrate the capability of our FT system, GastroSense750 fluorescence agent will be used to image the mouse stomach and intestinal region in 3D. Conclusion: We expect that this integrated CBCT and optical tomography on-board a SARRP will present new research opportunities for pre-clinical radiation research. Supported by NCI RO1-CA 158100.

  14. TH-C-17A-12: Integrated CBCT and Optical Tomography System On-Board a Small Animal Radiation Research Platform (SARRP)

    International Nuclear Information System (INIS)

    Wang, K; Zhang, B; Eslami, S; Iordachita, I; Wong, J; Patterson, M

    2014-01-01

    Purpose: We present a newly developed on-board optical tomography system for SARRP. Innovative features include the compact design and fast acquisition optical method to perform 3D soft tissue radiation guidance. Because of the on-board feature and the combination of the CBCT, diffusive optical tomography (DOT), bioluminescence and fluorescence tomography (BLT and FT), this integrated system is expected to provide more accurate soft tissue guidance than an off-line system as well as highly sensitive functional imaging in preclinical research. Methods: Images are acquired in the order of CBCT, DOT and then BLT/FT, where the SARRP CBCT and DOT are used to provide the anatomical and optical properties information to enhance the subsequent BLT/FT optical reconstruction. The SARRP stage is redesigned to include 9 embedded optical fibers in contact with the animal's skin. These fibers, connected to a white light lamp or laser, serve as the light sources for the DOT or FT, respectively. A CCD camera with an f/1.4 lens and multi-spectral filter set is used as the optical detector and is mounted on a portable cart ready to dock into the SARRP. No radiation is delivered during optical image acquisition. A 3-way mirror system capable of 180 degree rotation around the animal reflects the optical signal to the camera at multiple projection angles. A special black-painted dome covers the stage and provides the light shielding. Results: Spontaneous metastatic bioluminescent liver and lung tumor models will be used to validate the 3D BLT reconstruction. To demonstrate the capability of our FT system, GastroSense750 fluorescence agent will be used to image the mouse stomach and intestinal region in 3D. Conclusion: We expect that this integrated CBCT and optical tomography on-board a SARRP will present new research opportunities for pre-clinical radiation research. Supported by NCI RO1-CA 158100

  15. Discussion paper for a highly parallel array processor-based machine

    International Nuclear Information System (INIS)

    Hagstrom, R.; Bolotin, G.; Dawson, J.

    1984-01-01

    The architectural plan for a quickly realizable implementation of a highly parallel special-purpose computer system with peak performance in the range of 6 billion floating point operations per second is discussed. The architecture is well suited to lattice gauge theory computations of fundamental physics interest and may be applicable to a range of other numerically intensive computational problems. The plan is quickly realizable because it employs a maximum of commercially available hardware subsystems and because the architecture is software-transparent to the individual processors, allowing straightforward re-use of whatever commercially available operating systems and support software are suitable to run on the commercially produced processors. A tiny prototype instrument, designed along this architecture, has already been operated. A few elementary examples of programs which can run efficiently are presented. The large machine which the authors propose to build would be based upon a highly competent array processor, the ST-100 Array Processor, and specific design possibilities are discussed. The first step toward realizing this plan practically is to install a single ST-100 to allow algorithm development to proceed while a demonstration unit is built using two of the ST-100 Array Processors

  16. SSC 254 Screen-Based Word Processors: Production Tests. The Lanier Word Processor.

    Science.gov (United States)

    Moyer, Ruth A.

    Designed for use in Trident Technical College's Secretarial Lab, this series of 12 production tests focuses on the use of the Lanier Word Processor for a variety of tasks. In tests 1 and 2, students are required to type and print out letters. Tests 3 through 8 require students to reformat a text; make corrections on a letter; divide and combine…

  17. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    Science.gov (United States)

    Newman, Gregory A.; Commer, Michael

    2009-07-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  18. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    International Nuclear Information System (INIS)

    Newman, Gregory A; Commer, Michael

    2009-01-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  19. Time Synchronization Strategy Between On-Board Computer and FIMS on STSAT-1

    Directory of Open Access Journals (Sweden)

    Seong Woo Kwak

    2004-06-01

    Full Text Available STSAT-1 was launched in September 2003 with the main payload of the Far Ultra-violet Imaging Spectrograph (FIMS). The mission of FIMS is to observe the universe and the aurora. In this paper, we suggest a simple and reliable strategy adopted in STSAT-1 to synchronize time between the On-board Computer (OBC) and FIMS. Given the characteristics of STSAT-1, this strategy is devised to maintain the reliability of the satellite system and to reduce implementation cost by using minimized electronic circuits. We suggest two methods with different synchronization resolutions to cope with unexpected faults in space. The backup method, with lower resolution, can be activated when the main method has problems.

  20. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Kuppangari Krishna RAO; Fazal NOORBASHA; Ram Asaray SINGH

    2010-01-01

    As the processor architecture becomes more advanced, Intel introduced its Intel Core 2 Duo series processors. Performance impact on Intel Core 2 Duo processors are analyzed using SPEC CPU INT 2006 performance numbers. This paper studied the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) @1 GHz, @2 GHz and @3 GHz Intel Core 2 Duo series processors frequency. Our results show the performance scalab...

  1. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Directory of Open Access Journals (Sweden)

    Hristov Ivan

    2018-01-01

    Full Text Available We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed sine-Gordon equations. We make an OpenMP realization which exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two different energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named “Ivy Bridge-EP”) in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named “Knights Landing”, KNL). The results show 2 times better performance on the KNL processor.

  2. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion

  3. The hardware track finder processor in CMS at CERN

    CERN Document Server

    Kluge, A

    1997-01-01

    The work covers the design of the Track Finder Processor in the high energy experiment CMS (Compact Muon Solenoid, planned for 2005) at CERN/Geneva. The task of this processor is to identify muons and measure their transverse momentum. The track finder processor makes it possible to determine the physical relevance of each high energetic collision and to forward only interesting data to the data analysis units. Data of more than two hundred thousand detector cells are used to determine the location of muons and measure their transverse momentum. Each 25 ns a new data set is generated. Measurement of location and transverse momentum of the muons can be terminated within 350 ns by using an ASIC (Application Specific Integrated Circuit). A pipeline architecture processes new data sets with the required data rate of 40 MHz to ensure dead time free operation. In the framework of this study specifications and the overall concept of the track finder processor were worked out in detail. Simulations were performed...

  4. Analysis of plasmaspheric plumes: CLUSTER and IMAGE observations

    Directory of Open Access Journals (Sweden)

    F. Darrouzet

    2006-07-01

    Full Text Available Plasmaspheric plumes have been routinely observed by CLUSTER and IMAGE. The CLUSTER mission provides high time resolution four-point measurements of the plasmasphere near perigee. Total electron density profiles have been derived from the electron plasma frequency identified by the WHISPER sounder supplemented, in-between soundings, by relative variations of the spacecraft potential measured by the electric field instrument EFW; ion velocity is also measured onboard these satellites. The EUV imager onboard the IMAGE spacecraft provides global images of the plasmasphere with a spatial resolution of 0.1 RE every 10 min; such images acquired near apogee from high above the pole show the geometry of plasmaspheric plumes, their evolution and motion. We present coordinated observations of three plume events and compare CLUSTER in-situ data with global images of the plasmasphere obtained by IMAGE. In particular, we study the geometry and the orientation of plasmaspheric plumes by using four-point analysis methods. We compare several aspects of plume motion as determined by different methods: (i) inner and outer plume boundary velocity calculated from time delays of this boundary as observed by the wave experiment WHISPER on the four spacecraft, (ii) drift velocity measured by the electron drift instrument EDI onboard CLUSTER, and (iii) global velocity determined from successive EUV images. These different techniques consistently indicate that plasmaspheric plumes rotate around the Earth, with their foot fully co-rotating, but with their tip rotating slower and moving farther out.

  5. UA1 upgrade first-level calorimeter trigger processor

    International Nuclear Information System (INIS)

    Bains, N.; Charlton, D.; Ellis, N.; Garvey, J.; Gregory, J.; Jimack, M.P.; Jovanovic, P.; Kenyon, I.R.; Baird, S.A.; Campbell, D.; Cawthraw, M.; Coughlan, J.; Flynn, P.; Galagedera, S.; Grayer, G.; Halsall, R.; Shah, T.P.; Stephens, R.; Eisenhandler, E.; Fensome, I.; Landon, M.

    1989-01-01

    A new first-level trigger processor has been built for the UA1 experiment at the CERN SppS Collider. The processor exploits the fine granularity of the new UA1 uranium-TMP calorimeter to improve the selectivity of the trigger. The new electron trigger has improved hadron jet rejection, achieved by requiring low energy deposition around the electromagnetic cluster. A missing transverse energy trigger and a total energy trigger have also been implemented. (orig.)

  6. Zircaloy nodular corrosion analysis by an image processing technique

    International Nuclear Information System (INIS)

    Kawashima, Junko; Sato, Kanemitsu; Kuwae, Ryosho; Higashinakagawa, Emiko

    1987-01-01

    An image processor has been fabricated to examine out-of-pile nodular corrosion of Zircaloy-2 tubing. The covering fraction, defined as the percentage of the surface area occupied by nodules on the Zircaloy surface, was measured with the processor. The covering fraction showed a strong correlation with the weight gain at every corrosion time in this experiment. The observed correlation can be explained by a model based on the lenticular shape of the nodules. The image processor also gives unfolded pictures of the whole Zircaloy surface. By analyzing these pictures, the locations of the nodules were found to be determined at an early stage of corrosion. New nodules were not produced later, and the existing nodules only grew larger with time. (orig.)
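    The covering-fraction measurement itself reduces to counting nodule pixels in a thresholded image. A minimal sketch, assuming a hypothetical gray-scale image of the unfolded tube surface and an arbitrary threshold (the actual processor and imagery are not reproduced here):

```python
# Sketch of a covering-fraction measurement: threshold an oxide image into a
# binary nodule mask and report the percentage of surface pixels it occupies.
import numpy as np

def covering_fraction(gray, threshold):
    nodules = gray > threshold           # bright pixels taken as nodular oxide
    return 100.0 * nodules.sum() / nodules.size

surface = np.random.rand(512, 512)       # hypothetical unfolded tube-surface image
print(f"covering fraction: {covering_fraction(surface, 0.8):.1f} %")
```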

  7. Fresh water generators onboard a floating platform

    International Nuclear Information System (INIS)

    Tewari, P.K.; Verma, R.K.; Misra, B.M.; Sadhulkan, H.K.

    1997-01-01

    A dependable supply of fresh water is essential for any ocean going vessel. The operating and maintenance personnel on offshore platforms and marine structures also require a constant and regular supply of fresh water to meet their essential daily needs. A seawater thermal desalination unit onboard delivers good quality fresh water from seawater. The desalination units developed by Bhabha Atomic Research Centre (BARC) suitable for ocean going vessels and offshore platforms have been discussed. Design considerations of such units with reference to floating platforms and corrosive environments have been presented. The feasibility of coupling a low temperature vacuum evaporation (LTVE) desalination plant suitable for an onboard floating platform to a PHWR nuclear power plant has also been discussed. (author). 1 ref., 3 figs, 2 tabs

  8. Application of the Computer Capacity to the Analysis of Processors Evolution

    OpenAIRE

    Ryabko, Boris; Rakitskiy, Anton

    2017-01-01

    The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, the manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, due to the accessibility of detailed description of all their technical characteristics.

  9. Optimal processor for malfunction detection in operating nuclear reactor

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1990-01-01

    An optimal processor for diagnosing operational transients in a nuclear reactor is described. The basic design of the processor involves real-time processing of the noise signal obtained from a particular in-core sensor, and optimality is based on a minimum alarm-failure criterion, in contrast to a minimum false-alarm criterion, from the viewpoint of safe and reliable plant operation

  10. An updated program-controlled analog processor, model AP-006, for semiconductor detector spectrometers

    International Nuclear Information System (INIS)

    Shkola, N.F.; Shevchenko, Yu.A.

    1989-01-01

    An analog processor, model AP-006, is reported. The processor is a development of a series of spectrometric units based on a shaper of the type 'DL dif +TVS+gated ideal integrator'. Structural and circuit design features are described. The results of testing the processor in a setup with a Si(Li) detecting unit over an input count-rate range of up to 5×10^5 cps are presented. Processor applications are illustrated. (orig.)

  11. Onboard Processing on PWE OFA/WFC (Onboard Frequency Analyzer/Waveform Capture) aboard the ERG (ARASE) Satellite

    Science.gov (United States)

    Matsuda, S.; Kasahara, Y.; Kojima, H.; Kasaba, Y.; Yagitani, S.; Ozaki, M.; Imachi, T.; Ishisaka, K.; Kurita, S.; Ota, M.; Kumamoto, A.; Tsuchiya, F.; Yoshizumi, M.; Matsuoka, A.; Teramoto, M.; Shinohara, I.

    2017-12-01

    Exploration of energization and Radiation in Geospace (ERG) is a mission for understanding particle acceleration, loss mechanisms, and the dynamic evolution of space storms in the context of cross-energy and cross-regional coupling [Miyoshi et al., 2012]. The ERG (ARASE) satellite was launched on December 20, 2016, and successfully inserted into orbit. The Plasma Wave Experiment (PWE) is one of the science instruments on board the ERG satellite; it measures the electric and magnetic fields in the inner magnetosphere. PWE consists of three sub-components: EFD (Electric Field Detector), OFA/WFC (Onboard Frequency Analyzer and Waveform Capture), and HFA (High Frequency Analyzer). In particular, OFA/WFC measures electric and magnetic field spectra and waveforms from a few Hz to 20 kHz. OFA/WFC processes signals detected by a pair of dipole wire-probe antennas (WPT) and tri-axis magnetic search coils (MSC) installed on board the satellite. The PWE-OFA subsystem calculates and produces three kinds of data: OFA-SPEC (power spectrum), OFA-MATRIX (spectral matrix), and OFA-COMPLEX (complex spectrum). They are processed continuously, 24 hours per day, and all data are sent to the ground. OFA-MATRIX and OFA-COMPLEX are used for polarization analyses and direction finding of the plasma waves. The PWE-WFC subsystem measures raw (64 kHz sampled) and down-sampled (1 kHz sampled) burst waveforms detected by the WPT and MSC sensors. It is activated by command, automatic triggering, or scheduling. The initial check-out of the PWE was successfully completed, and initial data have been obtained. In this presentation, we introduce the onboard processing technique of PWE OFA/WFC and its initial results.
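    The OFA products named above (per-channel power spectra and a cross-spectral matrix) can be illustrated offline with a short NumPy sketch on synthetic two-channel waveforms; the flight algorithms, calibrations and onboard data formats are not reproduced here:

```python
# Sketch of OFA-SPEC-like power spectra and an OFA-MATRIX-like spectral matrix
# from multi-component waveform blocks (synthetic signals, illustrative constants).
import numpy as np

fs, n = 65536, 4096                       # 64 kHz-class sampling, one FFT block
t = np.arange(n) / fs
ex = np.sin(2 * np.pi * 2000 * t)         # hypothetical E-field component
by = np.sin(2 * np.pi * 2000 * t + 0.5)   # hypothetical B-field component
channels = np.vstack([ex, by])

win = np.hanning(n)
spectra = np.fft.rfft(channels * win, axis=1)
power = np.abs(spectra) ** 2                                 # per-channel power spectra
matrix = np.einsum('if,jf->ijf', spectra, spectra.conj())    # spectral matrix per bin

freqs = np.fft.rfftfreq(n, 1 / fs)
peak = np.argmax(power[0])
print(f"peak at {freqs[peak]:.0f} Hz, cross-phase {np.angle(matrix[0, 1, peak]):.2f} rad")
```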

  12. Intelligent trigger processor for the crystal box

    International Nuclear Information System (INIS)

    Sanders, G.H.; Butler, H.S.; Cooper, M.D.

    1981-01-01

    A large solid angle modular NaI(Tl) detector with 432 phototubes and 88 trigger scintillators is being used to search simultaneously for three lepton-flavor-changing decays of the muon. A beam of up to 10^6 muons stopping per second with a 6% duty factor would yield up to 1000 triggers per second from random triple coincidences. A reduction of the trigger rate to 10 Hz is required from the hardwired primary trigger processor described in this paper. Further reduction to < 1 Hz is achieved by a microprocessor-based secondary trigger processor. The primary trigger hardware imposes voter coincidence logic, stringent timing requirements, and a non-adjacency requirement in the trigger scintillators defined by hardwired circuits. Sophisticated geometric requirements are imposed by PROM-based matrix logic, and energy and vector-momentum cuts are imposed by a hardwired processor using LSI flash ADCs and digital arithmetic logic. The secondary trigger employs four satellite microprocessors to do a sparse data scan, multiplex the data acquisition channels and apply additional event filtering

  13. 40 CFR 80.840 - What requirements apply to transmix processors?

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Toxics Gasoline Toxics Performance Requirements § 80.840 What requirements apply to transmix processors? Any transmix processor who produces gasoline or gasoline blendstock from transmix, or recovers gasoline or gasoline blendstock from transmix...

  14. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  15. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  16. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed on purpose to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. We report on the performance of the intermedia...
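    The pattern-matching principle of the AM system can be illustrated independently of the hardware: coarse hit patterns (one "super-strip" per silicon layer) are compared in parallel against a pre-computed pattern bank, and patterns within an allowed number of misses become track seeds. The bank, event and miss tolerance below are hypothetical:

```python
# Illustrative sketch of associative-memory-style pattern matching in software;
# the real AM chips do this comparison in parallel in custom silicon.
import numpy as np

pattern_bank = np.array([   # each row: expected super-strip ID per detector layer
    [3, 7, 12, 18, 25, 31, 40, 52],
    [3, 8, 12, 19, 26, 32, 41, 53],
    [9, 1, 22, 30, 14, 45, 60, 2],
])

def match(event_hits, bank, max_misses=1):
    """Return indices of bank patterns matched by the event, allowing a few misses."""
    hits = np.array(event_hits)
    misses = (bank != hits).sum(axis=1)          # compare against all patterns at once
    return np.nonzero(misses <= max_misses)[0]

event = [3, 7, 12, 19, 25, 31, 40, 52]           # hypothetical event (one layer off)
print("matched road(s):", match(event, pattern_bank))
```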

  17. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beccherle, R; Beretta, M; Cipriani, R; Citraro, S; Citterio, M; Colombo, A; Crescioli, F; Dimas, D; Donati, S; Giannetti, P; Kordas, K; Lanza, A; Liberali, V; Luciano, P; Magalotti, D; Neroutsos, P; Nikolaidis, S; Piendibene, M; Sakellariou, A; Shojaii, S; Sotiropoulou, C-L; Stabile, A

    2014-01-01

    The Associative Memory (AM) system of the FTK processor has been designed to perform pattern matching using the hit information of the ATLAS silicon tracker. The AM is the heart of the FTK and it finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside the FTK, multiple designs and tests have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of the AM chip, an ASIC designed and optimized to perform pattern matching, and two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. Special relevance will be given to the AMchip design that includes two custom cells optimized for low consumption. We repo...

  18. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed to execute pattern matching with a high degree of parallelism. The AM system finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 828 2 Gbit/s serial links for a total in/out bandwidth of 56 Gb/s. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. ...

  19. Safe and Efficient Support for Embedded Multi-Processors in Ada

    Science.gov (United States)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.

  20. A programmable systolic trigger processor for FERA bus data

    International Nuclear Information System (INIS)

    Appelquist, G.; Hovander, B.; Sellden, B.; Bohm, C.

    1992-09-01

    A generic CAMAC based trigger processor module for fast processing of large amounts of ADC data has been designed. This module has been realised using complex programmable gate arrays (LCAs from XILINX). The gate arrays have been connected to memories and multipliers in such a way that different gate array configurations can cover a wide range of module applications. Using this module, it is possible to construct complex trigger processors. The module uses both the fast ECL FERA bus and the CAMAC bus for inputs and outputs. The latter, however, is primarily used for set-up and control but may also be used for data output. Large numbers of ADCs can be served by a hierarchical arrangement of trigger processor modules, processing ADC data with pipelined arithmetic and producing the final result at the apex of the pyramid. The trigger decision is transmitted to the data acquisition system via a logic signal, while numeric results may be extracted by the CAMAC controller. The trigger processor was originally developed for the proposed neutral particle search experiment at CERN, NUMASS, where it was designed to serve as a second-level trigger processor. It was required to correct all ADC raw data for efficiency and pedestal, calculate the total calorimeter energy, obtain the optimal time-of-flight data and calculate the particle mass. A suitable mass cut would then deliver the trigger decision. More complex triggers were also considered. (au)
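    The second-level computation listed above (pedestal correction, calorimeter sum, time-of-flight mass, mass cut) can be sketched as follows; all calibration constants, the flight path and the mass window are invented for the example:

```python
# Sketch of the trigger arithmetic: pedestal/gain-correct the ADC words, sum the
# calorimeter energy, form a mass from time of flight, and apply a mass-window cut.
import numpy as np

C = 299_792_458.0                      # speed of light [m/s]

def corrected_energy(raw, pedestal, gain):
    return (raw - pedestal) * gain     # per-channel pedestal/efficiency correction

def tof_mass(p_gev, path_m, tof_s):
    beta = path_m / (C * tof_s)
    return p_gev * np.sqrt(max(1.0 / beta**2 - 1.0, 0.0))   # m = p*sqrt(1/beta^2 - 1)

raw = np.array([510, 498, 523, 505])           # hypothetical ADC words
ped = np.array([500, 490, 500, 495])           # hypothetical pedestals
gain = np.array([0.05, 0.05, 0.05, 0.05])      # hypothetical GeV per ADC count
e_total = corrected_energy(raw, ped, gain).sum()

m = tof_mass(p_gev=1.0, path_m=10.0, tof_s=43e-9)
trigger = (e_total > 1.0) and (0.8 < m < 1.1)  # hypothetical energy and mass cuts
print(f"E = {e_total:.2f} GeV, m = {m:.2f} GeV/c^2, trigger = {trigger}")
```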

  1. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    Science.gov (United States)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement has multiplied the data rate by 8 from SPOT4 to SPOT5 [1] and by 28 for PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows significant compression gains by detecting and then differently compressing the regions of interest (ROI) and of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most CNES high resolution images are cloudy [1], significant mass-memory and transmission gains could be reached by simply detecting and suppressing (or strongly compressing) the areas covered by clouds. Since 2007, CNES has worked on a cloud detection module [3] as a simplification, for on-board implementation, of an already existing module used on the ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].
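    Of the classifier steps listed above, only the final SVM evaluation is sketched below, assuming an RBF kernel and hypothetical trained parameters; the reflectance and multispectral feature computations are mission-specific and omitted:

```python
# Hedged sketch of an RBF-kernel SVM decision function
# f(x) = sum_i a_i*y_i*K(x_i, x) + b, evaluated for one pixel's feature vector.
import numpy as np

def rbf(x, sv, gamma):
    return np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))

def svm_decision(x, support_vectors, dual_coefs, bias, gamma=0.5):
    return float(dual_coefs @ rbf(x, support_vectors, gamma) + bias)

# hypothetical trained parameters (support vectors, dual coefficients a_i*y_i, bias)
support_vectors = np.array([[0.8, 0.7, 0.9], [0.2, 0.3, 0.1]])
dual_coefs = np.array([1.2, -1.2])
bias = -0.1

pixel_features = np.array([0.75, 0.72, 0.88])   # hypothetical multispectral criteria
is_cloud = svm_decision(pixel_features, support_vectors, dual_coefs, bias) > 0.0
print("classified as cloud:", is_cloud)
```

    A fixed-arithmetic version of exactly this kind of kernel sum is the part that an HLS flow would turn into VHDL for the on-board implementation.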

  2. Onboard autonomous mission re-planning for multi-satellite system

    Science.gov (United States)

    Zheng, Zixuan; Guo, Jian; Gill, Eberhard

    2018-04-01

    This paper presents an onboard autonomous mission re-planning system for a Multi-Satellite System (MSS) to perform onboard re-planning in disruptive situations. The proposed re-planning system can deal with different potential emergency situations. This paper uses a Multi-Objective Hybrid Dynamic Mutation Genetic Algorithm (MO-HDM GA) combined with re-planning techniques as the core algorithm. The Cyclically Re-planning Method (CRM) and the Near Real-time Re-planning Method (NRRM) are developed to meet different mission requirements. Simulation results show that both methods can provide feasible re-planning sequences under unforeseen situations. The comparisons illustrate that the CRM is on average 20% faster than the NRRM in computation time. However, by using the NRRM more raw data can be observed and transmitted than with the CRM within the same period. The usability of this onboard re-planning system is not limited to multi-satellite systems; it is also applicable to other mission planning and re-planning problems for autonomous multiple vehicles with similar demands.
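    The re-planning core can be pictured with a deliberately simplified, single-objective genetic algorithm: candidate observation sequences are mutated and selected so that higher-priority, still-reachable targets are kept after a disruptive event. This is only a generic GA sketch, not the paper's multi-objective hybrid dynamic mutation operator, and all targets and priorities are invented:

```python
# Generic GA sketch for re-planning an observation sequence after a failure
# (single-objective, fixed mutation rate; illustrative only).
import random

targets = list(range(10))                       # hypothetical target IDs
priority = {t: random.random() for t in targets}
failed = {3, 7}                                 # targets lost to a disruptive event

def fitness(seq):
    # reward priority of reachable targets, penalize scheduling failed ones
    return sum(priority[t] for t in seq if t not in failed) - 2 * sum(t in failed for t in seq)

def mutate(seq, rate=0.2):
    seq = seq[:]
    for i in range(len(seq)):
        if random.random() < rate:
            seq[i] = random.choice(targets)     # a dynamic mutation would adapt this rate
    return seq

population = [random.sample(targets, 6) for _ in range(30)]
for _ in range(100):                            # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]
best = max(population, key=fitness)
print("re-planned sequence:", best, "fitness:", round(fitness(best), 2))
```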

  3. Autonomous, On-board Processing for Sensor Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — Fuse high performance reconfigurable processors with emerging fault-tolerance & autonomous processing techniques for a 10-100x decrease in processing time. This...

  4. 7 CFR 1215.14 - Processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Processor. 1215.14 Section 1215.14 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14...

  5. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available As processor architectures became more advanced, Intel introduced its Intel Core 2 Duo series processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studies the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz and 3 GHz Intel Core 2 Duo series processor frequencies. Our results show the performance scalability of Intel Core 2 Duo series processors. Even though the AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by ~370 s for AI workloads. In the case of Physics/Quantum Computing programs it was ~940 s.
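    The frequency-based TCT estimate amounts to simple inverse scaling under an ideal, CPU-bound assumption; the baseline value in the sketch below is hypothetical, not a number from the paper:

```python
# Back-of-the-envelope frequency scaling: under ideal (CPU-bound) scaling,
# task completion time varies inversely with clock frequency.
def scaled_tct(tct_base_s, f_base_ghz, f_target_ghz):
    return tct_base_s * f_base_ghz / f_target_ghz

baseline = 900.0                      # hypothetical AI-benchmark TCT at 1 GHz [s]
for f in (1.0, 2.0, 3.0):
    print(f"@{f:.0f} GHz: {scaled_tct(baseline, 1.0, f):.0f} s")
```

    Real workloads scale less than ideally once memory stalls dominate, which is one reason the measured characteristics of the benchmarks differ even when their execution times are similar.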

  6. Photonics and Fiber Optics Processor Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Photonics and Fiber Optics Processor Lab develops, tests and evaluates high speed fiber optic network components as well as network protocols. In addition, this...

  7. Slowdown in the $M/M/1$ discriminatory processor-sharing queue

    NARCIS (Netherlands)

    Cheung, S.K.; Kim, Bara; Kim, Jeongsim

    2008-01-01

    We consider a queue with K job classes, Poisson arrivals, and exponentially distributed required service times, in which a single processor serves according to the discriminatory processor-sharing (DPS) discipline. For this queue, we obtain the first and second moments of the slowdown, which

  8. Online Fastbus processor for LEP

    International Nuclear Information System (INIS)

    Mueller, H.

    1986-01-01

    The author describes the online computing aspects of Fastbus systems using a processor module which has been developed at CERN and is now available commercially. These General Purpose Master/Slaves (GPMS) are based on 68000/10 (or optionally 68020/68881) processors. Applications include use as event-filters (DELPHI), supervisory controllers, Fastbus stand-alone diagnostic tools, and multiprocessor array components. The direct mapping of single, 32-bit assembly instructions to execute Fastbus protocols makes the use of a GPM both simple and flexible. Loosely coupled processing in Fastbus networks is possible between GPM's as they support access semaphores and use a two port memory as I/O buffer for Fastbus. Both master and slave-ports support block transfers up to 20 Mbytes/s. The CERN standard Fastbus software and the MoniCa symbolic debugging monitor are available on the GPM with real time, multiprocessing support. (Auth.)

  9. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays. It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees of non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding any global memory access such as GPUs and may even support loops with complex dependencies such as loop-carried dependencies that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  10. Applied Questions of Onboard Laser Radar Equipment Development

    Directory of Open Access Journals (Sweden)

    E. I. Starovoitov

    2015-01-01

    Full Text Available During development of spacecraft laser radar systems (LRS) it is a problem to choose laser sources and photo-detectors, both because of the specifics of their use in onboard equipment and because of the limited number of domestic and foreign manufacturers. Previous publications did not consider in detail the accuracy versus laser pulse repetition frequency, the impact of photo-detector sensitivity and dynamic range on the LRS characteristics, or the protection of the photo-detector against overload by the signal power. The objective of this work is to analyze how the range, accuracy, and reliability of an onboard LRS depend on different types of laser sources and photo-detectors, and on the availability of an electromechanical optical attenuator. The paper describes design solutions that are used to compensate for a decreased sensitivity of the photo-detector and the impact of these changes on the LRS characteristics. It is shown that, due to its high pulse repetition frequency, a fiber laser is the preferred type of laser source in an onboard LRS, which can be used at ranges less than 500 m for two purposes: determining the orientation of the passive spacecraft with an accuracy of 0.3 and measuring the range rate during the rendezvous of spacecraft with an accuracy of 0.003...0.006 m/s. The work identifies the attenuation level of the optical attenuator versus measured range. In close proximity to a diffusely reflecting passive spacecraft and a corner reflector this attenuator protects the photo-detector. It is found that the optical attenuator is advisable when using a photo-detector based on an avalanche photodiode. There is no need for an optical attenuator (if a geometric factor is available) in the case of sounding a corner reflector when a photo-detector based on a pin-photodiode is used. Excluding the electromechanical optical attenuator can increase the reliability function of the LRS from P(t) = 0.9991 to P(t) = 0.9993. The results obtained in this work can be used

  11. A Modular Pipelined Processor for High Resolution Gamma-Ray Spectroscopy

    Science.gov (United States)

    Veiga, Alejandro; Grunfeld, Christian

    2016-02-01

    The design of a digital signal processor for gamma-ray applications is presented in which a single ADC input can simultaneously provide temporal and energy characterization of gamma radiation for a wide range of applications. Applying pipelining techniques, the processor is able to manage and synchronize very large volumes of streamed real-time data. Its modular user interface provides a flexible environment for experimental design. The processor can fit in a medium-sized FPGA device operating at ADC sampling frequency, providing an efficient solution for multi-channel applications. Two experiments are presented in order to characterize its temporal and energy resolution.

  12. A survey of Tumult, a real-time multi-processor system

    International Nuclear Information System (INIS)

    Jansen, P.G.

    1986-01-01

    Tumult (Twente University MULTi processor system) is the name of an ongoing project aiming at the design and implementation of a modular extendible multiprocessor system. All memory is distributed and processors communicate in parallel via a fast and reliable local switching network instead of a shared bus. A distributed real-time operating system is being designed and implemented, consisting of a multi-tasking subsystem per processor. Processes can communicate via a message passing mechanism. Communication links and processes are dynamically created and disposed by the application. In this article a brief description of the system is given; communication aspects are emphasized. (Auth.)

  13. Reaction-diffusion path planning in a hybrid chemical and cellular-automaton processor

    International Nuclear Information System (INIS)

    Adamatzky, Andrew; Lacy Costello, Benjamin de

    2003-01-01

    To find the shortest collision-free path in a room containing obstacles we designed a chemical processor and coupled it with a cellular-automaton processor. In the chemical processor obstacles are represented by sites of high concentration of potassium iodide and a planar substrate is saturated with palladium chloride. Potassium iodide diffuses into the substrate and reacts with palladium chloride. A dark coloured precipitate of palladium iodide is formed almost everywhere except at sites where two or more diffusion wavefronts collide. The less coloured sites are situated at the furthest distance from obstacles. Thus, the chemical processor develops a repulsive field, generated by the obstacles. A snapshot of the chemical processor is input to a cellular automaton. The automaton behaves like a discrete excitable medium; also, every cell of the automaton is supplied with a pointer that shows the origin of the cell's excitation. The excitation spreads along the cells corresponding to precipitate-depleted sites of the chemical processor. When the destination site is excited, waves travel on the lattice and update the orientations of the pointers. Thus, the automaton constructs a spanning tree, made of pointers, that guides a traveller towards the destination point. The automaton medium thereby generates an attractive field, and the combination of this attractive field with the repulsive field generated by the chemical processor provides a solution of the collision-free path problem.
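
    The following Python sketch is a discrete grid analogue of the two-field idea described above, not a simulation of the chemical processor: a multi-source breadth-first search from the obstacles plays the role of the repulsive precipitate field, and a second wavefront grown from the destination stores one parent pointer per cell, i.e. the spanning tree that guides the traveller. The grid, clearance threshold, and coordinates are made up for illustration.

```python
from collections import deque

FREE, OBST = ".", "#"
GRID = ["............",
        "..####......",
        "............",
        "............",
        "......####..",
        "............",
        "............"]
H, W = len(GRID), len(GRID[0])
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def bfs_field(sources, passable):
    """Multi-source BFS returning per-cell distance and parent pointers."""
    dist = {s: 0 for s in sources}
    parent = {}
    q = deque(sources)
    while q:
        r, c = q.popleft()
        for dr, dc in N4:
            n = (r + dr, c + dc)
            if 0 <= n[0] < H and 0 <= n[1] < W and n not in dist and passable(*n):
                dist[n] = dist[(r, c)] + 1
                parent[n] = (r, c)      # pointer one step back towards the source
                q.append(n)
    return dist, parent

# "Repulsive" field: distance of every free cell from the nearest obstacle.
obstacles = [(r, c) for r in range(H) for c in range(W) if GRID[r][c] == OBST]
repulse, _ = bfs_field(obstacles, lambda r, c: GRID[r][c] == FREE)

# "Attractive" field: wavefront from the goal through cells with some clearance.
CLEARANCE = 1                           # require a BFS distance of at least 2 from obstacles
start, goal = (0, 0), (6, 11)
_, tree = bfs_field([goal], lambda r, c: GRID[r][c] == FREE
                    and repulse.get((r, c), H + W) > CLEARANCE)

# Follow the spanning-tree pointers from the start towards the destination.
path, cell = [start], start
while cell != goal:
    cell = tree[cell]
    path.append(cell)
print(path)
```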

  14. A VAX-FPS Loosely-Coupled Array of Processors

    International Nuclear Information System (INIS)

    Grosdidier, G.

    1987-03-01

    The main features of a VAX-FPS Loosely-Coupled Array of Processors (LCAP) set-up and the implementation of a High Energy Physics tracking program for off-line purposes will be described. This LCAP consists of a VAX 11/750 host and two FPS 64-bit attached processors. Before analyzing the performance of this LCAP, its characteristics will be outlined, especially from a user's point of view, and will be briefly compared to those of the IBM-FPS LCAP

  15. Reducing Competitive Cache Misses in Modern Processor Architectures

    OpenAIRE

    Prisagjanec, Milcho; Mitrevski, Pece

    2017-01-01

    The increasing number of threads inside the cores of a multicore processor, and competitive access to the shared cache memory, become the main reasons for an increased number of competitive cache misses and performance decline. Inevitably, the development of modern processor architectures leads to an increased number of cache misses. In this paper, we make an attempt to implement a technique for decreasing the number of competitive cache misses in the first level of cache memory. This tec...

  16. 16-Bit RISC Processor Design for Convolution Application

    OpenAIRE

    Anand Nandakumar Shardul

    2013-01-01

    In this project, we propose a 16-bit non-pipelined RISC processor intended for signal processing applications. The processor consists of the following blocks: program counter, clock control unit, ALU, IDU and registers. Advantageous architectural modifications have been made in the incrementer circuit used in the program counter and in the carry select adder unit of the ALU in the RISC CPU core. Furthermore, a high-speed, low-power modified multiplier has been designed and introduced in ...

  17. Modal Processor Effects Inspired by Hammond Tonewheel Organs

    Directory of Open Access Journals (Sweden)

    Kurt James Werner

    2016-06-01

    Full Text Available In this design study, we introduce a novel class of digital audio effects that extend the recently introduced modal processor approach to artificial reverberation and effects processing. These pitch and distortion processing effects mimic the design and sonics of a classic additive-synthesis-based electromechanical musical instrument, the Hammond tonewheel organ. As a reverb effect, the modal processor simulates a room response as the sum of resonant filter responses. This architecture provides precise, interactive control over the frequency, damping, and complex amplitude of each mode. Into this framework, we introduce two types of processing effects: pitch effects inspired by the Hammond organ’s equal tempered “tonewheels”, “drawbar” tone controls, vibrato/chorus circuit, and distortion effects inspired by the pseudo-sinusoidal shape of its tonewheels and electromagnetic pickup distortion. The result is an effects processor that imprints the Hammond organ’s sonics onto any audio input.

  18. Safety-critical Java on a time-predictable processor

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Schoeberl, Martin; Puffitsch, Wolfgang

    2015-01-01

    For real-time systems the whole execution stack needs to be time-predictable and analyzable for the worst-case execution time (WCET). This paper presents a time-predictable platform for safety-critical Java. The platform consists of (1) the Patmos processor, which is a time-predictable processor; (2) a C compiler for Patmos with support for WCET analysis; (3) the HVM, which is a Java-to-C compiler; (4) the HVM-SCJ implementation which supports SCJ Level 0, 1, and 2 (for both single and multicore platforms); and (5) a WCET analysis tool. We show that real-time Java programs translated to C and compiled to a Patmos binary can be analyzed by the AbsInt aiT WCET analysis tool. To the best of our knowledge the presented system is the second WCET-analyzable real-time Java system, and the first one on top of a RISC processor.

  19. Stepping motor control processor reference manual. Volume I

    International Nuclear Information System (INIS)

    Holloway, F.W.; VanArsdall, P.J.; Suski, G.J.; Gant, R.G.; Rash, M.

    1980-01-01

    This manual is intended to serve several purposes. The first goal is to describe the capabilities and operation of the SMC processor package from an operator or user point of view. Secondly, the manual will describe in some detail the basic hardware elements and how they can be used effectively to implement a step motor control system. Practical information on the use, installation and checkout of the hardware set is presented in the following sections along with programming suggestions. Available related system software is described in this manual for reference and as an aid in understanding the system architecture. Section two presents an overview and operations manual of the SMC processor describing its composition and functional capabilities. Section three contains hardware descriptions in some detail for the LLL-designed hardware used in the SMC processor. Basic theory of operation and important features are explained

  20. The Interface Between Redundant Processor Modules Of Safety Grade PLC Using Mass Storage DPRAM

    International Nuclear Information System (INIS)

    Hwang, Sung Jae; Song, Seong Hwan; No, Young Hun; Yun, Dong Hwa; Park, Gang Min; Kim, Min Gyu; Choi, Kyung Chul; Lee, Ui Taek

    2010-01-01

    The processor module of the safety grade PLC (hereinafter called POSAFE-Q) developed by POSCO ICT provides high reliability and safety. However, POSAFE-Q could suffer a malfunction under abnormal operation caused by exceptional environmental conditions, and would not be able to perform its function normally in such a case. To prevent these situations, the necessity of a redundant processor module has been raised. Therefore, a redundant processor module, NCPU-2Q, has been developed which has not only the functions of a single processor module with high reliability and safety but also the functions of a redundant processor

  1. The performances of coffee processors and coffee market in the Republic of Serbia

    Directory of Open Access Journals (Sweden)

    Nuševa Daniela

    2017-01-01

    Full Text Available The main aim of this paper is to investigate the performance of coffee processors and the coffee market in Serbia based on market concentration analysis, profitability analysis, and profitability determinants analysis. The research was based on a sample of 40 observations of coffee processing companies divided into two groups: large and small coffee processors. The results indicate that two large coffee processors have a dominant market share. Even though the Serbian coffee market is oligopolistic, profitability analysis indicates that small coffee processors have a significantly better profitability ratio than large coffee processors. Furthermore, the results show that the profitability ratio is positively related to inventory turnover and negatively related to market share.

  2. Hardware Synchronization for Embedded Multi-Core Processors

    DEFF Research Database (Denmark)

    Stoif, Christian; Schoeberl, Martin; Liccardi, Benito

    2011-01-01

    Multi-core processors are about to conquer embedded systems — it is not the question of whether they are coming but how the architectures of the microcontrollers should look with respect to the strict requirements in the field. We present the step from one to multiple cores in this paper...

  3. The microelectronic and photonic test bed RISC processor and DRAM memory stack experiments

    International Nuclear Information System (INIS)

    Clark, K.A.; Meehan, T.J.

    1999-01-01

    This paper reports on the on-orbit data obtained from the MPTB RISC Processor Experiment, containing three Integrated Device Technologies R3081 processors. During operations, nine SEUs were observed in the processors, and four SEUs were observed in the memory and/or support circuitry. (authors)

  4. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded processor based automatic temperature control of VLSI chips, using the temperature sensor LM35 and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up quickly and, if not cooled properly, their performance is severely affected. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of the fan placed near the IC is controlled based on the PWM signal generated by the ARM processor. A buzzer is also provided with the hardware to indicate either the failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
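
    As a language-neutral illustration of the control loop described above (the paper implements it as an embedded C program on the LPC2378; the thresholds and ramp limits below are hypothetical), the temperature read from an LM35, which outputs 10 mV per degree Celsius, can be mapped to a fan PWM duty cycle and an over-temperature alarm roughly as follows:

```python
# Illustrative sketch only (hypothetical thresholds, not taken from the paper):
# map a sensed die temperature to a PWM duty cycle for the cooling fan and
# raise an alarm when the chip runs hot, mirroring the LM35 + ARM scheme.

ALARM_LIMIT_C = 85.0          # hypothetical over-temperature threshold

def lm35_to_celsius(adc_voltage_v):
    """LM35 scale factor: 10 mV per degree Celsius."""
    return adc_voltage_v / 0.010

def fan_duty(temp_c, t_min=40.0, t_max=80.0):
    """Linear ramp: fan off below t_min, full speed above t_max."""
    frac = (temp_c - t_min) / (t_max - t_min)
    return min(1.0, max(0.0, frac))

for v in (0.35, 0.60, 0.90):                 # example ADC readings in volts
    t = lm35_to_celsius(v)
    duty = fan_duty(t)
    alarm = t > ALARM_LIMIT_C
    print(f"{t:5.1f} C -> duty {duty:4.0%}  buzzer={'ON' if alarm else 'off'}")
```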

  5. Hardware Realization of an FPGA Processor – Operating System Call Offload and Experiences

    DEFF Research Database (Denmark)

    Hindborg, Andreas Erik; Schleuniger, Pascal; Jensen, Nicklas Bo

    2014-01-01

    Field-programmable gate arrays, FPGAs, are attractive implementation platforms for low-volume signal and image processing applications. The structure of FPGAs allows for an efficient implementation of parallel algorithms. Sequential algorithms, on the other hand, often perform better... core that can be integrated in many signal and data processing platforms on FPGAs. We also show how we allow the processor to use operating system services. For a set of SPLASH-2 and SPEC CPU2006 benchmarks we show a speedup of up to 64% over a similar Xilinx MicroBlaze implementation while using 27...

  6. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    Science.gov (United States)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N x M points is approximately proportional to n^2, i.e. O(n^2), becoming prohibitive for large images or when the data processing cadence time is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods of solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance; namely, they are slow for large images that require a performance cadence of a few tens of milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma-ray measurements contaminated by an overwhelming number of traces of cosmic rays (CR). Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma-ray survey employ large detector readout planes registering multitudes of cosmic-ray interference events and sparse science gamma-ray event traces' projections. The AdEPT science of interest is in the
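
    A minimal (theta, rho) accumulator version of the Hough transform referenced in this record is sketched below; it votes each point into all line bins, so the cost is linear in the number of points per angle rather than quadratic in point pairs. This is a plain illustration of the Duda-Hart/Hough idea, not the AdEPT flight algorithm, and the bin counts and test points are invented.

```python
import math
from collections import Counter

def hough_peak(points, n_theta=180, rho_step=1.0):
    """Vote every point into (theta, rho) bins and return the busiest bin."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    (t_best, r_best), votes = acc.most_common(1)[0]
    return math.degrees(math.pi * t_best / n_theta), r_best * rho_step, votes

# 20 collinear points (roughly the line y = x) plus two isolated outliers.
pts = [(i, i) for i in range(20)] + [(3, 15), (17, 2)]
theta_deg, rho, votes = hough_peak(pts)
print(f"dominant line: theta ~ {theta_deg:.0f} deg, rho ~ {rho:.0f}, {votes} votes")
```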

  7. gFEX, the ATLAS Calorimeter Level-1 Real Time Processor

    CERN Document Server

    AUTHOR|(SzGeCERN)759889; The ATLAS collaboration; Begel, Michael; Chen, Hucheng; Lanni, Francesco; Takai, Helio; Wu, Weihao

    2016-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Virtex UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 276 optical fibers with the data transferred at the 40 MHz Large Hadron Collider (LHC) clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor Field-Programmable Gate Arrays (FPGAs), monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Virtex-7 FPGA ...

  8. gFEX, the ATLAS Calorimeter Level 1 Real Time Processor

    CERN Document Server

    Tang, Shaochun; The ATLAS collaboration

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor FPGAs, monitor board health, and interface to external signals. Now, the pre-prototype board which includes one ZYNQ and one Virtex-7 FPGA has been designed for testing and verification. The performance ...

  9. Scientific programming on massively parallel processor CP-PACS

    International Nuclear Information System (INIS)

    Boku, Taisuke

    1998-01-01

    The massively parallel processor CP-PACS targets a wide range of problems in computational physics, and its architecture has been designed to handle various kinds of numerical processing. In this report, an outline of the CP-PACS and a programming example for the Kernel CG benchmark of the NAS Parallel Benchmarks, version 1, are shown, and the pseudo vector processing mechanism and the parallel-processing tuning of scientific and technical computations utilizing the three-dimensional hyper-crossbar network, which are the two main features of the CP-PACS architecture, are described. The CP-PACS uses processing units (PUs) based on a RISC processor augmented with a pseudo vector processor. Pseudo vector processing is realized as loop processing with scalar instructions. The features of the network connecting the PUs are explained. The algorithm of the NPB version 1 Kernel CG is shown. The part that takes most of the processing time in the main loop is the matrix-vector product (matvec), and the parallel processing of the matvec is explained. The computation time on the CPU is determined. As a performance evaluation, the execution-time results, the short-vector processing of the pseudo vector processor based on a slide window, and a comparison with other parallel computers are reported. (K.I.)
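
    The row-blocked matrix-vector product at the heart of the Kernel CG discussion can be illustrated with a small sketch (NumPy here, purely for exposition; the real CP-PACS code distributes the blocks over PUs and exchanges data over the hyper-crossbar network, which is not modelled):

```python
# Hedged illustration (not CP-PACS code): each processing unit owns a block of
# matrix rows, computes its slice of y = A x, and the slices are concatenated.
import numpy as np

def parallel_matvec(A, x, n_pu=4):
    row_blocks = np.array_split(np.arange(A.shape[0]), n_pu)
    local = [A[rows, :] @ x for rows in row_blocks]   # one slice per PU
    return np.concatenate(local)

rng = np.random.default_rng(0)
A = rng.random((12, 12))
x = rng.random(12)
assert np.allclose(parallel_matvec(A, x), A @ x)
print("row-blocked matvec matches the serial product")
```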

  10. Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors

    Science.gov (United States)

    2013-02-01

    The Playstation 3 with 6 available SPU cores outperforms the Intel Xeon processor (with 4 cores) by about 1.9 times for the HTM model and by 2.4 times... Runtime breakdowns of the HTM and Dean models are reported for the Cell processor (in the Playstation 3) and the Intel Xeon processor (4 threads)...

  11. Sample Processor for Life on Icy Worlds (SPLIce): Design and Test Results

    Science.gov (United States)

    Chinn, Tori N.; Lee, Anthony K.; Boone, Travis D.; Tan, Ming X.; Chin, Matthew M.; McCutcheon, Griffin C.; Horne, Mera F.; Padgen, Michael R.; Blaich, Justin T.; Forgione, Joshua B.; hide

    2017-01-01

    We report the design, development, and testing of the Sample Processor for Life on Icy Worlds (SPLIce) system, a microfluidic sample processor to enable autonomous detection of signatures of life and measurements of habitability parameters in Ocean Worlds. This monolithic fluid processing-and-handling system (Figure 1; mass 0.5 kg) retrieves a 50-µL-volume sample and prepares it to supply a suite of detection instruments, each with unique preparation needs. SPLIce has potential applications in orbiter missions that sample ocean plumes, such as those found at Saturn's icy moon Enceladus, or landed missions on the surface of icy satellites, such as Jupiter's moon Europa. Answering the question "Are we alone in the universe?" is captivating and exceptionally challenging. Even general criteria that define life very broadly include a significant role for water [1,2]. Searches for extinct or extant life therefore prioritize locations of abundant water, whether in ancient (Mars) or present (Europa and Enceladus) times. Only two previous planetary missions had onboard fluid processing: the Viking Biology Experiments [3] and Phoenix's Wet Chemistry Laboratory (WCL) [4]. SPLIce differs crucially from those systems, including in its capability to process and distribute µL-volume samples and its integrated, autonomous control of a wide range of fluidic functions, including: 1) retrieval of fluid samples from an evacuated sample chamber; 2) onboard multi-year storage of dehydrated reagents; 3) integrated pressure, pH, and conductivity measurement; 4) filtration and retention of insoluble particles for microscopy; 5) dilution or vacuum-driven concentration of samples to accommodate instrument working ranges; 6) removal of gas bubbles from sample aliquots; 7) unidirectional flow (check valves); 8) active flow-path selection (solenoid-actuated valves); 9) metered pumping in 100 nL volume increments. The SPLIce manifold, made of three thermally fused layers of precision-machined cyclo

  12. New control method of on-board ATP system of Shinkansen trains

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, N.; Watanabe, T. [Railway Technical Research Inst. (Japan)

    2000-07-01

    We studied a new control method for the on-board automatic train protection (ATP) system of Shinkansen trains to shorten the operation time without degrading ride comfort at changes in train deceleration, while maintaining the safety and reliability of the present ATP signal system. We propose a new on-board pattern brake control system based on the present ATP data, without changing the wayside equipment. By simulating the ATP braking of the proposed control method, we succeeded in shortening the operation time by 48 seconds per station in comparison with the present ATP brake control system. This paper reports the concept of the system and the simulation results of the on-board pattern. (orig.)

  13. Automation of ORIGEN2 calculations for the transuranic waste baseline inventory database using a pre-processor and a post-processor

    International Nuclear Information System (INIS)

    Liscum-Powell, J.

    1997-06-01

    The purpose of the work described in this report was to automate ORIGEN2 calculations for the Waste Isolation Pilot Plant (WIPP) Transuranic Waste Baseline Inventory Database (WTWBID); this was done by developing a pre-processor to generate ORIGEN2 input files from WTWBID inventory files and a post-processor to remove excess information from the ORIGEN2 output files. The calculations performed with ORIGEN2 estimate the radioactive decay and buildup of various radionuclides in the waste streams identified in the WTWBID. The resulting radionuclide inventories are needed for performance assessment calculations for the WIPP site. The work resulted in the development of PreORG, which requires interaction with the user to generate ORIGEN2 input files on a site-by-site basis, and PostORG, which processes ORIGEN2 output into more manageable files. Both programs are written in the FORTRAN 77 computer language. After running PreORG, the user runs ORIGEN2 to generate the desired data; upon completion of the ORIGEN2 calculations, the user can run PostORG to process the output to make it more manageable. All of the programs run on a 386 PC or higher with a math co-processor, or on a computer platform running the VMS operating system. The pre- and post-processors for ORIGEN2 were generated for use with Rev. 1 data of the WTWBID and can also be used with Rev. 2 and 3 data of the TWBID (Transuranic Waste Baseline Inventory Database)

  14. A design of a computer complex including vector processors

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1982-12-01

    We, members of the Computing Center, Japan Atomic Energy Research Institute, have been engaged for the past six years in research on the adaptability of vector processing to large-scale nuclear codes. The research has been done in collaboration with researchers and engineers of JAERI and a computer manufacturer. In this research, forty large-scale nuclear codes were investigated from the viewpoint of vectorization. Among them, twenty-six codes were actually vectorized and executed. As a result of the investigation, it is now estimated that about seventy percent of nuclear codes and seventy percent of our total amount of CPU time at JAERI are highly vectorizable. Based on the data obtained by the investigation, (1) the currently vectorizable CPU time, (2) the necessary number of vector processors, (3) the necessary manpower for vectorization of nuclear codes, (4) the computing speed, memory size, number of parallel I/O paths, and size and speed of the I/O buffer of a vector processor suitable for our applications, and (5) the necessary software and operational policy for the use of vector processors are discussed, and finally (6) a computer complex including vector processors is presented in this report. (author)

  15. Amaro-autonomous real-time detection of moving maritime objects: introducing a flight experiment for an on-board ship detection system

    Science.gov (United States)

    Schwenk, Kurt; Willburger, Katharina; Pless, Sebastian

    2017-10-01

    Motivated by politics and economy, the monitoring of worldwide ship traffic is a highly topical field. Detecting illegal activities like piracy, illegal fishing, ocean dumping and refugee transportation is of great value. The analysis of satellite images on the ground delivers a great contribution to situation awareness. However, for many applications the up-to-dateness of the data is crucial. With ground-based processing, the time between image acquisition and delivery of the data to the end user is in the range of several hours. The largest contribution to the duration of ground-based processing is the delay caused by the transmission of the large amount of image data from the satellite to the processing centre on the ground. One expensive solution to this issue is the use of data relay satellite systems like EDRS. Another approach is to analyse the image data directly on board the satellite. Since the product data (e.g. ship position, heading, velocity, characteristics) are very small compared to the input image data, real-time connections provided by satellite telecommunication services like Iridium or Orbcomm can be used to send small packets of information directly to the end user without significant delay. The AMARO (Autonomous real-time detection of moving maritime objects) project at DLR is a feasibility study of an on-board ship detection system involving real-time low-bandwidth communication. The operation of a prototype on-board ship detection system will be demonstrated on an airborne platform. In this article, the scope, aim and design of a flight experiment for an on-board ship detection system scheduled for mid-2018 are presented. First, the scope and the constraints of the experiment are explained in detail. The main goal is to demonstrate the operability of an automatic ship detection system on board an airplane. For data acquisition the optical high-resolution DLR MACS-MARE camera (VIS/NIR) is used. The system will be able to

  16. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  17. On-Orbit Performance of the Helioseismic and Magnetic Imager Instrument onboard the Solar Dynamics Observatory

    Science.gov (United States)

    Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.

    2018-03-01

    The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.

  18. Digital control card based on digital signal processor

    International Nuclear Information System (INIS)

    Hou Shigang; Yin Zhiguo; Xia Le

    2008-01-01

    A digital control card based on a digital signal processor was developed. Two Freescale DSP-56303 processors are utilized to implement three channels of proportional-integral-derivative regulation. The card offers high flexibility for the development of the 100 MeV cyclotron RF system. It was used as the feedback controller in a low-level radio frequency control prototype, with the feedback gain parameters continuously adjustable. By using a high-precision analog-to-digital converter with a 500 kHz sampling rate, a regulation bandwidth of 20 kHz was achieved. (authors)

  19. Performance of Web-based image distribution: client-oriented measurements

    International Nuclear Information System (INIS)

    Bergh, B.; Pietsch, M.; Schlaefke, A.; Vogl, T.J.

    2003-01-01

    The aim of this study was to define a clinically suitable personal computer (PC) configuration for Web-based image distribution and to assess the influence of different hardware and software configurations on the performance. Using specially developed software, the time-to-display (TTD) for various PC configurations was measured. Different processor speeds, amounts of random access memory (RAM), screen resolutions, graphic adapters, network speeds, operating systems and examination types (computed radiography, CT, MRI) were evaluated, providing more than half a million measurements. Processor speed was the most relevant factor for the TTD; doubling the speed halved the TTD. Below processor speeds of 350 MHz, the TTD mostly remained above 5 s for 1 CR or 16 CT images; here, Windows NT with lossy compression was superior. Processor speeds of 350 MHz and over delivered a TTD <5 s; in this case, Windows 2000 and lossless compression were preferable. Screen resolutions above 1280 x 1024 pixels increased the TTD, mainly for CR images. The amount of RAM, the network speed and the graphic adapter did not have a significant influence. The minimum threshold for clinical routine is any standard off-the-shelf PC better than a Pentium II 350 MHz with 128 MB RAM; hence, high-end PC hardware is not required. (orig.)

  20. A Comprehensive Onboarding and Orientation Plan for Neurocritical Care Advanced Practice Providers.

    Science.gov (United States)

    Langley, Tamra M; Dority, Jeremy; Fraser, Justin F; Hatton, Kevin W

    2018-06-01

    As the role of advanced practice providers (APPs) expands to include increasingly complex patient care within the intensive care unit, the educational needs of these providers must also be expanded. An onboarding process was designed for APPs in the neurocritical care service line. Onboarding for new APPs revolved around 5 specific areas: candidate selection, proctor assignment, 3-phased orientation process, remediation, and mentorship. To ensure effective training for APPs, using the most time-conscious approach, the backbone of the process is a structured curriculum. This was developed and integrated within the standard orientation and onboarding process. The curriculum design incorporated measurable learning goals, objective assessments of phased goal achievements, and opportunities for remediation. The neurocritical care service implemented an onboarding process in 2014. Four APPs (3 nurse practitioners and 1 physician assistant) were employed by the department before the implementation of the orientation program. The length of employment ranged from 1 to 4 years. Lack of clinical knowledge and/or sufficient training was cited as reasons for departure from the position in 2 of the 4 APPs, as either self-expression or peer evaluation. Since implementation of this program, 12 APPs have completed the program, of which 10 remain within the division, creating an 83% retention rate. The onboarding process, including a 3-phased, structured orientation plan for neurocritical care, has increased APP retention since its implementation. The educational model, along with proctoring and mentorship, has improved clinical knowledge and increased nurse practitioner retention. A larger-scale study would help to support the validity of this onboarding process.

  1. On-boarding the Middle Manager.

    Science.gov (United States)

    OʼConnor, Mary

    The trend of promoting clinical experts into management roles continues. New middle managers need a transitional plan that includes support, mentoring, and direction from senior leaders, including the chief nursing officer (CNO). This case study demonstrates how the CNO of one organization collaborated with a faculty member colleague to develop and implement a yearlong personalized on-boarding program for a group of new nurse middle managers.

  2. Treecode with a Special-Purpose Processor

    Science.gov (United States)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of the work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
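
    A toy two-dimensional sketch of the "one interaction list per group of particles" idea is given below: the host walks the tree once per group, using the group's bounding radius in the opening test, and the resulting node list is then evaluated for every particle in the group (by GRAPE in the paper, by an ordinary loop here). The quadtree, opening parameter, and particle distribution are invented for illustration.

```python
import math

class Node:
    """Toy quadtree node storing total mass, centre of mass, and size."""
    def __init__(self, particles, center, half):
        self.m = sum(m for m, _, _ in particles)
        self.x = sum(m * x for m, x, _ in particles) / self.m
        self.y = sum(m * y for m, _, y in particles) / self.m
        self.size = 2 * half
        self.children = []
        if len(particles) > 1 and half > 1e-6:
            cx, cy = center
            for sx in (-1, 1):
                for sy in (-1, 1):
                    sub = [p for p in particles
                           if (p[1] >= cx) == (sx > 0) and (p[2] >= cy) == (sy > 0)]
                    if sub:
                        self.children.append(
                            Node(sub, (cx + sx * half / 2, cy + sy * half / 2), half / 2))

def interaction_list(node, gx, gy, gr, theta=0.7):
    """Collect nodes usable as monopoles for every particle in the group."""
    d = math.hypot(node.x - gx, node.y - gy) - gr     # worst-case distance to any group member
    if not node.children or (d > 0 and node.size / d < theta):
        return [node]
    out = []
    for c in node.children:
        out.extend(interaction_list(c, gx, gy, gr, theta))
    return out

# particles: (mass, x, y) scattered in the unit square
parts = [(1.0, (i * 37 % 97) / 97.0, (i * 61 % 89) / 89.0) for i in range(200)]
root = Node(parts, (0.5, 0.5), 0.5)
# one shared list for a group of particles near the lower-left corner
nodes = interaction_list(root, gx=0.1, gy=0.1, gr=0.05)
print(f"{len(nodes)} nodes on the shared interaction list (vs {len(parts)} bodies)")
```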

  3. Token-Aware Completion Functions for Elastic Processor Verification

    Directory of Open Access Journals (Sweden)

    Sudarshan K. Srinivasan

    2009-01-01

    Full Text Available We develop a formal verification procedure to check that elastic pipelined processor designs correctly implement their instruction set architecture (ISA specifications. The notion of correctness we use is based on refinement. Refinement proofs are based on refinement maps, which—in the context of this problem—are functions that map elastic processor states to states of the ISA specification model. Data flow in elastic architectures is complicated by the insertion of any number of buffers in any place in the design, making it hard to construct refinement maps for elastic systems in a systematic manner. We introduce token-aware completion functions, which incorporate a mechanism to track the flow of data in elastic pipelines, as a highly automated and systematic approach to construct refinement maps. We demonstrate the efficiency of the overall verification procedure based on token-aware completion functions using six elastic pipelined processor models based on the DLX architecture.

  4. Implementation of an anonymisation tool for clinical trials using a clinical trial processor integrated with an existing trial patient data information system

    NARCIS (Netherlands)

    Aryanto, Kadek Y. E.; Broekema, Andre; Oudkerk, Matthijs; van Ooijen, Peter M. A.

    To present an adapted Clinical Trial Processor (CTP) test set-up for receiving, anonymising and saving Digital Imaging and Communications in Medicine (DICOM) data using external input from the original database of an existing clinical study information system to guide the anonymisation process. Two

  5. A light hydrocarbon fuel processor producing high-purity hydrogen

    Science.gov (United States)

    Löffler, Daniel G.; Taylor, Kyle; Mason, Dylan

    This paper discusses the design process and presents performance data for a dual fuel (natural gas and LPG) fuel processor for PEM fuel cells delivering between 2 and 8 kW electric power in stationary applications. The fuel processor resulted from a series of design compromises made to address different design constraints. First, the product quality was selected; then, the unit operations needed to achieve that product quality were chosen from the pool of available technologies. Next, the specific equipment needed for each unit operation was selected. Finally, the unit operations were thermally integrated to achieve high thermal efficiency. Early in the design process, it was decided that the fuel processor would deliver high-purity hydrogen. Hydrogen can be separated from other gases by pressure-driven processes based on either selective adsorption or permeation. The pressure requirement made steam reforming (SR) the preferred reforming technology because it does not require compression of combustion air; therefore, steam reforming is more efficient in a high-pressure fuel processor than alternative technologies like autothermal reforming (ATR) or partial oxidation (POX), where the combustion occurs at the pressure of the process stream. A low-temperature pre-reformer reactor is needed upstream of a steam reformer to suppress coke formation; yet, low temperatures facilitate the formation of metal sulfides that deactivate the catalyst. For this reason, a desulfurization unit is needed upstream of the pre-reformer. Hydrogen separation was implemented using a palladium alloy membrane. Packed beds were chosen for the pre-reformer and reformer reactors primarily because of their low cost, relatively simple operation and low maintenance. Commercial, off-the-shelf balance of plant (BOP) components (pumps, valves, and heat exchangers) were used to integrate the unit operations. The fuel processor delivers up to 100 slm hydrogen >99.9% pure with <1 ppm CO, <3 ppm CO2. The

  6. Soft-core dataflow processor architecture optimised for radar signal processing: Article

    CSIR Research Space (South Africa)

    Broich, R

    2014-10-01

    Full Text Available Current radar signal processors lack either performance or flexibility. Custom soft-core processors exhibit potential in high-performance signal processing applications, yet remain relatively unexplored in research literature. In this paper, we use...

  7. Onboard Autonomy and Ground Operations Automation for the Intelligent Payload Experiment (IPEX) CubeSat Mission

    Science.gov (United States)

    Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Tran, Daniel; Bellardo, John; Williams, Austin; Puig-Suari, Jordi; Crum, Gary; Flatley, Thomas

    2012-01-01

    The Intelligent Payload Experiment (IPEX) is a cubesat manifested for launch in October 2013 that will flight validate autonomous operations for onboard instrument processing and product generation for the Intelligent Payload Module (IPM) of the Hyperspectral Infra-red Imager (HyspIRI) mission concept. We first describe the ground and flight operations concept for HyspIRI IPM operations. We then describe the ground and flight operations concept for the IPEX mission and how that will validate HyspIRI IPM operations. We then detail the current status of the mission and outline the schedule for future development.

  8. Elaboration and implementation of standard operational procedure for quality assurance of cone beam CT image in radiotherapy

    International Nuclear Information System (INIS)

    Bonatto, Larisse N.; Estacio, Daniela R.; Lopes, Juliane S.; Sansson, Angela; Duarte, Lucas O.; Sbaraini, Patricia; Silva, Ana M. Marques da; Streck, Elaine E.

    2016-01-01

    The objective of this article is to present the implementation of the quality control of the Cone Beam Computed Tomography (CBCT) image generated by the On-Board Imager integrated with the Trilogy linear accelerator. Standard operating procedures (POPs) were developed based on the literature and the manuals of the Catphan 504 phantom and the On-Board Imager. The following POPs were developed: acquisition of the CBCT image; linearity of the CT number; uniformity; spatial resolution; low-contrast resolution; spatial linearity; slice thickness. The validation of the elaborated procedures was done through an experimental acquisition of the phantom. The results obtained in the validation of the POPs comply with the parameters established by the manufacturer of the phantom, as well as with those obtained in the acceptance of the On-Board Imager device.

  9. Merged ozone profiles from four MIPAS processors

    Science.gov (United States)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectra (microwindows), the source measurements can be considered as nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater and the precision is better than those of any parent (source) dataset. The merging is performed on a profile per profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv
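
    The covariance-weighted combination underlying such a merge can be illustrated, for two parent datasets and ignoring the inter-processor error correlations that the actual MIPAS merge accounts for, by the standard inverse-covariance (generalized least-squares) formula; the profile values and error levels below are invented:

```python
# Illustrative two-dataset sketch of covariance-weighted profile merging,
# not the MIPAS production code.
import numpy as np

def merge_profiles(x1, s1, x2, s2):
    """Combine two profiles with error covariance matrices s1 and s2."""
    w1, w2 = np.linalg.inv(s1), np.linalg.inv(s2)
    s_merged = np.linalg.inv(w1 + w2)            # merged error covariance
    x_merged = s_merged @ (w1 @ x1 + w2 @ x2)    # covariance-weighted profile
    return x_merged, s_merged

levels = 5
x1 = np.linspace(2.0, 8.0, levels)               # toy ozone profile, ppmv
x2 = x1 + 0.2
s1 = np.diag(np.full(levels, 0.04))              # 0.2 ppmv 1-sigma noise
s2 = np.diag(np.full(levels, 0.16))              # 0.4 ppmv 1-sigma noise
xm, sm = merge_profiles(x1, s1, x2, s2)
print(np.round(xm, 2))                           # sits closer to the more precise x1
print(np.round(np.sqrt(np.diag(sm)), 3))         # merged precision beats both parents
```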

  10. Keystone Business Models for Network Security Processors

    Directory of Open Access Journals (Sweden)

    Arthur Low

    2013-07-01

    Full Text Available Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” models nor the silicon intellectual-property licensing (“IP-licensing”) models allow small technology companies to successfully compete. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the "business ecosystem model for network security processors", includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.

  11. Improving the Geolocation Algorithm for Sensors Onboard the ISS: Effect of Drift Angle

    Directory of Open Access Journals (Sweden)

    Changyong Dou

    2014-05-01

    Full Text Available The drift angle caused by the Earth's self-rotation may introduce a rotational displacement artifact in the geolocation results of imagery acquired by an Earth-observing sensor onboard the International Space Station (ISS). If uncorrected, it would cause a gradual degradation of positional accuracy from the center towards the edges of an image. A correction method to account for the drift angle effect was developed. The drift angle was calculated from the ISS state vectors and the positional information of the ground nadir point of the imagery. Tests with images acquired by the International Space Station Agriculture Camera (ISSAC), using Google Earth™ as a reference, indicated that applying the drift angle correction can reduce the residual geolocation error for the corner points of the ISSAC images from over 1000 m to less than 500 m. The improved geolocation accuracy is well within the inherent geolocation uncertainty of up to 800 m, which is mainly due to imprecise knowledge of the ISS attitude and state parameters required by the geolocation algorithm.
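
    Geometrically, the correction amounts to rotating each pixel's offset from the ground nadir point by the drift angle before geolocating it, which is why the uncorrected artifact vanishes at the image centre and grows toward the corners. A hedged sketch follows (the drift-angle value, image size, and function name are hypothetical; the paper derives the angle from the ISS state vectors):

```python
# Hypothetical illustration: rotate pixel offsets about the nadir pixel by the
# drift angle before mapping them to ground coordinates.
import math

def correct_offsets(pixels, nadir_px, drift_deg):
    """Rotate pixel offsets about the nadir pixel by the drift angle."""
    a = math.radians(drift_deg)
    ca, sa = math.cos(a), math.sin(a)
    out = []
    for (u, v) in pixels:
        du, dv = u - nadir_px[0], v - nadir_px[1]
        out.append((nadir_px[0] + ca * du - sa * dv,
                    nadir_px[1] + sa * du + ca * dv))
    return out

corners = [(0, 0), (0, 2047), (2047, 0), (2047, 2047)]
print(correct_offsets(corners, nadir_px=(1023.5, 1023.5), drift_deg=3.5))
```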

  12. Using remotely piloted aircraft and onboard processing to optimize and expand data collection

    Science.gov (United States)

    Fladeland, M. M.; Sullivan, D. V.; Chirayath, V.; Instrella, R.; Phelps, G. A.

    2016-12-01

    Remotely piloted aircraft (RPA) have the potential to revolutionize local to regional data collection for geophysicists as platform and payload size decrease while aircraft capabilities increase. In particular, data from RPAs combine high-resolution imagery available from low flight elevations with comprehensive areal coverage, unattainable from ground investigations and difficult to acquire from manned aircraft due to budgetary and logistical costs. Low flight elevations are particularly important for detecting signals that decay exponentially with distance, such as electromagnetic fields. Onboard data processing coupled with high-bandwidth telemetry open up opportunities for real-time and near real-time data processing, producing more efficient flight plans through the use of payload-directed flight, machine learning and autonomous systems. Such applications not only strive to enhance data collection, but also enable novel sensing modalities and temporal resolution. NASA's Airborne Science Program has been refining the capabilities and applications of RPA in support of satellite calibration and data product validation for several decades. In this paper, we describe current platforms, payloads, and onboard data systems available to the research community. Case studies include Fluid Lensing for littoral zone 3D mapping, structure from motion for terrestrial 3D multispectral imaging, and airborne magnetometry on medium and small RPAs.

  13. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    Science.gov (United States)

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact-such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to meet simultaneously requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
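
    As a purely mathematical illustration of the two-qubit Grover search mentioned above (a state-vector calculation with no model of the superconducting hardware), one oracle call followed by the diffusion operator already concentrates all probability on the marked state for n = 2 qubits:

```python
# Toy state-vector Grover search on two qubits; not a circuit-QED simulation.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)

marked = 2                                   # target basis state |10>
oracle = np.eye(4)
oracle[marked, marked] = -1                  # phase-flip the marked state
diffusion = 2 * np.full((4, 4), 0.25) - np.eye(4)   # 2|s><s| - I

state = H2 @ np.array([1.0, 0, 0, 0])        # uniform superposition from |00>
state = diffusion @ (oracle @ state)         # one Grover iteration
print(np.round(np.abs(state) ** 2, 3))       # probability concentrated on |10>
```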

  14. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Science.gov (United States)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies that are aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project emphasis shifts its focus to developing low-power, high efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability enables avionic architectures the ability to develop FPGA-based, radiation tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  15. OLYMPUS system and development of its pre-processor

    International Nuclear Information System (INIS)

    Okamoto, Masao; Takeda, Tatsuoki; Tanaka, Masatoshi; Asai, Kiyoshi; Nakano, Koh.

    1977-08-01

    The OLYMPUS SYSTEM developed by K. V. Roberts et al. was converted and introduced into the computer system FACOM 230/75 of the JAERI Computing Center. A pre-processor was also developed for the OLYMPUS SYSTEM. The OLYMPUS SYSTEM is very useful for the development, standardization and exchange of programs in thermonuclear fusion research and plasma physics. The pre-processor developed by the present authors is not only essential for the JAERI OLYMPUS SYSTEM, but also useful in the manipulation, creation and correction of program files. (auth.)

  16. A fast processor for di-lepton triggers

    CERN Document Server

    Kostarakis, P; Barsotti, E; Conetti, S; Cox, B; Enagonio, J; Haldeman, M; Haynes, W; Katsanevas, S; Kerns, C; Lebrun, P; Smith, H; Soszyniski, T; Stoffel, J; Treptow, K; Turkot, F; Wagner, R

    1981-01-01

    As a new application of the Fermilab ECL-CAMAC logic modules, a fast trigger processor was developed for Fermilab experiment E-537, which aims to measure high-mass di-muon production by antiprotons. The processor matches the hit information received from drift chambers and scintillation counters to find candidate muon tracks and determine their directions and momenta. The tracks are then paired to compute an invariant mass: when the computed mass falls within the desired range, the event is accepted. The processing is accomplished in 5 to 10 microseconds, while achieving a trigger rate reduction of up to a factor of ten. (5 refs).

  17. Time Manager Software for a Flight Processor

    Science.gov (United States)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, they can freewheel and maintain fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
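
    The freewheeling scheme described above is easy to sketch: synchronize once to a GPS epoch, then derive every later timestamp from a local counter. A minimal illustration, using Python's monotonic clock as a stand-in for the flight processor's time-base register (the class and method names are illustrative, not the actual flight software API):

```python
import time

class TimeManager:
    """Freewheeling timekeeper: sync once to an external (e.g. GPS) epoch,
    then derive timestamps from a local monotonic counter."""

    def __init__(self):
        self._gps_epoch = None      # GPS time (seconds) at the last sync
        self._local_at_sync = None  # local counter value at the last sync

    def sync_to_gps(self, gps_seconds: float) -> None:
        # Called whenever a GPS time message is received; more frequent
        # syncs bound the accumulated drift of the local oscillator.
        self._gps_epoch = gps_seconds
        self._local_at_sync = time.monotonic()

    def gettime(self) -> float:
        # Timestamp = last GPS epoch + elapsed local time since that sync.
        if self._gps_epoch is None:
            raise RuntimeError("not yet synchronized to GPS")
        return self._gps_epoch + (time.monotonic() - self._local_at_sync)

tm = TimeManager()
tm.sync_to_gps(1_300_000_000.0)   # initial synchronization
print(f"event timestamp: {tm.gettime():.6f} s")
```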

  18. Bulk-memory processor for data acquisition

    International Nuclear Information System (INIS)

    Nelson, R.O.; McMillan, D.E.; Sunier, J.W.; Meier, M.; Poore, R.V.

    1981-01-01

    To meet the diverse needs and data rate requirements at the Van de Graaff and Weapons Neutron Research (WNR) facilities, a bulk memory system has been implemented which includes a fast and flexible processor. This bulk memory processor (BMP) utilizes bit slice and microcode techniques and features a 24 bit wide internal architecture allowing direct addressing of up to 16 megawords of memory and histogramming up to 16 million counts per channel without overflow. The BMP is interfaced to the MOSTEK MK 8000 bulk memory system and to the standard MODCOMP computer I/O bus. Coding for the BMP both at the microcode level and with macro instructions is supported. The generalized data acquisition system has been extended to support the BMP in a manner transparent to the user

  19. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  20. Reduced power processor requirements for the 30-cm diameter Hg ion thruster

    Science.gov (United States)

    Rawlin, V. K.

    1979-01-01

    The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.

  1. Rational calculation accuracy in acousto-optical matrix-vector processor

    Science.gov (United States)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  2. Comparing an FPGA to a Cell for an Image Processing Application

    Science.gov (United States)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, Iris Recognition Systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
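
    The record does not spell out the matching kernel, but iris matching is commonly formulated as the fractional Hamming distance between binary iris codes, a bit-parallel loop that maps well to both FPGAs and the Cell's SPEs. A minimal sketch under that assumption:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                     mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both templates."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(0)
probe = rng.integers(0, 2, 2048, dtype=np.uint8)
gallery = rng.integers(0, 2, (1000, 2048), dtype=np.uint8)   # enrolled templates
mask = np.ones(2048, dtype=np.uint8)

# The repeatedly executed portion: match one probe against every template.
scores = [hamming_distance(probe, g, mask, mask) for g in gallery]
best = int(np.argmin(scores))
print(f"best match: template {best}, distance {scores[best]:.3f}")
```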

  3. Comparing an FPGA to a Cell for an Image Processing Application

    Directory of Open Access Journals (Sweden)

    Robert W. Ives

    2010-01-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, Iris Recognition Systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.

  4. High-Speed On-Board Data Processing for Science Instruments

    Science.gov (United States)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-01-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project is called High-Speed On-Board Data Processing for Science Instruments (HOPS) and focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction of these facets is also the purpose of this paper.

  5. Observation sequences and onboard data processing of Planet-C

    Science.gov (United States)

    Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.

    Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras in the UV-IR region to investigate the atmospheric dynamics of Venus: IR1 (1-micrometer camera), IR2 (2-micrometer camera), UVI (UV Imager), LIR (long-IR camera) and LAC (Lightning and Airglow Camera). During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition with the 4 cameras (IR1, IR2, UVI, LIR) for 20 min in every 2 hrs; (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be compromised with command, telemetry downlink, and thermal and power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality have significant scientific importance in the VCO program. Images of the 4 cameras (IR1, IR2 and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K is selected because of (a) no block noise, (b) efficiency, (c) both reversible and irreversible modes, (d) patent royalty free, and (e) already implemented in academic and commercial software, ICs and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 ~ 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is in the concept design stage. It is concluded that the J2K data compression logic circuits using space
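
    The quoted compression efficiencies (roughly 0.3 reversible, 0.1 to 0.01 irreversible) translate directly into the data volume that must fit into the downlink budget for each acquisition cycle. A back-of-the-envelope sketch, assuming 12-bit pixels (an illustrative value, not stated in the record):

```python
# Rough data-volume estimate for one 4-camera acquisition cycle.
# The compression ratios are the figures quoted in the record
# (compressed size / raw size); the pixel depth is an assumption.
cameras = {
    "IR1": (1024, 1024),
    "IR2": (1024, 1024),
    "UVI": (1024, 1024),
    "LIR": (240, 240),
}
bits_per_pixel = 12   # assumed bit depth, for illustration only

raw_bits = sum(w * h * bits_per_pixel for (w, h) in cameras.values())
for label, ratio in [("reversible (~0.3)", 0.3),
                     ("irreversible (~0.1)", 0.1),
                     ("irreversible (~0.01)", 0.01)]:
    compressed_mbit = raw_bits * ratio / 1e6
    print(f"{label:20s}: {compressed_mbit:7.2f} Mbit per acquisition")
```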

  6. Post-silicon and runtime verification for modern processors

    CERN Document Server

    Wagner, Ilya

    2010-01-01

    The purpose of this book is to survey the state of the art and evolving directions in post-silicon and runtime verification. The authors start by giving an overview of the state of the art in verification, particularly current post-silicon methodologies in use in the industry, both for the domain of processor pipeline design and for memory subsystems. They then dive into the presentation of several new post-silicon verification solutions aimed at boosting the verification coverage of modern processors, dedicating several chapters to this topic. The presentation of runtime verification solution

  7. Ring-array processor distribution topology for optical interconnects

    Science.gov (United States)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  8. Reconfigurable lattice mesh designs for programmable photonic processors.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Capmany, José; Soref, Richard A

    2016-05-30

    We propose and analyse two novel mesh design geometries for the implementation of tunable optical cores in programmable photonic processors. These geometries are the hexagonal and the triangular lattice. They are compared here to a previously proposed square mesh topology in terms of a series of figures of merit that account for metrics that are relevant to on-chip integration of the mesh. We find that the hexagonal mesh is the most suitable option of the three considered for the implementation of the reconfigurable optical core in the programmable processor.

  9. Interactive high-resolution isosurface ray casting on multicore processors.

    Science.gov (United States)

    Wang, Qin; JaJa, Joseph

    2008-01-01

    We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on the multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor being a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024 x 1024 screen for all the datasets tested, up to the maximum size of the main memory of our platform.
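
    The scheduling idea at the heart of the method, handing small groups of contiguous-pixel ray-casting tasks to whichever thread is free rather than partitioning the screen statically, can be sketched with a shared work queue. The tile size and the dummy render_tile body below are placeholders, not the paper's implementation; in CPython the GIL limits true parallelism, so the point here is only the pull-based dynamic allocation:

```python
import queue
import threading

def render_tile(tile):
    # Placeholder for the real work: object-order block identification
    # plus isosurface ray casting over one small group of contiguous pixels.
    x0, y0 = tile
    return sum((x0 + i) * (y0 + j) % 7 for i in range(16) for j in range(16))

def worker(tasks: queue.Queue, results: list, lock: threading.Lock):
    while True:
        try:
            tile = tasks.get_nowait()   # dynamic allocation: pull the next tile
        except queue.Empty:
            return
        value = render_tile(tile)
        with lock:
            results.append((tile, value))

tasks = queue.Queue()
for y in range(0, 1024, 16):            # enqueue 16x16-pixel tiles
    for x in range(0, 1024, 16):
        tasks.put((x, y))

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"rendered {len(results)} tiles")
```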

  10. 77 FR 124 - Biological Processors of Alabama; Decatur, Morgan County, AL; Notice of Settlement

    Science.gov (United States)

    2012-01-03

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9612-9] Biological Processors of Alabama; Decatur, Morgan... reimbursement of past response costs concerning the Biological Processors of Alabama Superfund Site located in... Ms. Paula V. Painter. Submit your comments by Site name Biological Processors of Alabama Superfund...

  11. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    Science.gov (United States)

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for the post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further processing.
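
    The record's arithmetic shortcuts, truncated series for trigonometric functions and an iterative inverse square root, avoid costly library calls on the soft core. A small sketch of both ideas (the series length and iteration counts are illustrative choices, not the paper's):

```python
import math

def sin_taylor(x: float, terms: int = 5) -> float:
    """Truncated Taylor series for sin(x) about 0: x - x^3/3! + x^5/5! - ..."""
    result, term = 0.0, x
    for n in range(terms):
        result += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return result

def inv_sqrt(x: float, iterations: int = 20) -> float:
    """Newton-Raphson refinement of y ~ 1/sqrt(x): y <- y * (1.5 - 0.5*x*y*y)."""
    y = 1.0 / x if x > 1.0 else 1.0        # crude initial guess
    for _ in range(iterations):
        y *= 1.5 - 0.5 * x * y * y
    return y

print(sin_taylor(0.5), math.sin(0.5))       # close agreement for small angles
print(inv_sqrt(2.0), 1.0 / math.sqrt(2.0))  # converges after a few iterations
```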

  12. Observation planning algorithm of a Japanese space-borne sensor: Hyperspectral Imager SUIte (HISUI) onboard International Space Station (ISS) as platform

    Science.gov (United States)

    Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako

    2017-09-01

    Hyperspectral Imager Suite (HISUI) is a future Japanese space-borne hyperspectral instrument being developed by the Ministry of Economy, Trade, and Industry (METI). HISUI will be launched in 2019 or later onboard the International Space Station (ISS) as platform. HISUI has 185 spectral bands from 0.4 to 2.5 μm, with 20 by 30 m spatial resolution and a swath of 20 km. The swath is thus limited, yet observations over continental-scale areas are requested within the HISUI mission lifetime of three years. We are therefore developing a scheduling algorithm to generate effective observation plans. The HISUI scheduling algorithm generates observation plans automatically based on the platform orbit, observation area maps (called DAR, "Data Acquisition Requests", in the HISUI project), their priorities, and the available resources and limitations of the HISUI system, such as instrument operation time per orbit and data transfer capability. An adequate DAR must then be set before the start of HISUI observations, because years of observations are needed to cover continental-scale areas and the DAR is difficult to change after the mission has started. To address these issues, we have developed an observation simulator. The simulator's critical inputs are the DAR, the ISS orbit, the HISUI limitations on observation minutes per orbit and data storage, and past cloud coverage data for the term of HISUI observations (3 years). The outputs of the simulator are coverage maps for each day; areas with cloud-free images are accumulated over the term of observation, up to three years. We have successfully tested the simulator with a tentative DAR and found that it is possible to estimate the coverage of each request over the mission lifetime.
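
    The core per-orbit scheduling decision, choosing which visible requests to observe under a per-orbit operation-time cap and request priorities, might look like the greedy sketch below. The data structures and the priority rule are assumptions for illustration; the real algorithm also folds in orbit geometry, storage limits and cloud statistics:

```python
from dataclasses import dataclass

@dataclass
class Request:              # one DAR entry visible on this orbit (illustrative)
    name: str
    priority: int           # smaller = more important
    duration_s: float       # observation time needed

def plan_orbit(requests: list[Request], budget_s: float) -> list[Request]:
    """Greedy per-orbit plan: take visible requests in priority order
    until the instrument operation-time budget for the orbit is spent."""
    plan, used = [], 0.0
    for req in sorted(requests, key=lambda r: r.priority):
        if used + req.duration_s <= budget_s:
            plan.append(req)
            used += req.duration_s
    return plan

visible = [Request("Sahara-07", 2, 90), Request("Amazon-12", 1, 120),
           Request("Gobi-03", 3, 60), Request("Andes-21", 1, 150)]
for r in plan_orbit(visible, budget_s=300):
    print(r.name, r.duration_s)
```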

  13. Monte Carlo photon transport on shared memory and distributed memory parallel processors

    International Nuclear Information System (INIS)

    Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.

    1987-01-01

    Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration

  14. Mesoscale circulation at the upper cloud level at middle latitudes from the imaging by Venus Monitoring Camera onboard Venus Express

    Science.gov (United States)

    Patsaeva, Marina; Ignatiev, Nikolay; Markiewicz, Wojciech; Khatuntsev, Igor; Titov, Dmitrij; Patsaev, Dmitry

    The Venus Monitoring Camera onboard the ESA Venus Express spacecraft acquired a great number of UV images (365 nm), allowing us to track the motion of cloud features at the upper cloud layer of Venus. A digital method developed to analyze correlation functions between two UV images provided wind vector fields on the Venus day side (9-16 hours local time) from the equator to high latitudes. Sizes and regions for the correlation were chosen empirically as a trade-off of sensitivity against noise immunity, and vary from 10° x 7.5° to 20° x 10° depending on the grid step, making this method suitable for investigating the mesoscale circulation. Previously, the digital method was used for investigation of the circulation at low latitudes and provided good agreement with manual tracking of the motion of cloud patterns. Here we present first results obtained by this method for middle latitudes (25°S-75°S) on the basis of 270 orbits. Comparing the obtained vector fields with images for certain orbits, we found a relationship between morphological patterns of the cloud cover at middle latitudes and parameters of the circulation. Elongated cloud features, so-called streaks, are typical for middle latitudes, and their orientation varies over a wide range. The behavior of the velocity vector field depends on the angle between the streak and latitude circles. In the middle latitudes the average angle of the flow deviation from the zonal direction is equal to -5.6° ± 1° (the sign "-" means poleward flow; the standard error is given). For certain orbits, this angle varies from -15.6° ± 1° to 1.4° ± 1°. In some regions at latitudes above 60°S the meridional wind is equatorward in the morning. The relationship between the cloud cover morphology and the peculiarities of the circulation can be attributed to the motion of the Y-feature in the upper cloud layer due to the super-rotation of the atmosphere.
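
    The digital tracking method correlates a patch from one UV image against a search window in a later image and converts the best-match displacement into a wind vector. A minimal NumPy sketch of that idea; the patch size, time step and pixel scale below are illustrative numbers, not the study's values:

```python
import numpy as np

def patch_displacement(img1, img2, y, x, patch=16, search=8):
    """Find the (dy, dx) shift of the patch centred at (y, x) in img1 that
    best matches img2, by maximising normalised cross-correlation."""
    ref = img1[y:y + patch, x:x + patch].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y + dy:y + dy + patch, x + dx:x + dx + patch].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((ref * cand).mean())
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(1)
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(2, -3), axis=(0, 1))   # synthetic cloud advection

dy, dx = patch_displacement(frame1, frame2, y=50, x=50)
km_per_px, dt_s = 10.0, 3600.0                          # illustrative scales only
print(f"shift = ({dy}, {dx}) px -> wind ~ ({dy * km_per_px * 1000 / dt_s:.1f}, "
      f"{dx * km_per_px * 1000 / dt_s:.1f}) m/s")
```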

  15. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    International Nuclear Information System (INIS)

    Gaona, Enrique

    2003-01-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the problems of reproducibility of the AEC are smaller than the problems of the processor units, because almost all processors fall outside the acceptable variation limits, and this can affect the mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  16. System and method for three-dimensional image reconstruction using an absolute orientation sensor

    KAUST Repository

    Giancola, Silvio; Ghanem, Bernard; Schneider, Jens; Wonka, Peter

    2018-01-01

    A three-dimensional image reconstruction system includes an image capture device, an inertial measurement unit (IMU), and an image processor. The image capture device captures image data. The inertial measurement unit (IMU) is affixed to the image

  17. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Explicitly Parallel Instruction Computing (EPIC) is an instruction processing paradigm that has been in the spotlight due to its adoption by the next generation of Intel processors, starting with the IA-64. The EPIC processing paradigm is an evolution of the Very Long Instruction Word (VLIW) paradigm. This article gives an ...

  18. User manual Dieka PreProcessor

    NARCIS (Netherlands)

    Valkering, Kasper

    2000-01-01

    This is the user manual belonging to the Dieka-PreProcessor. This application was written by Wenhua Cao and revised and expanded by Kasper Valkering. The aim of this preprocessor is to be able to draw and mesh extrusion dies in ProEngineer, and do the FE-calculation in Dieka. The preprocessor makes

  19. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
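
    The chain-code representation mentioned above encodes a contour as a start pixel plus a sequence of directions to each next edge pixel. A small sketch of 8-connected Freeman chain coding (the helper names are illustrative):

```python
# 8-connected Freeman chain code: direction index -> (dy, dx)
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Encode an ordered list of adjacent pixel coordinates as a chain code."""
    codes = []
    for (y0, x0), (y1, x1) in zip(points, points[1:]):
        codes.append(DIRS.index((y1 - y0, x1 - x0)))
    return points[0], codes

def decode(start, codes):
    """Rebuild the pixel list from a start point and its chain code."""
    pts = [start]
    for c in codes:
        dy, dx = DIRS[c]
        pts.append((pts[-1][0] + dy, pts[-1][1] + dx))
    return pts

contour = [(5, 5), (5, 6), (4, 7), (4, 8), (5, 9), (6, 9)]
start, codes = chain_code(contour)
print(codes)                        # [0, 1, 0, 7, 6]
assert decode(start, codes) == contour
```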

  20. A study on the real-time reliability of on-board equipment of train control system

    Science.gov (United States)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system for the purpose of guaranteeing continuous train operation. According to the inherent characteristics of the on-board equipment, the connotation of reliability evaluation of on-board equipment is defined and an evaluation index for real-time reliability is provided in this paper. From the perspective of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method for evaluating the real-time reliability of on-board equipment at the component level based on a Hidden Markov Model (HMM) is proposed. In this method the performance degradation data is used directly to realize accurate perception of the hidden state transition process of the on-board equipment, which achieves a better description of its real-time reliability.
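
    The HMM-based idea can be sketched as forward filtering over hidden degradation states driven by performance samples, with the probability of remaining in the healthy state serving as a real-time reliability proxy. The two-state model and all probabilities below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hidden states: 0 = healthy, 1 = degraded (illustrative two-state model)
A = np.array([[0.98, 0.02],     # state transition probabilities
              [0.00, 1.00]])
B = np.array([[0.90, 0.10],     # P(observation | state); obs: 0 = nominal, 1 = anomalous
              [0.30, 0.70]])
pi = np.array([1.0, 0.0])       # assume the equipment starts healthy

def forward_filter(observations):
    """Return P(hidden state | observations so far) after each new sample."""
    belief = pi.copy()
    history = []
    for obs in observations:
        belief = (belief @ A) * B[:, obs]   # predict, then update
        belief /= belief.sum()              # normalise
        history.append(belief.copy())
    return history

obs_stream = [0, 0, 0, 1, 1, 0, 1, 1, 1]    # performance samples, 1 = anomalous
for t, belief in enumerate(forward_filter(obs_stream)):
    print(f"t={t}: P(healthy) = {belief[0]:.3f}")   # real-time reliability proxy
```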

  1. Dust Measurements Onboard the Deep Space Gateway

    Science.gov (United States)

    Horanyi, M.; Kempf, S.; Malaspina, D.; Poppe, A.; Srama, R.; Sternovsky, Z.; Szalay, J.

    2018-02-01

    A dust instrument onboard the Deep Space Gateway will revolutionize our understanding of the dust environment at 1 AU, help our understanding of the evolution of the solar system, and improve dust hazard models for the safety of crewed and robotic missions.

  2. Evaluation of the Intel Westmere-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2011-01-01

    One year after the arrival of the Intel Xeon 7500 systems (“Nehalem-EX”), CERN openlab is presenting a set of benchmark results obtained when running on the new Xeon E7-4870 Processors, representing the “Westmere-EX” family. A modern 4-socket, 40-core system is confronted with the previous generation of expandable (“EX”) platforms, represented by a 4-socket, 32-core Intel Xeon X7560 based system – both being “top of the line” systems. Benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores via Symmetric MultiThreading (SMT), the cache sizes available, the configured memory topology, as well as the power configuration if throughput per watt is to be measured. As in previous activities, we have tried to do a good job of comparing like with like. In a “top of the line” comparison based on the HEPSPEC06 benchmark, the “We...

  3. 3081/E processor and its on-line use

    International Nuclear Information System (INIS)

    Rankin, P.; Bricaud, B.; Gravina, M.

    1985-05-01

    The 3081/E is a second generation emulator of a mainframe IBM computer. One of its applications will be to form part of the data acquisition system of the upgraded Mark II detector for data taking at the SLAC linear collider. Since the processor does not have direct connections to I/O devices, a FASTBUS interface will be provided to allow communication with both SLAC Scanner Processors (which are responsible for the accumulation of data at a crate level) and the experiment's VAX 8600 mainframe. The 3081/E's will supply a significant amount of on-line computing power to the experiment (a single 3081/E is equivalent to 4 to 5 VAX 11/780's). A major advantage of the 3081/E is that program development can be done on an IBM mainframe (such as the one used for off-line analysis), which gives the programmer access to a full range of debugging tools. The processor's performance can be continually monitored by comparison of the results obtained using it to those given when the same program is run on an IBM computer. 9 refs

  4. An intercomparison of Canadian external dosimetry processors for radiation protection

    International Nuclear Information System (INIS)

    1989-10-01

    The five Canadian external dosimetry processors have participated in a two-stage intercomparison. The first stage involved exposing dosimeters to known radiation fields under controlled laboratory conditions. The second stage involved exposing dosimeters to radiation fields in power reactor working environments. The results for each stage indicated the dose reported by each processor relative to an independently determined dose and relative to the others. The results of the intercomparisons confirm the original supposition: namely, that the average differences in reported dose among the five processors are much less than the uncertainty limits recommended by the ICRP. This report provides a description of the experimental methods as well as a discussion of the results for each stage. The report also includes a set of recommendations

  5. Chang'E-5T Orbit Determination Using Onboard GPS Observations

    OpenAIRE

    Su, Xing; Geng, Tao; Li, Wenwen; Zhao, Qile; Xie, Xin

    2017-01-01

    In recent years, Global Navigation Satellite System (GNSS) has played an important role in Space Service Volume, the region enclosing the altitudes above 3000 km up to 36,000 km. As an in-flight test for the feasibility as well as for the performance of GNSS-based satellite orbit determination (OD), the Chinese experimental lunar mission Chang'E-5T had been equipped with an onboard high-sensitivity GNSS receiver with GPS and GLONASS tracking capability. In this contribution, the 2-h onboard G...

  6. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  7. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is a part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
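
    The processor-farming pattern, a master distributing meso-scale RVE solves tied to macro-scale integration points and gathering the resulting effective parameters, can be sketched with a process pool. The solve_rve body below is a stand-in for the actual coupled heat-and-moisture RVE solver, and all numbers are illustrative:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def solve_rve(task):
    """Stand-in for solving one representative volume element: returns the
    effective parameters needed at the owning macro-scale integration point."""
    point_id, (temperature, moisture) = task
    effective_conductivity = 1.5 + 0.01 * temperature - 0.2 * moisture
    effective_diffusivity = 1e-9 * math.exp(-moisture)
    return point_id, (effective_conductivity, effective_diffusivity)

if __name__ == "__main__":
    # One task per macro-scale integration point (toy macro states).
    tasks = [(i, (20.0 + 0.1 * i, 0.05 + 0.001 * i)) for i in range(32)]

    # "Processor farm": workers pull RVE jobs, the master gathers the results.
    with ProcessPoolExecutor(max_workers=4) as pool:
        effective = dict(pool.map(solve_rve, tasks))

    print(effective[0])
```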

  8. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-code Processors

    Science.gov (United States)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several
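
    The frame-level parallelism described above, many frames each needing 2-D FFT work, maps naturally onto a pool of workers. A small NumPy sketch of that decomposition; frame sizes, worker counts and the spectral operation are illustrative, and the production code relies on tuned FFT kernels (MKL, FFTW, SPIRAL) rather than NumPy:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_frame(frame):
    """Toy per-frame kernel: forward 2-D FFT, crude spectral operation,
    inverse FFT. Stands in for the FFT-heavy inner loop of the deconvolution."""
    spectrum = np.fft.fft2(frame)
    spectrum[64:, :] = 0            # placeholder frequency-domain operation
    return np.fft.ifft2(spectrum).real

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((256, 256)) for _ in range(64)]   # blurred input frames

    with ProcessPoolExecutor(max_workers=8) as pool:       # frame-level parallelism
        restored = list(pool.map(process_frame, frames, chunksize=8))

    print(len(restored), restored[0].shape)
```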

  9. Parallel processing approach to transform-based image coding

    Science.gov (United States)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.

  10. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbit/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components, and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.
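
    Since performance is judged by how sharply the decoder's autocorrelation peak stands out from cross-correlation with other users' codes, a toy numerical comparison is easy to sketch. The binary codewords and their length below are illustrative, not the experimental phase holograms:

```python
import numpy as np

rng = np.random.default_rng(42)
n_chips = 32                                     # spectral slices per codeword
user_codes = rng.integers(0, 2, (4, n_chips))    # illustrative binary codewords

def correlation(received, decoder_code):
    """Cyclic correlation of a received code sequence with a decoder's code."""
    return np.array([np.sum(received * np.roll(decoder_code, k))
                     for k in range(len(decoder_code))])

received = user_codes[0]                         # signal encoded with user 0's code
auto = correlation(received, user_codes[0])      # matched decoder
cross = correlation(received, user_codes[1])     # mismatched decoder

print("autocorrelation peak  :", auto.max())
print("cross-correlation max :", cross.max())
print("peak-to-max-cross ratio:", auto.max() / max(cross.max(), 1))
```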

  12. Radiation dosimetry onboard the International Space Station ISS

    International Nuclear Information System (INIS)

    Berger, Thomas

    2008-01-01

    Besides the effects of the microgravity environment and the psychological and psychosocial problems encountered in confined spaces, radiation is the main health detriment for long-duration human space missions. The radiation environment encountered in space differs in nature from that on earth, consisting mostly of highly energetic ions from protons up to iron, resulting in radiation levels far exceeding the ones encountered on earth for occupational radiation workers. Therefore the determination and control of the radiation load on astronauts is a moral obligation of the space-faring nations. The requirements for radiation detectors in space are very different from those on earth. Limitations in mass and power consumption and the complex nature of the space radiation environment define and limit the overall construction of radiation detectors. Radiation dosimetry onboard the International Space Station (ISS) is accomplished in one part as "operational" dosimetry, aiming at area monitoring of the radiation environment as well as astronaut surveillance. Another part focuses on "scientific" dosimetry, aiming for a better understanding of the radiation environment and its constituents. Various research activities for a more detailed quantification of the radiation environment as well as its distribution in and outside the space station have been accomplished in the last years onboard the ISS. The paper will focus on the current radiation detectors onboard the ISS, their results, as well as on future planned activities. (orig.)

  13. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    Science.gov (United States)

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  14. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    Science.gov (United States)

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  15. Optimization of image processing algorithms on mobile platforms

    Science.gov (United States)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
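
    The reason the correlation kernel is assigned to the C64x side is that image correlation can be computed with DFTs, a multiply in the frequency domain instead of a sliding sum in the spatial domain. A compact NumPy sketch of FFT-based template correlation (sizes are illustrative; on the OMAP the transforms would run in the DSP's optimized library rather than NumPy):

```python
import numpy as np

def fft_correlate(image, template):
    """Cross-correlate a template with an image via the DFT:
    corr = IFFT( FFT(image) * conj(FFT(zero-padded template)) )."""
    padded = np.zeros_like(image, dtype=float)
    padded[:template.shape[0], :template.shape[1]] = template - template.mean()
    spec = np.fft.fft2(image - image.mean()) * np.conj(np.fft.fft2(padded))
    return np.fft.ifft2(spec).real

rng = np.random.default_rng(3)
image = rng.random((256, 256))
template = image[100:132, 60:92].copy()        # 32x32 patch to search for

corr = fft_correlate(image, template)
y, x = np.unravel_index(np.argmax(corr), corr.shape)
print(f"template located at (row={y}, col={x})")   # expected near (100, 60)
```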

  16. ARM Processor Based Embedded System for Remote Data Acquisition

    OpenAIRE

    Raj Kumar Tiwari; Santosh Kumar Agrahari

    2014-01-01

    Embedded systems are widely used for data acquisition. The data acquired may be used for monitoring various activities of the system, or it can be used to control parts of the system. Accessing various signals at a remote location has great advantages for multi-site operation or unmanned systems. The remote data acquisition used in this paper is based on an ARM processor. The Cortex M3 processor used in this system has an in-built Ethernet controller which facilitates acquiring the remote ...

  17. Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

    Science.gov (United States)

    2013-09-30

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. Application of Advanced Multi-Core Processor Technologies... [Table fragment listing candidate embedded processors: ST STM32, NXP LPC series, Microchip PIC32/dsPIC, ARM Cortex-based TI OMAP and TI Sitara, Broadcom BCM2835, and FPGAs, with approximate power classes (e.g. > 500 mW; < 5 W).] ...state-of-the-art information processing architectures. OBJECTIVES: Next-generation processor architectures (multi-core, multi-threaded) hold the

  18. Hardware processors for pattern recognition tasks in experiments with wire chambers

    International Nuclear Information System (INIS)

    Verkerk, C.

    1975-01-01

    Hardware processors for pattern recognition tasks in experiments with multiwire proportional chambers or drift chambers are described. They vary from simple ones used for deciding in real time if particle trajectories are straight to complex ones for recognition of curved tracks. Schematics and block-diagrams of different processors are shown

  19. Thermal Dissipation Efficiency in a Micro-Processor Using Carbon Nanotubes Based Composite

    Science.gov (United States)

    Thang, Bui Hung; Van Quang, Cao; Nghia, Van Trong; Hong, Phan Ngoc; Van Chuc, Nguyen; Tam, Ngo Thi Thanh; Quang, Le Dinh; Khang, Dao Duc; Khoi, Phan Hong; Minh, Phan Ngoc

    2009-09-01

    Modern electronic and optoelectronic devices such as μ-processors, light-emitting diodes and semiconductor lasers pose a challenge in thermal dissipation. Finding an effective way for thermal dissipation therefore becomes a very important issue. Carbon nanotubes (CNTs) are known to be among the most valuable materials with high thermal conductivity (2000 W/m·K, compared to the 419 W/m·K thermal conductivity of Ag). This suggests an approach of applying CNTs as an essential component of thermal dissipation media to improve the performance of computer processors and other high-power electronic devices. In this work, multi-walled carbon nanotube (MWCNT) based composites were utilized as the thermal dissipation media in a micro-processor of a personal computer. MWCNTs of different concentrations were added into polyaniline, a commercial silicon thermal paste and a commercial silver thermal paste by mechanical methods. A personal computer with the configuration Intel Pentium IV 3.066 GHz, 512 MB of RAM and Windows XP Service Pack 2 operating system was employed. The thermal dissipation efficiency of the system was evaluated by directly measuring the temperature of the μ-processor during operation of the computer at different CPU speeds. The measured results showed that the CNT-based composite could reduce the temperature of the μ-processor by more than 5 °C, and the time for the temperature of the μ-processor to rise was three times longer than when using commercial thermal paste.

  20. Determination of the Image Complexity Feature in Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Veacheslav L. Perju

    2003-11-01

    A new image complexity informative feature is proposed. An experimental estimation of the image complexity is carried out. Two optical-electronic processors for image complexity calculation are elaborated. The determination of the necessary number of the image's digitization elements depending on the image complexity was carried out. The accuracy of the image complexity feature calculation was evaluated.