WorldWideScience

Sample records for streak camera based

  1. Design of microcontroller based system for automation of streak camera

    International Nuclear Information System (INIS)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-01-01

A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.
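The sweep geometry this record describes (a voltage ramp on the deflecting plates writing time across the screen) can be sketched numerically. The ramp slope, plate sensitivity, and screen width below are hypothetical illustration values, not figures from the record:

```python
def sweep_speed_mm_per_ns(ramp_slope_v_per_ns, sensitivity_v_per_mm):
    """Writing speed on the screen: ramp slope divided by the
    deflection sensitivity of the plates."""
    return ramp_slope_v_per_ns / sensitivity_v_per_mm

def full_screen_sweep_time_ns(screen_mm, ramp_slope_v_per_ns, sensitivity_v_per_mm):
    """Time for the spot to cross the whole screen at that speed."""
    return screen_mm * sensitivity_v_per_mm / ramp_slope_v_per_ns

# Hypothetical numbers: 20 V/ns ramp, 10 V/mm sensitivity, 40 mm screen.
speed = sweep_speed_mm_per_ns(20.0, 10.0)             # 2.0 mm/ns
t_sweep = full_screen_sweep_time_ns(40.0, 20.0, 10.0)  # 20.0 ns
```

Changing the integrator's L and C values changes the ramp slope, which is exactly how such a camera trades screen coverage against time resolution.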

  2. Design of microcontroller based system for automation of streak camera.

    Science.gov (United States)

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  3. Design of microcontroller based system for automation of streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P. [Laser Electronics Support Division, RRCAT, Indore 452013 (India)

    2010-08-15

A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  4. rf streak camera based ultrafast relativistic electron diffraction.

    Science.gov (United States)

    Musumeci, P; Moody, J T; Scoby, C M; Gutierrez, M S; Tran, T

    2009-01-01

We theoretically and experimentally investigate the possibility of using an rf streak camera to time-resolve, in a single shot, structural changes at the sub-100 fs time scale via relativistic electron diffraction. We experimentally tested this novel concept at the UCLA Pegasus rf photoinjector. Time-resolved diffraction patterns from a thin Al foil are recorded. Averaging over 50 shots is required in order to obtain statistics sufficient to uncover a variation in time of the diffraction patterns. In the absence of an external pump laser, this variation is explained by the energy chirp on the beam out of the electron gun. With further improvements to the electron source, rf streak camera based ultrafast electron diffraction has the potential to yield truly single-shot measurements of ultrafast processes.
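In any streaked measurement of this kind, the achievable time resolution is roughly the unstreaked spot size on the detector divided by the streak (writing) speed. A minimal sketch, with invented spot size and streak speed rather than Pegasus values:

```python
def streak_time_resolution_fs(spot_size_um, streak_speed_um_per_fs):
    """Temporal resolution of a streaked image: the unstreaked spot
    size divided by the streak speed on the detector."""
    return spot_size_um / streak_speed_um_per_fs

# Hypothetical: 50 um rms spot streaked at 0.6 um/fs -> ~83 fs resolution.
res = streak_time_resolution_fs(50.0, 0.6)
```

This is why a brighter, lower-emittance electron source (smaller spot) directly improves the prospects for single-shot sub-100 fs resolution.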

  5. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    International Nuclear Information System (INIS)

    Schuette, Bernd

    2011-09-01

In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a source of high-order harmonic generation (HHG) in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional streak cameras. (orig.)
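The streaking principle the abstract describes is often summarized by a first-order formula: a photoelectron of kinetic energy W born at phase φ of the streaking field acquires an energy shift of roughly sqrt(8·W·Up)·sin(φ), where Up is the ponderomotive energy of the field. A sketch under that common approximation, with invented numbers:

```python
import math

def streaked_shift_ev(w_kin_ev, u_p_ev, phase_rad):
    """First-order streaking approximation: energy shift of a
    photoelectron born at field phase `phase_rad`."""
    return math.sqrt(8.0 * w_kin_ev * u_p_ev) * math.sin(phase_rad)

# Hypothetical: 100 eV photoelectron, 20 meV ponderomotive energy,
# born at the zero crossing of the vector potential (phase pi/2).
shift = streaked_shift_ev(100.0, 0.02, math.pi / 2)  # 4.0 eV
```

Mapping the measured energy shift back through the known THz phase is what turns the photoelectron spectrometer into a femtosecond clock.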

  6. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    Energy Technology Data Exchange (ETDEWEB)

    Schuette, Bernd

    2011-09-15

In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a source of high-order harmonic generation (HHG) in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional streak cameras. (orig.)

  7. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras, a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. The signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultrafast streak cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as a precision time-mark generator for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied
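The sweep-rate calibration the article describes amounts to fitting the recorded pixel positions of the comb marks against their known, uniform time spacing. A minimal least-squares sketch; the comb frequency and mark positions are invented, not NIF data:

```python
def sweep_rate_ps_per_px(mark_pixels, comb_period_ps):
    """Least-squares slope of known comb times t_i = i * period
    against recorded mark pixel positions: the sweep rate in ps/px."""
    n = len(mark_pixels)
    times = [i * comb_period_ps for i in range(n)]
    mx = sum(mark_pixels) / n
    mt = sum(times) / n
    num = sum((x - mx) * (t - mt) for x, t in zip(mark_pixels, times))
    den = sum((x - mx) ** 2 for x in mark_pixels)
    return num / den

# Hypothetical 1 GHz comb (1000 ps period) whose marks land 100 px apart.
rate = sweep_rate_ps_per_px([10, 110, 210, 310], 1000.0)  # 10.0 ps/px
```

Averaging this fit over many recorded comb images, as the article notes, is what beats down the per-shot centroiding error.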

  8. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper presents the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and spread of the use of high-speed electronic cinematography is illustrated by a few typical applications.

  9. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  10. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier-tube-and-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a Mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these Mylar flyers, are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis.
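In a velocity interferometer of this type, the surface velocity follows directly from the recorded fringe shift: velocity is the fringe count times the instrument's velocity-per-fringe (VPF) constant. A sketch with invented numbers:

```python
def velocity_from_fringes(fringe_count, vpf_km_per_s):
    """Velocity interferometer readout: surface velocity is the
    (possibly fractional) fringe count times the VPF constant."""
    return fringe_count * vpf_km_per_s

# Hypothetical VPF of 0.25 km/s per fringe; 8.5 fringes counted on the
# streak record -> 2.125 km/s flyer velocity.
v = velocity_from_fringes(8.5, 0.25)
```

Recording the fringes with a streak camera, rather than a photomultiplier and oscilloscope, gives the full continuous fringe history in one image, which is the core advantage the abstract claims.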

  11. A sampling ultra-high-speed streak camera based on the use of a unique photomultiplier

    International Nuclear Information System (INIS)

    Marode, Emmanuel

An apparatus reproducing the "streak" mode of a high-speed camera is proposed for the case of a slit AB whose variations in luminosity are repetitive. A photomultiplier, analysing the object AB point by point, and a still camera, photographing a slit fixed on the oscilloscope screen parallel to the sweep direction, are placed on a mobile platform P. The movement of P assures a time-resolved analysis of AB. The resolution is of the order of 2 × 10^-9 s, and can be improved.

  12. Cheap streak camera based on the LD-S-10 intensifier tube

    Science.gov (United States)

    Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.

    1992-01-01

Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnifications of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 × 500 µm in size. Registered radiation was imaged on a 5 × 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between the deflection plates. Sensitivity of the deflection plates was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses of +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0-500 ns) circuit.
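The deflection figures in this record are self-consistent and easy to check: ±360 V of scan voltage on plates with 18 V/mm sensitivity covers exactly the 40 mm screen, and the fastest 15 ns range then fixes the writing speed. A quick numerical check:

```python
def screen_sweep_mm(total_deflection_v, sensitivity_v_per_mm):
    """Screen distance covered by a given total deflection voltage."""
    return total_deflection_v / sensitivity_v_per_mm

# From the record: +360 V and -360 V scan pulses (720 V total),
# 18 V/mm plate sensitivity -> full 40 mm screen.
screen = screen_sweep_mm(720.0, 18.0)  # 40.0 mm
# Fastest range: 40 mm swept in 15 ns.
speed = screen / 15.0                  # ~2.67 mm/ns writing speed
```

The slower ranges (30-500 ns full scan) simply divide the same 40 mm screen by a longer sweep time.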

  13. Notes on the IMACON 500 streak camera system

    International Nuclear Information System (INIS)

    Clendenin, J.E.

    1985-01-01

    The notes provided are intended to supplement the instruction manual for the IMACON 500 streak camera system. The notes cover the streak analyzer, instructions for timing the streak camera, and calibration

  14. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  15. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  16. Picosecond x-ray streak cameras

    Science.gov (United States)

    Averin, V. I.; Bryukhnevich, Gennadii I.; Kolesov, G. V.; Lebedev, Vitaly B.; Miller, V. A.; Saulevich, S. V.; Shulika, A. N.

    1991-04-01

The first multistage image converter with an X-ray photocathode (UMI-93SR) was designed at VNIIOFI in 1974 [1]. Experiments carried out at IOFAN showed that X-ray electron-optical cameras using the tube provided temporal resolution up to 12 picoseconds [2]. Later work developed into the creation of separate streak and intensifying tubes. Thus, the PV-003R tube has been built on the basis of the UMI-93SR design, fiber-optically connected to a PMU-2V image intensifier carrying a microchannel plate.

  17. The LLL compact 10-ps streak camera

    International Nuclear Information System (INIS)

    Thomas, S.W.; Houghton, J.W.; Tripp, G.R.; Coleman, L.W.

    1975-01-01

    The 10-ps streak camera has been redesigned to simplify its operation, reduce manufacturing costs, and improve its appearance. The electronics have been simplified, a film indexer added, and a contacted slit has been evaluated. Data support a 10-ps resolution. (author)

  18. Sweep time performance of optic streak camera

    International Nuclear Information System (INIS)

    Wang Zhebin; Yang Dong; Zhang Huige

    2012-01-01

The sweep time performance of the optic streak camera (OSC) is of critical importance to its application. A systematic analysis of the full-screen sweep velocity shows that the traditional method, based on the averaged velocity and its nonlinearity, increases the uncertainty of the sweep time and cannot reflect the influence of the spatial distortion of the OSC. An elaborate method for the sweep time has been developed with the aid of the full-screen sweep velocity and its uncertainty. Theoretical analysis and experimental study prove that the method decreases the uncertainty of the sweep time to within 1%, which improves the accuracy of the sweep time and the reliability of OSC applications. (authors)
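The distinction the abstract draws can be made concrete: instead of dividing the screen width by one averaged velocity, integrate 1/v(x) across the screen using the measured full-screen velocity profile. A sketch with an invented velocity profile (not data from the paper):

```python
def sweep_time_ns(positions_mm, velocities_mm_per_ns):
    """Full-screen sweep time by trapezoidal integration of 1/v(x)
    over the measured position-dependent sweep velocity."""
    t = 0.0
    for i in range(1, len(positions_mm)):
        dx = positions_mm[i] - positions_mm[i - 1]
        inv_avg = 0.5 * (1.0 / velocities_mm_per_ns[i]
                         + 1.0 / velocities_mm_per_ns[i - 1])
        t += dx * inv_avg
    return t

# Hypothetical nonuniform sweep over a 40 mm screen: the velocity
# peaks at mid-screen (2.2 mm/ns) and is 2.0 mm/ns at the edges.
t = sweep_time_ns([0.0, 20.0, 40.0], [2.0, 2.2, 2.0])
```

With a nonuniform profile, this integral differs from the screen-width-over-average-velocity estimate, which is the systematic error the paper's method removes.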

  19. A time-resolved image sensor for tubeless streak cameras

    Science.gov (United States)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator in combination with an in-pixel logic allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.
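The gating idea behind draining-only modulation can be illustrated with a toy model: photocharge arriving inside one of a sequence of short gate windows is stored in the corresponding memory slot, and everything else is drained. The window width and arrival times below are invented for illustration:

```python
def bin_arrivals(arrival_times_ns, gate_start_ns, gate_width_ns, n_bins):
    """Toy DOM sketch: charge arriving inside one of n_bins consecutive
    gate windows is accumulated; arrivals outside them are drained."""
    counts = [0] * n_bins
    for t in arrival_times_ns:
        k = int((t - gate_start_ns) // gate_width_ns)
        if 0 <= k < n_bins:
            counts[k] += 1   # stored in that time bin
        # else: drained, contributes nothing
    return counts

# Hypothetical: four 1 ns gates starting at t = 0; the 9.9 ns photon
# falls outside every gate and is drained.
counts = bin_arrivals([0.5, 1.5, 1.7, 3.2, 9.9], 0.0, 1.0, 4)  # [1, 2, 0, 1]
```

The 128-deep "memory length" dimension of the prototype array plays the role of `n_bins` here: each pixel row records a short time history instead of a single intensity.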

  20. Sweep devices for picosecond image-converter streak cameras

    International Nuclear Information System (INIS)

    Cunin, B.; Miehe, J.A.; Sipp, B.; Schelev, M.Ya.; Serduchenko, J.N.; Thebault, J.

    1979-01-01

Four different sweep devices based on microwave tubes, avalanche transistors, krytrons, and laser-triggered spark gaps are treated in detail. These control circuits are developed for picosecond image-converter cameras and generate sweep pulses providing streak speeds in the range of 10^7 to 5×10^10 cm/s with maximum time resolution better than 10^-12 s. Special low-jitter triggering schemes reduce the jitter to less than 5×10^-11 s. Some problems arising in the construction and matching of the sweep devices and image-streak tube are discussed. Comparative parameters of nanosecond switching elements are presented. The results described can be used by others involved in streak camera development.
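The quoted writing speeds and the sub-picosecond resolution claim are consistent with the usual technical-resolution estimate: the static slit-image width divided by the writing speed on the screen. A quick check, with a hypothetical 50 µm slit image:

```python
def technical_time_resolution_ps(slit_image_width_um, writing_speed_cm_per_s):
    """Technical time resolution of a streak record: static slit-image
    width divided by the writing speed on the screen."""
    speed_um_per_ps = writing_speed_cm_per_s * 1e-8  # cm/s -> um/ps
    return slit_image_width_um / speed_um_per_ps

# Hypothetical 50 um slit image at the fastest quoted speed, 5e10 cm/s:
r = technical_time_resolution_ps(50.0, 5e10)  # 0.1 ps = 1e-13 s
```

At 5×10^10 cm/s the spot crosses 500 µm of screen per picosecond, so a 50 µm slit image indeed resolves about 10^-13 s, better than the 10^-12 s figure quoted.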

  1. Traveling wave deflector design for femtosecond streak camera

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Luo, Duan; Wen, Wenlong; Xu, Junkai; Tian, Jinshou; Zhang, Minrui; Chen, Pin; Chen, Jianzhong; Liu, Rong

    2017-01-01

In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. Its passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.
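The point of a slow-wave deflector is to keep the deflecting wave and the electrons in step over the whole deflector length, so the deflection accumulates rather than washes out. A sketch of the transit-time slip that the microstrip line is meant to cancel; the lengths and velocities are illustrative assumptions, not values from the paper:

```python
def transit_phase_slip_ps(plate_length_mm, v_electron_mm_per_ps, v_wave_mm_per_ps):
    """Timing slip between the deflecting wave and the electrons over
    the deflector length; a matched slow-wave line drives this to zero."""
    return (plate_length_mm / v_electron_mm_per_ps
            - plate_length_mm / v_wave_mm_per_ps)

# Hypothetical 30 mm deflector, electrons at 0.3c (~0.09 mm/ps).
matched = transit_phase_slip_ps(30.0, 0.09, 0.09)   # slow wave at 0.3c: 0 slip
unmatched = transit_phase_slip_ps(30.0, 0.09, 0.3)  # wave at ~c: large slip
```

With an unslowed wave the slip is hundreds of picoseconds, far larger than the sub-picosecond resolution targeted, which is why the microstrip slow-wave structure is needed.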

  2. Traveling wave deflector design for femtosecond streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan; Wu, Shengli [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi’an 710049 (China); Luo, Duan [Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Wen, Wenlong [Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Xu, Junkai [Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Tian, Jinshou, E-mail: tianjs@opt.ac.cn [Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006 (China); Zhang, Minrui; Chen, Pin [Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Chen, Jianzhong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi’an 710049 (China); Liu, Rong [Xi' an Technological University, Xi' an 710021 (China)

    2017-05-21

In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. Its passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  3. Compact optical technique for streak camera calibration

    International Nuclear Information System (INIS)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-01-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface

  4. Compact optical technique for streak camera calibration

    Science.gov (United States)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-10-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.

  5. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

A novel simultaneous streak and framing camera with continuous access has been developed; its complete record is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/µs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for the framing record, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136% to 0.277% for the streak record. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously obtains frames and a streak that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.
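For a rotating-mirror streak camera, the scanning velocity follows from the mirror speed: the reflected beam rotates at twice the mirror's angular velocity, so the spot velocity at the film track is v = 2·ω·R. A sketch with an invented rotation speed and throw distance (the paper does not quote these):

```python
import math

def scan_velocity_mm_per_us(rpm, throw_mm):
    """Rotating-mirror streak: the reflected beam sweeps at twice the
    mirror's angular velocity, so v = 2 * omega * throw."""
    omega = rpm * 2.0 * math.pi / 60.0    # mirror angular velocity, rad/s
    return 2.0 * omega * throw_mm * 1e-6  # mm/s -> mm/us

# Hypothetical: 150,000 rpm mirror with a 500 mm throw to the film track.
v = scan_velocity_mm_per_us(150_000, 500.0)  # ~15.7 mm/us
```

Numbers of this order are consistent with the 16.3 mm/µs maximum scanning velocity the abstract reports, which is why such cameras need very high-strength mirror materials.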

  6. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
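An automation client for a camera like this would fetch the XML configuration over HTTP and read settings out of it. The element and attribute names below are illustrative only; the record does not publish the camera's actual schema or endpoints:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML configuration document of the kind such a camera
# might serve over HTTP (names are invented, not the real schema).
CONFIG = """
<camera>
  <sweep full_time_ns="500"/>
  <trigger level_v="1.5" lockout="true"/>
</camera>
"""

def read_sweep_time_ns(xml_text):
    """Parse the sweep setting out of an XML configuration document."""
    root = ET.fromstring(xml_text)
    return float(root.find("sweep").get("full_time_ns"))

sweep_ns = read_sweep_time_ns(CONFIG)  # 500.0
```

Serving configuration as XML over plain HTTP is what lets both browsers (via AJAX) and automation scripts share one interface, as the abstract describes.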

  7. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  8. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  9. Triggered streak and framing rotating-mirror cameras

    International Nuclear Information System (INIS)

    Huston, A.E.; Tabrar, A.

    1975-01-01

    A pulse motor has been developed which enables a mirror to be rotated to speeds in excess of 20,000 rpm within 10⁻⁴ s. High-speed cameras of both streak and framing type have been assembled which incorporate this mirror drive, giving streak writing speeds up to 2,000 m s⁻¹ and framing speeds up to 500,000 frames s⁻¹, in each case with the capability of triggering the camera from the event under investigation. (author)

  10. Dynamic range studies of the RCA streak tube in the LLL streak camera

    International Nuclear Information System (INIS)

    Thomas, S.W.; Phillips, G.E.

    1979-01-01

    As indicated by tests on several cameras, the dynamic range of the Lawrence Livermore Laboratory streak-camera system appears to be about two orders of magnitude greater than those reported for other systems for 10- to 200-ps pulses. The lack of a fine mesh grid in the RCA streak tube used in these cameras probably contributes to a lower system dynamic noise and therefore raises the dynamic range. A developmental tube with a mesh grid was tested and supports this conjecture. Order-of-magnitude variations in input slit width do not affect the spot size on the phosphor or the dynamic range of the RCA tube. (author)

  11. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    Energy Technology Data Exchange (ETDEWEB)

    Dooling, J. C.; Lumpkin, A. H.

    2017-06-25

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both second-harmonic and fourth-harmonic wavelengths. The drive laser is a Nd:glass-based chirped-pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with known spacing between reflecting surfaces, coated for the visible SH wavelength. IR pulse duration is monitored with an autocorrelator. Fitting the streak camera image projected profiles with Gaussians, UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
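The rms-duration extraction described above can be sketched numerically. The snippet below uses the equivalent second-moment (rms) estimate on a background-subtracted time projection rather than an explicit Gaussian fit (for a Gaussian the two agree); the 0.5 ps/pixel calibration and the synthetic profile are illustrative assumptions, not APS data.

```python
import numpy as np

def rms_duration(profile, ps_per_pixel):
    """rms width (in ps) of a 1-D time projection via its second moment.

    For a Gaussian profile this equals the fitted sigma; FWHM = 2.355 * rms.
    """
    t = np.arange(profile.size) * ps_per_pixel
    w = profile - profile.min()              # crude baseline subtraction
    mean = np.sum(w * t) / np.sum(w)
    return float(np.sqrt(np.sum(w * (t - mean) ** 2) / np.sum(w)))

# Synthetic 2.5 ps rms pulse sampled at 0.5 ps/pixel (illustrative numbers)
t = np.arange(100) * 0.5
profile = np.exp(-0.5 * ((t - 25.0) / 2.5) ** 2)
print(round(rms_duration(profile, 0.5), 2))  # → 2.5
```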

  12. Streak camera imaging of single photons at telecom wavelength

    Science.gov (United States)

    Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine

    2018-01-01

    Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled waveguides in Lithium Niobate. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
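The up-conversion step rests on photon-energy conservation for sum-frequency generation: 1/λ_up = 1/λ_signal + 1/λ_pump. A minimal sketch; the 1550 nm signal and 860 nm pump wavelengths are illustrative assumptions chosen only to land the output in the visible, where streak-camera photocathodes are most efficient, not the authors' exact values.

```python
def upconverted_wavelength_nm(lam_signal_nm, lam_pump_nm):
    """Sum-frequency output wavelength from energy conservation:
    1/lambda_up = 1/lambda_signal + 1/lambda_pump."""
    return 1.0 / (1.0 / lam_signal_nm + 1.0 / lam_pump_nm)

# Telecom-band single photon + near-IR pump → visible output (illustrative)
print(round(upconverted_wavelength_nm(1550.0, 860.0), 1))  # → 553.1
```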

  13. Soft x-ray streak camera for laser fusion applications

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1981-04-01

    This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction of laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown

  14. Reliable and repeatable characterization of optical streak cameras

    International Nuclear Information System (INIS)

    Charest, Michael R. Jr.; Torres, Peter III; Silbernagel, Christopher T.; Kalantar, Daniel H.

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility. To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.
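One of the quantities such routines extract, dynamic range, can be sketched as follows. This assumes a common working definition, the largest input that still lies on the low-signal linear response divided by the rms noise floor; the actual NIF analysis definition and thresholds may differ, and the synthetic response below is illustrative.

```python
import numpy as np

def dynamic_range(levels, response, noise_rms, max_dev=0.05):
    """Estimate dynamic range as the largest input whose response stays within
    max_dev fractional deviation of the low-signal linear fit, divided by the
    rms noise floor (one plausible working definition)."""
    slope, offset = np.polyfit(levels[:3], response[:3], 1)  # low-signal fit
    predicted = slope * levels + offset
    deviation = np.abs(response - predicted) / predicted
    linear = levels[deviation < max_dev]
    return float(linear.max() / noise_rms)

# Synthetic camera: linear up to ~100 counts, then saturating; 0.5-count noise
levels = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0])
response = np.minimum(levels, 100.0)
print(dynamic_range(levels, response, noise_rms=0.5))  # → 128.0
```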

  15. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  16. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Michael Charest Jr., Peter Torres III, Christopher Silbernagel, and Daniel Kalantar

    2008-10-31

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  17. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-05-06

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  18. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype of picosecond X-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to answer the Laser MegaJoule specific needs. The dynamic range of this instrument is measured with picosecond X-ray pulses generated by the interaction of a laser beam and a copper target. The required value of 100 is reached only in the configurations combining the slowest sweeping speed and optimization of the streak tube electron throughput by an appropriate choice of high voltages applied to its electrodes.

  19. Design of neutron streak camera for fusion diagnostics

    International Nuclear Information System (INIS)

    Wang, C.L.; Kalibjian, R.; Singh, M.S.

    1982-06-01

    The D-T burn time for advanced laser-fusion targets is calculated to be very short. Each fission fragment leaving the cathode generates about 400 secondary electrons, all below 20 eV. These electrons are focused to a point with an extractor and an anode, and are then purified with an electrostatic deflector. The electron beam is streaked and detected with standard streak-camera techniques. Careful shielding is needed against x-rays from the fusion target and general background. It appears that the neutron streak camera can be a viable and unique tool for studying the temporal history of fusion burns in D-T plasmas of a few keV ion temperature.

  20. Improvements in Off-Center Focusing in an X-ray Streak Camera

    International Nuclear Information System (INIS)

    McDonald, J W; Weber, F; Holder, J P; Bell, P M

    2003-01-01

    Due to the planar construction of present x-ray streak tubes, significant off-center defocusing is observed in both static and dynamic images taken with one-dimensional resolution slits. Based on the streak-tube geometry, curved photocathodes with radii of curvature ranging from 3.5 to 18 inches have been fabricated. We report initial off-center focusing performance data from the evaluation of these "improved" photocathodes in an X-ray streak camera, and an update on the theoretical simulations to predict the optimum cathode curvature.

  1. Imacon 600 ultrafast streak camera evaluation

    International Nuclear Information System (INIS)

    Owen, T.C.; Coleman, L.W.

    1975-01-01

    The Imacon 600 has a number of designed-in disadvantages for use as an ultrafast diagnostic instrument. The unit is physically large (approximately 5' long) and uses an external power supply rack for the image intensifier. Water cooling is required for the intensifier; it is quiet but not conducive to portability, and there is no interlock on the cooling water. The camera does have several switch-selectable sweep speeds, which is desirable if one is working with both slow and fast events. The camera can be run in a framing mode. (MOW)

  2. Improved approach to characterizing and presenting streak camera performance

    International Nuclear Information System (INIS)

    Wiedwald, J.D.; Jones, B.A.

    1985-01-01

    The performance of a streak camera recording system is strongly linked to the technique used to amplify, detect, and quantify the streaked image. At the Lawrence Livermore National Laboratory (LLNL), streak camera images have been recorded both on film and by fiber-optically coupling to charge-coupled devices (CCDs). During the development of a new process for recording these images (lens-coupling the image onto a cooled CCD), the definitions of important performance characteristics such as resolution and dynamic range were re-examined. As a result of this development, these performance characteristics are now presented to the streak camera user in a more useful format than in the past. This paper describes how these techniques are used within the Laser Fusion Program at LLNL. The system resolution is presented as a modulation transfer function, including the seldom-reported effects that flare and light scattering have at low spatial frequencies. Data are presented such that a user can adjust image-intensifier gain and pixel averaging to optimize the useful dynamic range in any particular application.

  3. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Michael R. Charest, Peter Torres III, Christopher Silbernagel

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser performance verification experiments at the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electronic components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases the characterization data is used to 'correct' data images, to remove some of the nonlinearities. In order to obtain these camera characterizations, a specific data set is collected where the response to specific known inputs is recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, temporal resolution, etc., from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  4. Characteristics of uranium oxide cathode for neutron streak camera

    International Nuclear Information System (INIS)

    Niki, H.; Itoga, K.; Yamanaka, M.; Yamanaka, T.; Yamanaka, C.

    1986-01-01

    In laser fusion research, time-resolved neutron measurements require 20 ps resolution in order to obtain the time history of the D-T burn. Uranium oxide was expected to be a sensitive material as the cathode of a neutron streak camera because of its large fission cross section. The authors report their measurements of some characteristics of a uranium oxide cathode connected to a conventional streak tube. 14 MeV neutron signals were observed as bright spots on a TV monitor using a focus-mode operation. Detection efficiency was ∼1 × 10⁻⁶ for a 1 μm thick cathode. Each signal consisted of more than several tens of components, corresponding to the secondary electrons dragged out from the cathode by a fission fragment. Time resolution is thought to be limited mainly by the transit-time spread of the secondary electrons. 14 ps resolution was obtained in streak-mode operation for a single fission event.
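The transit-time-spread limit mentioned above follows from the standard chromatic-spread estimate Δt = √(2mΔE)/(eE) for electrons with initial energy spread ΔE accelerated by an extraction field E. A sketch with an assumed 1 kV/mm field (illustrative, not the authors' value) lands in the same ~15 ps regime as the reported resolution:

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def transit_time_spread(delta_e_ev, field_v_per_m):
    """Chromatic transit-time spread dt = sqrt(2 m dE) / (e E) for electrons
    with initial energy spread dE (eV) in an extraction field E (V/m)."""
    return (math.sqrt(2.0 * M_ELECTRON * delta_e_ev * E_CHARGE)
            / (E_CHARGE * field_v_per_m))

# 20 eV secondary electrons in an assumed 1 kV/mm extraction field
print(round(transit_time_spread(20.0, 1e6) * 1e12, 1))  # ≈ 15 ps
```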

  5. Characterization of X-ray streak cameras for use on Nova

    International Nuclear Information System (INIS)

    Kalantar, D.H.; Bell, P.M.; Costa, R.L.; Hammel, B.A.; Landen, O.L.; Orzechowski, T.J.; Hares, J.D.; Dymoke-Bradshaw, A.K.L.

    1996-09-01

    There are many different types of measurements that require a continuous time history of x-ray emission that can be provided with an x-ray streak camera. In order to properly analyze the images that are recorded with the x-ray streak cameras operated on Nova, it is important to account for the streak characterization of each camera. We have performed a number of calibrations of the streak cameras both on the bench as well as with Nova disk target shots where we use a time modulated laser intensity profile (self-beating of the laser) on the target to generate an x-ray comb. We have measured the streak camera sweep direction and spatial offset, curvature of the electron optics, sweep rate, and magnification and resolution of the electron optics

  6. Fabry-Perot interferometry using an image-intensified rotating-mirror streak camera

    International Nuclear Information System (INIS)

    Seitz, W.L.; Stacy, H.L.

    1983-01-01

    A Fabry-Perot velocity interferometer system is described that uses a modified rotating-mirror streak camera to record the dynamic fringe positions. A Los Alamos Model 72B rotating-mirror streak camera, equipped with a beryllium mirror, was modified to include a high-aperture (f/2.5) relay lens and a 40-mm image-intensifier tube such that the image normally formed at the film plane of the streak camera is projected onto the intensifier tube. Fringe records for thin (0.13 mm) flyers driven by a small bridgewire detonator obtained with Model C1155-01 Hamamatsu and Model 790 Imacon electronic streak cameras are compared with those obtained with the image-intensified rotating-mirror streak camera (I²RMC). Resolution comparisons indicate that the I²RMC gives better time resolution than either the Hamamatsu or the Imacon for total writing times of a few microseconds or longer.

  7. Flat-field response and geometric distortion measurements of optical streak cameras

    International Nuclear Information System (INIS)

    Montgomery, D.S.; Drake, R.P.; Jones, B.A.; Wiedwald, J.D.

    1987-08-01

    To accurately measure pulse amplitude, shape, and relative time histories of optical signals with an optical streak camera, it is necessary to correct each recorded image for spatially-dependent gain nonuniformity and geometric distortion. Gain nonuniformities arise from sensitivity variations in the streak-tube photocathode, phosphor screen, image-intensifier tube, and image recording system. These nonuniformities may be severe, and have been observed to be on the order of 100% for some LLNL optical streak cameras. Geometric distortion due to optical couplings, electron-optics, and sweep nonlinearity not only affects pulse position and timing measurements, but affects pulse amplitude and shape measurements as well. By using a 1.053-μm, long-pulse, high-power laser to generate a spatially and temporally uniform source as input to the streak camera, the combined effects of flat-field response and geometric distortion can be measured under the normal dynamic operation of cameras with S-1 photocathodes. Additionally, by using the same laser system to generate a train of short pulses that can be spatially modulated at the input of the streak camera, we can effectively create a two-dimensional grid of equally-spaced pulses. This allows a dynamic measurement of the geometric distortion of the streak camera. We will discuss the techniques involved in performing these calibrations, will present some of the measured results for LLNL optical streak cameras, and will discuss software methods to correct for these effects. 6 refs., 6 figs
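The flat-field part of the correction described above amounts to dividing each recorded image by a normalized gain map measured with the uniform source. A minimal sketch, with illustrative array shapes and values:

```python
import numpy as np

def flat_field_correct(raw, flat, eps=1e-6):
    """Divide a raw streak image by the normalized gain map derived from a
    flat-field exposure, removing spatially dependent sensitivity variations."""
    gain = flat / flat.mean()
    return raw / np.maximum(gain, eps)   # eps guards against dead pixels

# Toy 2x2 image: true signal is 5 everywhere, gain varies by a factor of 2
flat = np.array([[1.0, 2.0], [2.0, 1.0]])
raw = (flat / flat.mean()) * 5.0
print(np.allclose(flat_field_correct(raw, flat), 5.0))  # True
```

Geometric-distortion correction would follow as a separate resampling step using the measured pulse-grid positions.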

  8. Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera

    International Nuclear Information System (INIS)

    Uesaka, M.; Ueda, T.; Kozawa, T.; Kobayashi, T.

    1998-01-01

    Precise measurement of a subpicosecond electron single bunch with a femtosecond streak camera is presented. The subpicosecond electron single bunch of energy 35 MeV was generated by the achromatic magnetic pulse compressor at the S-band linear accelerator of the Nuclear Engineering Research Laboratory (NERL), University of Tokyo. The electric charge per bunch is 0.5 nC, and the horizontal and vertical beam sizes are 3.3 and 5.5 mm (full width at half maximum; FWHM), respectively. The pulse shape of the electron single bunch is measured via Cherenkov radiation emitted in air, using the femtosecond streak camera. Optical parameters of the measurement system were optimized, based on extensive experiments and numerical analysis, in order to achieve a subpicosecond time resolution. Using the optimized optical measurement system, the subpicosecond pulse shape, its variation for different rf phases in the accelerating tube, the jitter of the total system, and the correlation between measured streak images and calculated longitudinal phase-space distributions were precisely evaluated. This measurement system is going to be utilized in several subpicosecond analyses for radiation physics and chemistry. (orig.)

  9. Structured photocathodes for improved high-energy x-ray efficiency in streak cameras

    Energy Technology Data Exchange (ETDEWEB)

    Opachich, Y. P., E-mail: opachiyp@nv.doe.gov; Huffman, E.; Koch, J. A. [National Security Technologies, LLC, Livermore, California 94551 (United States); Bell, P. M.; Bradley, D. K.; Hatch, B.; Landen, O. L.; MacPhee, A. G.; Nagel, S. R. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Chen, N.; Gopal, A.; Udin, S. [Nanoshift LLC, Emeryville, California 94608 (United States); Feng, J. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Hilsabeck, T. J. [General Atomics, San Diego, California 92121 (United States)

    2016-11-15

    We have designed and fabricated a structured streak camera photocathode to provide enhanced efficiency for high energy X-rays (1–12 keV). This gold coated photocathode was tested in a streak camera and compared side by side against a conventional flat thin film photocathode. Results show that the measured electron yield enhancement at energies ranging from 1 to 10 keV scales well with predictions, and that the total enhancement can be more than 3×. The spatial resolution of the streak camera does not show degradation in the structured region. We predict that the temporal resolution of the detector will also not be affected as it is currently dominated by the slit width. This demonstration with Au motivates exploration of comparable enhancements with CsI and may revolutionize X-ray streak camera photocathode design.

  10. Flat-field response and geometric distortion measurements of optical streak cameras

    International Nuclear Information System (INIS)

    Montgomery, D.S.; Drake, R.P.; Jones, B.A.; Wiedwald, J.D.

    1987-01-01

    To accurately measure pulse amplitude, shape, and relative time histories of optical signals with an optical streak camera, it is necessary to correct each recorded image for spatially-dependent gain nonuniformity and geometric distortion. Gain nonuniformities arise from sensitivity variations in the streak-tube photocathode, phosphor screen, image-intensifier tube, and image recording system. By using a 1.053-μm, long-pulse, high-power laser to generate a spatially and temporally uniform source as input to the streak camera, the combined effects of flat-field response and geometric distortion can be measured under the normal dynamic operation of cameras with S-1 photocathodes. Additionally, by using the same laser system to generate a train of short pulses that can be spatially modulated at the input of the streak camera, the authors can create a two-dimensional grid of equally-spaced pulses. This allows a dynamic measurement of the geometric distortion of the streak camera. The author discusses the techniques involved in performing these calibrations, present some of the measured results for LLNL optical streak cameras, and will discuss software methods to correct for these effects

  11. Fiber scintillator/streak camera detector for burn history measurement in inertial confinement fusion experiment

    International Nuclear Information System (INIS)

    Miyanaga, N.; Ohba, N.; Fujimoto, K.

    1997-01-01

    To measure the burn history in an inertial confinement fusion experiment, we have developed a new neutron detector based on plastic scintillation fibers. Twenty-five fiber scintillators were arranged in a geometry compensation configuration by which the time-of-flight difference of the neutrons is compensated by the transit time difference of light passing through the fibers. Each fiber scintillator is spliced individually to an ultraviolet optical fiber that is coupled to a streak camera. We have demonstrated a significant improvement of sensitivity compared with the usual bulk scintillator coupled to a bundle of the same ultraviolet fibers. copyright 1997 American Institute of Physics
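The geometry-compensation configuration can be sketched as a fiber-length calculation: a scintillator farther from the target sees the neutron later, so it is given a correspondingly shorter fiber, making neutron flight time plus light transit time the same for every channel. The neutron speed, fiber group index, detector distances, and reference fiber length below are illustrative assumptions, not the authors' values.

```python
C = 2.998e8      # speed of light in vacuum, m/s
V_N = 5.14e7     # ~14 MeV D-T neutron speed, m/s (approximate)
N_EFF = 1.5      # assumed fiber group index

def fiber_lengths(distances_m, nearest_fiber_m):
    """Fiber length per detector so that t_neutron(d) + t_light(L) is the
    same constant for every channel (nearest detector gets the longest fiber)."""
    d = sorted(distances_m)
    t_total = d[0] / V_N + nearest_fiber_m * N_EFF / C   # reference channel
    return [(t_total - di / V_N) * C / N_EFF for di in d]

distances = [0.10, 0.12, 0.15]                       # m, illustrative
lengths = fiber_lengths(distances, nearest_fiber_m=2.0)
arrivals = [di / V_N + li * N_EFF / C
            for di, li in zip(sorted(distances), lengths)]
print(max(arrivals) - min(arrivals) < 1e-12)  # True: all channels aligned
```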

  12. Characterization results from several commercial soft X-ray streak cameras

    Science.gov (United States)

    Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.

    The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance capability of these instruments is to enable us to optimize experiment designs, to encourage quantitative analysis of streak data, and to educate the ultrahigh-speed photography and photonics community about the X-ray detector performance that is available. These measurements have been made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton. The X-ray pulse lengths used for these measurements at these facilities were 150 psec and 50 psec, respectively. The results are presented as dynamically measured modulation transfer functions. Limiting temporal-resolution values were also calculated. Emphasis is placed upon shot-noise statistical limitations in the analysis of the data. Space-charge repulsion in the streak tube limits the peak flux at ultrashort experiment durations. This limit results in a reduction of total signal and a decrease in signal-to-noise ratio in the streak image. The four cameras perform well, with 20 lp/mm resolution discernible in data from the French C650X, the Hadland X-Chron 540, and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but has a longer photocathode.

  13. X-ray streak and framing camera techniques

    International Nuclear Information System (INIS)

    Coleman, L.W.; Attwood, D.T.

    1975-01-01

    This paper reviews recent developments and applications of ultrafast diagnostic techniques for x-ray measurements. These techniques, based on applications of image converter devices, are already capable of significantly important resolution capabilities. Techniques capable of time resolution in the sub-nanosecond regime are being considered. Mechanical cameras are excluded from considerations as are devices using phosphors or fluors as x-ray converters

  14. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak-camera readout device. The device consists of a low-light-level television camera, made of a solid-state C.C.D. array coupled to an image intensifier, associated with a video digitizer coupled to a micro-computer system. The streak camera images are picked up as a video signal, digitized, and stored. This system allows the fast recording and automatic processing of the data provided by the streak tube.

  15. Performance of Laser Megajoule’s x-ray streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Burillo, M.; Gontier, D.; Moreau, I.; Oudot, G.; Rubbelynck, C.; Soullié, G.; Stemmler, P.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J. P.; Goulmy, C. [Photonis France SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-11-15

    A prototype of a picosecond x-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to provide plasma-diagnostic support for the Laser Megajoule. We report on the measured performance of this streak camera, which almost fulfills the requirements: 50-μm spatial resolution over a 15-mm field in the photocathode plane, 17-ps temporal resolution in a 2-ns timebase, a detection threshold lower than 625 nJ/cm² in the 0.05–15 keV spectral range, and a dynamic range greater than 100.

  16. X-ray streak camera for observation of tightly pinched relativistic electron beams

    International Nuclear Information System (INIS)

    Johnson, D.J.

    1977-01-01

    A pinhole camera is coupled with a Pilot-B scintillator and an image-intensified TRW streak camera to study pinched electron-beam profiles via observation of anode-target bremsstrahlung. Streak intensification is achieved with an EMI image intensifier operated at a gain of up to 10⁶, which allows optimizing the pinhole configuration so that resolution is simultaneously limited by photon-counting statistics and pinhole geometry. The pinhole used is one-dimensional and is fabricated by inserting uranium shims with hyperbolically curved edges between two 5-cm-thick lead blocks. The loss of spatial resolution due to x-ray transmission through the perimeter of the pinhole is calculated, and a streak photograph of a Gamble I pinched beam interacting with a brass anode is presented

  17. Optical Comb Generation for Streak Camera Calibration for Inertial Confinement Fusion Experiments

    International Nuclear Information System (INIS)

    Ronald Justin; Terence Davies; Frans Janson; Bruce Marshall; Perry Bell; Daniel Kalantar; Joseph Kimbrough; Stephen Vernon; Oliver Sweningsen

    2008-01-01

    The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) is coming on-line to support physics experimentation for the U.S. Department of Energy (DOE) programs in Inertial Confinement Fusion (ICF) and Stockpile Stewardship (SS). Optical streak cameras are an integral part of the experimental diagnostic instrumentation at NIF, and accurate reduction of streak-camera data requires a highly accurate temporal calibration. This article describes a technique for simultaneously generating a precise ±2 ps optical marker pulse (fiducial reference) and trains of precisely timed, short-duration optical pulses (so-called 'comb' pulse trains) suitable for timing calibrations. These optical pulse generators are used with the LLNL optical streak cameras. They are small, portable light sources that, in comb mode, produce a series of temporally short, uniformly spaced optical pulses from a laser-diode source. Comb generators have been produced with pulse-train repetition rates up to 10 GHz at 780 nm, and somewhat lower frequencies at 664 nm. Individual pulses can be as short as 25 ps FWHM. Signal output is via a fiber-optic connector on the front panel of the generator box, and the optical signal is transported from comb generator to streak camera through multi-mode, graded-index optical fiber
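
    Once a comb train of known repetition rate is recorded, sweep calibration reduces to fitting the known pulse times against the measured pulse positions. A sketch under illustrative numbers (the pixel centroids below are invented, not LLNL data):

```python
import numpy as np

# A 10-GHz comb has pulses every 100 ps
rep_rate_hz = 10e9
dt_ps = 1e12 / rep_rate_hz  # 100 ps spacing

# Hypothetical measured pixel centroids of successive comb pulses
pixels = np.array([102.0, 150.6, 199.1, 247.8, 296.3])
times_ps = dt_ps * np.arange(len(pixels))

# Linear sweep calibration: ps per pixel (a higher-degree fit would
# additionally capture sweep nonlinearity)
slope_ps_per_px, offset = np.polyfit(pixels, times_ps, 1)
```

    With ~48.6 pixels between 100-ps pulses, the fitted sweep rate comes out near 2.06 ps/pixel.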

  18. Streak camera measurements of laser pulse temporal dispersion in short graded-index optical fibers

    International Nuclear Information System (INIS)

    Lerche, R.A.; Phillips, G.E.

    1981-01-01

    Streak camera measurements were used to determine temporal dispersion in short (5 to 30 meter) graded-index optical fibers. Results show that 50-ps, 1.06-μm and 0.53-μm laser pulses can be propagated without significant dispersion when care is taken to prevent propagation of energy in fiber cladding modes
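
    The "no significant dispersion" result is consistent with a textbook estimate of intermodal broadening in an ideal parabolic graded-index fiber, dt ≈ (n1·L/c)·Δ²/8, which is small against a 50-ps pulse. A quick check with assumed index values (not taken from the paper):

```python
# Intermodal broadening estimate for an ideal parabolic-index profile.
# The index values are typical assumptions, not from the paper.
c = 3.0e8      # speed of light, m/s
n1 = 1.46      # core index (assumed)
delta = 0.01   # relative core-cladding index difference (assumed)

# Broadening in ps for the fiber lengths studied (5 m and 30 m)
broadening_ps = {L: (n1 * L / c) * delta**2 / 8 * 1e12 for L in (5.0, 30.0)}
```

    Even at 30 m this gives only a couple of picoseconds, well below the 50-ps pulse width, supporting the measured result.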

  19. Image-converter streak cameras with very high gain

    International Nuclear Information System (INIS)

    1975-01-01

    A new camera is described with slit scanning and very high photon gain (G = 5000). Advances in tube and microchannel-plate technology have enabled integration of such an amplifying element into an image-converter tube, which does away with the couplings and the intermediate electron-photon-electron conversions of classical converter systems with external amplification. It is thus possible to obtain equal or superior performance while retaining considerable gain for the camera, great compactness, great flexibility in use, and easy handling. (author)

  20. Commissioning of the advanced light source dual-axis streak camera

    International Nuclear Information System (INIS)

    Hinkson, J.; Keller, R.; Byrd, J.

    1997-05-01

    A dual-axis streak camera, Hamamatsu model C5680, has been installed on the Advanced Light Source photon-diagnostics beam-line to investigate electron-beam parameters. During its commissioning, the camera has been used to measure single-bunch length vs. current, relative bunch charge in adjacent RF buckets, and bunch-phase stability. In this paper the authors describe the visible-light branch of the diagnostics beam-line, the streak-camera installation, and the timing electronics. They show graphical results of beam measurements taken under a variety of accelerator conditions

  1. Temporal resolution limit estimation of x-ray streak cameras using a CsI photocathode

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiang; Gu, Li; Zong, Fangke; Zhang, Jingjin; Yang, Qinlao, E-mail: qlyang@szu.edu.cn [Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, Institute of Optoelectronics, Shenzhen University, Shenzhen 518060 (China)

    2015-08-28

    A Monte Carlo model is developed and implemented to calculate the characteristics of x-ray induced secondary electron (SE) emission from a CsI photocathode used in an x-ray streak camera. Time distributions of emitted SEs are investigated with an incident x-ray energy range from 1 to 30 keV and a CsI thickness range from 100 to 1000 nm. Simulation results indicate that SE time distribution curves have little dependence on the incident x-ray energy and CsI thickness. The calculated time dispersion within the CsI photocathode is about 70 fs, which should be the temporal resolution limit of x-ray streak cameras that use CsI as the photocathode material.
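
    The character of such a calculation can be illustrated with a toy Monte Carlo: sample secondary-electron creation depths in the photocathode and convert them to transit times. This is a drastic simplification of the paper's model (the escape depth and electron velocity below are illustrative assumptions), but it reproduces the tens-of-femtoseconds scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model parameters (assumed, not the paper's values)
thickness_nm = 500.0       # CsI layer thickness
esc_depth_nm = 15.0        # assumed SE escape depth
v_nm_per_fs = 0.59         # ~1 eV electron: v ≈ 5.9e5 m/s

n = 100_000
depth = rng.exponential(esc_depth_nm, n)
depth = depth[depth < thickness_nm]   # SEs created too deep never escape
t_fs = depth / v_nm_per_fs            # straight-line transit time to surface

# 10th-90th percentile width of the arrival-time distribution
spread_fs = np.percentile(t_fs, 90) - np.percentile(t_fs, 10)
```

    The spread comes out at a few tens of femtoseconds, the same order as the ~70 fs dispersion quoted above; a real calculation must also track energy loss, scattering angles, and the SE energy distribution.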

  2. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Tian, Jinshou [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)]

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented; it controls time delay, electric focusing, image-gain adjustment, and switching of the sweep voltage, and it acquires environmental parameters. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general-purpose input/output (GPIO), and two sensors. An isolated multi-output DC/DC converter and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A, and I/O circuits. The software was designed in a graphical programming language and can remotely access the instrument from a website. The entire control system can acquire data at 30 Mb/s and store it for later analysis. Implemented on a streak camera in a DIM, the system shows a temporal resolution of 11.25 ps, spatial distortion of less than 10%, and a dynamic range of 279:1. It has been used successfully to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  3. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented; it controls time delay, electric focusing, image-gain adjustment, and switching of the sweep voltage, and it acquires environmental parameters. The system consists of 16 A/D converters, 16 D/A converters, a 32-channel general-purpose input/output (GPIO), and two sensors. An isolated multi-output DC/DC converter and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A, and I/O circuits. The software was designed in a graphical programming language and can remotely access the instrument from a website. The entire control system can acquire data at 30 Mb/s and store it for later analysis. Implemented on a streak camera in a DIM, the system shows a temporal resolution of 11.25 ps, spatial distortion of less than 10%, and a dynamic range of 279:1. It has been used successfully to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility

  4. Improving the off-axis spatial resolution and dynamic range of the NIF X-ray streak cameras (invited)

    Energy Technology Data Exchange (ETDEWEB)

    MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.; Bradley, D. K.; Datte, P. S.; Landen, O. L.; Palmer, N. E.; Piston, K. W.; Rekow, V. V. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551-0808 (United States); Dymoke-Bradshaw, A. K. L.; Hares, J. D. [Kentech Instruments Ltd., Isis Building, Howbery Park, Wallingford, Oxfordshire OX10 8BD (United Kingdom); Hassett, J. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551-0808 (United States); Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York 14627 (United States); Meadowcroft, A. L. [AWE Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Hilsabeck, T. J.; Kilkenny, J. D. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States)

    2016-11-15

    We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.

  5. Improving the off-axis spatial resolution and dynamic range of the NIF X-ray streak cameras (invited).

    Science.gov (United States)

    MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D

    2016-11-01

    We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.

  6. Synchronization of streak and framing camera measurements of an intense relativistic electron beam propagating through gas

    International Nuclear Information System (INIS)

    Weidman, D.J.; Murphy, D.P.; Myers, M.C.; Meger, R.A.

    1994-01-01

    The expansion of the radius of a 5-MeV, 20-kA, 40-ns electron beam from SuperIBEX during propagation through gas is being measured. The beam is generated, conditioned, equilibrated, and then passed through a thin foil that produces Cherenkov light, which is recorded by a streak camera. At a second location, the beam hits another Cherenkov emitter, which is viewed by a framing camera. Measurements at these two locations can provide a time-resolved measure of the beam expansion. The two measurements, however, must be synchronized with each other, because the beam radius is not constant throughout the pulse due to variations in beam current and energy. To correlate the timing of the two diagnostics, several shots have been taken with both diagnostics viewing Cherenkov light from the same foil. Experimental measurements of the Cherenkov light from one foil viewed by both diagnostics will be presented to demonstrate the feasibility of correlating the diagnostics. Streak camera data showing the optical fiducial, as well as the final correlation of the two diagnostics, will also be presented. Preliminary beam-radius measurements from Cherenkov light measured at two locations will be shown
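
    Correlating the two diagnostics' records of the same Cherenkov foil amounts to finding the lag that maximizes their cross-correlation. A sketch with synthetic records (all signals and the 7-sample offset below are invented for illustration):

```python
import numpy as np

def find_lag(a, b):
    """Lag (in samples) of b relative to a that maximizes cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

# Two noisy records of the same pulse, one delayed by 7 samples
t = np.arange(200)
pulse = np.exp(-((t - 80) / 15.0) ** 2)
rng = np.random.default_rng(1)
streak = pulse + 0.02 * rng.standard_normal(200)
framing = np.roll(pulse, 7) + 0.02 * rng.standard_normal(200)

lag = find_lag(streak, framing)   # recovers the 7-sample offset
```

    In practice the shared optical fiducial plays the role of the common pulse, and the recovered lag converts to time through each diagnostic's sweep calibration.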

  7. Deflection system of a high-speed streak camera in the form of a delay line

    International Nuclear Information System (INIS)

    Korzhenevich, I.M.; Fel'dman, G.G.

    1993-01-01

    This paper presents an analysis of the operation of a meander deflection system, well-known in oscillography, when it is used to scan the image in a streak-camera tube. Effects that are specific to high-speed photography are considered. It is shown that such a deflection system imposes reduced requirements both on the steepness and on the duration of the linear leading edges of the pulses of the spark gaps that generate the sweep voltage. An example of the design of a meander deflection system whose sensitivity is a factor of two higher than for a conventional system is considered. 5 refs., 3 figs

  8. Identification and Removal of High Frequency Temporal Noise in a Nd:YAG Macro-Pulse Laser Assisted with a Diagnostic Streak Camera

    International Nuclear Information System (INIS)

    Kent Marlett; Ke-Xun Sun

    2004-01-01

    This paper discusses the use of a reference streak camera (SC) to diagnose laser performance and guide modifications to remove high-frequency noise from Bechtel Nevada's long-pulse laser. The upgraded laser exhibits less than 0.1% high-frequency noise in cumulative spectra, exceeding National Ignition Facility (NIF) calibration specifications. Inertial Confinement Fusion (ICF) experiments require full characterization of streak cameras over a wide range of sweep speeds (10 ns to 480 ns), a metrology regime that places stringent spectral requirements on the laser source used for streak-camera calibration. Recently, Bechtel Nevada worked with a laser vendor to develop a high-performance, multi-wavelength Nd:YAG laser to meet NIF calibration requirements. For a typical NIF streak camera with a 4096 × 4096 pixel CCD, flat-field calibration at 30 ns requires a smooth laser spectrum from 33 MHz to 68 GHz. Streak cameras are the appropriate instrument for measuring laser amplitude noise at these very high frequencies, since the upper-end spectral content is beyond the frequency response of typical optoelectronic detectors for a single-shot pulse. The SC was first used to measure a similar laser at its second-harmonic wavelength (532 nm) to establish baseline spectra for testing signal-analysis algorithms, and then to measure the new custom calibration laser. In both spatial-temporal measurements and cumulative spectra, 6-8 GHz oscillations were identified and found to be caused by inter-surface reflections between amplifiers. Additional variations in the SC spectral data were found to result from temperature instabilities in the seed laser. Based on these findings, laser upgrades were made to remove the high-frequency noise from the laser output
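
    The quoted spectral window follows from the streak record itself: a 30-ns sweep sampled on 4096 pixels resolves roughly 33 MHz (one cycle per record) up to a 68-GHz Nyquist limit. A sketch of the single-shot spectrum computation, with an invented 7-GHz amplitude ripple standing in for real lineout data:

```python
import numpy as np

# Frequency coverage of a streak record: 30-ns sweep across 4096 pixels
n_px, window_s = 4096, 30e-9
f_res_mhz = 1.0 / window_s / 1e6          # ~33 MHz frequency resolution
f_nyq_ghz = n_px / (2 * window_s) / 1e9   # ~68 GHz Nyquist frequency

# Power spectrum of a lineout carrying an assumed 7-GHz amplitude ripple
t = np.linspace(0.0, window_s, n_px, endpoint=False)
lineout = 1.0 + 0.05 * np.sin(2 * np.pi * 7e9 * t)
spec = np.abs(np.fft.rfft(lineout - lineout.mean())) ** 2
freqs_ghz = np.fft.rfftfreq(n_px, d=window_s / n_px) / 1e9
peak_ghz = freqs_ghz[np.argmax(spec)]     # ripple shows up at 7 GHz
```

    The same reduction applied to a real streak record is what reveals the 6-8 GHz oscillations described above.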

  9. Lasers and laser applications. Imaging implosion dynamics: The x-ray pinhole/streak camera

    International Nuclear Information System (INIS)

    Attwood, D.T.

    1976-01-01

    A Livermore-developed x-ray-sensitive streak camera was combined with a unique x-ray pinhole camera to make dynamic photographs of laser-irradiated fusion target implosions. These photographs show x radiation emitted from the imploding shell during its 100-ps implosion; they are the first continuous observations of an imploding laser-driven fusion capsule. The diagnostic system has a time resolution of 15 ps and a spatial resolution of about 6 μm. Results agree very well with those predicted by our LASNEX calculations, confirming that the essential physics are correctly described in the code and providing further confidence in the soundness of this approach to inertial confinement fusion

  10. Picosecond streak camera diagnostics of CO2 laser-produced plasmas

    International Nuclear Information System (INIS)

    Jaanimagi, P.A.; Marjoribanks, R.S.; Sancton, R.W.; Enright, G.D.; Richardson, M.C.

    1979-01-01

    The interaction of intense laser radiation with solid targets is currently of considerable interest in laser fusion studies. Its understanding requires temporal knowledge of both laser and plasma parameters on a picosecond time scale. In this paper we describe the progress we have recently made in analysing, with picosecond time resolution, various features of intense nanosecond CO2 laser pulse interaction experiments. An infrared upconversion scheme, having linear response and <20 ps temporal resolution, has been utilized to characterise the 10 μm laser pulse. Various features of the interaction have been studied with the aid of picosecond IR and x-ray streak cameras. These include the temporal and spatial characteristics of high harmonic emission from the plasma, and the temporal development of the x-ray continuum spectrum. (author)

  11. Towards jitter free synchronization of synchroscan streak cameras by noisy periodic laser pulses

    International Nuclear Information System (INIS)

    Cunin, B.; Heisel, F.; Miehe, J.A.

    1991-01-01

    In connection with the parameters characterizing phase noise in cw mode-locked lasers, and for streak cameras operated with sine-wave deflection, the timing capabilities of the measuring system are discussed through a stochastic description for two commonly used synchronization techniques. In particular, the power spectrum of the sweep signal versus the laser phase noise is examined in detail. The theoretical results are used to interpret experimental observations recorded with actively and passively mode-locked lasers. One interesting application of synchroscan operation to metrology is the determination of short-term instabilities of the oscillator on a time scale near the period. (author) 12 refs.; 3 figs

  12. Compact streak camera for the shock study of solids by using the high-pressure gas gun

    Science.gov (United States)

    Nagayama, Kunihito; Mori, Yasuhito

    1993-01-01

    For precise observation of high-speed impact phenomena, a compact high-speed streak-camera recording system has been developed. The system consists of a high-pressure gas gun, a streak camera, and a long-pulse dye laser. The gas gun installed in our laboratory has a muzzle 40 mm in diameter and a 2-m-long launch tube. Projectile velocity is measured by the laser-beam-cut method. The gun is capable of accelerating a 27-g projectile up to 500 m/s when helium is used as the driver gas. The system was designed on the principle that precise optical measurement methods developed in other areas of research can be applied to gun studies. The streak camera is 300 mm in diameter, with a rectangular rotating mirror driven by an air-turbine spindle; the attainable streak velocity is 3 mm/μs. The camera is deliberately small, for portability and economy, so the streak velocity is lower than that of faster cameras, but low-sensitivity, high-resolution film can be used as the recording medium. We have also constructed a pulsed dye laser of 25-30 μs duration, which serves as the light source for observation. The advantages of the laser are manifold: good directivity, nearly single-frequency operation, and so on. The feasibility of the system has been demonstrated in several experiments.
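
    For a rotating-mirror camera, the writing speed at the film is twice the mirror's angular velocity times the mirror-to-film distance (the reflected beam sweeps at 2ω). Taking that distance as the 150-mm camera radius, an assumption since the abstract does not give it, the spindle speed implied by a 3 mm/μs streak velocity can be estimated:

```python
import math

# Rotating-mirror writing speed: v = 2 * omega * R
# R assumed equal to the camera radius (150 mm); not stated in the abstract.
R_m = 0.150
v_target = 3.0e3            # 3 mm/us expressed in m/s

omega = v_target / (2 * R_m)      # required angular velocity, rad/s
rpm = omega * 60 / (2 * math.pi)  # spindle speed in revolutions per minute
```

    This lands near 10^5 rpm, the regime where air-turbine spindles are indeed the standard drive.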

  13. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    International Nuclear Information System (INIS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
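
    The thin-plate-spline machinery can be sketched with a small numpy-only solver: fit the TPS that maps measured (warped) fiducial positions onto their known true positions, then evaluate it at data coordinates. The grid, warp field, and sizes below are illustrative, not NIF calibration data:

```python
import numpy as np

def _U(r):
    """TPS radial kernel U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(r)
    m = r > 0
    out[m] = r[m]**2 * np.log(r[m])
    return out

def tps_fit(src, dst):
    """Solve for TPS coefficients mapping src points onto dst points."""
    n = len(src)
    K = _U(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(A, rhs)

def tps_eval(coef, src, pts):
    """Apply a fitted TPS (coef from tps_fit, with its src nodes) to pts."""
    U = _U(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]

# Hypothetical comb fiducials: a regular grid seen through a smooth warp
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
true = np.column_stack([gx.ravel(), gy.ravel()])
warped = true + 0.03 * np.column_stack([np.sin(3 * true[:, 1]),
                                        np.cos(3 * true[:, 0])])

coef = tps_fit(warped, true)               # warp correction: warped -> true
corrected = tps_eval(coef, warped, warped)  # fiducials land back on the grid
```

    TPS interpolation is exact at the fiducial nodes; the production algorithm's additional work lies in robustly extracting those node positions from noisy comb images.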

  14. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.

  15. Overall comparison of subpicosecond electron beam diagnostics by the polychromator, the interferometer and the femtosecond streak camera

    CERN Document Server

    Watanabe, T; Yoshimatsu, T; Sasaki, S; Sugiyama, Y; Ishi, K; Shibata, Y; Kondo, Y; Yoshii, K; Ueda, T; Uesaka, M

    2002-01-01

    Measurements of the longitudinal bunch length of subpicosecond and picosecond electron beams have been performed by three methods with three radiation sources at the 35-MeV S-band twin linear accelerators at the Nuclear Engineering Research Laboratory, University of Tokyo. The methods adopted are a femtosecond streak camera with nondispersive reflective optics, a coherent transition radiation (CTR) Michelson interferometer, and a 10-channel polychromator that detects the spectrum of CTR and coherent diffraction radiation (CDR). The two CTR methods were each compared independently with the streak camera, and their results were consistent with one another. As a result, the reliability of the polychromator for diagnosing sub-picosecond electron bunches, and its usefulness for single-shot measurement, were verified. Furthermore, fully nondestructive diagnostics of subpicosecond bunches was performed using CDR interferometry. Then the good agreement between CDR interfero...

  16. Evaluation of dynamic range for LLNL streak cameras using high contrast pulsed and pulse podiatry on the Nova laser system

    International Nuclear Information System (INIS)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-01-01

    This paper reports on a standard LLNL streak camera that has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1

  17. X-ray streak-camera study of the dynamics of laser-imploded microballoons

    International Nuclear Information System (INIS)

    Key, M.H.; Lamb, M.J.; Lewis, C.L.S.; Moore, A.; Evans, R.G.

    1979-01-01

    The time and space development of the x-ray emission from the irradiated target surface and the implosion core in laser-compressed glass microballoons is recorded by x-ray streak photography. The experimental variation of implosion time with target mass and laser energy is considered and compared with computer modeling of the implosion

  18. Streak electronic camera with slow-scanning storage tube used in the field of high-speed cineradiography

    International Nuclear Information System (INIS)

    Marilleau, J.; Bonnet, L.; Garcin, G.; Guix, R.; Loichot, R.

    The cineradiographic machine designed for measurements in detonics consists of a linear accelerator with a bremsstrahlung target, a scintillator, and a remote-controlled electronic camera. The quantum efficiency of the X-ray detection and the energy efficiency of the scintillator are given. The electronic camera is built around a deflection-converter tube (RCA C. 73 435 AJ) coupled by optical fibres to a photosensitive storage tube (TH-CSF Esicon) operated in a slow-scan mode with electronic recording of the information. The different parts of the device are described. Capabilities such as data processing, numerical outputs, measurements, and display are outlined. A streak cineradiogram of a typical implosion experiment is given [fr

  19. Aluminum-coated optical fibers as efficient infrared timing fiducial photocathodes for synchronizing x-ray streak cameras

    International Nuclear Information System (INIS)

    Koch, J.A.; MacGowan, B.J.

    1991-01-01

    The timing fiducial system at the Nova Two-Beam Facility allows time-resolved x-ray and optical streak camera data from laser-produced plasmas to be synchronized to within 30 ps. In this system, an Al-coated optical fiber is inserted into an aperture in the cathode plate of each streak camera. The coating acts as a photocathode for a low-energy pulse of 1ω (λ = 1.054 μm) light which is synchronized to the main Nova beam. The use of the fundamental (1ω) for this fiducial pulse has been found to offer significant advantages over the use of the 2ω second harmonic (λ = 0.53 μm). These advantages include brighter signals, greater reliability, and a higher relative damage threshold, allowing routine use without fiber replacement. The operation of the system is described, and experimental data and interpretations are discussed which suggest that the electron production in the Al film is due to thermionic emission. The results of detailed numerical simulations of the relevant thermal processes, undertaken to model the response of the coated fiber to 1ω laser pulses, are also presented, which give qualitative agreement with experimental data. Quantitative discrepancies between the modeling results and the experimental data are discussed, and suggestions for further research are given

  20. Light field driven streak-camera for single-shot measurements of the temporal profile of XUV-pulses from a free-electron laser; Lichtfeld getriebene Streak-Kamera zur Einzelschuss Zeitstrukturmessung der XUV-Pulse eines Freie-Elektronen Lasers

    Energy Technology Data Exchange (ETDEWEB)

    Fruehling, Ulrike

    2009-10-15

    The Free-Electron Laser in Hamburg (FLASH) is a source of highly intense, ultrashort extreme-ultraviolet (XUV) light pulses with durations of a few femtoseconds. Due to the stochastic nature of the light-generation scheme, based on self-amplified spontaneous emission (SASE), the duration and temporal profile of the XUV pulses fluctuate from shot to shot. In this thesis, a THz-field-driven streak camera capable of single-shot measurements of the XUV pulse profile has been realized. In a first XUV-THz pump-probe experiment at FLASH, the XUV pulses are overlapped in a gas target with synchronized THz pulses generated by a new THz undulator. The electromagnetic field of the THz light accelerates photoelectrons produced by the XUV pulses, with the resulting change of the photoelectron momenta depending on the phase of the THz field at the time of ionisation. This technique is used intensively in attosecond metrology, where near-infrared streaking fields are employed for the temporal characterisation of attosecond XUV pulses. Here, it is adapted to the analysis of pulse durations in the few-femtosecond range by choosing a far-infrared streaking wavelength a hundred times longer. This fills the gap between conventional streak cameras, with typical resolutions of hundreds of femtoseconds, and techniques with attosecond resolution. Using the THz streak camera, the time-dependent electric field of the THz pulses was sampled in great detail, while the duration and even details of the time structure of the XUV pulses were characterized. (orig.)

  1. Development and performance test of picosecond pulse x-ray excited streak camera system for scintillator characterization

    International Nuclear Information System (INIS)

    Yanagida, Takayuki; Fujimoto, Yutaka; Yoshikawa, Akira

    2010-01-01

    To observe time- and wavelength-resolved scintillation events, a picosecond pulse X-ray excited streak camera system was developed. The wavelength range spreads from the vacuum ultraviolet (VUV) to the near-infrared region (110-900 nm), and the instrumental response function is around 80 ps. This work describes the principle of the newly developed instrument and a first performance test using a BaF2 single-crystal scintillator. Core-valence luminescence of BaF2, peaking around 190 and 220 nm, is clearly detected by our system, and the decay time turned out to be 0.7 ns. These results are consistent with the literature and confirm that our system works properly. (author)

  2. Realization of an optical multi and mono-channel analyzer, associated to a streak camera. Application to metrology of picosecond low intensity luminous pulses

    International Nuclear Information System (INIS)

    Roth, J.M.

    1985-02-01

    An electronic system including a low-light-level television tube (Nocticon) to digitize images from streak cameras is studied and realized. Its performance (sensitivity, signal-to-noise ratio) is studied and compared with a multi-channel analyzer using a linear array of photodiodes. It is applied to duration and amplitude measurement of short luminous pulses [fr

  3. Streak-Camera Measurements with High Currents in PEP-II and Variable Optics in SPEAR3

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Weixeng; Fisher, Alan; Corbett, Jeff; /SLAC

    2008-06-05

    A dual-axis, synchroscan streak camera was used to measure longitudinal bunch profiles in three storage rings at SLAC: the PEP-II low- and high-energy rings, and SPEAR3. At high currents, both PEP rings exhibit a transient synchronous-phase shift along the bunch train due to RF-cavity beam loading. Bunch length and profile asymmetry were measured along the train for a range of beam currents. To avoid the noise inherent in a dual-axis sweep, we accumulated single-axis synchroscan images while applying a 50-ns gate to the microchannel plate. To improve the extinction ratio, an upstream mirror pivoting at 1 kHz was synchronized with the 2-kHz MCP gate to deflect light from other bunches off the photocathode. Bunch length was also measured on the HER as a function of beam energy. For SPEAR3 we measured bunch length as a function of single-bunch current for several lattices: achromatic, low-emittance and low momentum compaction. In the first two cases, resistive and reactive impedance components can be extracted from the longitudinal bunch profiles. In the low-alpha configurations, we observed natural bunch lengths approaching the camera resolution, requiring special care to remove instrumental effects, and saw evidence of periodic bursting.

  4. X-ray imaging of JET. A design study for a streak camera application

    International Nuclear Information System (INIS)

    Bateman, J.E.; Hobby, M.G.

    1980-03-01

    A single dimensional imaging system is proposed which will image a strip of the JET plasma up to 320 times per shot with a time resolution of better than 50 μs using the bremsstrahlung X-rays. The images are obtained by means of a pinhole camera followed by an X-ray image intensifier system the output of which is in turn digitised by a photodiode array. The information is stored digitally in a fast memory and is immediately available for display or analysis. (author)

  5. Evaluation of dynamic range for LLNL streak cameras using high contrast pulses and "pulse podiatry" on the Nova laser system

    Energy Technology Data Exchange (ETDEWEB)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-07-01

    A standard LLNL streak camera has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1. 1 ref., 4 figs., 1 tab.

  6. Initial tests of the dual-sweep streak camera system planned for APS particle-beam diagnostics

    International Nuclear Information System (INIS)

    Lumpkin, A.; Yang, B.; Gai, W.; Cieslik, W.

    1995-01-01

    Initial tests of a dual-sweep streak system planned for use on the Advanced Photon Source (APS) have been performed using assets of the Argonne Wakefield Accelerator (AWA) facility. The short light pulses from the photoelectric injector drive laser in both the visible (λ=496 nm, Δt∼1.5 ps (FWHM)), and the ultraviolet (λ=248 nm, Δt∼5 ps (FWHM)) were used. Both a UV-visible S20 photocathode streak tube and a UV-to-x-ray Au photocathode streak tube were tested. Calibration data with an etalon were also obtained. A sample of dual-sweep streak data using optical synchrotron radiation on the APS injector synchrotron is also presented

  7. Time- and wavelength-resolved luminescence evaluation of several types of scintillators using streak camera system equipped with pulsed X-ray source

    Energy Technology Data Exchange (ETDEWEB)

    Furuya, Yuki, E-mail: f.yuki@mail.tagen.tohoku.ac.j [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Yanagida, Takayuki; Fujimoto, Yutaka; Yokota, Yuui; Kamada, Kei [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Kawaguchi, Noriaki [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Research and Development Division, Tokuyama., Co. Ltd., ICR-Building, Minamiyoshinari, Aoba-ku, Sendai (Japan); Ishizu, Sumito [Research and Development Division, Tokuyama., Co. Ltd., ICR-Building, Minamiyoshinari, Aoba-ku, Sendai (Japan); Uchiyama, Koro; Mori, Kuniyoshi [Hamamatsu Photonics K.K., 325-6, Sunayama-cho, Naka-ku, Hamamatsu, Shizuoka 430-8587 (Japan); Kitano, Ken [Vacuum and Optical Instruments, 2-18-18 Shimomaruko, Ota, Tokyo 146-0092 (Japan); Nikl, Martin [Institute of Physics ASCR, Cukrovarnicka 10, Prague 6, 162-53 (Czech Republic); Yoshikawa, Akira [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); NICHe, Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579 (Japan)

    2011-04-01

    To design new scintillating materials, it is very important to understand in detail the events that occur during the excitation and emission processes under ionizing-radiation excitation. We developed a streak camera system equipped with a picosecond pulsed X-ray source to observe time- and wavelength-resolved scintillation events. In this report, we test the performance of this new system using several types of scintillators, including bulk oxide/halide crystals, transparent ceramics, plastics and powders. For all samples, the results were consistent with those reported previously. The results demonstrate that the developed system is suitable for evaluation of scintillation properties.

  8. Potential applications of a dual-sweep streak camera system for characterizing particle and photon beams of VUV, XUV, and x-ray FELS

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. [Argonne National Lab., IL (United States)

    1995-12-31

    The success of time-resolved imaging techniques in the characterization of particle beams and photon beams of the recent generation of L-band linac-driven or storage-ring FELs in the infrared, visible, and ultraviolet wavelength regions can be extended to VUV, XUV, and x-ray FELs. Tests and initial data have been obtained with the Hamamatsu C5680 dual-sweep streak camera system, which includes a demountable photocathode (thin Au) assembly and a flange that allows windowless operation with the transport vacuum system. This system can be employed at wavelengths shorter than 100 nm and down to 1 Å. First tests of such a system at 248-nm wavelengths have been performed on the Argonne Wakefield Accelerator (AWA) drive laser source. A quartz window was used at the tube entrance aperture. A preliminary test using a Be window mounted on a different front flange of the streak tube to look at an x-ray bremsstrahlung source at the AWA was limited by photon statistics. This system's limiting resolution of σ ≈ 1.1 ps observed at 248 nm would increase with higher incoming photon energies to the photocathode; this effect is related to the fundamental spread in energies of the photoelectrons released from the photocathode. Possible uses of the synchrotron radiation sources at the Advanced Photon Source and emerging short-wavelength FELs to test the system will be presented.

  9. Large-grazing-angle, multi-image Kirkpatrick-Baez microscope as the front end to a high-resolution streak camera for OMEGA

    International Nuclear Information System (INIS)

    Gotchev, O.V.; Hayes, L.J.; Jaanimagi, P.A.; Knauer, J.P.; Marshall, F.J.; Meyerhofer, D.D.

    2003-01-01

    A high-resolution x-ray microscope with a large grazing angle has been developed, characterized, and fielded at the Laboratory for Laser Energetics. It increases the sensitivity and spatial resolution in planar direct-drive hydrodynamic stability experiments, relevant to inertial confinement fusion research. It has been designed to work as the optical front end of the PJX - a high-current, high-dynamic-range x-ray streak camera. Optical design optimization, results from numerical ray tracing, mirror-coating choice, and characterization have been described previously [O. V. Gotchev, et al., Rev. Sci. Instrum. 74, 2178 (2003)]. This work highlights the optics' unique mechanical design and flexibility and considers certain applications that benefit from it. Characterization of the microscope's resolution in terms of its modulation transfer function over the field of view is shown. Recent results from hydrodynamic stability experiments, diagnosed with the optic and the PJX, are provided to confirm the microscope's advantages as a high-resolution, high-throughput x-ray optical front end for streaked imaging

  10. Large-Grazing-Angle, Multi-Image Kirkpatrick-Baez Microscope as the Front End to a High-Resolution Streak Camera for OMEGA

    International Nuclear Information System (INIS)

    Gotchev, O.V.; Hayes, L.J.; Jaanimagi, P.A.; Knauer, J.P.; Marshall, F.J.; Meyerhofer, D. D.

    2003-01-01

    A new, high-resolution x-ray microscope with a large grazing angle has been developed, characterized, and fielded at the Laboratory for Laser Energetics. It increases the sensitivity and spatial resolution in planar direct-drive hydrodynamic stability experiments, relevant to inertial confinement fusion (ICF) research. It has been designed to work as the optical front end of the PJX - a high-current, high-dynamic-range x-ray streak camera. Optical design optimization, results from numerical ray tracing, mirror-coating choice, and characterization have been described previously [O. V. Gotchev, et al., Rev. Sci. Instrum. 74, 2178 (2003)]. This work highlights the optics' unique mechanical design and flexibility and considers certain applications that benefit from it. Characterization of the microscope's resolution in terms of its modulation transfer function (MTF) over the field of view is shown. Recent results from hydrodynamic stability experiments, diagnosed with the optic and the PJX, are provided to confirm the microscope's advantages as a high-resolution, high-throughput x-ray optical front end for streaked imaging

  11. Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting

    Science.gov (United States)

    Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing

    2018-02-01

    Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Noise and other factors cause significant errors in reconstructions based on simple peak detection. To obtain a more precise 3-D reconstruction, a peak detection method based on Gaussian fitting over a trust region is proposed in this work. Gaussian modeling is performed on the returned wave of each time channel of each frame; the modeling result, which effectively reduces noise interference and possesses a unique peak, is taken as the new returned waveform, and its feature data are then extracted through peak detection. Experimental data from an aerial target were used to verify this method. This work shows that the Gaussian-fitting peak detection method reduces the extraction error of the feature data to less than 10%; using it to extract the feature data and reconstruct the target makes it possible to achieve a resolution of 30 cm in the depth direction and improves the 3-D imaging accuracy of the STIL.
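
As an illustration of the approach described above, here is a minimal sketch of Gaussian peak extraction on a synthetic single-channel return. It assumes SciPy's `curve_fit` (which uses trust-region least squares by default) as the fitter; the paper's exact model parameters and trust-region settings are not specified here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    """Gaussian model for a single-channel return waveform."""
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def fit_peak(t, waveform):
    """Fit a Gaussian to a noisy return and take its centre as the peak time.

    Compared with naive argmax peak detection, the fitted centre is far
    less sensitive to additive noise on individual samples.
    """
    a0 = waveform.max()
    mu0 = t[np.argmax(waveform)]       # initial guess from the raw peak
    sigma0 = (t[-1] - t[0]) / 10.0
    popt, _ = curve_fit(gaussian, t, waveform, p0=[a0, mu0, sigma0])
    return popt[1]                     # fitted peak position

# Synthetic test: a 5-ns-wide return centred at 100 ns, with noise added.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 400)      # time axis, ns
noisy = gaussian(t, 1.0, 100.0, 5.0) + 0.05 * rng.standard_normal(t.size)
peak = fit_peak(t, noisy)
```

On this synthetic trace the fitted centre is stable to a small fraction of the pulse width, whereas raw argmax jumps between noisy samples near the top of the pulse.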

  12. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    possess versatile and unique readout capabilities that have established their utility in scientific and especially radiation-field applications. A detector for neutron radiography based on a cooled CID camera offers capabilities such as the following: - Extended linear dynamic range up to 10^9 without blooming or streaking; - Arbitrary pixel selection and nondestructive readout make it possible to introduce a high degree of exposure control to low-light viewing of static scenes; - Reading multiple areas of interest of an image within a given frame at higher rates; - Wide spectral response (185 nm - 1100 nm); - CIDs tolerate high radiation environments, up to 3 Mrad integrated dose; - The contiguous pixel structure of CID arrays contributes to accurate imaging because there are virtually no opaque areas between pixels. (author)

  13. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera-tracking applications.

  14. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum–Liu–Tesche equation

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi’an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi’an 710049 (China); He, Jiai [School of Computer and Communication, Lanzhou University of Technology, Lanzhou, Gansu 730050 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi’an 710049 (China)

    2016-10-01

    The transient response strongly influences the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC and then derived the Baum–Liu–Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was used to fulfill the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current approximately 10 A, so protective measures should be taken to improve the electromagnetic compatibility.

  15. A numerical algorithm to evaluate the transient response for a synchronous scanning streak camera using a time-domain Baum–Liu–Tesche equation

    International Nuclear Information System (INIS)

    Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen

    2016-01-01

    The transient response strongly influences the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC and then derived the Baum–Liu–Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. The circuit parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was used to fulfill the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current approximately 10 A, so protective measures should be taken to improve the electromagnetic compatibility.
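
The frequency-domain/FFT route that these papers use as a reference can be sketched on a toy model. Assuming (hypothetically; the values and the single-RC model are not from the papers) that the deflection-plate load behaves as a first-order RC low-pass, the transient response to a ramp drive is obtained by multiplying the input spectrum by the transfer function and inverse-transforming:

```python
import numpy as np

# Toy first-order model of the deflection-plate load (assumed values):
# source resistance R driving plate capacitance C.
R = 50.0        # ohms
C = 10e-12      # farads  -> time constant RC = 0.5 ns

n = 4096
dt = 1e-11                      # 10 ps sampling step
t = np.arange(n) * dt

# Drive: a ramp voltage reaching 1 V in 1 ns, then held constant
v_in = np.clip(t / 1e-9, 0.0, 1.0)

# Frequency-domain route: V_out(f) = H(f) * V_in(f), then inverse FFT.
# The FFT treats the record as periodic, so it must be long enough for
# the wrap-around transient to die out before the region of interest.
f = np.fft.rfftfreq(n, dt)
H = 1.0 / (1.0 + 2j * np.pi * f * R * C)   # RC low-pass transfer function
v_out = np.fft.irfft(np.fft.rfft(v_in) * H, n)
```

A true multiconductor-cable BLT solution tracks waves on each line section; this fragment only illustrates the FFT verification route the papers compare against.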

  16. Streak tube development

    International Nuclear Information System (INIS)

    Hinrichs, C.K.; Estrella, R.M.

    1979-01-01

    A research program for the development of a high-speed, high-resolution streak image tube is described. This is one task in the development of a streak camera system with digital electronic readout, whose primary application is for diagnostics in underground nuclear testing. This program is concerned with the development of a high-resolution streak image tube compatible with x-ray input and electronic digital output. The tube must be capable of time resolution down to 100 psec and spatial resolution providing greater than 1000 resolution elements across the cathode (much greater than presently available). Another objective is to develop the capability to make design changes in tube configurations to meet different experimental requirements. A demountable prototype streak tube was constructed, mounted on an optical bench, and placed in a vacuum system. Initial measurements of the tube resolution with an undeflected image show a resolution of 32 line pairs per millimeter over a cathode diameter of one inch, which is consistent with the predictions of the computer simulations. With the initial set of unoptimized deflection plates, the resolution pattern appeared to remain unchanged for static deflections of ±1/2-inch, a total streak length of one inch, also consistent with the computer simulations. A passively mode-locked, frequency-doubled dye laser is being developed as an ultraviolet pulsed light source to measure dynamic tube resolution during streaking. A sweep circuit to provide the deflection voltage in the prototype tube has been designed and constructed and provides a relatively linear ramp voltage with ramp durations adjustable between 10 and 1000 nsec.

  17. Multislit streak photography for plasma dynamics studies

    International Nuclear Information System (INIS)

    Tou, T.Y.; Lee, S.

    1988-01-01

    A microscope slide with several transparent slits installed in a streak camera is used to record time-resolved two-dimensional information when a curved luminous plasma sheath traverses these slits. Applying this method to the plasma focus experiment, the axial run-down trajectory and the shapes of the plasma sheath at various moments can be obtained from a single streak photograph

  18. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them available for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  19. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
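
The proprietary Adaptive Sensitivity scheme mentioned above is not public, so the following is only a generic sketch of the same idea: normalising each pixel by its local neighbourhood brightness, which lifts dark regions and compresses glare. The box-mean via an integral image is my choice for efficiency, not anything taken from the camera's processor.

```python
import numpy as np

def adaptive_compress(image, kernel=31, eps=1e-3):
    """Normalise each pixel by its local mean brightness.

    Dark regions are lifted and glared regions pulled down, compressing
    a wide dynamic range into a displayable one.
    """
    img = image.astype(float)
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    # Box mean via a summed-area table (integral image)
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    local = (ii[kernel:kernel + h, kernel:kernel + w]
             - ii[:h, kernel:kernel + w]
             - ii[kernel:kernel + h, :w]
             + ii[:h, :w]) / (kernel * kernel)
    return img / (local + eps)

# Demo: a uniformly bright image maps to a nearly flat output
flat = adaptive_compress(np.full((8, 8), 5.0), kernel=3)
```

Real pipelines blend such a local term with global statistics and a sharpening stage, as the abstract describes; this fragment shows only the local-neighborhood component.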

  20. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone and a PC server. The platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas and so on. Testing results show that the platform can share ...

  1. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems, …). Air quality models generally rely on a limited number of monitoring stations, which do not capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in monitoring volcanic and industrial sulfur emissions), as it relies on spectral images taken at wavelengths where the molecule's absorption cross section is different. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.

  2. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Park, C. H.

    2002-01-01

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures are oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into two categories: conventional (c) and gamma camera based (CB) PET. CB PET, utilizing dual-head gamma cameras and commercially available FDG, is more readily available to many medical centers at low cost to patients. In fact there are more CB PET systems in operation than cPET in the USA. CB PET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CB PET was installed in late 1997 for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience with FDG CB PET in oncology.

  3. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

    This research proposes a model for intelligent traffic flow control by implementing camera-based surveillance and a feedback system. A series of cameras is set a minimum of three signals ahead of the target junction. The complete software system is developed to help integrate the multiple cameras on the road as feedback to ...

  4. Research on the underwater target imaging based on the streak tube laser lidar

    Science.gov (United States)

    Cui, Zihao; Tian, Zhaoshuo; Zhang, Yanchao; Bi, Zongjie; Yang, Gang; Gu, Erdan

    2018-03-01

    A high-frame-rate streak tube imaging lidar (STIL) for real-time 3D imaging of underwater targets is presented in this paper. The system uses a 532 nm pulsed laser as the light source; the maximum repetition rate is 120 Hz and the pulse width is 8 ns. The system is built on the LabVIEW platform: system control, synchronous image acquisition, and 3D data processing and display are realized through a PC. A 3D imaging experiment on underwater targets was carried out in a flume with an attenuation coefficient of 0.2, and images of targets at different depths and of different materials were obtained; the imaging frame rate is 100 Hz and the maximum detection depth is 31 m. For an underwater target at a distance of 22 m, real-time acquisition of high-resolution 3D images is realized with a range resolution of 1 cm and a spatial resolution of 0.3 cm; the spatial relationship of the targets can be clearly identified from the image. The experimental results show that STIL has good application prospects in underwater terrain detection, underwater search and rescue, and other fields.
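
The range figures quoted above follow directly from the pulse round-trip time. A minimal sketch of the conversion, assuming a refractive index of about 1.33 for water (a standard value, not stated in the abstract):

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # refractive index of water (assumed value)

def range_from_round_trip(t_round_trip: float) -> float:
    """Target range from a laser pulse's round-trip time in water.

    The pulse travels at c/n in water and covers the range twice,
    hence r = c * t / (2 * n).
    """
    return C_VACUUM * t_round_trip / (2.0 * N_WATER)

# The quoted 1 cm range resolution corresponds to resolving roughly
# 90 ps of round-trip time on the streak record.
dt_1cm = 2.0 * N_WATER * 0.01 / C_VACUUM
```

Under this assumption, the 22 m target corresponds to a round-trip time near 200 ns, comfortably within one streak sweep at the quoted repetition rate.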

  5. Wind Streaks on Earth; Exploration and Interpretation

    Science.gov (United States)

    Cohen-Zada, Aviv Lee; Blumberg, Dan G.; Maman, Shimrit

    2015-04-01

    Wind streaks, one of the most common aeolian features on planetary surfaces, are observable on the surface of the planets Earth, Mars and Venus. Due to their reflectance properties, wind streaks are distinguishable from their surroundings, and they have thus been widely studied by remote sensing since the early 1970s, particularly on Mars. In imagery, these streaks are interpreted as the presence - or lack thereof - of small loose particles on the surface deposited or eroded by wind. The existence of wind streaks serves as evidence for past or present active aeolian processes. Therefore, wind streaks are thought to represent integrative climate processes. As opposed to the comprehensive and global studies of wind streaks on Mars and Venus, wind streaks on Earth are understudied and poorly investigated, both geomorphologically and by remote sensing. The aim of this study is, thus, to fill the knowledge gap about the wind streaks on Earth by: generating a global map of Earth wind streaks from modern high-resolution remotely sensed imagery; incorporating the streaks in a geographic information system (GIS); and overlaying the GIS layers with boundary layer wind data from general circulation models (GCMs) and data from the ECMWF Reanalysis Interim project. The study defines wind streaks (and thereby distinguishes them from other aeolian features) based not only on their appearance in imagery but more importantly on their surface appearance. This effort is complemented by a focused field investigation to study wind streaks on the ground and from a variety of remotely sensed images (both optical and radar). In this way, we provide a better definition of the physical and geomorphic characteristics of wind streaks and acquire a deeper knowledge of terrestrial wind streaks as a means to better understand global and planetary climate and climate change. In a preliminary study, we detected and mapped over 2,900 wind streaks in the desert regions of Earth distributed in

  6. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
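The master-slave virtualization the abstract describes — one RTS2-style "virtual camera" device fanning commands out to many real CCD controllers — can be sketched as below. This is an illustrative sketch only; the class and method names are hypothetical and do not reflect the actual RTS2 API.

```python
# Hypothetical sketch of a master-slave camera virtualization layer:
# a single virtual camera broadcasts commands to all real CCD controllers
# and collects per-device status. Names are illustrative, not RTS2's.

class RealCamera:
    """Stand-in for one of the 32 LAMOST CCD controllers."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.exposures = 0

    def expose(self, seconds):
        self.exposures += 1
        return f"cam{self.cam_id}: exposed {seconds}s"

class VirtualCamera:
    """Master device: presents one camera interface, drives all slaves."""
    def __init__(self, slaves):
        self.slaves = slaves

    def expose(self, seconds):
        # Broadcast the exposure command; collect per-slave status lines.
        return [s.expose(seconds) for s in self.slaves]

cams = [RealCamera(i) for i in range(32)]
virtual = VirtualCamera(cams)
results = virtual.expose(1.5)   # one status line per real camera
```

The point of the design is that the observatory control system only ever talks to one device component, while failures and retries per physical CCD can be handled inside the virtualization layer.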

  7. External Mask Based Depth and Light Field Camera

    Science.gov (United States)

    2013-12-08

    External mask based depth and light field camera. Dikpal Reddy, NVIDIA Research, Santa Clara, CA (dikpalr@nvidia.com); Jiamin Bai, University of California...passive depth acquisition technology is illustrated by the emergence of light field camera companies like Lytro [1], Raytrix [2] and Pelican Imaging

  8. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations of the computing technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to render realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurements into the virtually created scenes. In other words, our approach provides a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensor. The operational constraints, performance metrics, computation resources needed, etc., associated with this newly developed light field camera technique are presented in detail.

  9. Picosecond x-ray streak camera studies

    International Nuclear Information System (INIS)

    Kasyanov, Yu.S.; Malyutin, A.A.; Richardson, M.C.; Chevokin, V.K.

    1975-01-01

    Some initial results of direct measurement of picosecond x-ray emission from laser-produced plasmas are presented. A PIM-UMI 93 image converter tube, incorporating an x-ray sensitive photocathode, linear deflection, and three stages of image amplification, was used to analyse the x-ray radiation emanating from plasmas produced from solid Ti targets by single high-intensity picosecond laser pulses. From such plasmas, the x-ray emission typically persisted for times of 60 psec. However, it is shown that this detection system should be capable of resolving x-ray phenomena of much shorter duration. (author)

  10. Whole body scan system based on γ camera

    International Nuclear Information System (INIS)

    Ma Tianyu; Jin Yongjie

    2001-01-01

    Most existing domestic γ cameras cannot perform the whole-body scan protocol, which is of important use in the clinic. The authors designed a whole-body scan system made up of a scan bed, an ISA interface card controlling the scan bed, and data acquisition software based on a data acquisition and image processing system for γ cameras. Images were obtained in clinical experiments, and the authors consider that they meet the needs of clinical diagnosis. Applying this system to γ cameras can provide whole-body scan capability at low cost.

  11. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method that uses field-of-view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also suggested that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, where using the bottom centre of the target's bounding box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field-of-view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works even when not all four field-of-view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
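The geometric core of such a tracker — estimating a plane-induced homography from corresponding ground-plane point pairs and using it to transfer foot locations between camera views — can be sketched with the standard Direct Linear Transformation (DLT). This is a minimal sketch of the geometry only, assuming exact correspondences; the paper's full tracker additionally handles feet-feature extraction, degeneracy checks, and RANSAC-style robustness.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4
    corresponding point pairs, via the standard DLT algorithm."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular vector) holds H's entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer(H, pt):
    """Map a ground-plane point from one camera view to the other."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic check: foot points related by a known homography are recovered.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(10, 20), (200, 30), (40, 180), (220, 210), (120, 100)]
dst = [tuple(transfer(H_true, p)) for p in src]
H_est = homography_dlt(src, dst)
```

With the homography in hand, a detection in one camera can be transferred to the other camera's view and matched to the nearest track there, which is what makes the consistent-labelling step possible.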

  12. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple-camera setups without a common field of view.
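One plausible final step of such a global calibration — once a camera has reconstructed its sphere centers and the auxiliary camera has reconstructed the same centers in the common frame — is aligning the two 3D point sets to recover that camera's pose. A minimal sketch using the Kabsch/Procrustes rigid alignment is given below; the function name and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch/Procrustes: find rotation R and translation t with Q ≈ R @ P + t,
    from corresponding 3D points (rows of P and Q)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# At least three non-collinear sphere centers are needed, as the abstract notes.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -2.0, 3.0])
Q = P @ R_true.T + t_true                      # centers seen by the auxiliary camera
R_est, t_est = rigid_transform(P, Q)
```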

  13. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    Full Text Available UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally the primary focus. Within this paper the resolving power of ten different camera/lens installations has been investigated. Selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  14. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding horizon controller in which we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting each camera's kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but also use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
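The Kalman-filter coupling described above — predicting target motion for the controller and supplying uncertainty estimates that steer cameras toward poorly observed targets — rests on the textbook predict/update cycle, sketched below for a 1-D constant-velocity target. The matrix names are the standard ones; the scenario and noise values are illustrative, not from the paper.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; z: new measurement;
    F, H: transition and observation models; Q, R: process/measurement noise."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity target: state is [position, velocity].
dt = 1.0
F = np.array([[1, dt], [0, 1]])
H = np.array([[1, 0]])                  # we observe position only
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:     # noisy positions of a unit-speed target
    x, P = kf_step(x, P, np.array([z]), F, H, Q, R)
```

In the cooperative-control setting, the trace of P for each target is the kind of uncertainty measure the MILP objective can penalize, pushing cameras to revisit targets whose estimates have grown stale.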

  15. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement bundle camera calibration, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  16. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion, and to describe our three main concepts space,...

  17. Lights, Camera, Project-Based Learning!

    Science.gov (United States)

    Cox, Dannon G.; Meaney, Karen S.

    2018-01-01

    A physical education instructor incorporates a teaching method known as project-based learning (PBL) in his physical education curriculum. Utilizing video-production equipment to imitate the production of a television show, sixth-grade students attending a charter school invited college students to share their stories about physical activity and…

  18. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  19. Streaking tremor in Cascadia

    Science.gov (United States)

    Vidale, J. E.; Ghosh, A.; Sweet, J. R.; Creager, K. C.; Wech, A.; Houston, H.

    2009-12-01

    Details of tremor deep in subduction zones are damnably difficult to glimpse because of the lack of crisp initial arrivals, low waveform coherence, uncertain focal mechanisms, and the probability of simultaneous activity across extended regions. Yet such details hold out the best hope to illuminate the unknown mechanisms underlying episodic tremor and slip. Attacking this problem with brute force, we pointed a small, very dense seismic array down at the migration path of a good-sized episodic tremor and slip (ETS) event. In detail, it was an 84-element, 1300-m-aperture temporary seismic array in northern Washington, and the migration path of the May 2008 ETS event was 30-40 km directly underneath. Our beamforming technique tracked the time, incident angle, and azimuth of tremor radiation in unprecedented detail. We located the tremor by assuming it occurs on the subduction interface, estimated the relative tremor moment released by each detected tremor window, and mapped it on the interface [Ghosh et al., GRL, 2009]. Fortunately for our ability to image it, the tremor generally appears to emanate from small regions, and we were surprised by how steadily the regions migrated with time. For the first time in Cascadia, we found convergence-parallel transient streaks of tremor migrating at velocities of several tens of km/hr, with movement in both up- and down-dip directions. Similar patterns have been seen in Japan [Shelly, G3, 2007]. This is in contrast to the long-term along-strike marching of tremor at 10 km/day. These streaks tend to propagate steadily and often repeat the same track on the interface multiple times. They light up persistent moment patches on the interface by a combination of increased amplitude and longer residence time within the patches. The up- and down-dip migration dominates the 2 days of tremor most clearly imaged by our array. The tendency of the streaks to fill in bands is the subject of the presentation of Ghosh et al. here. The physical

  20. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the overlap between the image areas of the cameras. Accurate detection of common objects is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed by the nonlinear regression method. The method can indicate the contribution of an object's color to the calibration, and this function drives automatic selection notification for the user. Experimental results show that the accuracy of the calibration improves gradually. This method can withstand practical use for multi-camera color calibration provided enough samples are obtained.
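The regression step — mapping one camera's colours onto a reference camera's using colours sampled from common objects — can be sketched as a per-channel polynomial fit. This is a simplified stand-in for the paper's adaptive nonlinear regression: the function names, the degree-2 choice, and the synthetic distortion are all assumptions for illustration.

```python
import numpy as np

def fit_color_map(src_rgb, ref_rgb, degree=2):
    """Fit, per channel, a polynomial mapping one camera's colours onto a
    reference camera's, from matched RGB samples of common objects."""
    src = np.asarray(src_rgb, float)
    ref = np.asarray(ref_rgb, float)
    coeffs = []
    for ch in range(3):
        # Design matrix of powers [1, v, v^2, ...] for this channel.
        A = np.vander(src[:, ch], degree + 1, increasing=True)
        c, *_ = np.linalg.lstsq(A, ref[:, ch], rcond=None)
        coeffs.append(c)
    return np.array(coeffs)

def apply_color_map(coeffs, rgb):
    """Apply the fitted per-channel polynomials to new pixels."""
    rgb = np.asarray(rgb, float)
    out = np.empty_like(rgb)
    for ch in range(3):
        out[:, ch] = np.vander(rgb[:, ch], coeffs.shape[1],
                               increasing=True) @ coeffs[ch]
    return out

# Synthetic check: a quadratic per-channel distortion is recovered exactly.
src = np.array([[10, 50, 200], [60, 120, 30],
                [90, 200, 140], [180, 20, 90]], float)
ref = 0.002 * src**2 + 0.8 * src + 5.0   # reference camera's response
coeffs = fit_color_map(src, ref, degree=2)
corrected = apply_color_map(coeffs, src)
```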

  1. Streaked, x-ray-transmission-grating spectrometer

    International Nuclear Information System (INIS)

    Ceglio, N.M.; Roth, M.; Hawryluk, A.M.

    1981-08-01

    A free-standing x-ray transmission grating has been coupled with a soft x-ray streak camera to produce a time-resolved x-ray spectrometer. The instrument has a temporal resolution of approx. 20 psec, is capable of covering a broad spectral range, 2 to 120 Å, has high sensitivity, and is simple to use, requiring no complex alignment procedure. In recent laser fusion experiments the spectrometer successfully recorded time-resolved spectra over the range 10 to 120 Å with a spectral resolving power, λ/Δλ, of 4 to 50, limited primarily by source size and collimation effects.

  2. Triton's streaks as windblown dust

    Science.gov (United States)

    Sagan, Carl; Chyba, Christopher

    1990-01-01

    Explanations for the surface streaks observed by Voyager 2 on Triton's southern hemisphere are discussed. It is shown that, despite Triton's tenuous atmosphere, low-cohesion dust grains with diameters of about 5 microns or less may be carried into suspension by aeolian surface shear stress, given expected geostrophic wind speeds of about 10 m/s. For geyser-like erupting dust plumes, it is shown that dust-settling time scales and expected wind velocities can produce streaks with length scales in good agreement with those observed. Thus, both geyser-like eruptions and direct lifting by surface winds appear to be viable mechanisms for the origin of the streaks.

  3. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  4. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting real-world lighting and color information. Current methods that synthesize a high dynamic range image from sequences of differently exposed images cannot adapt to dynamic scenes: they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences are captured with the camera array, the deviation between images is obtained with a derivative optical flow method based on color gradients, and the images are aligned. Then, a high dynamic range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
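The fusion step — merging aligned, differently exposed frames into one radiance estimate via a weighting function — can be sketched as below. This is a generic weighted-fusion sketch assuming a linear camera response and a simple hat-shaped weight; the paper's weighting additionally folds in the inverse camera response function and the inter-image deviation from optical flow.

```python
import numpy as np

def fuse_hdr(images, exposures):
    """Merge aligned, differently exposed frames into one radiance map.
    A hat-shaped weight trusts mid-range pixels most and down-weights
    under- and over-exposed ones; a linear response is assumed."""
    images = [np.asarray(im, float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposures):
        w = 1.0 - np.abs(im / 255.0 - 0.5) * 2.0   # 1 at mid-gray, 0 at 0/255
        w = np.clip(w, 1e-3, None)                  # avoid a zero total weight
        num += w * (im / t)                          # exposure-normalized radiance
        den += w
    return num / den

# Two exposures of the same static scene; the longer one is twice as bright,
# saturating at 255 (which the weighting then largely ignores).
short = np.array([[10.0, 120.0], [200.0, 40.0]])
long_ = np.clip(short * 2, 0, 255)
radiance = fuse_hdr([short, long_], exposures=[1.0, 2.0])
```

Each unsaturated pixel's radiance agrees between the two exposures after normalizing by exposure time, so the fused value equals the short-exposure estimate; the clipped pixel is recovered almost entirely from the short exposure because its weight in the long exposure collapses to the floor value.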

  5. Trained neurons-based motion detection in optical camera communications

    Science.gov (United States)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered a third functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted on an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. Motion is detected by observing the centroid of the user's finger movement through the OCC link via the camera. Unlike conventional trained-neuron approaches, the proposed TNMD is trained not with the motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at transmission distances of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. Combined with the proposed TNMD, OCC can be considered an efficient indoor system that provides illumination, communication, and motion detection in a convenient smart home environment.

  6. Upgrading of analogue gamma cameras with PC based computer system

    International Nuclear Information System (INIS)

    Fidler, V.; Prepadnik, M.

    2002-01-01

    Full text: Dedicated nuclear medicine computers for acquisition and processing of images from analogue gamma cameras in developing countries are in many cases faulty and technologically obsolete. The aim of the International Atomic Energy Agency (IAEA) upgrading project was to support the development of a PC-based computer system that would cost $5,000 in total. Several research institutions from different countries (China, Cuba, India and Slovenia) were financially supported in this development. The basic requirements for the system were: one acquisition card on an ISA bus, image resolution up to 256x256, SVGA graphics, low count loss at high count rates, standard acquisition and clinical protocols incorporated in PIP (Portable Image Processing), on-line energy and uniformity correction, graphic printing and networking. The most functionally stable acquisition system, tested at several international workshops and university clinics, was the Slovenian one, with a complete set of acquisition and clinical protocols, transfer of scintigraphic data from the acquisition card to the PC through PORT, count loss of less than 1% at a count rate of 120 kc/s, improvement of the integral uniformity index by a factor of 3-5, and reporting, networking and archiving solutions for simple MS networks or server-oriented network systems (NT server, etc.). More than 300 gamma cameras in 52 countries were digitized and put into routine use. The project of upgrading the analogue gamma cameras greatly promoted nuclear medicine in the developing countries by replacing the old computer systems, improving the technological knowledge of end users through workshops and training courses, and lowering the maintenance costs of the departments. (author)

  7. A new streaked soft x-ray imager for the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Benstead, J., E-mail: james.benstead@awe.co.uk; Morton, J.; Guymer, T. M.; Garbett, W. J.; Rubery, M. S.; Skidmore, J. W. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Moore, A. S.; Ahmed, M. F.; Soufli, R.; Pardini, T.; Hibbard, R. L.; Bailey, C. G.; Bell, P. M.; Hau-Riege, S. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Bedzyk, M.; Shoup, M. J.; Reagan, S.; Agliata, T.; Jungquist, R. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); Schmidt, D. W. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); and others

    2016-05-15

    A new streaked soft x-ray imager has been designed for use on high energy-density (HED) physics experiments at the National Ignition Facility based at the Lawrence Livermore National Laboratory. This streaked imager uses a slit aperture, a single shallow-angle reflection from a nickel mirror, and soft x-ray filtering to, when coupled to one of the NIF’s x-ray streak cameras, record a 4× magnification, one-dimensional image of an x-ray source with a spatial resolution of less than 90 μm. The energy band pass produced depends upon the filter material used; for the first qualification shots, vanadium and silver-on-titanium filters were used to gate on photon energy ranges of approximately 300–510 eV and 200–400 eV, respectively. A two-channel version of the snout is available for x-ray sources up to 1 mm, and a single-channel version is available for larger sources up to 3 mm. Both the one- and two-channel variants have been qualified on quartz wire and HED physics target shots.

  8. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure based on a semi-automatic procedure.

  9. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units. It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher level systems, monitoring and making decisions in real time, it must accomplish a set of requirements, such as: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  10. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras, and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.

  11. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    Science.gov (United States)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it can solve for the interior and exterior orientation elements of the camera and thereby achieve the joint calibration of the 3D laser scanner and digital camera. Experiments prove that this method is reliable.
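The linear core of the DLT referred to above recovers a 3x4 projection matrix from 3D-2D correspondences (here, scanned target points and their image positions). The sketch below shows only that linear step on synthetic data; the paper's method additionally models lens distortion and refines the solution iteratively, and the numbers used here are illustrative.

```python
import numpy as np

def dlt_projection(points3d, points2d):
    """Classic DLT: recover the 3x4 projection matrix P with x ~ P @ X
    from >= 6 non-coplanar 3D-2D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The null vector of A (smallest singular vector) holds P's entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    x = P @ np.append(np.asarray(X, float), 1.0)
    return x[:2] / x[2]

# Synthetic camera and non-planar target points (illustrative values).
P_true = np.array([[800, 0, 320, 10.0],
                   [0, 800, 240, -5.0],
                   [0, 0, 1, 2.0]])
pts3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 0.5), (0.3, 0.7, 1.2)]
pts2d = [project(P_true, X) for X in pts3d]
P_est = dlt_projection(pts3d, pts2d)
P_est /= P_est[2, 3] / P_true[2, 3]   # fix the overall scale for comparison
```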

  12. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.

  13. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines the prism and single camera and puts forward a method of stereo imaging with low cost. First of all, according to the principle of geometrical optics, we can deduce the relationship between the prism single-camera system and dual-camera system, and according to the principle of binocular vision we can deduce the relationship between binoculars and dual camera. Thus we can establish the relationship between the prism single-camera system and binoculars and get the positional relation of prism, camera, and object with the best effect of stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we can realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various observation manners of eyes. The stereo imaging system, which is designed by the method proposed by this paper, can restore the 3-D shape of the object being photographed factually.

  14. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Full Text Available Visual coverage is one of the most important quality indexes for describing the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage enhancement, planning, etc. Precision and efficiency critically influence applications, especially those involving several cameras. This paper proposes a new method for efficiently estimating camera coverage. First, the geographic area covered by the camera and its minimum bounding rectangle (MBR), ignoring obstacles, are computed from the camera parameters. Second, the MBR is divided into grids of an initial size. The status of the four corners of each grid is evaluated by a line-of-sight (LOS) algorithm: if the camera, with obstacles considered, covers a corner, its status is 1, otherwise 0. The status of a grid can thus be represented by a code combining the four 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, recursively, until the sub-grids reach a specified maximum level or their codes become homogeneous. Finally, total camera coverage is estimated from the size and status of all grids. Experimental results illustrate that the proposed method's accuracy approaches that of dividing the whole coverage area into the smallest grids at the maximum level, while its efficiency remains close to that of using only the initial grids; it thus balances efficiency and accuracy. The initial grid size and maximum level are the two critical parameters of the proposed method and can be determined by weighing efficiency against accuracy.
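
    The multistage subdivision can be sketched as a recursive function. Here `covered(x, y)` stands in for the record's line-of-sight test, and a mixed cell at the maximum level contributes area in proportion to its covered corners; that weighting is our assumption, consistent with the corner-code scheme.

```python
def coverage_area(x0, y0, x1, y1, covered, level=0, max_level=8):
    """Estimate the covered area of a rectangle by multistage grid subdivision.
    A cell whose four corner codes are homogeneous (all covered or all free)
    is not subdivided further; mixed cells recurse down to max_level."""
    corners = [covered(x0, y0), covered(x1, y0), covered(x0, y1), covered(x1, y1)]
    area = (x1 - x0) * (y1 - y0)
    if all(corners):
        return area
    if not any(corners):
        return 0.0
    if level >= max_level:
        return area * sum(corners) / 4.0   # mixed cell at the finest level
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return sum(coverage_area(a, b, c, d, covered, level + 1, max_level)
               for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                  (x0, ym, xm, y1), (xm, ym, x1, y1)])
```

    Only cells straddling the coverage boundary are subdivided, which is exactly why the method's cost stays close to that of the initial grid while its accuracy approaches the finest grid.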

  15. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of spatial objects. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the calibration process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
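
    The radial and decentering (tangential) distortion terms mentioned above follow the Brown-Conrady model that OpenCV's calibration also uses. A minimal numpy sketch of applying it to normalized image points, with the usual k1, k2, p1, p2 coefficient names:

```python
import numpy as np

def distort(pts, k1, k2, p1, p2):
    """Apply Brown-Conrady radial (k1, k2) and decentering/tangential (p1, p2)
    distortion to normalized image points pts (n, 2)."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)
```

    Calibration estimates these coefficients (together with the intrinsics) by minimizing the reprojection error of the detected checkerboard corners.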

  16. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  17. Camera calibration based on the back projection process

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
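
    The back projection step can be sketched as intersecting a pixel's viewing ray with the calibration plane (taken here as Z = 0 in world coordinates). This minimal numpy illustration omits distortion removal and uses our own function name:

```python
import numpy as np

def backproject_to_plane(K, R, t, uv):
    """Back-project pixel uv through a camera with intrinsics K and pose (R, t)
    (world-to-camera) onto the world plane Z = 0."""
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # viewing ray, camera frame
    d_w = R.T @ d_cam               # ray direction in the world frame
    o_w = -R.T @ t                  # camera centre in the world frame
    s = -o_w[2] / d_w[2]            # ray parameter where it meets Z = 0
    return o_w + s * d_w
```

    Comparing these back-projected points with the ideal checkerboard coordinates gives the 3D residual that the refinement stage minimizes.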

  18. HEAVY ION FUSION SCIENCE VIRTUAL NATIONAL LABORATORY 2nd QUARTER 2010 MILESTONE REPORT. Develop the theory connecting pyrometer and streak camera spectrometer data to the material properties of beam heated targets and compare to the data

    International Nuclear Information System (INIS)

    More, R.M.; Barnard, J.J.; Bieniosek, F.M.; Henestroza, E.; Lidia, S.M.; Ni, P.A.

    2010-01-01

    This milestone has been accomplished. We have extended the theory that connects pyrometer and streak spectrometer data to material temperature on several fronts and have compared theory to NDCX-I experiments. For the case of NDCX-I, the data suggests that as the metallic foils are heated they break into droplets (cf. HIFS VNL Milestone Report FY 2009 Q4). Evaporation of the metallic surface will occur, but optical emission should be directly observable from the solid or liquid surface of the foil or from droplets. However, the emissivity of hot material may be changed from the cold material and interference effects will alter the spectrum emitted from small droplets. These effects have been incorporated into a theory of emission from droplets. We have measured emission using streaked spectrometry and together with theory of emission from heated droplets have inferred the temperature of a gold foil heated by the NDCX-I experiment. The intensity measured by the spectrometer is proportional to the emissivity times the blackbody intensity at the temperature of the foil or droplets. Traditionally, a functional form for the emissivity as a function of wavelength (such as a quadratic) is assumed and the three unknown emissivity parameters (for the case of a quadratic) and the temperature are obtained by minimizing the deviations from the fit. In the case of the NDCX-I experiment, two minima were obtained: at 7200 K and 2400 K. The best fit was at 7200 K. However, when the actual measured emissivity of gold was used and when the theoretical corrections for droplet interference effects were made for emission from droplets having radii in the range 0.2 to 2.0 microns, the corrected emissivity was consistent with the 2400 K value, whereas the fit emissivity at 7200 K shows no similarity to the corrected emissivity curves. Further, an estimate of the temperature obtained from beam heating is consistent with the lower value. 
This exercise proved to be a warning to be skeptical
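
    The fitting procedure described above (an assumed polynomial emissivity, with the temperature chosen to minimize the fit residual) can be sketched as follows. The data here are synthetic, not the NDCX-I measurements, and the function names are ours; for each trial temperature the emissivity coefficients are obtained by linear least squares against the Planck curve.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def fit_temperature(lam, I, T_grid):
    """For each trial T, fit a quadratic emissivity eps = a + b*x + c*x^2
    (x = lam in micrometres, for numerical conditioning) to I = eps * B(lam, T)
    by linear least squares; return the T with the smallest residual."""
    x = lam * 1e6
    best_T, best_res = None, np.inf
    for T in T_grid:
        B = planck(lam, T)
        A = np.stack([B, B * x, B * x**2], axis=1)
        coef, *_ = np.linalg.lstsq(A, I, rcond=None)
        res = np.sum((A @ coef - I)**2)
        if res < best_res:
            best_res, best_T = res, T
    return best_T
```

    Because the emissivity terms can partially mimic a temperature change, such fits can exhibit multiple local minima, exactly the 2400 K / 7200 K ambiguity reported above.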

  19. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera...... navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera.......033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher...

  20. Streaking into middle school science: The Dell Streak pilot project

    Science.gov (United States)

    Austin, Susan Eudy

    A case study is conducted implementing the Dell Streak seven-inch Android device into the eighth grade science classes of one teacher in a rural middle school in the Piedmont region of North Carolina. The purpose of the study is to determine whether the use of the Dell Streaks would increase student achievement on standardized subject testing, whether the Streak could be used as an effective instructional tool, and whether it could be considered an effective instructional resource for reviewing and preparing for the science assessments. A mixed methods research design was used to analyze both quantitative and qualitative results to determine whether, with the Dell Streaks in use: 1. instructional strategies would change, 2. the device would be an effective instructional tool, and 3. a comparison of the students' test scores and benchmark assessment scores would show a statistically significant difference. Through the use of an ANOVA it was determined that a statistically significant difference had occurred, and a post hoc analysis was conducted to identify where the difference occurred. Finally, a t-test determined that there was no statistically significant difference between the mean End-of-Grade test scores and the four quarterly benchmark scores of the control and experimental groups. Qualitative research methods were used to determine whether the Streaks were an effective instructional tool. Classroom observations identified that the teacher's teaching style changed and new instructional strategies were implemented throughout the pilot project. Students completed a questionnaire three times during the pilot project; the results revealed what the students liked about using the devices and the challenges they were facing. The teacher completed a reflective questionnaire throughout the pilot project and offered valuable reflections on the use of the devices in an educational setting. The reflection data supporting the case study was drawn

  1. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  2. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and with the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  3. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time, OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB and without manual intervention, and that it can be widely used in various computer vision systems.

  4. PC based simulation of gamma camera for training of operating and maintenance staff

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2000-01-01

    The gamma camera, a sophisticated imaging system, is used for functional assessment of biological subsystems/organs in nuclear medicine. A radioactive tracer attached to a native substance is injected into the patient, and the distribution of radioactivity in the patient is imaged by the gamma camera. This report describes a PC based package for simulating gamma cameras and the effect of malfunctioning subsystems on images of different phantoms

  5. Spectral colors capture and reproduction based on digital camera

    Science.gov (United States)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space, and provides a basis for further studies of spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were used. The spectrum was obtained through a grating spectroscopy system. A photograph of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, and the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location: one calculated from the grating equation and one measured with a spectrophotometer. A polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76, which satisfies the application needs of the spectral colors on digital devices such as displays and in transmission.
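
    The polynomial-fitting characterization can be sketched as a least-squares fit over a polynomial RGB basis. The second-order basis and the synthetic training data below are our assumptions for illustration; the record does not specify the polynomial order actually used.

```python
import numpy as np

def poly_basis(rgb):
    """Second-order polynomial expansion of RGB values (n, 3) -> (n, 10)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(R), R, G, B,
                     R * G, R * B, G * B, R**2, G**2, B**2], axis=1)

def fit_characterization(rgb, xyz):
    """Least-squares fit of the (10, 3) matrix mapping the basis to CIEXYZ."""
    M, *_ = np.linalg.lstsq(poly_basis(rgb), xyz, rcond=None)
    return M

def apply_characterization(M, rgb):
    """Map camera RGB values to estimated CIEXYZ with a fitted matrix M."""
    return poly_basis(rgb) @ M
```

    With training patches whose RGB and measured XYZ values are known, the fitted matrix then converts any captured spectral color to device-independent CIEXYZ.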

  6. NEMA NU-1 2007 based and independent quality control software for gamma cameras and SPECT

    International Nuclear Information System (INIS)

    Vickery, A; Joergensen, T; De Nijs, R

    2011-01-01

    Thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software, which is indeed a useful tool for following the day-to-day performance of a single camera. However, for objective performance comparison of different gamma cameras and a deeper understanding of the calculated numbers, camera-specific QC software without access to the source code is best avoided: calculations and definitions may differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data handling software for processing acquired QC data, providing the user with instructive images and text files with the results.

  7. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  8. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  9. Streak detection and analysis pipeline for optical images

    Science.gov (United States)

    Virtanen, J.; Granvik, M.; Torppa, J.; Muinonen, K.; Poikonen, J.; Lehti, J.; Säntti, T.; Komulainen, T.; Flohrer, T.

    2014-07-01

    We describe a novel data processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data to support the development and validation of population models, and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. We focus on the low signal-to-noise (SNR) detection of objects with high angular velocities, resulting in long and faint object trails, or streaks, in the optical images. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparably slowly, and, particularly for satellites, within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a 'track-before-detect' problem, resulting in streaks of arbitrary lengths. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, algorithms are not readily available yet. In the ESA-funded StreakDet (Streak detection and astrometric reduction) project, we develop and evaluate an automated processing pipeline applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. 
The algorithmic
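
    In the simplest high-SNR case, a streak's centroid, orientation and extent can be estimated by principal component analysis of the bright pixels. This toy sketch is not the StreakDet pipeline (which must work at low SNR on arbitrary streak lengths); it only illustrates the geometry, and all names are ours.

```python
import numpy as np

def streak_orientation(img, thresh):
    """Estimate a streak's centroid, orientation (degrees, mod 180) and length
    from the bright pixels of an image via principal component analysis."""
    ys, xs = np.nonzero(img > thresh)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    w, v = np.linalg.eigh(cov)
    direction = v[:, np.argmax(w)]                     # major axis of the pixel cloud
    angle = np.degrees(np.arctan2(direction[1], direction[0])) % 180.0
    proj = pts @ direction                             # positions along the axis
    return centroid, angle, proj.max() - proj.min()
```

    The streak endpoints (centroid plus/minus half the length along the major axis) are what feed the astrometric reduction.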

  10. Upgrading of analogue cameras using modern PC based computer

    International Nuclear Information System (INIS)

    Pardom, M.F.; Matos, L.

    2002-01-01

    Aim: The use of computers with analogue cameras enables them to perform tasks involving time-activity parameters. The INFORMENU system converts a modern PC into a dedicated nuclear medicine computer system at a total cost affordable to countries with emerging economies, and is easily adaptable to all existing cameras. Materials and Methods: In collaboration with nuclear medicine physicians, an application including hardware and software was developed by a private firm. The system runs smoothly on Windows 98 and its operation is very easy. The main features are comparable to those of commercial computer systems, such as image resolution up to 1024 x 1024, low count loss at high count rates, uniformity correction, integrated graphical and text reporting, and user-defined clinical protocols. Results: The system is used in more than 20 private and public institutions. Count loss is less than 1% in all routine work, uniformity correction is improved 3-5 times, and the utility of the analogue cameras is increased. Conclusion: The INFORMENU system improves the utility of analogue cameras by permitting the inclusion of dynamic clinical protocols and quantification, helping the development of nuclear medicine practice. Operation and maintenance costs were lowered, and the end users improve their knowledge of modern nuclear medicine

  11. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION (extended)

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust...

  12. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    DEFF Research Database (Denmark)

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  13. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in the three dimensional (3D computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of glass checkerboard, extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of glass checkerboard, and the direct use of projection model will produce a calibration error. A multi-camera calibration method using refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
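
    The refraction effect that the calibration must model can be sketched with vector Snell refraction through a parallel-sided glass plate. A minimal numpy illustration; the plate thickness and refractive index below are arbitrary test values, not those of the paper's checkerboard.

```python
import math
import numpy as np

def refract(d, n_vec, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface with
    unit normal n_vec (pointing against the incoming ray), indices n1 -> n2."""
    cos_i = -np.dot(d, n_vec)
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - math.sqrt(1.0 - sin2_t)) * n_vec

def slab_shift(theta, thickness, n_glass):
    """Lateral displacement of a ray crossing a parallel-sided glass plate at
    incidence angle theta; the exit direction is parallel to the entry one."""
    d = np.array([math.sin(theta), 0.0, math.cos(theta)])
    n_vec = np.array([0.0, 0.0, -1.0])                 # front-face normal, toward the ray
    d_in = refract(d, n_vec, 1.0, n_glass)             # enter the glass
    hit = d_in * (thickness / d_in[2])                 # crossing point on the back face
    d_out = refract(d_in, n_vec, n_glass, 1.0)         # exit back into air
    return hit[0] - math.tan(theta) * thickness, d_out # shift vs. the undeviated ray
```

    This parallel offset is exactly the error the direct projection model would commit for cameras viewing the grid corners through the glass, and what the refractive projection model corrects.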

  14. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented: a single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface, while the remaining cameras are set up in Slave mode and interfaced directly with the Master camera control module. This enables the Slave cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. This work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
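
    The supply-voltage feedback loop can be illustrated with a toy proportional controller. The voltage-to-period law below is invented for the sketch and is not Awaiba's actual sensor characteristic; in the real system the controller runs in FPGA logic, not Python.

```python
class SelfTimedCamera:
    """Toy model of a self-timed sensor: the line period (us) falls as the
    supply voltage rises. The 100/V law is purely illustrative."""
    def __init__(self, volts):
        self.volts = volts
    def line_period(self):
        return 100.0 / self.volts

def synchronize(cam, target_period, gain=0.01, steps=200):
    """Proportional controller: measure the line period each frame and nudge
    the supply voltage until it matches the Master camera's target period."""
    for _ in range(steps):
        error = cam.line_period() - target_period
        cam.volts += gain * error          # period too long -> raise the voltage
    return cam.line_period()
```

    The Master's period plays the role of `target_period` for every Slave, which is how frequency lock (and, with the frame-phase handshake, phase lock) is reached.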

  15. Transition due to streamwise streaks in a supersonic flat plate boundary layer

    Science.gov (United States)

    Paredes, Pedro; Choudhari, Meelan M.; Li, Fei

    2016-12-01

    Transition induced by stationary streaks undergoing transient growth in a supersonic flat plate boundary layer flow is studied using numerical computations. While the possibility of strong transient growth of small-amplitude stationary perturbations in supersonic boundary layer flows has been demonstrated in previous works, its relation to laminar-turbulent transition cannot be established within the framework of linear disturbances. Therefore, this paper investigates the nonlinear evolution of initially linear optimal disturbances that evolve into finite amplitude streaks in the downstream region, and then studies the modal instability of those streaks as a likely cause for the onset of bypass transition. The nonmodal evolution of linearly optimal stationary perturbations in a supersonic, Mach 3 flat plate boundary layer is computed via the nonlinear plane-marching parabolized stability equations (PSE) for stationary perturbations, or equivalently, the perturbation form of the parabolized Navier-Stokes equations. To assess the effect of the nonlinear finite-amplitude streaks on transition, the linear form of plane-marching PSE is used to investigate the instability of the boundary layer flow modified by the spanwise periodic streaks. The onset of transition is estimated using an N-factor criterion based on modal amplification of the secondary instabilities of the streaks. In the absence of transient growth disturbances, first mode instabilities in a Mach 3, zero pressure gradient boundary layer reach N = 10 at Re_x ≈ 10^7. However, secondary instability modes of the stationary streaks undergoing transient growth are able to achieve the same N-factor at Re_x < 2 × 10^6 when the initial streak amplitude is sufficiently large. In contrast to the streak instabilities in incompressible flows, subharmonic instability modes with twice the fundamental spanwise wavelength of the streaks are found to have higher amplification ratios than the streak instabilities at fundamental

  16. Handheld Longwave Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact handheld longwave infrared camera based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. Based on...

  17. Principal axis-based correspondence between multiple cameras for people tracking.

    Science.gov (United States)

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  18. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes using the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometrical position of the lanes, and extracts lane semantic information from lane characteristics using decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
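
    The vanishing-point step can be illustrated with homogeneous coordinates: a line through two image points is their cross product, and two lane edge lines intersect at the vanishing point. A minimal sketch (function names and sample pixel coordinates are hypothetical, not from the paper):

    ```python
    import numpy as np

    def line_through(p, q):
        """Homogeneous line through two image points (cross product)."""
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    def intersection(l1, l2):
        """Intersection of two homogeneous lines, dehomogenized to pixels."""
        x = np.cross(l1, l2)
        return x[:2] / x[2]

    # Two lane edges converging toward the horizon (sample coordinates).
    left = line_through((100, 480), (300, 240))
    right = line_through((540, 480), (340, 240))
    vp = intersection(left, right)
    print(vp)  # [320. 216.]
    ```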

  19. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

    Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of common point and open-path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the monitored area. After consideration of several detection techniques, it was decided to use an ordinary monochrome camera as the sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear to be displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined, and several feature values can be computed for each region. The values of the features depend on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application. Basically, the features measure the magnitude of pixel differences, the size of detected phenomena and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from the density of air than that of methane does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.
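
    The core measurement named above, the magnitude of pixel differences between a reference image and the current frame, can be sketched in a few lines of numpy. This is an illustrative simplification of the thesis's feature, with a hypothetical threshold and synthetic frames:

    ```python
    import numpy as np

    def difference_magnitude(reference, frame, threshold=10):
        """Fraction of pixels whose absolute difference exceeds a threshold."""
        diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
        return float(np.count_nonzero(diff > threshold)) / diff.size

    # Synthetic monochrome frames: an apparent displacement affects the
    # top rows, mimicking refraction-induced background distortion.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    frame = ref.copy()
    frame[:6, :] = 255 - frame[:6, :]  # distortion in a small region
    print(difference_magnitude(ref, frame))  # a few percent of pixels
    ```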

  20. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as
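
    Inter-rater agreement of the kind reported above (κ = 0.81–0.88) is conventionally computed with Cohen's kappa, which discounts agreement expected by chance. A minimal implementation (the two rating sequences below are hypothetical, not the study's data):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        expected = sum(ca[k] * cb[k] for k in ca) / n**2
        return (observed - expected) / (1 - expected)

    # Two raters scoring 10 trials as novelty ("N") or familiarity ("F").
    a = ["N", "N", "F", "N", "F", "N", "N", "F", "N", "F"]
    b = ["N", "N", "F", "N", "F", "N", "F", "F", "N", "F"]
    print(round(cohens_kappa(a, b), 3))  # 0.8
    ```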

  1. Absolute calibration method for fast-streaked, fiber optic light collection, spectroscopy systems

    International Nuclear Information System (INIS)

    Johnston, Mark D.; Frogget, Brent; Oliver, Bryan Velten; Maron, Yitzhak; Droemer, Darryl W.; Crain, Marlon D.

    2010-01-01

    This report outlines a convenient method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such a system is used to collect spectral data on plasmas generated in the A-K gap of electron beam diodes fielded on the RITS-6 accelerator (8–12 MV, 140–200 kA). On RITS, light is collected through a small diameter (200 micron) optical fiber and recorded on a fast streak camera at the output of a 1 m Czerny-Turner monochromator (F/7 optics). To calibrate such a system, it is necessary to efficiently couple light from a spectral lamp into a 200 micron diameter fiber, split it into its spectral components with 10 Angstroms or less resolution, and record it on a streak camera with 1 ns or less temporal resolution.

  2. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    Stimulated luminescence arising from naturally occurring minerals is likely to be spatially heterogeneous. Standard luminescence detection systems are unable to resolve this variability. Several research groups have attempted to use imaging photon detectors, or image intensifiers linked...... to photographic systems, in order to obtain spatially resolved data. However, the former option is extremely expensive and it is difficult to obtain quantitative data from the latter. This paper describes the use of a CCD camera for imaging both thermoluminescence and optically stimulated luminescence. The system...

  3. Spectrally-Tunable Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a SPECTRALLY-TUNABLE INFRARED CAMERA based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. This will build on...

  4. Design of gamma camera data acquisition system based on PCI9810

    International Nuclear Information System (INIS)

    Zhao Yuanyuan; Zhao Shujun; Liu Yang

    2004-01-01

    This paper describes the design of a gamma camera data acquisition system based on the PCI9810 data acquisition card from ADLink Technology Inc. The main functions of the PCI9810 and the software of the data acquisition system are described. (authors)

  5. Ultrafast streak and framing technique for the observation of laser driven shock waves in transparent solid targets

    International Nuclear Information System (INIS)

    Van Kessel, C.G.M.; Sachsenmaier, P.; Sigel, R.

    1975-01-01

    Shock waves driven by laser ablation in plane transparent plexiglass and solid hydrogen targets have been observed with streak and framing techniques using a high speed image converter camera, and a dye laser as a light source. The framing pictures have been made by mode locking the dye laser and using a wide streak slit. In both materials a growing hemispherical shock wave is observed with the maximum velocity at the onset of laser radiation. (author)

  6. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push-broom imaging mode is widely used for high resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, combined with push-broom imaging, is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.

  7. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.

  8. Nonlinear streak computation using boundary region equations

    Energy Technology Data Exchange (ETDEWEB)

    Martin, J A; Martel, C, E-mail: juanangel.martin@upm.es, E-mail: carlos.martel@upm.es [Depto. de Fundamentos Matematicos, E.T.S.I Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, 28040 Madrid (Spain)

    2012-08-01

    The boundary region equations (BREs) are applied for the simulation of the nonlinear evolution of a spanwise periodic array of streaks in a flat plate boundary layer. The well-known BRE formulation is obtained from the complete Navier-Stokes equations in the high Reynolds number limit, and provides the correct asymptotic description of three-dimensional boundary layer streaks. In this paper, a fast and robust streamwise marching scheme is introduced to perform their numerical integration. Typical streak computations present in the literature correspond to linear streaks or to small-amplitude nonlinear streaks computed using direct numerical simulation (DNS) or the nonlinear parabolized stability equations (PSEs). We use the BREs to numerically compute high-amplitude streaks, a method which requires much lower computational effort than DNS and does not have the consistency and convergence problems of the PSE. It is found that the flow configuration changes substantially as the amplitude of the streaks grows and the nonlinear effects come into play. The transversal motion (in the wall normal-streamwise plane) becomes more important and strongly distorts the streamwise velocity profiles, which end up being quite different from those of the linear case. We analyze in detail the resulting flow patterns for the nonlinearly saturated streaks and compare them with available experimental results. (paper)

  9. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  10. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored as 14-bit RAW and 8-bit JPEG files on CompactFlash cards. A second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions, and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft), and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of the example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
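
    A second-order transformation of the kind used to align the two bands maps coordinates through full second-order polynomials in x and y, fitted by least squares to matched control points. A sketch under synthetic data (the control points and the distortion model are hypothetical, not from the paper):

    ```python
    import numpy as np

    def poly2_design(pts):
        """Design matrix with the six second-order terms 1, x, y, xy, x^2, y^2."""
        x, y = pts[:, 0], pts[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_second_order(src, dst):
        """Least-squares fit of x' = f(x, y) and y' = g(x, y)."""
        A = poly2_design(src)
        cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
        cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
        return cx, cy

    def apply_transform(cx, cy, pts):
        A = poly2_design(pts)
        return np.column_stack([A @ cx, A @ cy])

    # Synthetic control points: a known quadratic warp recovered from 12 matches.
    rng = np.random.default_rng(1)
    src = rng.uniform(0, 1000, size=(12, 2))
    dst = src + 0.5 + 1e-5 * src**2  # hypothetical misalignment model
    cx, cy = fit_second_order(src, dst)
    aligned = apply_transform(cx, cy, src)
    print(np.max(np.abs(aligned - dst)))  # subpixel residual
    ```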

  11. Home Camera-Based Fall Detection System for the Elderly

    Directory of Open Access Journals (Sweden)

    Koldo de Miguel

    2017-12-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.

  12. Home Camera-Based Fall Detection System for the Elderly.

    Science.gov (United States)

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.

  13. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy

    DEFF Research Database (Denmark)

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo

    2016-01-01

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed...

  14. StreakDet data processing and analysis pipeline for space debris optical observations

    Science.gov (United States)

    Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri

    We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models and the build-up and maintenance of a catalogue of orbital elements. In addition, data are needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparatively slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a “track before detect” problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract precise position information and the related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis, in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the

  15. Construct and face validity of a virtual reality-based camera navigation curriculum.

    Science.gov (United States)

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents a method for calibrating a high-resolution digital camera based on different configurations, comprising stereo and convergent setups. Both methods are performed in the laboratory and in the field. Laboratory calibration is based on a 3D test field, where a calibration plate of dimension 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over a 9 m × 9 m area. In this study, a non-metric high-resolution digital camera, the Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the internal parameters of the digital camera, such as the focal length, principal point and other parameters, remain the same or vary. In the laboratory, a scale bar is placed in the test field for scaling the image, and approximate coordinates were used for the calibration process. A similar method is utilized in field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations. The accuracy of the results is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application.
    Finally, for most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  17. Studies on a silicon-photomultiplier-based camera for Imaging Atmospheric Cherenkov Telescopes

    Science.gov (United States)

    Arcaro, C.; Corti, D.; De Angelis, A.; Doro, M.; Manea, C.; Mariotti, M.; Rando, R.; Reichardt, I.; Tescaro, D.

    2017-12-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) represent a class of instruments dedicated to the ground-based observation of cosmic VHE gamma-ray emission, based on the detection of the Cherenkov radiation produced in the interaction of gamma rays with the Earth's atmosphere. One of the key elements of such instruments is a pixelized focal-plane camera consisting of photodetectors. To date, photomultiplier tubes (PMTs) have been the common choice, given their high photon detection efficiency (PDE) and fast time response. Recently, silicon photomultipliers (SiPMs) have emerged as an alternative. This rapidly evolving technology has strong potential to become superior to PMTs in terms of PDE, which would further improve the sensitivity of IACTs, while also reducing the price per square millimeter of detector area. We are working to develop a SiPM-based module for the focal-plane cameras of the MAGIC telescopes to probe this technology for IACTs with large focal-plane cameras of an area of a few square meters. We describe the solutions we are exploring in order to balance competitive performance with minimal impact on the overall MAGIC camera design, using ray-tracing simulations. We further present a comparative study of the overall light throughput based on Monte Carlo simulations, considering the properties of the major hardware elements of an IACT.

  18. High-resolution Compton cameras based on Si/CdTe double-sided strip detectors

    International Nuclear Information System (INIS)

    Odaka, Hirokazu; Ichinohe, Yuto; Takeda, Shin'ichiro; Fukuyama, Taro; Hagino, Koichi; Saito, Shinya; Sato, Tamotsu; Sato, Goro; Watanabe, Shin; Kokubun, Motohide; Takahashi, Tadayuki; Yamaguchi, Mitsutaka

    2012-01-01

    We have developed a new Compton camera based on silicon (Si) and cadmium telluride (CdTe) semiconductor double-sided strip detectors (DSDs). The camera consists of a 500-μm-thick Si-DSD and four layers of 750-μm-thick CdTe-DSDs, all of which have a common electrode configuration segmented into 128 strips on each side with pitches of 250 μm. In order to realize high angular resolution and to reduce the size of the detector system, a stack of DSDs with a short stack pitch of 4 mm is utilized to make the camera. Taking advantage of the excellent energy and position resolutions of the semiconductor devices, the camera achieves high angular resolutions of 4.5° at 356 keV and 3.5° at 662 keV. To obtain such high resolutions together with an acceptable detection efficiency, we demonstrate data reduction methods including energy calibration using the Compton scattering continuum and depth sensing in the CdTe-DSDs. We also discuss the imaging capability of the camera and show simultaneous multi-energy imaging.
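
    Event reconstruction in a two-layer Compton camera rests on the Compton scattering formula: with E1 deposited in the scatterer and E2 absorbed in the second detector, the scattering angle follows from cos θ = 1 − m_e c² (1/E2 − 1/(E1 + E2)). A minimal sketch (the 200/462 keV split of a 662 keV photon is an illustrative example, not data from the paper):

    ```python
    import math

    ELECTRON_MASS_KEV = 511.0  # electron rest energy m_e c^2

    def compton_angle_deg(e_scatter_kev, e_absorb_kev):
        """Scattering angle from the energy deposits in scatterer and absorber."""
        e_total = e_scatter_kev + e_absorb_kev  # incident photon energy
        cos_theta = 1.0 - ELECTRON_MASS_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
        return math.degrees(math.acos(cos_theta))

    # A 662 keV photon (137Cs line) depositing 200 keV in the Si scatterer:
    print(round(compton_angle_deg(200.0, 462.0), 1))  # roughly 48 degrees
    ```

    In practice each event only constrains the source to lie on a cone of this opening angle around the scatter direction; images are built up from the intersection of many such cones.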

  19. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing, such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.

  20. A G-APD based Camera for Imaging Atmospheric Cherenkov Telescopes

    International Nuclear Information System (INIS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, S.; Commichau, V.; Dorner, D.; Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U.; Koehne, J.-H.; Kraehenbuehl, T.; Kranich, D.; Lorenz, E.; Lustermann, W.; Mannheim, K.

    2011-01-01

    Imaging Atmospheric Cherenkov Telescopes (IACT) for Gamma-ray astronomy are presently using photomultiplier tubes as photo sensors. Geiger-mode avalanche photodiodes (G-APD) promise an improvement in sensitivity and, important for this application, ease of construction, operation and ruggedness. G-APDs have proven many of their features in the laboratory, but a qualified assessment of their performance in an IACT camera is best undertaken with a prototype. This paper describes the design and construction of a full-scale camera based on G-APDs realized within the FACT project (First G-APD Cherenkov Telescope).

  1. Investigation of an Autofocusing Method for Visible Aerial Cameras Based on Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Zhichao Chen

    2016-01-01

    In order to realize autofocusing in an aerial camera, an autofocusing system is established and its characteristics, such as the working principle, optical-mechanical structure and focus evaluation function, are investigated. The causes of defocusing in an aviation camera are analyzed, and several autofocusing methods, along with appropriate focus evaluation functions, are introduced based on image processing techniques. The proposed autofocusing system is designed and implemented using two CMOS detectors. The experimental results showed that the proposed method met the focusing accuracy requirement of the aviation camera, achieving a maximum focusing error of less than half of the depth of focus. The system designed in this paper can find the optical imaging focal plane in real time; as such, this novel design has great potential in practical engineering, especially aerospace applications.
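
    A focus evaluation function scores image sharpness so the system can seek the focal plane that maximizes it. The abstract does not state which function the authors used; a common choice, shown here purely as an illustration, is the variance of a Laplacian response:

    ```python
    import numpy as np

    def laplacian_variance(img):
        """Sharpness score: variance of a 4-neighbour Laplacian response."""
        img = img.astype(np.float64)
        lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
               + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
        return float(lap.var())

    # A sharp checkerboard scores higher than its blurred (averaged) version.
    sharp = np.indices((64, 64)).sum(axis=0) % 2 * 255.0
    blurred = (sharp + np.roll(sharp, 1, axis=1)) / 2.0  # crude 1D blur
    print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
    ```

    During a focus sweep, the score is evaluated at each lens position and the position with the maximum score is taken as the in-focus plane.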

  2. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may yield non-identical health monitoring results when such applications extract physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected with this method exhibit much smaller color intensity errors than uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
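    The correction method itself is not given in the abstract; a minimal per-channel sketch (assuming a simple least-squares gain/offset model, which is my illustrative simplification rather than the authors' algorithm) could look like:

```python
def fit_channel_correction(measured, reference):
    """Least-squares gain/offset that maps one phone's channel values
    onto reference values: reference ~ gain * measured + offset."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    var = sum((x - mx) ** 2 for x in measured)
    gain = cov / var
    return gain, my - gain * mx

def apply_correction(values, gain, offset):
    return [gain * v + offset for v in values]

# Hypothetical phone whose channel reads systematically brighter than
# the reference chart values:
reference = [20.0, 80.0, 140.0, 200.0]
phone_b = [2.0 * v + 10.0 for v in reference]
gain, offset = fit_channel_correction(phone_b, reference)
corrected = apply_correction(phone_b, gain, offset)
```

    A full correction would typically use a 3x3 color matrix across channels rather than independent per-channel fits, but the calibration-against-a-reference idea is the same.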

  3. A Cherenkov camera with integrated electronics based on the 'Smart Pixel' concept

    International Nuclear Information System (INIS)

    Bulian, Norbert; Hirsch, Thomas; Hofmann, Werner; Kihm, Thomas; Kohnle, Antje; Panter, Michael; Stein, Michael

    2000-01-01

    As an option for the cameras of the HESS telescopes, the concept of a modular camera based on 'Smart Pixels' was developed. A Smart Pixel contains the photomultiplier, the high-voltage supply for the photomultiplier, a dual-gain sample-and-hold circuit with a 14-bit dynamic range, a time-to-voltage converter, a trigger discriminator, trigger logic to detect a coincidence of X=1...7 neighboring pixels, and an analog ratemeter. The Smart Pixels plug into a common backplane which provides power, communicates trigger signals between neighboring pixels, and holds a digital control bus as well as an analog bus for multiplexed readout of pixel signals. The performance of the Smart Pixels has been studied using a 19-pixel test camera.

  4. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we considered only the enhancement of a texture image. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced better than with bicubic interpolation and that the required bandwidth of the video camera could be reduced to about 1/5. The peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame relative to bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known patch-based super-resolution method of Freeman et al. Compared with that method, the computational time of our method was reduced to almost 1/10.

  5. Person re-identification using height-based gait in colour depth camera

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.

    2013-01-01

    We address the problem of person re-identification in colour-depth camera using the height temporal information of people. Our proposed gait-based feature corresponds to the frequency response of the height temporal information. We demonstrate that the discriminative periodic motion associated with

  6. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    Science.gov (United States)

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  7. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators are limited in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain costly, which makes it difficult to install and use them in a variety of places. Consequently, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  8. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours, when there is outside light; human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators are limited in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras remain costly, which makes it difficult to install and use them in a variety of places. Consequently, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in indoor environments, or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  9. Design tool for TOF and SL based 3D cameras.

    Science.gov (United States)

    Bouquet, Gregory; Thorstensen, Jostein; Bakke, Kari Anne Hestnes; Risholm, Petter

    2017-10-30

    Active illumination 3D imaging systems based on Time-of-flight (TOF) and Structured Light (SL) projection are in rapid development, and are constantly finding new areas of application. In this paper, we present a theoretical design tool that allows prediction of 3D imaging precision. Theoretical expressions are developed for both TOF and SL imaging systems. The expressions contain only physically measurable parameters and no fitting parameters. We perform 3D measurements with both TOF and SL imaging systems, showing excellent agreement between theoretical and measured distance precision. The theoretical framework can be a powerful 3D imaging design tool, as it allows for prediction of 3D measurement precision already in the design phase.
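    The paper's exact expressions are not reproduced in the abstract; as a hedged stand-in, a textbook continuous-wave ToF estimate (phase uncertainty divided by the phase-to-distance scale; the form and constants are my assumption, not the authors' derivation) can be coded as:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_precision(f_mod_hz, snr):
    """Rough CW time-of-flight depth precision: a phase uncertainty of
    ~1/SNR radians scaled by c / (4*pi*f_mod) metres per radian."""
    return C / (4.0 * math.pi * f_mod_hz) / snr

# Doubling either the modulation frequency or the SNR halves sigma_d:
sigma_20mhz = tof_depth_precision(20e6, 100.0)
sigma_40mhz = tof_depth_precision(40e6, 100.0)
```

    The value of such closed-form expressions, as the paper argues, is that precision can be predicted at design time from physically measurable parameters alone, with no fitting parameters.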

  10. Two-color spatial and temporal temperature measurements using a streaked soft x-ray imager

    Energy Technology Data Exchange (ETDEWEB)

    Moore, A. S., E-mail: alastair.moore@physics.org; Ahmed, M. F.; Soufli, R.; Pardini, T.; Hibbard, R. L.; Bailey, C. G.; Bell, P. M.; Hau-Riege, S. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551-0808 (United States); Benstead, J.; Morton, J.; Guymer, T. M.; Garbett, W. J.; Rubery, M. S.; Skidmore, J. W. [Directorate Science and Technology, AWE Aldermaston, Reading RG7 4PR (United Kingdom); Bedzyk, M.; Shoup, M. J.; Regan, S. P.; Agliata, T.; Jungquist, R. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); Schmidt, D. W. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); and others

    2016-11-15

    A dual-channel streaked soft x-ray imager has been designed and used in high-energy-density physics experiments at the National Ignition Facility. This streaked imager creates two images of the same x-ray source using two slit apertures and a single shallow-angle reflection from a nickel mirror. Thin filters are used to create narrow-bandpass images at 510 eV and 360 eV. When measuring a Planckian spectrum, the brightness ratio of the two images can be translated into a color temperature, provided that the spectral sensitivity of the two channels is well known. To reduce uncertainty and remove spectral features of the streak camera photocathode from this photon energy range, a thin photocathode of 100 nm CsI on 50 nm Al was implemented. Provided that the spectral shape is well known, uncertainties in the spectral sensitivity limit the accuracy of the temperature measurement to approximately 4.5% at 100 eV.
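    The ratio-to-temperature inversion can be illustrated with an idealized sketch (monochromatic Planckian bands at 510 eV and 360 eV, ignoring the instrument response; the function names and the bisection solver are my illustrative choices):

```python
import math

def planck_ratio(t_eV, e1=510.0, e2=360.0):
    """Brightness ratio of a Planckian at photon energies e1/e2 (eV),
    using B(E, T) ~ E**3 / (exp(E/T) - 1) with T in eV."""
    b = lambda e: e ** 3 / math.expm1(e / t_eV)
    return b(e1) / b(e2)

def temperature_from_ratio(ratio, lo=10.0, hi=1000.0):
    """Invert planck_ratio by bisection; the ratio grows monotonically
    with temperature, so a bracketing search suffices."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if planck_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    In the real instrument the measured ratio also folds in the filter transmissions, mirror reflectivity, and photocathode response, which is why the spectral sensitivity must be well known.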

  11. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates

    Directory of Open Access Journals (Sweden)

    Congzheng Wang

    2018-02-01

    In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiments, we found that the tested camera showed remarkable degradation after irradiation and that the degradation differed with dose rate. With increasing dose rate, images of the same target become brighter. At a given dose rate, the radiation effect in bright areas is smaller than that in dark areas. Across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the greater the standard deviations of the bright and dark areas become. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of the signal-to-noise ratio (SNR) with radiation time is not obvious at a fixed dose rate, while the degradation becomes more serious with increasing dose rate. Additionally, the rate of SNR decrease at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h over 8 h of measurements, with a moderate decrease of the SNR (5 dB). This work is valuable and can provide guidance for camera users in radiation fields.

  12. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates.

    Science.gov (United States)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2018-02-08

    In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiments, we found that the tested camera showed remarkable degradation after irradiation and that the degradation differed with dose rate. With increasing dose rate, images of the same target become brighter. At a given dose rate, the radiation effect in bright areas is smaller than that in dark areas. Across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the greater the standard deviations of the bright and dark areas become. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of the signal-to-noise ratio (SNR) with radiation time is not obvious at a fixed dose rate, while the degradation becomes more serious with increasing dose rate. Additionally, the rate of SNR decrease at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h over 8 h of measurements, with a moderate decrease of the SNR (5 dB). This work is valuable and can provide guidance for camera users in radiation fields.
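    The abstract does not define its SNR estimator; a common definition for a nominally uniform image region (my assumption, not necessarily the authors' exact formula) is the mean-to-standard-deviation ratio in decibels:

```python
import math

def snr_db(region):
    """SNR of a uniform image region as 20*log10(mean/std); radiation-
    induced noise raises the std and therefore lowers this figure."""
    pixels = [p for row in region for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    return 20.0 * math.log10(mean / std)

# Hypothetical gray patches before and after irradiation:
before = [[100, 102], [98, 100]]   # small fluctuations
after = [[80, 130], [70, 120]]     # noisier after exposure
```

    Tracking this figure frame by frame over the irradiation period gives the SNR-versus-time curves the study reports.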

  13. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images gamma-ray distributions by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide energy range. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of three simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
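    The kinematics any Compton camera relies on follow directly from the Compton scattering formula; a minimal sketch for two-interaction events (variable names mine) is:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_angle_deg(e_deposited_keV, e_total_keV):
    """Cone half-angle from Compton kinematics:
    cos(theta) = 1 - me*c^2 * (1/E_scattered - 1/E_total),
    where E_scattered = E_total - E_deposited in the first interaction."""
    e_scattered = e_total_keV - e_deposited_keV
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_total_keV)
    return math.degrees(math.acos(cos_theta))
```

    Each event constrains the source to lie on a cone of this half-angle about the scatter axis; the 3-D reconstruction intersects many such cones.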

  14. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whitman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-01-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of sixteen plasma traces has been processed using this technique.
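    The centroid and width steps of the analysis can be illustrated generically (an intensity-weighted mean and a threshold-crossing width over one digitized scan column; this is a sketch, not the SCYLLAC code itself):

```python
def trace_centroid(column):
    """Intensity-weighted centroid position of a streak-trace profile."""
    total = sum(column)
    return sum(i * v for i, v in enumerate(column)) / total

def trace_width(column, frac=0.5):
    """Crude edge-detected width: span of samples above frac * peak."""
    threshold = frac * max(column)
    above = [i for i, v in enumerate(column) if v >= threshold]
    return above[-1] - above[0]

# One hypothetical digitized scan line across the plasma trace:
profile = [0, 1, 8, 10, 8, 1, 0]
```

    Repeating this per scan line along the time axis yields the centroid and width graphs described in the abstract.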

  15. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whiteman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-11-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of the plasma traces has been processed with this technique.

  16. Self-Calibration Method Based on Surface Micromachining of Light Transceiver Focal Plane for Optical Camera

    Directory of Open Access Journals (Sweden)

    Jin Li

    2016-10-01

    In remote sensing photogrammetric applications, inner orientation parameter (IOP) calibration of the remote sensing camera is a prerequisite for determining image position. However, achieving such a calibration without temporal and spatial limitations remains a crucial but unresolved issue to date. The accuracy of the IOP calibration method of a remote sensing camera determines the performance of image positioning. In this paper, we propose a high-accuracy self-calibration method without temporal and spatial limitations for remote sensing cameras. Our method is based on an auto-collimating dichroic filter combined with a surface-micromachined (SM) point-source focal plane. The proposed method can autonomously complete IOP calibration without outside reference targets. The SM procedure is used to manufacture a light transceiver focal plane, which integrates point sources, a splitter, and a complementary metal oxide semiconductor sensor. A dichroic filter is used to fabricate an auto-collimation light reflection element. The dichroic filter, splitter, and SM point-source focal plane are integrated into a camera to perform integrated self-calibration. Experimental measurements confirm the effectiveness and convenience of the proposed method. Moreover, the method can achieve micrometer-level precision and can satisfactorily complete real-time calibration without temporal or spatial limitations.

  17. Streak detection and analysis pipeline for space-debris optical images

    Science.gov (United States)

    Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim

    2016-04-01

    We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. For long streaks (length >100 pixels), primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for

  18. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of a human being from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hot spot in the field of biomedicine. However, noise caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strengths of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of the estimated heart rate. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals using a weighted average over sub-regions of the face. First, we obtain the region of interest (ROI) of a subject's face from the camera. Second, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60 x 60-pixel blocks. Third, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
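    The combination step can be sketched as follows (a minimal illustration with hypothetical names; the paper derives each weight from the measured SNR of that sub-region):

```python
def weighted_ppg(signals, snrs):
    """Combine per-sub-region IPPG traces, weighting each by its SNR so
    that noisy regions contribute less to the final signal."""
    total = sum(snrs)
    weights = [s / total for s in snrs]
    return [sum(w * sig[i] for w, sig in zip(weights, signals))
            for i in range(len(signals[0]))]

clean = [0.0, 1.0, 0.0, -1.0]   # strong pulsatile sub-region
noisy = [5.0, 5.0, 5.0, 5.0]    # sub-region dominated by offset/noise
combined = weighted_ppg([clean, noisy], snrs=[9.0, 1.0])
```

    With weights 0.9 and 0.1 the combined trace stays close to the clean sub-region's signal, which is the intended effect of SNR weighting.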

  19. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS positioning data.

  20. A drone detection with aircraft classification based on a camera array

    Science.gov (United States)

    Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong

    2018-03-01

    In recent years, because of the rapid rise in the popularity of drones, many people have begun to operate them, bringing a range of security issues to sensitive areas such as airports and military sites. Realizing fine-grained classification and providing fast, accurate detection of different drone models is one of the important ways to solve these problems. The main challenges of fine-grained classification are that: (1) there are various types of drones, and the models are complex and diverse; and (2) recognition must be fast and accurate, yet existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately perform fine-grained drone detection based on the HD camera array.

  1. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    Science.gov (United States)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high-resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve high image resolution, accurate knowledge of the speed of sound (SOS) is required in the image reconstruction algorithm. Hence, in the proposed work the idea and a first implementation are shown of how speed of sound imaging can be added to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  2. Florida-specific NTCIP management information base (MIB) for closed-circuit television (CCTV) camera : final draft.

    Science.gov (United States)

    2009-01-01

    Description: The following MIB has been developed for use by FDOT. This proposed Florida-specific NTCIP Management Information Base (MIB) for Closed-Circuit Television (CCTV) Camera MIB is based on the following documentation: NTCIP 120...

  3. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Science.gov (United States)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function (ESF), the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image filtering and noise suppression module applies the restoration filter while effectively suppressing noise. The image processing algorithms were designed with System Generator to simplify the system design structure and the redesign process. Image gray gradient, point sharpness, edge contrast, and mid-to-high-frequency content were enhanced, while the image SNR after recovery decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
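    The ESF-to-MTF chain such a measurement module implements can be sketched generically (a pure-Python knife-edge estimate with a naive DFT for brevity; the on-orbit FPGA pipeline is of course a different, hardware-oriented implementation):

```python
import math

def mtf_from_esf(esf):
    """Knife-edge MTF estimate: difference the edge spread function (ESF)
    to get the line spread function, then normalize |DFT| by its DC term."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mtf.append(math.hypot(re, im))
    return [m / mtf[0] for m in mtf]

ideal_edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]       # perfectly sharp edge
blurred_edge = [0.0, 0.0, 0.25, 0.75, 1.0, 1.0]   # defocused edge
```

    An MTFC filter then boosts the frequencies where the measured MTF has dropped, which is why the blurred-edge case below shows attenuation that compensation would target.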

  4. Optical character recognition of camera-captured images based on phase features

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones, which creates opportunities for optical character recognition in camera-captured images. For this reason, many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much important information, independent of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  5. Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Muhammad Khurram Khan

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. The transmission security between the sink node and the server is ensured by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks.
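    Neither Wang et al.'s cipher nor the improved cryptosystem is specified in the abstract; as a deliberately toy illustration of the underlying idea of logistic-map keystream generation only (not secure, and not the paper's scheme):

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Iterate x <- r*x*(1-x) and quantize each state to one keystream
    byte. A toy illustration of chaotic-map key material; NOT secure."""
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256.0) & 0xFF)
    return stream

def xor_bytes(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

key = (0.31415, 3.99)                 # (x0, r) as the shared secret
ciphertext = xor_bytes(b"frame", logistic_keystream(*key, n=5))
plaintext = xor_bytes(ciphertext, logistic_keystream(*key, n=5))
```

    A plain XOR keystream like this is exactly what a keystream attack exploits when sub-keys do not depend on the plaintext, which motivates the feedback and plaintext-dependence mechanisms discussed in the abstract.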

  6. SENSITIVITY TEMPERATURE DEPENDENCE RESEARCH OF TV-CAMERAS BASED ON SILICON MATRIXES

    Directory of Open Access Journals (Sweden)

    Alexey N. Starchenko

    2017-07-01

    Full Text Available Subject of Research. The research is dedicated to the analysis of sensitivity changes of cameras based on silicon CMOS matrixes at various ambient temperatures. This information is necessary for correct application of such cameras to photometric measurements in situ. The paper studies the sensitivity variation of two digital cameras with different silicon CMOS matrixes in the visible and near-IR regions of the spectrum as the temperature changes. Method. Due to practical restrictions, the temperature dependence was recorded in separate spectral intervals important for practical use of the cameras. The experiments were carried out in a climatic chamber providing temperature control over the range from minus 40 to plus 50 °C in steps of 10 °C. Two cameras were chosen for the research: a VAC-135-IP with an OmniVision OV9121 matrix and a VAC-248-IP with an OnSemiconductor VITA2000 matrix. The two tested devices were placed in the climatic chamber at the same time and illuminated by a single radiation source with a color temperature of about 3000 K in order to eliminate a number of methodological errors. Main Results. The temperature dependence of the signals was shown to be linear, and the matrix sensitivities were determined. The results obtained are, in general, consistent with theory. The coefficients of thermal sensitivity were computed from these dependencies. The greatest effect of temperature on sensitivity occurs in the 0.7–1.1 µm region, and the temperature coefficients of sensitivity increase with increasing wavelength of the detected radiation. The experiments show that the temperature dependence of silicon matrix sensitivity in the red and near-IR regions of the spectrum must be taken into account. The effect is clearly detrimental in cameras with an amplitude resolution of 10–12 bits used for aerospace and space spectrozonal photography. Practical Relevance

  7. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolates the sensor from background noise, and the top parylene packing blocks humidity. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized using digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To exploit more of the infrared light, we employ compressive sensing in light-field sampling, 3-D imaging, and compressive video sensing. The redundancy of the whole light field, including angular images for light field, binocular images for 3-D camera and temporal information of video streams, are extracted and

  8. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    Science.gov (United States)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging, which yields high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinear behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from the nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
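
The nonlinear-mixing argument can be checked numerically: raising a two-tone excitation to a superlinear power (the paper reports an exponent of about 1.7) creates a spectral component at the difference (beat) frequency that is absent for a linear response. A self-contained sketch, with illustrative frequencies and sample counts rather than the experimental values:

```python
import math

def amplitude_at(signal, freq, fs):
    """Single-bin DFT magnitude at `freq` for a window of len(signal)/fs seconds."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs, n = 20000.0, 20000               # 1 s window -> 1 Hz bins, no spectral leakage
f1, f2 = 400.0, 410.0                # two "high" excitation frequencies
beat = f2 - f1                       # 10 Hz: slow enough for a camera frame rate
excitation = [2.0 + math.cos(2 * math.pi * f1 * i / fs)
                  + math.cos(2 * math.pi * f2 * i / fs) for i in range(n)]
pl = [e ** 1.7 for e in excitation]  # superlinear photoluminescence response

linear_beat = amplitude_at(excitation, beat, fs)  # essentially zero: no mixing
nonlinear_beat = amplitude_at(pl, beat, fs)       # clearly nonzero beat component
```

The linear signal has no energy at 10 Hz, while the superlinear response does, which is what makes the slow beat observable within the camera's frame-rate limit.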

  9. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, the projector intrinsic matrix, and the coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm over various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate a telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is accurate and reliable.

  10. Vibration extraction based on fast NCC algorithm and high-speed camera.

    Science.gov (United States)

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurements in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. To process the captured images on the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and increase efficiency significantly. The modified algorithm completes one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals.
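
A minimal 1-D illustration of the zero-mean normalized cross-correlation score that such a tracker maximizes. The system in the paper works on 2-D images with subpixel refinement and a restricted local search; the function names below are my own:

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-length sequences.
    Returns a score in [-1, 1]; 1 means the patch is an affine match of the template."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    if dp == 0.0 or dt == 0.0:
        return 0.0  # flat patch: correlation undefined, treat as no match
    return num / (dp * dt)

def match_1d(signal, template):
    """Slide the template over the signal; return (offset, score) of the best NCC match."""
    m = len(template)
    scores = [ncc(signal[i:i + m], template) for i in range(len(signal) - m + 1)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

signal = [0.0, 0.0, 1.0, 2.0, 6.0, 2.0, 1.0, 0.0]   # template appears scaled at offset 3
template = [1.0, 3.0, 1.0]
offset, score = match_1d(signal, template)
```

Because the score is normalized and zero-mean, it is insensitive to the brightness scaling and offset that plague raw correlation, which is why NCC is the usual choice for video-based displacement tracking.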

  11. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    Directory of Open Access Journals (Sweden)

    Idowu Ayoola

    2015-09-01

    Full Text Available A major problem in chronic health care is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions because of their hemodynamic status. A holistic approach to managing fluid imbalance incorporates monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need for a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images via an Arduino microcontroller; the images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fit (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation between the estimated results and ground truth produced a variation of 3% from the mean.
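
The level-to-volume step can be written down directly: for a frustum-shaped (conical) glass, a change in camera-estimated water level maps to a volume difference via the conical-frustum formula. A sketch with hypothetical dimensions; the paper estimates the cup geometry from an ellipse fit rather than taking it as given:

```python
import math

def radius_at(level, r_base, r_top, height):
    """Inner radius of a conically shaped (frustum) glass at a given fill level,
    interpolated linearly between the base and rim radii."""
    return r_base + (r_top - r_base) * level / height

def volume_below(level, r_base, r_top, height):
    """Liquid volume up to `level`, via the frustum formula V = pi*h/3 * (R^2 + R*r + r^2).
    All lengths in the same unit; volume comes out in that unit cubed."""
    r = radius_at(level, r_base, r_top, height)
    return math.pi * level * (r_base ** 2 + r_base * r + r ** 2) / 3.0

def intake(level_before, level_after, r_base, r_top, height):
    """Volume drunk between two camera-estimated water levels."""
    return (volume_below(level_before, r_base, r_top, height)
            - volume_below(level_after, r_base, r_top, height))
```

For a cylindrical glass (r_base == r_top) the formula reduces to pi * r^2 * h, a convenient sanity check.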

  12. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet increasing requirements for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the system's ability to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), was developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of derived photogrammetric products. PMID:25835187

  13. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    Science.gov (United States)

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments were only possible through closed-source, expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software package for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes key features for fluorescence nanoscopy based on single-molecule localization.

  14. Pea Streak Virus Recorded in Europe

    Czech Academy of Sciences Publication Activity Database

    Sarkisova, Tatiana; Bečková, M.; Fránová, Jana; Petrzik, Karel

    2016-01-01

    Roč. 52, č. 3 (2016), s. 164-166 ISSN 1212-2580 R&D Projects: GA MZe QH71145 Institutional support: RVO:60077344 Keywords : Pea streak virus * alfalfa * carlavirus * partial sequence Subject RIV: EE - Microbiology, Virology Impact factor: 0.742, year: 2016

  15. Nondipole effects in attosecond photoelectron streaking

    DEFF Research Database (Denmark)

    Spiewanowski, Maciek; Madsen, Lars Bojer

    2012-01-01

    The influence of nondipole terms on the time delay in photoionization by an extreme-ultraviolet attosecond pulse in the presence of a near-infrared femtosecond laser pulse from 1s, 2s, and 2p states in hydrogen is investigated. In this attosecond photoelectron streaking process, the relative...

  16. Atomic and molecular phases through attosecond streaking

    DEFF Research Database (Denmark)

    Baggesen, Jan Conrad; Madsen, Lars Bojer

    2011-01-01

    phase of the atomic or molecular ionization matrix elements from the two states through the interference from the two channels. The interference may change the phase of the photoelectron streaking signal within the envelope of the infrared field, an effect to be accounted for when reconstructing short...... pulses from the photoelectron signal and in attosecond time-resolved measurements....

  17. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    International Nuclear Information System (INIS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch.; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm x 15 cm detection matrix of 2304 CdTe detector elements, each 2.83 mm x 2.83 mm x 2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance, but their use as γ detectors for high-resolution medical imaging requires production of high-grade materials and large quantities of sophisticated read-out electronics. CdTe was chosen rather than CdZnTe because the manufacturer (Eurorad, France) has extensive experience in producing high-grade material with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times better than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were made by the Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of close γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to assess early the extent of an infarct

  18. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    International Nuclear Information System (INIS)

    Anderhub, H; Biland, A; Boller, A; Braun, I; Commichau, V; Djambazov, L; Dorner, D; Gendotti, A; Grimm, O; Gunten, H P von; Hildebrand, D; Horisberger, U; Huber, B; Kim, K-S; Krähenbühl, T; Backes, M; Köhne, J-H; Krumm, B; Bretz, T; Farnier, C

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  19. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)
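
The resolution trade-off described in the abstract follows standard pinhole-collimator geometry. A small sketch of the textbook geometric relation, ignoring the detector's intrinsic resolution and aperture edge penetration; the distances used below are illustrative, not those of the actual system:

```python
def magnification(pinhole_to_detector, pinhole_to_object):
    """Image magnification of a pinhole collimator: a / b."""
    return pinhole_to_detector / pinhole_to_object

def geometric_resolution(diameter, pinhole_to_detector, pinhole_to_object):
    """Geometric resolution (FWHM, same units as `diameter`) at the object plane:
    R = d * (a + b) / a, with a = pinhole-to-detector and b = pinhole-to-object
    distance. Smaller apertures and larger magnification sharpen the image at
    the cost of sensitivity, which scales roughly with d^2."""
    a, b = pinhole_to_detector, pinhole_to_object
    return diameter * (a + b) / a
```

Halving the aperture roughly halves the geometric blur, consistent with the 1.5 mm and 1.0 mm pinholes yielding 2.4 mm and 1.7 mm system resolution once the camera's intrinsic blur is folded in.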

  20. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although instruments to measure those parameters are available on the market, their relatively high cost makes them unavailable at many local aerodromes. In this work we present a new prototype which has been recently developed and deployed at a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
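
Both quantities the prototype measures reduce to short formulas: cloud-base height from the stereo parallax between the two cameras, and visibility from contrast attenuation via the Lambert-Beer law combined with Koschmieder's 2% contrast threshold. A hedged sketch; the actual system uses a more complex two-camera geometry, and the names and numbers here are illustrative:

```python
import math

def cloud_base_height(baseline, focal_length_px, disparity_px):
    """Height (same unit as `baseline`) of a cloud feature above two upward-looking
    cameras separated by a horizontal baseline, from the standard stereo relation
    H = B * f / d, with f in pixels and d the measured pixel disparity."""
    return baseline * focal_length_px / disparity_px

def visibility(distance, contrast_ratio):
    """Meteorological visibility from the apparent/inherent contrast ratio of a
    dark object at a known distance: C/C0 = exp(-sigma * d) (Lambert-Beer), and
    V = 3.912 / sigma (Koschmieder, 2% contrast threshold)."""
    sigma = -math.log(contrast_ratio) / distance
    return 3.912 / sigma
```

Note that height resolution degrades quadratically with height for a fixed baseline, since a high cloud produces only a fraction-of-a-pixel disparity; that is one reason the camera geometry matters.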

  1. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
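
The refinement objective described above weights each reprojection residual by its uncertainty. For a diagonal covariance the squared Mahalanobis distance is a one-liner; this is a simplified sketch, as the paper's map carries full covariances from probabilistic modeling of the sensor and pose uncertainties:

```python
def mahalanobis_sq(residual, variances):
    """Squared Mahalanobis distance of a residual vector under a diagonal covariance.
    Features with large variance (unreliable map points) are down-weighted,
    unlike a plain Euclidean distance, which treats every feature equally."""
    return sum(r * r / v for r, v in zip(residual, variances))

# The same 2-pixel reprojection error counts less when the feature is uncertain:
certain = mahalanobis_sq([2.0, 0.0], [1.0, 1.0])
uncertain = mahalanobis_sq([2.0, 0.0], [16.0, 1.0])
```

Summing such terms over all 3D-to-2D correspondences and minimizing over the camera pose is the probabilistic analogue of ordinary reprojection-error minimization.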

  2. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments

  3. Six-frame picosecond radiation camera based on hydrated electron photoabsorption phenomena

    International Nuclear Information System (INIS)

    Coutts, G.W.; Olk, L.B.; Gates, H.A.; St Leger-Barter, G.

    1977-01-01

    To obtain picosecond photographs of nanosecond radiation sources, a six-frame ultra-high speed radiation camera based on hydrated electron absorption phenomena has been developed. A time-dependent opacity pattern is formed in an acidic aqueous cell by a pulsed radiation source. Six time-resolved picosecond images of this changing opacity pattern are transferred to photographic film with the use of a mode-locked dye laser and six electronically gated microchannel plate image intensifiers. Because the lifetime of the hydrated electron absorption centers can be reduced to picoseconds, the opacity patterns represent time-space pulse profile images

  4. Hyperspectral Longwave Infrared Focal Plane Array and Camera Based on Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a hyperspectral focal plane array and camera imaging in a large number of sharp hyperspectral bands in the thermal infrared. The camera is...

  5. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. H. [Fermilab; Edstrom Jr., D. [Fermilab; Ruan, J. [Fermilab

    2016-10-09

    We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  6. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

    Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence, which is vital in stereo matching. The SIFT descriptor has been proven better in distinctiveness and robustness than other local descriptors. However, the SIFT descriptor does not involve the color information of a feature point, which provides a powerfully distinguishing cue in matching tasks. Furthermore, in a real scene, image colors are affected by various geometric and radiometric factors, such as gamma correction and exposure. These situations are very common in stereo images. For this reason, the color recorded by a camera is not a reliable cue, and the color-consistency assumption is no longer valid between stereo images of real scenes. Hence the performance of other SIFT-based stereo matching algorithms can be severely degraded under radiometric variations. In this paper, we present a new improved SIFT stereo matching algorithm that is invariant to various radiometric variations between the left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ the color formation model, with parameters for lighting geometry, illuminant color and camera gamma, in the SIFT descriptor. First, we transform the input color images to log-chromaticity color space, so that a linear relationship can be established. Then, we use a log-polar histogram to build three color-invariance components for the SIFT descriptor, so that our improved descriptor is invariant to changes in lighting geometry, illuminant color and camera gamma between the left and right images. We can then match feature points between the two images, using the SIFT descriptor's Euclidean distance as a geometric measure to make matching more accurate and robust. Experimental results show that our method is superior to other SIFT-based algorithms, including conventional stereo matching algorithms, under various
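
The log-chromaticity step can be illustrated in a few lines: channel ratios cancel a global exposure or illumination scale, and taking logs turns a camera-gamma exponent into a multiplicative (affine) factor. A sketch under a simple diagonal color-formation model; the function name and epsilon guard are my own:

```python
import math

def log_chromaticity(r, g, b, eps=1e-12):
    """Map an RGB value to (log(R/G), log(B/G)).
    Under a diagonal model with gamma, I' = (s * I) ** gamma, a global exposure
    or illumination scale s cancels in the channel ratios, and gamma becomes a
    simple linear scaling of the log coordinates."""
    return (math.log((r + eps) / (g + eps)),
            math.log((b + eps) / (g + eps)))

# Doubling the exposure leaves the log-chromaticity coordinates unchanged:
p = log_chromaticity(0.2, 0.4, 0.1)
q = log_chromaticity(0.4, 0.8, 0.2)
```

This is why a descriptor built in this space, rather than on raw RGB, survives the radiometric differences between the left and right cameras that the abstract describes.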

  7. Gamma camera based Positron Emission Tomography: a study of the viability on quantification

    International Nuclear Information System (INIS)

    Pozzo, Lorena

    2005-01-01

    Positron Emission Tomography (PET) is a Nuclear Medicine imaging modality for diagnostic purposes. Pharmaceuticals labeled with positron emitters are used, and images representing in vivo biochemical processes within tissues can be obtained. The positron/electron annihilation photons are detected in coincidence, and this information is used for object reconstruction. Presently, two types of systems are available for this imaging modality: dedicated systems and those based on gamma camera technology. In this work, we utilized PET/SPECT systems, which also allow for traditional Nuclear Medicine studies based on single-photon emitters. There are inherent difficulties which affect quantification of activity and other indices. They are related to the Poisson nature of radioactivity, to radiation interactions with the patient's body and the detector, to noise arising from the statistical nature of these interactions and of the detection processes, and to the patient acquisition protocols. Corrections are described in the literature, and not all of them are implemented by the manufacturers: scatter, attenuation, randoms, decay, dead time, spatial resolution, and others related to the properties of each piece of equipment. The goal of this work was to assess the correction methods adopted by two manufacturers, as well as the influence of some technical characteristics of PET/SPECT systems on the estimation of the SUV. Data from a set of phantoms were collected in 3D mode by one camera and in 2D by the other. We concluded that quantification is viable in PET/SPECT systems, including the estimation of SUVs. This is only possible if, apart from the above-mentioned corrections, the camera is well tuned and coefficients for sensitivity normalization and partial-volume correction are applied. We also verified that the shapes of the sources used for obtaining these factors affect the final results and should be dealt with carefully in clinical quantification. Finally, the choice of the region
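
For reference, the index being estimated reduces to a simple normalization. A minimal sketch of the body-weight variant of SUV, plus a decay-correction helper using the 18F half-life; the thesis does not state which tracer or SUV variant was used, so treat these choices as assumptions:

```python
def suv(tissue_conc_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized Uptake Value (body-weight normalization): tissue activity
    concentration divided by injected dose per gram of body weight, assuming a
    tissue density of 1 g/ml so the result is dimensionless."""
    return tissue_conc_bq_per_ml / (injected_dose_bq / body_weight_g)

def decay_correct(activity_bq, elapsed_min, half_life_min=109.77):
    """Correct a measured activity back to injection time (default: 18F half-life)."""
    return activity_bq * 2.0 ** (elapsed_min / half_life_min)
```

By construction, perfectly uniform uptake over the whole body gives SUV = 1, which is why miscalibrated sensitivity or missing partial-volume correction biases the estimate directly.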

  8. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    Science.gov (United States)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain through thrombosis or arterial embolism. Hence, development of an imaging technique to monitor cerebral ischemia and the effect of anti-stroke therapy is highly desirable. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We will demonstrate volumetric tomographic reconstructions from tissue phantoms; the approach has great potential for determining and monitoring the effect of anti-stroke therapies.

  9. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been advancing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and of producing cost-effective population estimates of reef fishes with coincident benthic habitat classification.

  10. Simultaneous streak and frame interferometry for electron density measurements of laser produced plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Quevedo, H. J., E-mail: hjquevedo@utexas.edu; McCormick, M.; Wisher, M.; Bengtson, Roger D.; Ditmire, T. [Center for High Energy Density Science, Department of Physics, University of Texas at Austin, Austin, Texas 78712 (United States)

    2016-01-15

    A system of two collinear probe beams with different wavelengths and pulse durations was used to capture snapshot and streaked interferograms of laser-produced plasmas simultaneously. The snapshots measured the two-dimensional, path-integrated electron density on a charge-coupled device, while the radial temporal evolution of a one-dimensional plasma slice was recorded by a streak camera. This dual-probe combination allowed us to select plasmas that were uniform and axisymmetric along the laser direction, suitable for retrieving the continuous evolution of the radial electron density of homogeneous plasmas. The double-probe system was demonstrated by measuring rapidly evolving plasmas, on time scales of less than 1 ns, produced by the interaction of femtosecond, high-intensity laser pulses with argon gas clusters. Experiments aimed at studying homogeneous plasmas from high-intensity laser-gas or laser-cluster interaction could benefit from this probing scheme.
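
    The abstract does not give the phase-to-density relation, but for an underdense plasma the standard interferometry result is a phase shift of approximately (pi / (lambda * n_c)) times the line-integrated electron density, where n_c is the critical density for the probe wavelength. A sketch with SI constants (variable names are mine):

```python
import numpy as np

EPS0, ME, QE, C = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8

def critical_density(wavelength_m):
    """Critical electron density (m^-3) for a probe of the given wavelength."""
    omega = 2.0 * np.pi * C / wavelength_m
    return EPS0 * ME * omega ** 2 / QE ** 2

def line_integrated_density(phase_shift_rad, wavelength_m):
    """Invert the underdense approximation
    dphi ~ (pi / (lambda * n_c)) * integral(n_e dl)  ->  integral(n_e dl)."""
    nc = critical_density(wavelength_m)
    return phase_shift_rad * wavelength_m * nc / np.pi
```

    One full fringe shift (2 pi) at 800 nm then corresponds to a line-integrated density of twice the wavelength times the critical density, on the order of 1e21 m^-2.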

  11. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale, high-precision measurement because such systems have larger fields of view (FOV) than a single camera. In many applications, multiple cameras may have no or only narrow overlapping FOVs, which poses a major challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs, based on photogrammetry and a reconfigurable target. Firstly, two planar targets are fixed together into a long target sized according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated simultaneously and minimized by the Levenberg–Marquardt algorithm to find the optimal transformation matrix between the two cameras. Finally, all camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
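
    Once each pairwise transform has been estimated and refined, converting every camera into the reference frame is a chain of homogeneous 4x4 transforms. A minimal sketch (the Levenberg–Marquardt refinement itself is omitted; the matrix conventions below are my assumptions, not the paper's notation):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_reference(pairwise):
    """pairwise[i] maps points from camera i+1 into camera i; returns the
    transform taking points from the last camera into camera 0 (reference)."""
    T = np.eye(4)
    for T_step in pairwise:
        T = T @ T_step
    return T
```

    With identity rotations, chaining simply accumulates the translations, which is a convenient check of the composition order.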

  12. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Science.gov (United States)

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry, without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. Based on projective geometry, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). This performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition and fall risk assessment.
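
    The core measurement can be sketched as mapping detected footprints onto the ground plane and comparing alternating step lengths. The homography H (image to ground plane) and the strict left/right alternation below are illustrative assumptions, not the paper's exact projective-geometry formulation:

```python
import numpy as np

def to_ground(H, pts_px):
    """Map image points (N,2) to ground-plane coordinates via homography H."""
    pts = np.hstack([pts_px, np.ones((len(pts_px), 1))])
    g = (H @ pts.T).T
    return g[:, :2] / g[:, 2:3]

def step_length_symmetry_ratio(H, footprints_px):
    """Symmetry ratio = mean length of odd steps / mean length of even steps,
    computed from consecutive footprints mapped onto the ground plane."""
    g = to_ground(H, np.asarray(footprints_px, dtype=float))
    steps = np.linalg.norm(np.diff(g, axis=0), axis=1)
    left, right = steps[0::2], steps[1::2]
    return left.mean() / right.mean()
```

    A perfectly symmetric gait gives a ratio of 1; deviations in either direction flag asymmetry.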

  13. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    Science.gov (United States)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in the linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
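
    The regression step can be sketched as below. The calibration pairs (hue feature versus known water content) are fabricated for illustration only; the paper fits full HSV features rather than hue alone:

```python
import numpy as np

# Hypothetical calibration data: hue feature (HSV, linear domain) vs.
# known water content in percent. These numbers are invented for the sketch.
hue = np.array([0.05, 0.10, 0.16, 0.23, 0.31, 0.40, 0.50])
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])

# 2nd-order polynomial regression model, mirroring the paper's model order.
coeffs = np.polyfit(hue, water, deg=2)

def predict_water_content(h):
    """Estimate water content (%) from the hue feature of a captured image."""
    return np.polyval(coeffs, h)
```

    In practice the coefficients would be refit per device and lighting setup, and validated with k-fold cross-validation as the paper does.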

  14. Image feature-dependent correlation-weighting function for efficient PRNU based source camera identification.

    Science.gov (United States)

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We first experimentally identify the effect of image features (intensity and high-frequency content) on the estimated PRNU, and then develop a weighting function which gives higher weights to image regions that give reliable PRNU and, at the same time, comparatively lower weights to image regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
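
    The extraction-and-weighting idea can be sketched as follows. The box-filter denoiser and the per-pixel weights are generic stand-ins, since the paper's wavelet-type filter and its exact weighting function are not reproduced here:

```python
import numpy as np

def denoise(img, k=3):
    """Crude k x k box-filter denoiser (a stand-in for the filters used in
    PRNU work), implemented as a 2-D moving average with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    """PRNU-carrying residual: the image minus its denoised version."""
    img = np.asarray(img, dtype=float)
    return img - denoise(img)

def weighted_correlation(residual, fingerprint, weights):
    """Normalized correlation with per-pixel weights; reliable regions get
    weight near 1, saturated or highly textured regions get lower weight."""
    a = weights * (residual - residual.mean())
    b = weights * (fingerprint - fingerprint.mean())
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    With unit weights this reduces to plain normalized correlation, so the weighting is a strict generalization of the usual PRNU matching score.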

  15. Performance evaluation of a hand-held, semiconductor (CdZnTe)-based gamma camera

    CERN Document Server

    Abe, A; Lee, J; Oka, T; Shizukuishi, K; Kikuchi, T; Inoue, T; Jimbo, M; Ryuo, H; Bickel, C

    2003-01-01

    We have designed and developed a small-field-of-view gamma camera, the eZ SCOPE, based on a CdZnTe semiconductor. This device utilises proprietary signal processing technology and an interface to a computer-based imaging system. The purpose of this study was to evaluate the performance of the eZ SCOPE in comparison with currently employed gamma camera technology. The detector is a single wafer of 5-mm-thick CdZnTe that is divided into a 16 x 16 array (256 pixels). The sensitive area of the detector is a 3.2 cm square. Two parallel-hole collimators are provided with the system and have a matching (256 hole) pattern to the CdZnTe detector array: a low-energy, high-resolution parallel-hole (LEHR) collimator fabricated of lead and a low-energy, high-sensitivity parallel-hole (LEHS) collimator fabricated of tungsten. Performance measurements and data analysis were done according to the procedures of the NEMA standard. We also studied the long-term stability of the system with continuous use...

  16. The origin and structure of streak-like instabilities in laminar boundary layer flames

    Science.gov (United States)

    Gollner, Michael; Miller, Colin; Tang, Wei; Finney, Mark

    2017-11-01

    Streamwise streaks are consistently observed in wildland fires, at the base of pool fires, and in other heated flows within a boundary layer. This study examines both the origin of these structures and their role in influencing some of the macroscopic properties of the flow. Streaks were reproduced and characterized via experiments on stationary heated strips and on liquid- and gas-fueled burners in laminar boundary layer flows, providing a framework to develop theory based on observed and measured physical phenomena. The incoming boundary layer was established as the controlling mechanism in forming streaks, which are generated by pre-existing coherent structures, while the amplification of streaks was found to be compatible with quadratic growth of Rayleigh-Taylor instabilities, lending credence to the idea that the downstream growth of streaks is strongly tied to buoyancy. These local instabilities were also found to affect macroscopic properties of the flow, including heat transfer to the surface, indicating that a two-dimensional assumption may fail to adequately describe heat and mass transfer during flame spread and other reacting boundary layer flows. This work was supported by NSF (CBET-1554026) and the USDA-FS (13-CS-11221637-124).

  17. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    International Nuclear Information System (INIS)

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-01

    Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation- and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera-dependent noise. Additionally, the advantage of using the feature-based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07 deg. ± 0.05 deg., respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature-based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information for patient positioning. The system shows promising dynamic properties for use in gating/tracking applications. The overall system is competitive with commonly-used surface registration
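
    The ICP fine-tuning stage can be sketched as a nearest-neighbour / Kabsch loop. This is a generic point-to-point ICP for illustration, not the authors' accelerated variant, and it assumes a reasonable initial alignment, which is exactly what their feature-based preregistration provides:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic point-to-point ICP; returns R, t such that src @ R.T + t ~ dst."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (a k-d tree would be used in practice).
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)
```

    For small initial displacements the correspondences are found correctly on the first pass and the loop converges essentially exactly; large displacements are where the preregistration earns its keep.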

  18. A pixellated gamma-camera based on CdTe detectors clinical interests and performances

    CERN Document Server

    Chambron, J; Eclancher, B; Scheiber, C; Siffert, P; Hage-Ali, M; Regal, R; Kazandjian, A; Prat, V; Thomas, S; Warren, S; Matz, R; Jahnke, A; Karman, M; Pszota, A; Németh, L

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm × 15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm × 2.83 mm × 2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving gamma-camera performance. But their use as gamma detectors for medical imaging at high resolution requires production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience in producing high-grade materials, with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided in 9 square units, each unit is composed ...

  19. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    Science.gov (United States)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built-in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
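
    Geolocation as described reduces to a destination-point calculation from the observer position (GPS), bearing (digital compass) and range (laser rangefinder). A sketch on a spherical Earth; the spherical model and constant names are simplifying assumptions, not the product's actual geodesy:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres (spherical model)

def geolocate(lat_deg, lon_deg, bearing_deg, range_m):
    """Target lat/lon from observer lat/lon, compass bearing and slant range,
    using the standard spherical destination-point formula."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brg, d = math.radians(bearing_deg), range_m / EARTH_R
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

    A real system would also correct the slant range for elevation angle and use an ellipsoidal Earth model for long ranges.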

  20. Bio-inspired motion detection in an FPGA-based smart camera module

    International Nuclear Information System (INIS)

    Koehler, T; Roechter, F; Moeller, R; Lindemann, J P

    2009-01-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10 000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device
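
    A single correlation-type EMD of the kind the array is built from can be sketched as below: a textbook Reichardt detector with a unit-sample delay, not the module's exact reconfigurable filter:

```python
import numpy as np

def emd_response(left, right, delay=1):
    """Correlation-type elementary motion detector (Reichardt detector):
    each photoreceptor signal is delayed and multiplied with its undelayed
    neighbour, and the mirror-symmetric product is subtracted. The sign of
    the mean response indicates the direction of motion."""
    l_d = np.roll(left, delay)    # delayed left channel
    r_d = np.roll(right, delay)   # delayed right channel
    l_d[:delay] = 0               # discard wrapped-around samples
    r_d[:delay] = 0
    return l_d * right - r_d * left
```

    A pattern moving from the left receptor towards the right one yields a positive mean response; motion in the opposite direction flips the sign.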

  1. A positioning system for forest diseases and pests based on GIS and PTZ camera

    International Nuclear Information System (INIS)

    Wang, Z B; Zhao, F F; Wang, C B; Wang, L L

    2014-01-01

    Forest diseases and pests cause enormous economic losses and ecological damage every year in China. To prevent and control forest diseases and pests, the key is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area, and is particularly effective in areas where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the Pan/Tilt/Zoom of a digital camera remotely, to focus on a suspected area. The disaster positioning subsystem implements the positioning algorithm. It is proved that the positioning system can observe forest diseases and pests in practical application for forest departments

  2. A Study of the Usability of Ergonomic Camera Vest Based on Spirometry Parameters

    Directory of Open Access Journals (Sweden)

    Shirazeh Arghami

    2017-12-01

    Background: Being a cameraman is one of those occupations that expose people to musculoskeletal disorders (MSDs). Therefore, control measures should be taken to protect cameramen's health. To solve the given problem, a vest was designed for cameramen to prevent MSDs by reducing the pressure and contact stress while carrying the camera on the shoulder. However, the usability of the vest had to be considered. The aim of this study was to determine the usability of the proposed vest using spirometry parameters as an indicator. Methods: In this experimental study, 120 spirometry experiments were conducted with 40 male volunteer subjects, with and without the designed vest. Data were analyzed using SPSS-16 with the dependent t-test, at the 0.05 significance level. Results: Based on the spirometry results, there is a significant difference in Forced Vital Capacity (FVC), Forced Expiratory Volume (FEV1) and heart rate between activity with and without the vest (p<0.001). Conclusion: The results suggest that the promising impact of this invention on the health of cameramen makes this domestically designed camera vest a good option for mass production.

  3. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method is also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
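
    The block match step can be sketched as an exhaustive sum-of-absolute-differences (SAD) search. The block size, search window and full-search strategy are illustrative defaults; fast block-matching variants prune this search:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def block_match(prev, cur, block=8, search=4):
    """Full-search block matching: for each block of the current frame, find
    the displacement (dy, dx) into the previous frame with minimum SAD."""
    H, W = cur.shape
    motion = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            tgt = cur[y:y + block, x:x + block]
            best, best_cost = (0, 0), float('inf')
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cost = sad(prev[yy:yy + block, xx:xx + block], tgt)
                        if cost < best_cost:
                            best_cost, best = cost, (dy, dx)
            motion[(y, x)] = best
    return motion
```

    For a frame that is a pure shift of its predecessor, interior blocks recover exactly the negated shift, which is a handy correctness check.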

  4. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis. This may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to the literature average. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  5. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    Science.gov (United States)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can entail long acquisition times due to the need for raster scanning. To reduce acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  6. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  7. MULTIMODAL IMAGING OF ANGIOID STREAKS ASSOCIATED WITH TURNER SYNDROME.

    Science.gov (United States)

    Chiu, Bing Q; Tsui, Edmund; Hussnain, Syed Amal; Barbazetto, Irene A; Smith, R Theodore

    2018-02-13

    To report multimodal imaging in a novel case of angioid streaks in a patient with Turner syndrome with 10-year follow-up. Case report of a patient with Turner syndrome and angioid streaks followed at Bellevue Hospital Eye Clinic from 2007 to 2017. Fundus photography, fluorescein angiography, and optical coherence tomography angiography were obtained. Angioid streaks with choroidal neovascularization were noted in this patient with Turner syndrome without other systemic conditions previously correlated with angioid streaks. We report a case of angioid streaks with choroidal neovascularization in a patient with Turner syndrome. We demonstrate that angioid streaks, previously associated with pseudoxanthoma elasticum, Ehlers-Danlos syndrome, Paget disease of bone, and hemoglobinopathies, may also be associated with Turner syndrome, and may continue to develop choroidal neovascularization, suggesting the need for careful ophthalmic examination in these patients.

  8. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    Directory of Open Access Journals (Sweden)

    José Manuel Molina

    2012-09-01

    Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation does not allow taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the novel sensors' knowledge. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body and transitive part-based representation and inference are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.

  9. Improvement of the GRACE star camera data based on the revision of the combination method

    Science.gov (United States)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models. This implied that improvements at the sensor data level can still contribute significantly to approaching the GRACE baseline accuracy. A recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for the processing of the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B with focus on the method for combining the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solution by applying two different combination methods to the SCA Level-1A data. The first method introduces the information about the anisotropic accuracy of the star camera measurement in terms of a weighting matrix; this method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions, and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by a factor of about 3-4. The data analysis revealed that the reason is an incorrect implementation of algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement within the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data are essential for the processing of the other GRACE instrument data.
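The weighted combination of the two SCA heads' attitudes described in this record can be illustrated with a standard quaternion-averaging step (the principal-eigenvector method often attributed to Markley). The function and the sample quaternions below are illustrative sketches, not the JPL Level-1A routines:

```python
import numpy as np

def average_quaternions(quats, weights):
    """Weighted quaternion average: principal eigenvector of the
    weighted sum of quaternion outer products (Markley's method).
    Robust to the q/-q sign ambiguity of quaternions."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)
    _, vecs = np.linalg.eigh(M)      # ascending eigenvalues
    return vecs[:, -1]               # eigenvector of the largest one

# Two synthetic head attitudes, nearly aligned (illustrative values).
q1 = np.array([1.0, 0.0, 0.0, 0.0])
q2 = np.array([0.9999, 0.01, 0.0, 0.0])
q2 /= np.linalg.norm(q2)
q_avg = average_quaternions([q1, q2], [1.0, 1.0])
```

In the GRACE context the weights would encode the anisotropic accuracy of each head (boresight vs. roll), which is what the weighting-matrix method generalizes.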

  10. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    Science.gov (United States)

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

Error propagation into Earth's atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we propose a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a bright index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The results show that the influence of error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than that by the CLAUDIA cloud mask, whereas the influence of error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
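The GSC cloud cover step can be sketched as follows. The paper's exact sky index and bright index definitions are not reproduced in the abstract, so the (B − R)/(B + R) index and its threshold below are assumptions for illustration only:

```python
import numpy as np

def cloud_cover(rgb, sky_thresh=0.12):
    """Estimate the cloud cover fraction of a sky-camera RGB image.
    Uses a (B - R)/(B + R) 'sky index': clear blue sky gives high
    values, grey/white cloud gives values near zero. Both the index
    form and the threshold are illustrative, not the paper's."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    sky_index = (b - r) / np.maximum(b + r, 1e-6)
    cloudy = sky_index < sky_thresh
    return cloudy.mean()

# Synthetic test image: left half blue sky, right half grey cloud.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (40, 90, 200)    # blue sky pixels
img[:, 5:] = (180, 180, 185)  # grey cloud pixels
cc = cloud_cover(img)
```

Comparing such a per-image fraction against the satellite cloud flag over the camera's field of view is the essence of the validation described above.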

  11. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Rizwan Ali Naqvi

    2018-02-01

Full Text Available A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  12. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  13. Fast time-of-flight camera based surface registration for radiotherapy patient positioning.

    Science.gov (United States)

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-01

    This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. A novel preregistration algorithm, based on translation and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface registration technologies. Its main benefit is the
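The surface registration described above alternates correspondence search with a closed-form rigid fit. The SVD-based fit at the core of each ICP iteration can be sketched as below; this is a generic Kabsch solver on synthetic points, not the authors' accelerated ICP or their feature-based preregistration:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ≈ Q[i]
    (the closed-form step at the core of each ICP iteration)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic surface points and a known rigid motion to recover.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
ang = 0.1
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_fit(P, Q)
```

In a full ICP loop this fit would be re-run after each nearest-neighbour correspondence update; the preregistration features described above supply the initial alignment that keeps that loop from a bad local minimum.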

14. X-ray streak crystal spectrography

    International Nuclear Information System (INIS)

    Kauffman, R.L.; Brown, T.; Medecki, H.

    1983-01-01

We have built an x-ray streaked crystal spectrograph for making time-resolved x-ray spectral measurements. This instrument can access Bragg angles from 11° to 38° and x-ray spectra from 200 eV to greater than 10 keV. We have demonstrated resolving powers E/δE > 200 at 1 keV and time resolution of less than 20 psec. A description of the instrument and an example of the data are given.
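The relation between the accessible Bragg angles and the spectral coverage follows from the Bragg condition nλ = 2d sin θ with E[keV] ≈ 12.398/λ[Å]. The sketch below assumes a PET crystal (2d ≈ 8.742 Å) purely as an example, since the record does not name the crystals used:

```python
import math

def bragg_energy_keV(two_d_angstrom, theta_deg, order=1):
    """Photon energy selected by the Bragg condition
    n * lambda = 2d * sin(theta), with E[keV] = 12.398 / lambda[A]."""
    lam = two_d_angstrom * math.sin(math.radians(theta_deg)) / order
    return 12.398 / lam

# Assumed example crystal: PET, 2d ~ 8.742 Angstrom.
e_hi = bragg_energy_keV(8.742, 11.0)   # smallest angle -> highest energy
e_lo = bragg_energy_keV(8.742, 38.0)   # largest angle  -> lowest energy
```

A single crystal thus covers roughly a factor-of-three energy band over the 11°-38° angular range; reaching 200 eV to above 10 keV requires crystals with different 2d spacings.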

  15. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
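The triangulation step mentioned above can be illustrated with a standard linear (DLT) two-view solver. The camera matrices and the point below are synthetic, and this is a generic sketch rather than the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coords (u, v).
    The null vector of A (last row of Vt) is the homogeneous point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras: identity pose and a 1-unit baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1h = P1 @ np.append(X_true, 1.0); x1 = x1h[:2] / x1h[2]
x2h = P2 @ np.append(X_true, 1.0); x2 = x2h[:2] / x2h[2]
X_est = triangulate(P1, P2, x1, x2)
```

In the endoscopic setting the "two views" are consecutive frames related by the estimated camera motion, which is where the reported triangulation error enters.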

  16. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  17. An Approach to Evaluate Stability for Cable-Based Parallel Camera Robots with Hybrid Tension-Stiffness Properties

    Directory of Open Access Journals (Sweden)

    Huiling Wei

    2015-12-01

    Full Text Available This paper focuses on studying the effect of cable tensions and stiffness on the stability of cable-based parallel camera robots. For this purpose, the tension factor and the stiffness factor are defined, and the expression of stability is deduced. A new approach is proposed to calculate the hybrid-stability index with the minimum cable tension and the minimum singular value. Firstly, the kinematic model of a cable-based parallel camera robot is established. Based on the model, the tensions are solved and a tension factor is defined. In order to obtain the tension factor, an optimization of the cable tensions is carried out. Then, an expression of the system's stiffness is deduced and a stiffness factor is defined. Furthermore, an approach to evaluate the stability of the cable-based camera robots with hybrid tension-stiffness properties is presented. Finally, a typical three-degree-of-freedom cable-based parallel camera robot with four cables is studied as a numerical example. The simulation results show that the approach is both reasonable and effective.

18. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras make it possible to obtain close-range aerial photographs but - when an accurate nadir-waypoint flight is impossible due to choppy or windy weather conditions - entail the problem that two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited photo base of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
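The dependence of DTM accuracy on flight altitude and base-height ratio follows from normal-case stereo error propagation, σ_Z = Z²/(B·f)·σ_d. The base, focal length, and matching error below are assumed for illustration, not taken from the survey:

```python
def depth_error(height_m, base_m, focal_px, disparity_err_px=0.5):
    """Normal-case stereo depth error propagation:
        sigma_Z = Z^2 / (B * f) * sigma_d
    Accuracy degrades with the square of the flying height Z and
    inversely with the stereo base B (i.e., the base-height ratio)."""
    return height_m ** 2 / (base_m * focal_px) * disparity_err_px

# Assumed numbers: 20 cm stereo base, ~3000 px focal length.
err_30m = depth_error(30.0, 0.2, 3000.0)   # lower flight altitude
err_60m = depth_error(60.0, 0.2, 3000.0)   # doubled altitude
```

Doubling the flight altitude with a fixed base quadruples the expected height error, which is exactly why the short photo base of the stereo rig ties DTM accuracy to the UAV altitude.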

  19. A novel camera type for very high energy gamma-ray astronomy based on Geiger-mode avalanche photodiodes

    International Nuclear Information System (INIS)

    Anderhub, H; Biland, A; Boller, A; Braun, I; Commichau, S; Commichau, V; Dorner, D; Gendotti, A; Grimm, O; Gunten, H von; Hildebrand, D; Horisberger, U; Kraehenbuehl, T; Kranich, D; Lorenz, E; Lustermann, W; Backes, M; Neise, D; Bretz, T; Mannheim, K

    2009-01-01

    Geiger-mode avalanche photodiodes (G-APD) are promising new sensors for light detection in atmospheric Cherenkov telescopes. In this paper, the design and commissioning of a 36-pixel G-APD prototype camera is presented. The data acquisition is based on the Domino Ring Sampling (DRS2) chip. A sub-nanosecond time resolution has been achieved. Cosmic-ray induced air showers have been recorded using an imaging mirror setup, in a self-triggered mode. This is the first time that such measurements have been carried out with a complete G-APD camera.

  20. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
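The adaptive selection between the visible light and FIR candidates can be illustrated with a toy fuzzy inference step. The inputs, membership functions, and rules below are invented for this sketch and are not the paper's FIS:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_camera(visible_contrast, fir_contrast):
    """Toy two-rule Mamdani-style inference (illustrative only):
    rule 1: visible candidate contrast HIGH -> prefer visible (score 1)
    rule 2: FIR candidate contrast HIGH     -> prefer FIR     (score 0)
    Defuzzified by a weighted average of the rule outputs."""
    vis_high = tri(visible_contrast, 0.3, 1.0, 1.7)
    fir_high = tri(fir_contrast, 0.3, 1.0, 1.7)
    den = vis_high + fir_high
    score = (vis_high * 1.0 + fir_high * 0.0) / den if den > 0 else 0.5
    return "visible" if score >= 0.5 else "fir"

choice_day = select_camera(0.9, 0.4)    # daytime: visible image is crisp
choice_night = select_camera(0.2, 0.8)  # nighttime: FIR image is crisp
```

The selected candidate would then be passed to the CNN verifier, so only one camera image is processed per detection instead of both.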

  1. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    Science.gov (United States)

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

To assess a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  2. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    Science.gov (United States)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  3. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    Directory of Open Access Journals (Sweden)

    Thomas C. Wilkes

    2016-10-01

Full Text Available Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  4. Finger Angle-Based Hand Gesture Recognition for Smart Infrastructure Using Wearable Wrist-Worn Camera

    Directory of Open Access Journals (Sweden)

    Feiyu Chen

    2018-03-01

Full Text Available The arising of domestic robots in smart infrastructure has raised demands for intuitive and natural interaction between humans and robots. To address this problem, a wearable wrist-worn camera (WwwCam) is proposed in this paper. With the capability of recognizing human hand gestures in real-time, it enables services such as controlling mopping robots, mobile manipulators, or appliances in smart-home scenarios. The recognition is based on finger segmentation and template matching. A distance transformation algorithm is adopted and adapted to robustly segment fingers from the hand. Based on fingers’ angles relative to the wrist, a finger angle prediction algorithm and a template matching metric are proposed. All possible gesture types of the captured image are first predicted, and then evaluated and compared to the template image to achieve the classification. Unlike other template matching methods that rely heavily on a large training set, this scheme possesses high flexibility since it requires only one image as the template, and can classify gestures formed by different combinations of fingers. In the experiment, it successfully recognized ten finger gestures from number zero to nine defined by American Sign Language with an accuracy up to 99.38%. Its performance was further demonstrated by manipulating a robot arm using the implemented algorithms and WwwCam to transport and pile up wooden building blocks.
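The finger segmentation step relies on a distance transform: palm pixels lie far from the background while thin finger pixels stay close to it, so thresholding the transform separates the two. A minimal brute-force version on a toy hand mask (library routines such as SciPy's distance_transform_edt are far faster; the mask geometry is invented for illustration):

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for each foreground
    pixel, the distance to the nearest background pixel."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    out = np.zeros(mask.shape)
    for y, x in fg:
        d = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1))
        out[y, x] = d.min()
    return out

# Toy hand mask: a wide 'palm' blob with a one-pixel-wide 'finger'.
mask = np.zeros((9, 9), dtype=bool)
mask[3:8, 1:6] = True   # palm blob
mask[5, 6:9] = True     # thin finger sticking out
dt = distance_transform(mask)
```

Thresholding `dt` at a value between the finger width and the palm radius keeps only the palm, and subtracting that from the original mask isolates the fingers whose angles are then measured.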

  5. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    Directory of Open Access Journals (Sweden)

    Hotaka Takizawa

    2017-02-01

    Full Text Available The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.
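The spot-to-spot image matching relies on descriptor correspondences; a common way to prune ambiguous SIFT matches is Lowe's nearest-neighbour ratio test, sketched below on toy descriptors (this is a generic illustration, not the system's matching code):

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    keep a match only when the best distance is clearly smaller than
    the second best, pruning ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Toy 4-D "descriptors": b reuses a's first two with small noise,
# plus one unrelated descriptor that should not match anything.
a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([[0.98, 0.02, 0.0, 0.0],
              [0.02, 0.97, 0.01, 0.0],
              [5.0, 5.0, 5.0, 5.0]])
m = match_ratio_test(a, b)
```

A stored spot image would be declared "recognized" when the number of surviving matches against the current camera frame exceeds a threshold, triggering playback of the associated voice memo.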

  6. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera

    Directory of Open Access Journals (Sweden)

    Edouard Auvinet

    2015-02-01

Full Text Available Background: Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. Methods: To overcome these issues, this paper proposes a new asymmetry index, which uses an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™) output. This asymmetry index directly uses depth images provided by the Kinect™ without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill walking normally and then via an artificially-induced gait asymmetry with a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. Results: The proposed longitudinal index distinguished asymmetrical gait (p < 0.001), while other symmetry indices based on spatiotemporal gait parameters failed using such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by Kinect™ and the ground truth of this index measured by motion capture is 0.968. Conclusion: This gait asymmetry index measured with a Kinect™ is low cost, easy to use and is a promising development for clinical gait analysis.

  7. New lower-limb gait asymmetry indices based on a depth camera.

    Science.gov (United States)

    Auvinet, Edouard; Multon, Franck; Meunier, Jean

    2015-02-24

Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. To overcome these issues, this paper proposes a new asymmetry index, which uses an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™) output. This asymmetry index directly uses depth images provided by the Kinect™ without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill walking normally and then via an artificially-induced gait asymmetry with a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. The proposed longitudinal index distinguished asymmetrical gait (p < 0.001), while other symmetry indices based on spatiotemporal gait parameters failed using such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by Kinect™ and the ground truth of this index measured by motion capture is 0.968. This gait asymmetry index measured with a Kinect™ is low cost, easy to use and is a promising development for clinical gait analysis.
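The core idea of comparing lower-limb depth values on the two sides of the body without joint localization can be caricatured as follows; the index definition, the midline handling, and the synthetic depth frames are our illustration, not the paper's formula:

```python
import numpy as np

def longitudinal_asymmetry(depth_frames, midline):
    """Illustrative longitudinal asymmetry index: per frame, mirror
    the right half of a lower-limb depth map about the walking
    midline and accumulate the mean absolute depth difference
    against the left half. A symmetric gait yields values near zero."""
    total = 0.0
    for d in depth_frames:
        left = d[:, :midline]
        right = np.fliplr(d[:, midline:midline + left.shape[1]])
        total += np.abs(left - right).mean()
    return total / len(depth_frames)

# Symmetric vs. asymmetric synthetic "depth" sequences (4x6 frames).
sym = [np.tile(np.array([1.0, 2, 3, 3, 2, 1]), (4, 1)) for _ in range(5)]
asym = [f + np.pad(np.ones((4, 2)), ((0, 0), (0, 4))) * 0.5 for f in sym]
i_sym = longitudinal_asymmetry(sym, 3)
i_asym = longitudinal_asymmetry(asym, 3)
```

The thick-sole perturbation used in the study plays the role of the offset added to `asym` here: it shifts one limb's depth profile and drives the index away from zero.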

  8. Phone camera detection of glucose blood level based on magnetic particles entrapped inside bubble wrap.

    Science.gov (United States)

    Martinkova, Pavla; Pohanka, Miroslav

    2016-12-18

Glucose is an important diagnostic biochemical marker of diabetes, but also of organophosphate, carbamate, acetaminophen or salicylate poisoning. Hence, the development of accurate and fast detection assays remains a priority in biomedical research. A glucose sensor based on magnetic particles (MPs) with the immobilized enzymes glucose oxidase (GOx) and horseradish peroxidase (HRP) was developed, and the GOx-catalyzed reaction was visualized by a smartphone-integrated camera. An exponential-decay concentration curve with a correlation coefficient of 0.997 and a limit of detection of 0.4 mmol/l was achieved. Interfering and matrix substances were tested for possible influence on the assay, and no effect of the tested substances was observed. Spiked plasma samples were also measured, and no influence of the plasma matrix on the assay was found. The presented assay showed results consistent with the reference method (standard spectrophotometry based on the enzymes glucose oxidase and peroxidase in plastic cuvettes), with a linear dependence and a correlation coefficient of 0.999 in the concentration range between 0 and 4 mmol/l. On the basis of the measured results, the method was considered a highly specific, accurate and fast assay for the detection of glucose.
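The exponential-decay calibration curve reported above can be fitted and inverted as sketched below; the curve coefficients and sample points are invented for the illustration, not the paper's data:

```python
import numpy as np

# Synthetic calibration: camera channel intensity decays
# exponentially with glucose concentration (mmol/L), mirroring the
# exponential-decay calibration reported in the abstract.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
a_true, b_true = 200.0, 0.6            # assumed curve parameters
signal = a_true * np.exp(-b_true * conc)

# Fit y = a * exp(-b * x) by linear regression on log(y).
b_fit, log_a_fit = np.polyfit(conc, np.log(signal), 1)
a_fit = np.exp(log_a_fit)
k_fit = -b_fit

def to_concentration(y):
    """Invert the fitted calibration curve for an unknown sample."""
    return -np.log(y / a_fit) / k_fit

c_est = to_concentration(a_true * np.exp(-b_true * 1.5))
```

With noisy measurements a nonlinear least-squares fit (e.g. scipy.optimize.curve_fit) would be preferable to the log-linearization, which distorts the error weighting.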

  9. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    Science.gov (United States)

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on the visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  10. Observations of temporal change of nighttime cloud cover from Himawari 8 and ground-based sky camera over Chiba, Japan

    Science.gov (United States)

    Lagrosas, N.; Gacal, G. F. B.; Kuze, H.

    2017-12-01

    Detection of nighttime cloud from Himawari 8 is implemented using the difference of the digital numbers from bands 13 (10.4 µm) and 7 (3.9 µm). A digital-number difference of -1.39×10^4 can be used as a threshold to separate clouds from clear-sky conditions. For ground-based observations over Chiba, a digital camera (Canon PowerShot A2300) takes images of the sky every 5 minutes at an exposure time of 5 s at the Center for Environmental Remote Sensing, Chiba University. From these images, cloud cover values are obtained using a threshold algorithm (Gacal et al., 2016). Ten-minute nighttime cloud cover values from the two datasets are compared and analyzed from 29 May to 05 June 2017 (20:00-03:00 JST). When compared with lidar data, the camera can detect thick high-level clouds up to 10 km. The results show that during clear-sky conditions (02-03 June), both camera and satellite show 0% cloud cover. During cloudy conditions (05-06 June), the camera shows almost 100% cloud cover while satellite values range from 60 to 100%; these low satellite values can be attributed to the presence of low-level thin clouds (~2 km above the ground) observed by the National Institute for Environmental Studies lidar located inside Chiba University. This difference shows that the camera can produce accurate cloud cover values for low-level clouds that are sometimes not detected by the satellite. The opposite occurs when high-level clouds are present (01-02 June): the derived satellite cloud cover is almost 100% throughout the night, while the ground-based camera shows values fluctuating between 10 and 100% over the same interval, which can be attributed to thin clouds at around 6 km and low-level clouds (~1 km). Since the camera relies on reflected city lights, it is possible that the high-level thin clouds are not observed by the camera but are detected by the satellite.
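    The band-difference test above reduces to a per-pixel threshold followed by a fractional count. A minimal sketch (assuming cloudy pixels fall below the quoted threshold; the toy scene values are invented):

```python
import numpy as np

THRESHOLD = -1.39e4  # digital-number difference separating cloud from clear sky

def cloud_cover(band13, band7, threshold=THRESHOLD):
    """Return fractional cloud cover from two co-registered band images."""
    diff = band13.astype(np.int64) - band7.astype(np.int64)
    mask = diff < threshold          # pixels flagged as cloudy
    return mask.mean()

# 2x2 toy scene: top row cloudy (difference -15000), bottom row clear (-1000).
b13 = np.array([[1000, 1000], [20000, 20000]])
b7  = np.array([[16000, 16000], [21000, 21000]])
print(f"cloud cover = {cloud_cover(b13, b7):.0%}")  # -> cloud cover = 50%
```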

  11. Camera-based microswitch technology to monitor mouth, eyebrow, and eyelid responses of children with profound multiple disabilities

    NARCIS (Netherlands)

    Lancioni, G.E.; Bellini, D.; Oliva, D.; Singh, N.N.; O'Reilly, M.F.; Sigafoos, J.; Lang, R.B.; Didden, H.C.M.

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for

  12. Partially slotted crystals for a high-resolution γ-camera based on a position sensitive photomultiplier

    International Nuclear Information System (INIS)

    Giokaris, N.; Loudos, G.; Maintas, D.; Karabarbounis, A.; Lembesi, M.; Spanoudaki, V.; Stiliaris, E.; Boukis, S.; Gektin, A.; Pedash, V.; Gayshan, V.

    2005-01-01

    Partially slotted crystals have been designed, constructed, and used to evaluate the spatial-resolution performance of a γ-camera based on a position-sensitive photomultiplier. It is shown that the resolution obtained with such a crystal is only slightly worse than that obtained with a fully pixelized crystal, whose cost, however, is much higher.

  13. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    Science.gov (United States)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on glacier-flow variability by detecting speed and thickness changes, tracking features, and supplying model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we operated automated ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers, collecting one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono- and stereo-photogrammetry, together with digital image processing techniques, provides the theoretical and practical foundations for processing them. The time-lapse images from west Greenland capture various phenomena but also suffer numerous problems: rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor-platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other challenges include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. A further obstacle is that non-metric digital cameras exhibit large lens distortion that must be compensated for precise photogrammetric use, and the massive number of images must be processed in a computationally efficient way. We meet these challenges by 1) identifying problems in the candidate photogrammetric processes, 2) categorizing them by feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and the analysis of regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous

  14. Distributed FPGA-based smart camera architecture for computer vision applications

    OpenAIRE

    Bourrasset, Cédric; Maggiani, Luca; Sérot, Jocelyn; Berry, François; Pagano, Paolo

    2013-01-01

    Smart camera networks (SCNs) raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms, and power management. Furthermore, application logic in an SCN is not centralized but spread among the network nodes, meaning that each node must process images to extract significant features and aggregate data to understand the surrounding environment. In this context, smart cameras have first embedded general pu...

  15. Approaches to diagnosis and detection of cassava brown streak ...

    African Journals Online (AJOL)

    Cassava brown streak disease (CBSD) has been a problem in the East African coastal cassava growing areas for more than 70 years. The disease is caused by successful infection with Cassava Brown Streak Virus (CBSV) (Family, Potyviridae: Genus, Ipomovirus). Diagnosis of CBSD has for long been primarily leaf ...

  16. Convolutional Neural Network-Based Embarrassing Situation Detection under Camera for Social Robot in Smart Homes.

    Science.gov (United States)

    Yang, Guanci; Yang, Jing; Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo

    2018-05-12

    Recent research has shown that the ubiquitous use of cameras and voice-monitoring equipment in a home environment can raise privacy concerns and affect human mental health, which can be a major obstacle to the deployment of smart-home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. Focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN) that merges areas with similar features and determines the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes and developed training and validation datasets containing 2580 and 360 images, respectively, along with three types of experiments over four test datasets comprising 960 sample images. Thirdly, we analyzed how the number of training iterations affects prediction performance and explored the relationship between recognition accuracy and learning rate. Our results show that the proposed privacy detection system can recognize the designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, a comparison among RODA-FY, Inception V3, and YOLO indicates that RODA-FY outperforms the other models in recognition accuracy.
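    The B-RPN merging rule is specific to the paper and not reproduced above; as a hedged illustration of the general idea, here is a simple greedy IoU-based merge of overlapping detections (box coordinates are invented, boxes given as (x1, y1, x2, y2)):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def merge_boxes(boxes, thr=0.5):
    """Greedily fuse boxes whose IoU exceeds thr into their bounding union."""
    merged = []
    for b in boxes:
        b = list(b)
        for m in merged:
            if iou(b, m) > thr:
                m[0], m[1] = min(m[0], b[0]), min(m[1], b[1])
                m[2], m[3] = max(m[2], b[2]), max(m[3], b[3])
                break
        else:
            merged.append(b)
    return merged

dets = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
print(merge_boxes(dets))  # -> [[10, 10, 52, 52], [100, 100, 140, 140]]
```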

  17. An energy-optimized collimator design for a CZT-based SPECT camera

    International Nuclear Information System (INIS)

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2016-01-01

    In single photon emission computed tomography (SPECT), it is challenging to maintain reasonable performance using one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with photon energy. Frequent collimator exchanges are therefore inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator that, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted on a CZT detector. With pixel-geometry-matching collimation, the pitch of the collimator holes was fixed to match the pixel size of the CZT detector throughout this work. Four variables (hole shape, hole length, hole radius/width, and source-to-collimator distance) were carefully studied. Scatter and penetration in the collimator, and the sensitivity and spatial resolution of the system, were assessed with respect to these variables for four radionuclides: 57Co, 99mTc, 123I, and 111In. An optimal collimator was then chosen to maximize the total relative sensitivity (TRS) for the four radionuclides while other performance parameters, such as scatter, penetration, and spatial resolution, were benchmarked against prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, and 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio, and recovery ratio. This study demonstrates the promise of the proposed energy-optimized collimator for use in a CZT-based gamma camera, with comparable or even better imaging performance versus

  18. An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    Science.gov (United States)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeat photography from digital cameras is a useful and voluminous data source for phenological analysis, but processing and mining phenological data remain a major challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for extracting and analyzing vegetation phenological parameters. Its main functions include visualization of phenological site distributions, ROI (region of interest) selection, vegetation index calculation and visualization, data filtering, growth-trajectory fitting, and phenology parameter extraction. As an example, the system was used to process the long-term observational photography data from the Freemanwood site in 2013. The results show that: (1) the system is capable of analyzing large datasets using a distributed framework; (2) combining multiple parameter-extraction and growth-curve-fitting methods effectively extracts the key phenology parameters, although different combinations give discrepant results in particular study areas. Vegetation with a single growth peak is suited to fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
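    The double logistic model mentioned above can be sketched as a curve fit to a greenness time series; the parameter names, the synthetic "GCC" series, and the interpretation of the inflection days as season start/end are illustrative assumptions, not the system's actual implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-logistic growth trajectory for a single-peak vegetation index:
# baseline vi0, amplitude amp, spring/autumn inflection days s1/s2, slopes r1/r2.
def double_logistic(t, vi0, amp, s1, r1, s2, r2):
    rise = 1.0 / (1.0 + np.exp(-r1 * (t - s1)))
    fall = 1.0 / (1.0 + np.exp(-r2 * (t - s2)))
    return vi0 + amp * (rise - fall)

doy = np.arange(1, 366, 5.0)                       # day of year, 5-day steps
true = (0.32, 0.12, 120.0, 0.10, 280.0, 0.08)      # synthetic "truth"
rng = np.random.default_rng(1)
gcc = double_logistic(doy, *true) + rng.normal(0, 0.002, doy.size)

p0 = (0.3, 0.1, 100.0, 0.05, 250.0, 0.05)          # rough initial guess
popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=20000)
sos, eos = popt[2], popt[4]                        # inflection days
print(f"SOS ~ day {sos:.0f}, EOS ~ day {eos:.0f}")
```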

  19. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising; this strategy generates many noise-caused color artifacts during demosaicking that are hard to remove afterwards. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the interlaced red, green, and blue mosaic pattern, yet a well-designed "denoising first, demosaicking later" scheme offers advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm that works directly on the CFA data, using a supporting window to analyze local image statistics. By exploiting the spatial and spectral correlations in the CFA image, the proposed method effectively suppresses noise while preserving color edges and details. Experiments on both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in both objective measurement and visual evaluation.
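    The core PCA idea can be illustrated on a plain grayscale image. Note the paper operates directly on the CFA mosaic with a spatially adaptive local window; this global, grayscale version only sketches the underlying mechanism of projecting patches onto principal components and discarding the low-variance, noise-dominated ones:

```python
import numpy as np

def pca_denoise(img, patch=4, keep=4):
    """Denoise by keeping only the `keep` strongest principal components
    of the set of non-overlapping patch vectors."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    # Collect non-overlapping patches as row vectors.
    X = (img[:ph*patch, :pw*patch]
         .reshape(ph, patch, pw, patch)
         .swapaxes(1, 2)
         .reshape(-1, patch*patch))
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    # Reconstruct from the leading components only.
    Xd = (U[:, :keep] * S[:keep]) @ Vt[:keep] + mu
    return (Xd.reshape(ph, pw, patch, patch)
              .swapaxes(1, 2)
              .reshape(ph*patch, pw*patch))

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.sin(xx / 6.0) + np.cos(yy / 9.0)        # smooth synthetic image
noisy = clean + rng.normal(0, 0.3, clean.shape)
den = pca_denoise(noisy, patch=4, keep=4)
mse = lambda a, b: float(((a - b) ** 2).mean())
print(f"noisy MSE {mse(noisy, clean):.4f} -> denoised MSE {mse(den, clean):.4f}")
```

    On this smooth test image the reconstruction retains only a fraction of the noise energy, so the denoised mean squared error drops well below the noisy one.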

  20. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method directly controls the original system without any state or input transformation, which effectively avoids singularity. Under the proposed control law, it is rigorously proved that all states of the closed-loop system can be stabilized to a prescribed, arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to the practical stabilization problem of mobile robots with uncertain parameters and angle-measurement disturbance, as considered in the literature, e.g., Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  1. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only the hardware and software included in the manufacturer's standard package (in particular the frame-grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image-capturing process. With this method it is possible to obtain raw images directly from the frame-grabber card at an acquisition rate of 15 images per second, whereas the manufacturer's original configuration allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  2. A line feature-based camera tracking method applicable to nuclear power plant environment

    International Nuclear Information System (INIS)

    Yan, Weida; Ishii, Hirotake; Shimoda, Hiroshi; Izumi, Masanori

    2014-01-01

    Augmented reality (AR), which can support maintenance and decommissioning work in a nuclear power plant (NPP) to improve efficiency and reduce human error, is expected to find practical use in NPPs. AR depends on tracking technology that estimates the 3D position and orientation of users in real time, but because of the complexity of the NPP environment, practical tracking in the large spaces of an NPP is difficult. This study attempts to develop a tracking method practical for use in an NPP. Marker tracking is a legacy tracking method, but the preparation work it requires is onerous. Therefore, this study developed and evaluated a natural-feature-based camera tracking method that demands less preparation and is applicable in an NPP environment. The method registers natural features as landmarks; during tracking, natural features existing in the NPP environment can be registered automatically as landmarks, so it is in principle possible to expand the tracking area to cover a wide environment. The evaluation results show that the proposed tracking method can potentially support several kinds of field work in an NPP environment while reducing the preparation work required by the marker tracking method. (author)

  3. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe correction, blowing, carding, and spinning. The carding process transforms a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, correspondence between the color of the tow and the target cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved effective in reliably assessing color correspondence in real time.

  4. Safety impacts of red light cameras at signalized intersections based on cellular automata models.

    Science.gov (United States)

    Chai, C; Wong, Y D; Lum, K M

    2015-01-01

    This study applies a simulation technique to evaluate the hypothesis that red light cameras (RLCs) exert important effects on accident risk. Conflict occurrences are generated by simulating vehicular interactions with an improved cellular automata (CA) model and are compared at intersections with and without RLCs to assess the impact of RLCs on several conflict types under various traffic conditions. The CA model is calibrated and validated against field observations at approaches with and without RLCs. Simulation experiments are conducted for RLC and non-RLC intersections with different geometric layouts and traffic demands, and the resulting conflict occurrences are analyzed. The comparison of simulated conflict occurrences shows favorable safety impacts of RLCs on crossing conflicts and unfavorable impacts on rear-end conflicts during red/amber phases; corroborative results are found from a broad analysis of accident occurrence. RLCs are thus found to have a mixed effect on accident risk at signalized intersections: crossing collisions are reduced, whereas rear-end collisions may increase. The specially developed CA model is found to be a feasible safety-assessment tool.
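    The paper's improved CA model is not specified in the abstract; as a hedged sketch of the class of model it builds on, here is the classic Nagel-Schreckenberg single-lane traffic cellular automaton (all parameter values invented): per step, each vehicle accelerates, brakes to the gap ahead, randomly dawdles, then moves.

```python
import numpy as np

def nasch_step(pos, vel, road_len, vmax=5, p_slow=0.3, rng=None):
    """One update of the Nagel-Schreckenberg CA on a circular road."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells to next car
    vel = np.minimum(vel + 1, vmax)                  # 1. accelerate
    vel = np.minimum(vel, gaps)                      # 2. brake to avoid collision
    dawdle = (rng.random(vel.size) < p_slow) & (vel > 0)
    vel = np.where(dawdle, vel - 1, vel)             # 3. random slowdown
    pos = (pos + vel) % road_len                     # 4. move
    return pos, vel

rng = np.random.default_rng(3)
road_len, n_cars = 100, 20
pos = rng.choice(road_len, n_cars, replace=False)
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, road_len, rng=rng)
print(f"mean speed after 200 steps: {vel.mean():.2f} cells/step")
```

    Because velocities are capped by the gap ahead, no two vehicles ever occupy the same cell, which is the property conflict-counting simulations rely on.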

  5. Colorimetric analyzer based on mobile phone camera for determination of available phosphorus in soil.

    Science.gov (United States)

    Moonrungsee, Nuntaporn; Pencharee, Somkid; Jakmunee, Jaroon

    2015-05-01

    A field-deployable colorimetric analyzer based on an Android mobile phone was developed for the determination of available phosphorus in soil. An inexpensive mobile phone with an embedded digital camera was used to photograph the chemical solution under test. The method involves the reaction of phosphorus (in orthophosphate form), ammonium molybdate, and potassium antimonyl tartrate to form phosphomolybdic acid, which is reduced by ascorbic acid to the intensely colored molybdenum blue. A software program was developed for the phone to record and analyze the RGB color of the picture, and a light-tight box with LED illumination control was fabricated to improve the precision and accuracy of the measurement. Under the optimum conditions, a calibration graph was created by measuring the blue-color intensity of a series of standard phosphorus solutions (0.0-1.0 mg P L(-1)); the calibration equation obtained was then retained by the program for the analysis of sample solutions. The results obtained from the proposed method agreed well with the spectrophotometric method, with a detection limit of 0.01 mg P L(-1), and a sample throughput of about 40 h(-1) was achieved. The developed system provided good accuracy for the determination of the phosphorus nutrient. Copyright © 2015 Elsevier B.V. All rights reserved.
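    The calibration step described above amounts to a straight-line fit of blue-channel intensity against standard concentrations, then inversion for an unknown sample. A minimal sketch (all intensity and concentration values below are invented for illustration):

```python
import numpy as np

# Calibration standards (mg P / L) and their mean blue-channel intensities.
stds = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
blue = np.array([200, 182, 165, 146, 130, 111])   # darker blue = more phosphorus

# Least-squares line: intensity = slope * concentration + intercept.
slope, intercept = np.polyfit(stds, blue, 1)

def concentration(intensity):
    """Invert the calibration line for an unknown sample."""
    return (intensity - intercept) / slope

sample = concentration(155.0)
print(f"sample ~ {sample:.2f} mg P/L")
```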

  6. POTENTIAL OF UAV-BASED LASER SCANNER AND MULTISPECTRAL CAMERA DATA IN BUILDING INSPECTION

    Directory of Open Access Journals (Sweden)

    D. Mader

    2016-06-01

    Conventional building inspection of bridges, dams, or large constructions in general is rather time-consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate; the utility of UAVs as an assisting tool in building inspection is obvious. Furthermore, lightweight special sensors such as infrared and thermal cameras as well as laser scanners are available and well suited for use on unmanned aircraft systems. Such a flexible low-cost system is realized in the ADFEX project with the goal of time-efficient object exploration, monitoring, and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance, and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection: an overview of the ADFEX project, sensor specifications, and the general requirements of building inspection is given, and on the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection are presented and discussed.

  7. Design of rotating mirror for ultra-high speed camera based on dynamic characteristic

    International Nuclear Information System (INIS)

    Li Chunbo; Chai Jinlong; Liang Yexing; Liu Chunping; Wang Hongzhi; Yu Chunhui; Li Jingzhen; Huang Hongbin

    2011-01-01

    A systematic method has been proposed for the dynamic design of the rotating mirror of an ultra-high-speed camera. Using finite element software, numerical analyses of statics, modes, harmonic response, and natural-frequency sensitivity were performed for the preliminary-designed rotating mirror based on static and dynamic theory, and experiments were done to verify the results. The physical dimensions of the rotating mirror were modified repeatedly according to these results to design a new rotating mirror, after which simulations and experiments on its fatigue life under alternating force were performed. The results show that the maximum static stress is less than the yield stress of the mirror material, proving that the new rotating mirror will not suffer static strength failure. However, the modal and harmonic-response analyses indicate that the dynamic characteristics could not initially meet the design requirement, because the first critical speed was less than the service speed. Among all the physical dimensions of the rotating mirror, the circumradius of the mirror body is the most strongly, and negatively, correlated with the natural frequency. After redesign, the first-order natural frequency increases from 459.4 Hz to 713.6 Hz, a change of 55.3%; the first critical speed rises to 42 816 r/min, successfully avoiding resonance; and the fatigue strength of the new rotating mirror meets the design requirement. (authors)

  8. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  9. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce superior images to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  10. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few picoseconds. Associated with an S.T.L. camera, it forms a picosecond camera that allows very fast effects to be studied. [fr]

  11. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    Science.gov (United States)

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    The fluidic lens camera system, with its novel fluid optics, presents unique image processing challenges. Developed for surgical applications, the fluid lens offers advantages such as zooming with no moving parts and better miniaturization than traditional glass optics. Despite these abilities, the liquid lens reacts nonuniformly to different color wavelengths, producing sharp color planes alongside blurred ones and hence severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed: this multiband deblurring method uses information from the sharp color planes to improve the blurred ones. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, a previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts; the proposed contourlet-based system additionally uses directional filtering to adapt to the contours of the image, producing an image with a similar level of sharpness to the wavelet-based method but fewer ghosting artifacts. Conditions under which the algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green plane, the methods can be adjusted to improve the red plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work, and the information-sharing algorithm benefits any image set with high edge correlation, with improved results in deblurring, noise reduction, and resolution enhancement.

  12. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
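    As a toy illustration of the classification stage, a distance-weighted k-nearest-neighbour vote over texture-feature vectors can be sketched as follows (the 2-D feature space, cluster positions, and class labels are invented; the real system uses 45 statistical and spatial-frequency features):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Distance-weighted KNN: closer neighbours get a larger vote."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# Two synthetic "cloud classes" in a 2-D feature space (e.g. contrast, entropy).
rng = np.random.default_rng(4)
cb  = rng.normal([2.0, 5.0], 0.3, (20, 2))    # convective-like texture features
cir = rng.normal([0.5, 1.0], 0.3, (20, 2))    # cirrus-like texture features
X = np.vstack([cb, cir])
y = np.array(["CB"] * 20 + ["CI"] * 20)

print(knn_predict(X, y, np.array([1.9, 4.8])))   # -> CB
```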

  13. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    Science.gov (United States)

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  14. Time-resolved brightness measurements by streaking

    Science.gov (United States)

    Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E.

    2018-03-01

    Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique to characterize electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets.

  15. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
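A core step in any such tracking pipeline is associating current detections with existing tracks; the greedy IoU matcher below is a generic sketch of that association step, not the authors' occlusion-handling method:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(tracks, detections, thresh=0.3):
    """Greedily assign each track the unused detection with the highest
    overlap above `thresh`; returns (track_index, detection_index) pairs."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_j = thresh, None
        for j, det in enumerate(detections):
            if j in used:
                continue
            o = iou(t, det)
            if o > best:
                best, best_j = o, j
        if best_j is not None:
            pairs.append((ti, best_j))
            used.add(best_j)
    return pairs
```

Unmatched tracks would then be coasted or terminated, and unmatched detections spawn new tracks.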

  16. Sky camera imagery processing based on a sky classification using radiometric data

    International Nuclear Information System (INIS)

    Alonso, J.; Batlles, F.J.; López, G.; Ternero, A.

    2014-01-01

    As part of the development and expansion of CSP (concentrated solar power) technology, one of the most important operational requirements is to have complete control of all factors which may affect the quantity and quality of the solar power produced. New developments and tools in this field focus on weather forecasting, improving both operational security and electricity production. Such is the case with sky cameras, devices which are currently in use in some CSP plants and whose use is expanding in the new technology sector. Their application is mainly focused on cloud detection, estimating cloud movement as well as cloud influence on solar radiation attenuation; indeed, the presence of clouds is the greatest factor in solar radiation attenuation. The aim of this work is the detection and analysis of clouds from images taken by a TSI-880 model sky camera. In order to obtain accurate image processing, three different models were created, based on a previous sky classification using radiometric data and representative sky-condition parameters. As a consequence, the sky can be classified as cloudless, partially cloudy or overcast, delivering an average success rate of 92% in sky classification and cloud detection. - Highlights: • We developed a methodology for detection of clouds in total sky imagery (TSI-880). • A classification of sky is presented according to radiometric data and sky parameters. • The sky can be classified as cloudless, partially cloudy and overcast. • The image processing is based on the sky classification for the detection of clouds. • The average success of the developed model is around 92%.
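The three-class sky classification from radiometric data might be sketched as simple thresholding of a clearness index and a diffuse fraction; the threshold values below are illustrative placeholders, not the paper's fitted parameters:

```python
def classify_sky(kt, kd):
    """Classify sky condition from radiometric quantities.
    kt: clearness index (measured global / clear-sky irradiance);
    kd: diffuse fraction (diffuse / global irradiance).
    Threshold values are illustrative placeholders."""
    if kt > 0.65 and kd < 0.3:   # strong direct beam, little scattering
        return "cloudless"
    if kt < 0.3 or kd > 0.9:     # heavily attenuated or fully diffuse
        return "overcast"
    return "partially cloudy"
```

In the paper, each class then selects a dedicated image-processing model for cloud detection.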

  17. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    Science.gov (United States)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; contributors, JET

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.

  18. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    Directory of Open Access Journals (Sweden)

    Eun Som Jeon

    2015-03-01

    Full Text Available The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared) light cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering; additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction.
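The first two steps above (background generation by median filtering, then brightness-adaptive difference thresholding) can be sketched as follows; the adaptation rule and threshold constant are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def make_background(frames):
    """Pixel-wise median across a stack of frames suppresses transient
    (moving) objects; the paper additionally averages and erases
    residual human regions from the result."""
    return np.median(np.stack(frames), axis=0)

def detect_candidates(frame, background, base_thresh=20.0):
    """Pixel-difference detection with a threshold scaled by background
    brightness (a simplified stand-in for the adaptive thresholding)."""
    thresh = base_thresh * (0.5 + background.mean() / 255.0)
    return np.abs(frame.astype(float) - background) > thresh
```

The resulting binary mask would then be cleaned by component labeling, morphology and size filtering as described above.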

  19. Regulatory considerations and quality assurance of depleted uranium based radiography cameras

    International Nuclear Information System (INIS)

    Sapkal, Jyotsna A.; Yadav, R.K.B.; Amrota, C.T.; Singh, Pratap; GopaIakrishanan, R.H.; Patil, B.N.; Mane, Nilesh

    2016-01-01

    Radiography cameras with depleted uranium (DU) as the shielding material are used for containment of the iridium-192 (192Ir) source. The DU shielding surrounds a titanium 'S' tube through which the encapsulated 192Ir source travels along with its pigtail. As per guidelines, the integrity of the DU shielding must be checked periodically by monitoring for transferable alpha contamination inside the 'S' tube. This paper briefly describes the method followed for collecting samples from inside the 'S' tube. The samples were analysed for transferable gross alpha contamination using an alpha scintillation (ALSCIN) counter. The gross alpha contamination in the 'S' tube was found to be less than the USNRC-recommended value for discarding the radiography camera. IAEA recommendations related to transferable contamination and AERB guidelines on the quality assurance (QA) requirements of radiography cameras were studied.

  20. Proposal of secure camera-based radiation warning system for nuclear detection

    International Nuclear Information System (INIS)

    Tsuchiya, Ken'ichi; Kurosawa, Kenji; Akiba, Norimitsu; Kakuda, Hidetoshi; Imoto, Daisuke; Hirabayashi, Manato; Kuroki, Kenro

    2016-01-01

    Countering radiological and nuclear terrorism is a significant issue ahead of the Tokyo 2020 Olympic and Paralympic Games. In terms of cost benefit, it is not easy to build a warning system for nuclear detection to prevent a Dirty Bomb attack (dispersion of radioactive materials using a conventional explosive) or a Silent Source attack (hidden radioactive materials). We propose a nuclear detection system using installed secure cameras. We describe a method to estimate radiation dose from the noise pattern that radiation causes in CCD images. Dosimeters under neutron and gamma-ray irradiation (0.1 mSv-100 mSv) were recorded with a CCD video camera, and we confirmed that the amount of noise in the CCD images increased with radiation exposure. Radiation detection using the CMOS sensors of secure cameras or cell phones has been implemented before. In this presentation, however, we propose a warning system that includes neutron detection to search for shielded nuclear materials or radiation exposure devices using criticality. (author)

  1. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  2. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
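Selecting base classifiers under a budget maps naturally onto the 0-1 knapsack problem; the standard dynamic-programming solution below is a generic sketch (each classifier's "value" might be its accuracy and its "weight" a redundancy cost — the paper's tailored variant for the diversity/accuracy dilemma is not reproduced):

```python
def select_classifiers(values, weights, capacity):
    """Standard 0-1 knapsack DP: pick a subset of base classifiers
    maximizing total value (e.g. accuracy) subject to a budget on total
    integer weight (e.g. a redundancy cost). Returns (best_value, chosen)."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip classifier i-1
            if weights[i - 1] <= c:                      # or take it, if it fits
                cand = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                if cand > dp[i][c]:
                    dp[i][c] = cand
    # Backtrack to recover which classifiers were chosen.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)
```

For example, with values [0.6, 0.55, 0.7], weights [2, 3, 4] and capacity 5, the DP picks the first two classifiers (total value 1.15) rather than the single strongest one.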

  3. A new apparatus for track-analysis in nuclear track emulsion based on a CCD-camera device

    International Nuclear Information System (INIS)

    Ganssauge, E.

    1993-01-01

    A CCD camera-based image-analyzing system for the automatic evaluation of nuclear track emulsion chambers is presented. The stage of a normal microscope is moved by three remote-controlled stepping motors with a step size of 0.25 μm. A CCD camera is mounted on top of the microscope in order to register the nuclear emulsion. The camera has a resolution capable of differentiating single emulsion grains (0.6 μm). The camera picture is converted from analogue to digital signals and stored by a frame grabber. Some background picture elements can be eliminated by applying cuts on grey levels. The central computer processes the picture and correlates the single picture points, the coordinates and the grey levels, such that in the end each picture point is uniquely assigned to an address on the hard disk for a given plate. After repeating this procedure for several plates, by means of appropriate software (for instance our vertex program [1]), the coordinates of the points are combined into tracks, and a variety of distributions, such as pseudorapidity distributions, can be calculated and presented on the terminal. (author)

  4. Atmospheric radiation environment analyses based-on CCD camera at various mountain altitudes and underground sites

    Directory of Open Access Journals (Sweden)

    Li Cavoli Pierre

    2016-01-01

    Full Text Available The purpose of this paper is to discriminate secondary atmospheric particles and identify muons by measuring the natural radiative environment in atmospheric and underground locations. A CCD camera has been used as a cosmic ray sensor. The Low Noise Underground Laboratory of Rustrel (LSBB, France) gives access to a unique low-noise scientific environment deep enough to ensure screening from the neutron and proton radiative components. Analyses of the radiation-induced charge levels in the pixels of the CCD camera, and maps of charge events versus hit-pixel position, are presented.

  5. Digital camera auto white balance based on color temperature estimation clustering

    Science.gov (United States)

    Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong

    2010-11-01

    Auto white balance (AWB) is an important technique for digital cameras. The human visual system can recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard daylight illuminant. Recorded images or video clips, however, capture only the information incident on the sensor, so they can appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray-world assumption and white-point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source matches that kind of illumination. Second, the image to be white-balanced is divided into N blocks (N is determined empirically); for each block, the gray-world assumption is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, a majority selection is performed over the remaining blocks, and the color temperature with the most blocks is taken as the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: the color casts are removed and the final images look natural.
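The block-wise estimate-discard-vote procedure can be sketched as below; the reference illuminants, their (R/G, B/G) casts, and the tolerance are illustrative placeholders (real cameras would calibrate color-temperature thresholds per illuminant):

```python
import numpy as np

# Hypothetical reference list: illuminant name -> expected (R/G, B/G) cast.
REFS = {"daylight": (1.0, 1.0), "tungsten": (1.4, 0.6), "shade": (0.8, 1.3)}
TOL = 0.15  # illustrative matching tolerance

def estimate_illuminant(img, n=4):
    """Split the image into n x n blocks, apply gray-world per block to get
    an (R/G, B/G) cast, match each block against REFS, discard blocks that
    match nothing, and majority-vote over the remaining matches."""
    h, w, _ = img.shape
    votes = {}
    for i in range(n):
        for j in range(n):
            block = img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n].reshape(-1, 3)
            r, g, b = block.mean(axis=0) + 1e-9      # gray-world channel means
            cast = (r / g, b / g)
            for name, (rr, bb) in REFS.items():
                if abs(cast[0] - rr) < TOL and abs(cast[1] - bb) < TOL:
                    votes[name] = votes.get(name, 0) + 1
                    break                            # block assigned; stop matching
    return max(votes, key=votes.get) if votes else None
```

Once the winning illuminant is known, per-channel gains would be applied to neutralize its cast.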

  6. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  7. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  8. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    Science.gov (United States)

    2015-03-01

    …almost negligible detection by EO cameras in the dark. In order to compare the estimated SfM trajectories, the point clouds created by VisualSFM for… [abstract fragment; the excerpt also cites Snavely, Seitz and Szeliski, "Photo tourism: exploring photo collections in 3D," ACM Transactions on Graphics]

  9. Impact of intense x-ray pulses on a NaI(Tl)-based gamma camera

    NARCIS (Netherlands)

    Koppert, Wilco J C; van der Velden, Sandra; Steenbergen, J H Leo; de Jong, Hugo W A M

    2018-01-01

    INTRODUCTION: In SPECT/CT systems, X-ray and γ-ray imaging is performed sequentially. Simultaneous acquisition may have advantages, for instance in interventional settings. However, this may expose a gamma camera to relatively high X-ray doses and deteriorate its functioning. We studied the NaI(Tl)

  10. Development of a hardware-based registration system for the multimodal medical images by USB cameras

    International Nuclear Information System (INIS)

    Iwata, Michiaki; Minato, Kotaro; Watabe, Hiroshi; Koshino, Kazuhiro; Yamamoto, Akihide; Iida, Hidehiro

    2009-01-01

    There are several medical imaging scanners, and each modality has a different way of visualizing the inside of the human body. By combining these images, diagnostic accuracy can be improved, and therefore several attempts at multimodal image registration have been made. One popular approach is to use hybrid image scanners such as positron emission tomography (PET)/CT and single photon emission computed tomography (SPECT)/CT. However, these hybrid scanners are expensive and not fully available. We developed a multimodal image registration system with universal serial bus (USB) cameras, which is inexpensive and applicable to any combination of existing conventional imaging scanners. Multiple USB cameras determine the three-dimensional position of a patient while scanning. Using this position information and a rigid-body transformation, the acquired image is registered to a common coordinate system shared with the other scanner. For each scanner, a reference marker is attached to the gantry. Because the USB cameras observe the reference marker's position, the cameras themselves can be placed arbitrarily. In order to validate the system, we scanned a cardiac phantom at different positions with PET and MRI scanners. Using this system, images from PET and MRI were visually aligned, and good correlations between the PET and MRI images were obtained after registration. The results suggest this system can be used inexpensively for multimodal image registration. (author)
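The rigid-body transformation between coordinate systems can be estimated from corresponding 3D marker positions with the standard Kabsch/SVD method; this is a generic sketch, not the authors' exact implementation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    computed with the Kabsch/SVD method. P, Q: (N, 3) arrays of
    corresponding 3D points (N >= 3, non-collinear)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Applying `R @ p + t` to any point measured in the first scanner's frame maps it into the second scanner's frame.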

  11. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    Science.gov (United States)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates the various pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the synchronous time coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor have been examined in detail, and the timing generator has been designed accordingly. FPGA is chosen as the hardware design platform, and the schedule generator is described in VHDL. The designed generator has successfully passed functional simulation with EDA software and was fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the level of integration of the system: high reliability, stability and low power consumption are achieved, while the design and experiment period is sharply shortened.

  12. Home video monitoring system for neurodegenerative diseases based on commercial HD cameras

    NARCIS (Netherlands)

    Abramiuc, B.; Zinger, S.; De With, P.H.N.; De Vries-Farrouh, N.; Van Gilst, M.M.; Bloem, B.; Overeem, S.

    2016-01-01

    Neurodegenerative disease (ND) is an umbrella term for chronic disorders that are characterized by severe joint cognitive-motor impairments, which are difficult to evaluate on a frequent basis. HD cameras in the home environment could extend and enhance the diagnosis process and could lead to better

  13. Creating personalized memories from social events: Community-based support for multi-camera recordings of school concerts

    OpenAIRE

    Guimaraes R.L.; Cesar P.; Bulterman D.C.A.; Zsombori V.; Kegel I.

    2011-01-01

    The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments to on-line video sites. Current work on video mashups focuses on the creation of a video summary based on the characteristics of individual media fragments, but it fails to address the interpersona...

  14. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  15. Postprocessing method to clean up streaks due to noisy detectors

    International Nuclear Information System (INIS)

    Tuy, H.K.; Mattson, R.A.

    1990-01-01

    This paper reports that, occasionally, one of the thousands of detectors in a CT scanner will intermittently produce erroneous data, creating streaks in the reconstructed image. The authors propose a method to identify and clean up the streaks automatically. To find the rays along which the data values are bad, a binary image registering the edges of the original image is created. Forward projection is applied to the binary image to single out the edges along rays. Data along views containing the identified bad rays are estimated by forward projecting the original image. Back projection of the negative of the estimated convolved data along these views onto the streaky image removes the streaks from the image. Image enhancement is achieved by back projecting, along the views of bad rays, the convolved data estimated from the image after streak removal.
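As a much-simplified illustration of estimating data along bad rays, one can replace a faulty detector channel's column in the sinogram by interpolating its neighbours before (or instead of) the paper's forward-projection-based estimation:

```python
import numpy as np

def repair_bad_detector(sinogram, bad_channel):
    """Replace readings from one faulty detector channel by linearly
    interpolating its neighbouring channels, per view. `sinogram` is a
    (views, channels) array. A crude stand-in for the paper's
    forward-projection-based estimate of the bad-ray data."""
    fixed = sinogram.astype(float).copy()
    left = max(bad_channel - 1, 0)
    right = min(bad_channel + 1, sinogram.shape[1] - 1)
    fixed[:, bad_channel] = 0.5 * (fixed[:, left] + fixed[:, right])
    return fixed
```

Reconstructing from the repaired sinogram suppresses the streaks that the corrupted channel would otherwise spread across the image.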

  16. Traits and resistance to maize streak virus disease in Kenya

    African Journals Online (AJOL)

    African Crop Science Journal, Vol. 14. No. 4, pp. ... Kenya Agricultural Research Institute, Muguga-South, P.O. Box 30148, Nairobi, Kenya .... streak disease has been identified in various maize recycling and development of pure-lines at.

  17. Model-based design evaluation of a compact, high-efficiency neutron scatter camera

    Science.gov (United States)

    Weinfurther, Kyle; Mattingly, John; Brubaker, Erik; Steele, John

    2018-03-01

    This paper presents the model-based design and evaluation of an instrument that estimates incident neutron direction using the kinematics of neutron scattering by hydrogen-1 nuclei in an organic scintillator. The instrument design uses a single, nearly contiguous volume of organic scintillator that is internally subdivided only as necessary to create optically isolated pillars, i.e., long, narrow parallelepipeds of organic scintillator. Scintillation light emitted in a given pillar is confined to that pillar by a combination of total internal reflection and a specular reflector applied to the four sides of the pillar transverse to its long axis. The scintillation light is collected at each end of the pillar using a photodetector, e.g., a microchannel plate photomultiplier (MCP-PM) or a silicon photomultiplier (SiPM). In this optically segmented design, the (x, y) position of scintillation light emission (where the x and y coordinates are transverse to the long axis of the pillars) is estimated as the pillar's (x, y) position in the scintillator "block", and the z-position (the position along the pillar's long axis) is estimated from the amplitude and relative timing of the signals produced by the photodetectors at each end of the pillar. The neutron's incident direction and energy are estimated from the (x, y, z)-positions of two sequential neutron-proton scattering interactions in the scintillator block using elastic scatter kinematics. For proton recoils greater than 1 MeV, we show that the (x, y, z)-position of neutron-proton scattering can be estimated with < 1 cm root-mean-squared (RMS) error and the proton recoil energy can be estimated with < 50 keV RMS error by fitting the photodetectors' response time history to models of optical photon transport within the scintillator pillars. Finally, we evaluate several alternative designs of this proposed single-volume scatter camera made of pillars of plastic scintillator (SVSC-PiPS), studying the effect of
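A non-relativistic sketch of the two-scatter kinematics: the scattered neutron's direction is the vector between the two interaction positions, its energy follows from the time of flight between them, and the incident energy is the first proton-recoil energy plus the scattered-neutron energy. This is illustrative reconstruction arithmetic, not the authors' fitting code:

```python
import numpy as np

NEUTRON_MASS_MEV = 939.565   # neutron rest mass, MeV/c^2
C_CM_PER_NS = 29.9792458     # speed of light, cm/ns

def reconstruct(p1, p2, t1, t2, recoil1_mev):
    """Given positions (cm) and times (ns) of two sequential n-p scatters
    and the first proton recoil energy (MeV), return the scattered
    neutron's unit direction and the estimated incident neutron energy.
    Non-relativistic: E = (1/2) m beta^2 with beta = v/c."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    dist = np.linalg.norm(d)
    v = dist / (t2 - t1)                        # scattered neutron speed, cm/ns
    beta = v / C_CM_PER_NS
    e_scat = 0.5 * NEUTRON_MASS_MEV * beta**2   # scattered neutron energy, MeV
    return d / dist, recoil1_mev + e_scat       # direction, incident energy
```

The incident direction is then constrained to a cone about this scattered-neutron direction, with the cone's opening angle set by the recoil/scattered energy ratio.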

  18. Calibration of robot tool centre point using camera-based system

    Directory of Open Access Journals (Sweden)

    Gordić Zaviša

    2016-01-01

    Full Text Available Robot Tool Centre Point (TCP) calibration is of great importance for a number of industrial applications, and it is well known both in theory and in practice. Although various techniques have been proposed for solving this problem, they mostly require tool jogging or long processing time, both of which affect process performance by extending cycle time. This paper presents an innovative way of TCP calibration using a set of two cameras. The robot tool is placed in an area where images in two orthogonal planes are acquired using the cameras. Using robust pattern recognition, even a deformed tool can be identified in the images, and information about its current position and orientation is forwarded to the control unit for calibration. Compared to other techniques, test results show a significant reduction in procedure complexity and calibration time. These improvements enable more frequent TCP checking and recalibration during production, thus improving product quality.
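
As a rough illustration of how two orthogonal views constrain the tool tip (not the paper's actual algorithm): each calibrated view fixes two of the tip's three coordinates, and the axis both views share gives a built-in consistency check. The pixel scales and coordinate conventions below are hypothetical.

```python
import numpy as np

# Illustrative sketch: top view image plane -> (x, y); side view -> (x, z).
def tcp_from_orthogonal_views(top_xy_px, side_xz_px, mm_per_px_top, mm_per_px_side):
    """Pixel coordinates are assumed already referenced to each view's origin.
    Returns the tool tip position (mm) and the shared-axis residual."""
    x_top, y = np.asarray(top_xy_px, dtype=float) * mm_per_px_top
    x_side, z = np.asarray(side_xz_px, dtype=float) * mm_per_px_side
    x = 0.5 * (x_top + x_side)          # both views observe x; average them
    residual = abs(x_top - x_side)      # disagreement flags a calibration error
    return np.array([x, y, z]), residual
```

A large residual on the shared axis is a cheap self-diagnostic that one of the two camera calibrations has drifted.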

  19. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  20. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  1. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  2. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
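
A toy one-dimensional sketch of the reaction-diffusion idea: nodes near a detected target raise an activator value (mapped to coding rate) while a faster-diffusing inhibitor keeps the high-rate region localized. The equations and parameters below are a generic activator-inhibitor model for illustration, not the mechanism from the paper.

```python
import numpy as np

# Generic activator-inhibitor step on a ring of camera nodes (illustrative).
def rd_step(u, v, du=0.05, dv=0.5, a=1.0, b=1.2, dt=0.1):
    lap = lambda f: np.roll(f, 1) + np.roll(f, -1) - 2 * f   # ring topology
    u_new = u + dt * (du * lap(u) + u - u**3 - v)            # activator reaction
    v_new = v + dt * (dv * lap(v) + a * u - b * v)           # inhibitor reaction
    return u_new, v_new

n = 50
u = np.zeros(n)
v = np.zeros(n)
u[25] = 1.0                      # stimulus: target detected at node 25
for _ in range(200):
    u, v = rd_step(u, v)
rate = np.clip(u, 0.0, None)     # map activator level to a coding rate
```

After the transient, the activator settles into a localized bump around the stimulated node, so only cameras near the target spend bandwidth on high-rate video.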

  3. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    Science.gov (United States)

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time constraints by using extremely high-performance signal processing through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation.

  4. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    Directory of Open Access Journals (Sweden)

    José Manuel Rodríguez-Ramos

    2010-10-01

    Full Text Available In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time constraints by using extremely high-performance signal processing through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation.
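
The message-update core that an FPGA pipeline parallelizes can be illustrated on a single scanline, where min-sum belief propagation reduces to exact forward-backward message passing. This is a generic textbook sketch, not the CAFADIS implementation; the cost model is a placeholder.

```python
import numpy as np

# Min-sum belief propagation on a chain (one scanline), illustrative only.
def scanline_bp(data_cost, smooth_weight=1.0):
    """data_cost: (n_pixels, n_disp) array of per-pixel label costs.
    Returns the per-pixel disparity minimizing data + smoothness cost."""
    n, d = data_cost.shape
    disp = np.arange(d)
    pair = smooth_weight * np.abs(disp[:, None] - disp[None, :])  # |d - d'|
    fwd = np.zeros((n, d))   # messages passed left -> right
    bwd = np.zeros((n, d))   # messages passed right -> left
    for i in range(1, n):
        # message to node i, label k: min over j of cost(j) + msg(j) + pair(k, j)
        fwd[i] = np.min(pair + data_cost[i - 1] + fwd[i - 1], axis=1)
        j = n - 1 - i
        bwd[j] = np.min(pair + data_cost[j + 1] + bwd[j + 1], axis=1)
    belief = data_cost + fwd + bwd
    return np.argmin(belief, axis=1)
```

Because each message row depends only on its immediate neighbour, the inner min can be unrolled across labels in hardware, which is what makes the algorithm attractive for a BRAM-resident FPGA pipeline.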

  5. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in 8-14 um spectral range wide-field systems. Coupling of the photodetector with the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled with hydrogen to improve bulk transmission.

  6. An Imaging Camera for Biomedical Application Based on Compton Scattering of Gamma Rays

    OpenAIRE

    Fontana, Cristiano Lino

    2013-01-01

    In this thesis we present the R&D of a Compton Camera (CC) for small object imaging. The CC concept requires two detectors to obtain the incoming direction of the gamma ray. This approach, sometimes named ``Electronic Collimation,'' differs from the usual technique that employs collimators for physically selecting gamma-rays of a given direction. This solution offers the advantage of much greater sensitivity and hence smaller doses. We propose a novel design, which uses two simila...

  7. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male (p = 0.79) leopards, and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  8. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    Science.gov (United States)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for, e.g., dark skin types. A small smart phone based thermo camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30 second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the golden standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermo camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart phone based camera.
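
One way to make the 15-minute thermal reading quantitative is to reduce the time-lapse to a per-pixel temperature-rise map relative to the baseline frame and then flag candidate reaction sites. The sketch below assumes motion-registered frames; the 0.5 degree threshold is an illustrative value, not from the study.

```python
import numpy as np

# Reduce a registered thermal time-lapse to a rise map (illustrative sketch).
def temperature_rise_map(frames):
    """frames: (n_frames, H, W) array of temperatures in deg C.
    Returns the per-pixel maximum rise above the first (baseline) frame."""
    stack = np.asarray(frames, dtype=float)
    return (stack - stack[0]).max(axis=0)

def candidate_sites(rise_map, min_rise_c=0.5):
    """Boolean mask of pixels whose peak rise exceeds the threshold."""
    return rise_map >= min_rise_c
```

Working from a rise map rather than single frames also suppresses the patient's slowly varying baseline skin temperature.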

  9. Flaw evaluation of Nd:YAG laser welding based plume shape by infrared thermal camera

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Yoo, Young Tae; Yang, Dong Jo; Song, Kyung Seol; Ro, Kyoung Bo

    2003-01-01

    In Nd:YAG laser welding there are various methods for evaluating welding flaws, but classifying flaws from the plume shape is difficult. The Nd:YAG laser process is known for its high speed and deep penetration capability, making it one of the most advanced welding technologies. At present, some methods for measuring plume shape use a high-speed camera and a photodiode. This paper describes the machining characteristics of SM45C carbon steel welded with an Nd:YAG laser. In spite of its good mechanical characteristics, SM45C carbon steel has a high carbon content and suffers a limitation in industrial application due to its poor welding properties. In this study, plume shape was measured by an infrared thermal camera, a non-contact, non-destructive thermal measurement instrument, while varying laser power, speed, and focus. Welds were made using the bead-on-plate method. Results from the two instruments are compared: plume quantities derived from the plume shape measured by the infrared thermal camera, and inspection results for the weld bead, including weld flaws, from an ultrasonic inspection.

  10. The use of a sky camera for solar radiation estimation based on digital image processing

    International Nuclear Information System (INIS)

    Alonso-Montesinos, J.; Batlles, F.J.

    2015-01-01

    The necessary search for a more sustainable global future means using renewable energy sources to generate pollutant-free electricity. CSP (Concentrated solar power) and PV (photovoltaic) plants are the systems most in demand for electricity production using solar radiation as the energy source. The main factors affecting final electricity generation in these plants are, among others, atmospheric conditions; therefore, knowing whether there will be any change in the solar radiation hitting the plant's solar field is of fundamental importance to CSP and PV plant operators in adapting the plant's operation mode to these fluctuations. Consequently, the most useful technology must involve the study of atmospheric conditions. This is the case for sky cameras, an emerging technology that allows one to gather sky information with optimal spatial and temporal resolution. Hence, in this work, a solar radiation estimation using sky camera images is presented for all sky conditions, where beam, diffuse and global solar radiation components are estimated in real-time as a novel way to evaluate the solar resource from a terrestrial viewpoint. - Highlights: • Using a sky camera, the solar resource has been estimated for one minute periods. • The sky images have been processed to estimate the solar radiation at pixel level. • The three radiation components have been estimated under all sky conditions. • Results have been presented for cloudless, partially-cloudy and overcast conditions. • For beam and global radiation, the nRMSE value is about 11% under overcast skies.
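
A common first step when processing sky-camera images at pixel level is to classify each pixel as cloud or clear sky from its red/blue ratio: clouds scatter all wavelengths more evenly, so their ratio approaches 1, while clear sky is strongly blue-dominated. The sketch below is a generic technique, not the paper's estimation pipeline, and the 0.8 threshold is an illustrative value.

```python
import numpy as np

# Red/blue-ratio cloud classification for a sky image (illustrative sketch).
def cloud_fraction(rgb, threshold=0.8):
    """rgb: (H, W, 3) uint8 sky image. Returns the fraction of cloudy pixels."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = r / np.maximum(b, 1.0)        # avoid division by zero
    return float(np.mean(ratio > threshold))
```

The resulting per-pixel cloud mask is what lets a radiation model switch between cloudless, partially cloudy and overcast regimes in real time.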

  11. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. Causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static. Relative motion should be considered for different camera conditions, determining different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. Visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
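
The multiple-linear-regression step can be sketched as an ordinary least-squares fit of subjective fatigue scores against per-shot factors (e.g., spatial structure, motion scale, comfort-zone violation). The factor names and synthetic data below are placeholders, not the paper's measured features.

```python
import numpy as np

# Least-squares fit of fatigue scores to per-shot factors (illustrative).
def fit_fatigue_model(factors, scores):
    """factors: (n_shots, n_factors); scores: (n_shots,) subjective ratings.
    Returns (weights, intercept) of the ordinary least-squares fit."""
    X = np.hstack([factors, np.ones((factors.shape[0], 1))])  # add bias column
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef[:-1], coef[-1]

def predict_fatigue(factors, weights, intercept):
    """Predicted fatigue score for new shots."""
    return factors @ weights + intercept
```

The fitted weights then indicate how strongly each scene factor contributes to the total fatigue score.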

  12. Development of an LYSO based gamma camera for positron and scinti-mammography

    Science.gov (United States)

    Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.

    2009-08-01

    In this research, the characteristics of combining PSPMTs (position sensitive photo-multiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for merging signals and readout. Borosilicate glasses were chosen for scintillation light sharing in the crossover region. The deterioration effect caused by the light guide was characterized, and the influences of the light guide and crossover region on the separable crystal size were evaluated. Based on the test results, a gamma camera with a crystal block covering an area of 90 × 90 mm2, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed and understood. Count rate measurements showed that the natural 176Lu background did not severely affect single photon imaging and amounted to less than 1/3 of all acquired events. These results show that, using light sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector. The camera design also retains the capability for both positron and single photon breast imaging applications. In the current configuration, the separable crystal size is 2 mm with 2 mm thick glass applied for light sharing.

  13. Development of an LYSO based gamma camera for positron and scinti-mammography

    International Nuclear Information System (INIS)

    Liang, H-C; Jan, M-L; Lin, W-C; Yu, S-F; Shen, L-H; Su, J-L

    2009-01-01

    In this research, the characteristics of combining PSPMTs (position sensitive photo-multiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for merging signals and readout. Borosilicate glasses were chosen for scintillation light sharing in the crossover region. The deterioration effect caused by the light guide was characterized, and the influences of the light guide and crossover region on the separable crystal size were evaluated. Based on the test results, a gamma camera with a crystal block covering an area of 90 × 90 mm2, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed and understood. Count rate measurements showed that the natural 176Lu background did not severely affect single photon imaging and amounted to less than 1/3 of all acquired events. These results show that, using light sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector. The camera design also retains the capability for both positron and single photon breast imaging applications. In the current configuration, the separable crystal size is 2 mm with 2 mm thick glass applied for light sharing.
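
The position estimate implied by a linear divider readout is the classic Anger-logic centroid: each event's (x, y) is the light-weighted average of the anode positions. The sketch below is the generic technique, not this camera's circuit; anode coordinates are placeholders.

```python
import numpy as np

# Anger-logic centroid from per-anode signals (illustrative sketch).
def anger_position(signals, anode_x, anode_y):
    """signals: per-anode charges; anode_x/anode_y: anode centre coords (mm).
    Returns the light-weighted (x, y) centroid of the scintillation event."""
    s = np.asarray(signals, dtype=float)
    total = s.sum()
    return float(np.dot(s, anode_x) / total), float(np.dot(s, anode_y) / total)
```

In hardware, the same weighted sums are formed by the divider's resistor network, so the digitized ratios directly encode the event position.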

  14. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
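
The per-slice step described above can be sketched directly: for each pixel in an image plane, score how far its direction from the cone apex deviates from the cone half-angle, then threshold to obtain a binary intersection curve. This is an illustrative angular-deviation variant, not necessarily the exact distance measure of the paper; geometry values are placeholders.

```python
import numpy as np

# Threshold-based cone/plane intersection for one image slice (illustrative).
def slice_intersection(apex, axis, half_angle, xs, ys, z, threshold):
    """apex: (3,) cone vertex; axis: (3,) unit cone axis; xs, ys: 1-D pixel
    coordinate arrays; z: slice position. Returns a boolean (len(ys), len(xs))
    mask of pixels sufficiently close to the cone surface."""
    X, Y = np.meshgrid(xs, ys)
    P = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z - apex[2])])
    norm = np.sqrt((P ** 2).sum(axis=0))
    cos_dev = np.tensordot(axis, P, axes=1) / norm      # cos(angle to axis)
    deviation = np.abs(np.arccos(np.clip(cos_dev, -1.0, 1.0)) - half_angle)
    return deviation < threshold
```

Accumulating these binary masks over all detector events and all slices yields the three-dimensional back-projection image described in the abstract.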

  15. Imaging performance comparison between a LaBr3: Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera.

    Science.gov (United States)

    Russo, P; Mettivier, G; Pani, R; Pellegrini, R; Cinti, M N; Bennati, P

    2009-04-01

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr3:Ce continuous scintillator crystal (49 × 49 × 5 mm3) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14 × 14 × 1 mm3) with 256 × 256 square pixels and a pitch of 55 μm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 μm, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.

  16. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be solved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, redundant information from multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
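
The wrapped-phase computation at the heart of three-step PSP is a standard identity, independent of this paper's quad-camera unwrapping scheme: with phase shifts of -2π/3, 0, and +2π/3, the phase follows from an arctangent of the three intensity samples. The synthetic fringe parameters below are illustrative.

```python
import numpy as np

# Standard three-step phase-shifting identity (illustrative sketch).
def wrapped_phase(i1, i2, i3):
    """i1, i2, i3: images captured with phase shifts -2pi/3, 0, +2pi/3.
    Returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: regenerate fringes from a known phase map and recover it.
x = np.linspace(0.0, 4.0 * np.pi, 200)
phi_true = np.angle(np.exp(1j * x))                 # wrapped ground truth
a, b = 128.0, 50.0                                  # fringe offset and amplitude
i1 = a + b * np.cos(phi_true - 2.0 * np.pi / 3.0)
i2 = a + b * np.cos(phi_true)
i3 = a + b * np.cos(phi_true + 2.0 * np.pi / 3.0)
phi = wrapped_phase(i1, i2, i3)
```

The ambiguity the paper removes is exactly the unknown integer number of 2π periods that must be added to this wrapped phase at each pixel.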

  17. Study of a new architecture of gamma cameras with Cd/ZnTe/CdTe semiconductors; Etude d'une nouvelle architecture de gamma camera a base de semi-conducteurs CdZnTe /CdTe

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, L

    2007-11-15

    This thesis studies new semiconductor detectors for gamma cameras in order to improve image quality in nuclear medicine. Chapter 1 reviews the general principles of gamma imaging, describing radiotracers, the detection chain, and the acquisition modes of Anger gamma cameras. The physiological, physical, and technological limits of the camera are then highlighted to better identify the needs of future gamma cameras. Chapter 2 is devoted to a bibliographic study. First, semiconductors used in gamma imaging are presented, in particular CdTe and CdZnTe, distinguishing planar detectors from monolithic pixelated detectors. Second, the classic collimators of gamma cameras, most of them used in clinical routine, are described: their geometry, their characteristics, and their advantages and drawbacks. Chapter 3 reviews the state of the art of simulation codes dedicated to medical imaging and of reconstruction methods in gamma imaging; these reviews introduce the simulation software and the reconstruction methods used in this thesis. Chapter 4 presents the new gamma camera architecture proposed in this thesis work. It is structured in three parts. The first part justifies the use of CdZnTe semiconductor detectors, in particular monolithic pixelated detectors, by highlighting their advantages over scintillator-based detection modules. The second part presents CdZnTe-based gamma cameras (prototypes or commercial products) and their associated collimators, as well as the interest of combining CdZnTe detectors with classic collimators. Finally, the third part presents the HiSens architecture in detail. Chapter 5 describes the two simulation programs used in this thesis to estimate the performance of the HiSens architecture.

  18. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    Energy Technology Data Exchange (ETDEWEB)

    Halama, J. [Loyola Univ. Medical Center (United States)

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images

  19. A camera based calculation of 99m Tc-MAG-3 clearance using conjugate views method

    International Nuclear Information System (INIS)

    Hojabr, M.; Rajabi, H.; Eftekhari, M.

    2004-01-01

    Background: Measurement of absolute or differential renal function using radiotracers plays an important role in the clinical management of various renal diseases. Gamma camera quantitative methods approximate renal clearance and may potentially be as accurate as plasma clearance methods. However, some critical factors such as kidney depth and background counts are still troublesome in the use of this technique. In this study the conjugate-view method, along with a background correction technique, was used for the measurement of renal activity in 99m Tc-MAG 3 renography. Transmission data were used for attenuation correction, and the source volume was considered for accurate background subtraction. Materials and methods: The study was performed in 35 adult patients referred to our department for conventional renography and ERPF calculation. Depending on the patient's weight, approximately 10-15 mCi of 99m Tc-MAG 3 was injected in the form of a sharp bolus, and 60 frames of 1 second followed by 174 frames of 10 seconds were acquired for each patient. Imaging was performed on a dual-head gamma camera (SOLUS; SunSpark10, ADAC Laboratories, Milpitas, CA); anterior and posterior views were acquired simultaneously. A LEHR collimator was used to correct the scatter for the emission and transmission images. The Buijs factor was applied to background counts before background correction (Rutland-Patlak equation). Gamma camera clearance was calculated using renal uptake at 1-2, 1.5-2.5, and 2-3 min. The same procedure was repeated for the renograms obtained from the posterior projection and from conjugate views. Plasma clearance was also calculated directly from three blood samples obtained at 40, 80, and 120 min after injection. Results: 99m Tc-MAG 3 clearances from the direct sampling method were used as reference values and compared to the results obtained from the renograms. The maximum correlation was found for conjugate-view clearance at 2-3 min (R=0.99, R²=0.98, SE=15). Conventional
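    The conjugate-view quantitation described above is, in its standard form, an attenuation-corrected geometric mean of background-subtracted anterior and posterior ROI counts. A minimal sketch of that arithmetic (the function name and arguments are illustrative, not taken from the paper; the paper's transmission-based correction is approximated here by an optional transmission factor):

```python
import math

def conjugate_view_counts(anterior, posterior, mu, thickness, transmission_factor=None):
    """Attenuation-corrected geometric-mean counts for one kidney ROI.

    anterior, posterior : background-subtracted ROI counts from the two opposed views
    mu                  : linear attenuation coefficient of tissue at 140 keV (1/cm)
    thickness           : patient thickness along the projection axis (cm)
    transmission_factor : measured transmission T through the patient, if available
    """
    geometric_mean = math.sqrt(anterior * posterior)
    if transmission_factor is not None:
        # with transmission data, exp(mu*L/2) is replaced by 1/sqrt(T)
        correction = 1.0 / math.sqrt(transmission_factor)
    else:
        # correct for attenuation through half the body thickness
        correction = math.exp(mu * thickness / 2.0)
    return geometric_mean * correction
```

The corrected counts are then converted to clearance via a calibration factor and the Rutland-Patlak analysis mentioned in the abstract.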

  20. White-light fringe detection based on a novel light source and colour CCD camera

    Czech Academy of Sciences Publication Activity Database

    Buchta, Zdeněk; Mikel, Břetislav; Lazar, Josef; Číp, Ondřej

    2011-01-01

    Vol. 22, No. 9 (2011), 094031:1-6 ISSN 0957-0233 R&D Projects: GA ČR GP102/09/P293; GA ČR GP102/09/P630; GA MPO 2A-1TP1/127; GA MŠk(CZ) LC06007; GA MŠk ED0017/01/01 Institutional research plan: CEZ:AV0Z20650511 Keywords: low-coherence interferometry * phase-crossing algorithm * CCD camera * gauge block Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.494, year: 2011

  1. Development of a Compton camera for medical applications based on silicon strip and scintillation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Krimmer, J., E-mail: j.krimmer@ipnl.in2p3.fr [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Ley, J.-L. [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Abellan, C.; Cachemiche, J.-P. [Aix-Marseille Université, CNRS/IN2P3, CPPM UMR 7346, 13288 Marseille (France); Caponetto, L.; Chen, X.; Dahoumane, M.; Dauvergne, D. [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Freud, N. [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA - Lyon, Université Lyon 1, Centre Léon Bérard (France); Joly, B.; Lambert, D.; Lestand, L. [Clermont Université, Université Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, BP 10448, F-63000 Clermont-Ferrand (France); Létang, J.M. [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA - Lyon, Université Lyon 1, Centre Léon Bérard (France); Magne, M. [Clermont Université, Université Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, BP 10448, F-63000 Clermont-Ferrand (France); and others

    2015-07-01

    A Compton camera is being developed for the purpose of ion-range monitoring during hadrontherapy via the detection of prompt-gamma rays. The system consists of a scintillating fiber beam tagging hodoscope, a stack of double sided silicon strip detectors (90×90×2 mm{sup 3}, 2×64 strips) as scatter detectors, as well as bismuth germanate (BGO) scintillation detectors (38×35×30 mm{sup 3}, 100 blocks) as absorbers. The individual components will be described, together with the status of their characterization.
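    For context, a Compton camera of this kind infers the photon scattering angle from the energies deposited in the silicon scatterer and the BGO absorber via standard Compton kinematics, cosθ = 1 − m_ec²(1/E′ − 1/E₀). A hedged sketch assuming the event deposits its full energy (the function name is hypothetical, not from the instrument's software):

```python
import math

MEC2 = 511.0  # electron rest energy in keV

def compton_angle(e_scatter, e_absorb):
    """Scattering angle (degrees) from the two energy deposits (keV).

    Assumes full absorption, so the incident prompt-gamma energy is
    e_scatter + e_absorb and the scattered photon carries e_absorb.
    """
    e0 = e_scatter + e_absorb
    cos_theta = 1.0 - MEC2 * (1.0 / e_absorb - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden event")
    return math.degrees(math.acos(cos_theta))
```

Each valid event then constrains the source to a cone of this opening angle around the scatter-to-absorber axis; intersecting many cones yields the prompt-gamma emission profile along the beam.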

  2. Compact CdZnTe-Based Gamma Camera For Prostate Cancer Imaging

    International Nuclear Information System (INIS)

    Cui, Y.; Lall, T.; Tsui, B.; Yu, J.; Mahler, G.; Bolotnikov, A.; Vaska, P.; DeGeronimo, G.; O'Connor, P.; Meinken, G.; Joyal, J.; Barrett, J.; Camarda, G.; Hossain, A.; Kim, K.H.; Yang, G.; Pomper, M.; Cho, S.; Weisman, K.; Seo, Y.; Babich, J.; LaFrance, N.; James, R.B.

    2011-01-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application to diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  3. Impact of intense x-ray pulses on a NaI(Tl)-based gamma camera

    Science.gov (United States)

    Koppert, W. J. C.; van der Velden, S.; Steenbergen, J. H. L.; de Jong, H. W. A. M.

    2018-03-01

    In SPECT/CT systems, x-ray and γ-ray imaging is performed sequentially. Simultaneous acquisition may have advantages, for instance in interventional settings. However, it may expose a gamma camera to relatively high x-ray doses and deteriorate its functioning. We studied the NaI(Tl) response to x-ray pulses with a photodiode, a PMT, and a gamma camera, respectively. First, we exposed a NaI(Tl)-photodiode assembly to x-ray pulses to investigate potential crystal afterglow. Next, we exposed a NaI(Tl)-PMT assembly to 10 ms LED pulses (mimicking x-ray pulses) and measured the response to flashing LED probe-pulses (mimicking γ-pulses). We then exposed the assembly to x-ray pulses, with detector entrance doses of up to 9 nGy/pulse, and analysed the response for γ-pulse variations. Finally, we studied the response of a Siemens Diacam gamma camera to γ-rays while exposed to x-ray pulses. X-ray exposure of the crystal, read out with a photodiode, revealed a 15% afterglow fraction after 3 ms. The NaI(Tl)-PMT assembly showed disturbances up to 10 ms after 10 ms LED exposure. After x-ray exposure, however, responses showed elevated baselines with a 60 ms decay time. Both for x-ray and LED exposure, and after baseline subtraction, probe-pulse analysis revealed disturbed pulse-height measurements shortly after exposure. X-ray exposure of the Diacam corroborated the elementary experiments. Up to 50 ms after an x-ray pulse, no events are registered, followed by apparent energy elevations up to 100 ms after exposure. Limiting the dose to 0.02 nGy/pulse prevents detrimental effects. Conventional gamma cameras exhibit substantial dead time and mis-registration of photon energies up to 100 ms after intense x-ray pulses. This is due to PMT limitations and to afterglow in the crystal. Using PMTs with modified circuitry, we show that deteriorative afterglow effects can be reduced without noticeable effects on PMT performance, up to x-ray pulse doses of 1 nGy.

  4. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    International Nuclear Information System (INIS)

    Halama, J.

    2016-01-01

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images

  5. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast testing method for inspecting defects of large-aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of, and inside, large-aperture optics are enlarged by scattering. The images of the defects were captured by a high-resolution CCD camera and a microscope, and the approximate mathematical relation between the viewed dimension and the real dimension of the defects was simulated. Thus the approximate real dimension and location of all defects can be calculated from the high-resolution pictures. (authors)

  6. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16 N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16 N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  7. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    Science.gov (United States)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photo detector results in increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photo detectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.
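    The square-root scaling of the velocity uncertainty claimed above can be illustrated with a toy Monte Carlo in which each camera line contributes one independent, equally noisy Doppler-frequency estimate. This sketches only the statistics, not the authors' actual evaluation pipeline:

```python
import random
import statistics

def velocity_uncertainty(n_lines, sigma_single=1.0, trials=2000, seed=1):
    """Monte Carlo estimate of the residual uncertainty after averaging
    independent per-line Doppler-frequency estimates over n_lines camera lines."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        # each camera line yields one noisy velocity estimate around the true value 0
        lines = [rng.gauss(0.0, sigma_single) for _ in range(n_lines)]
        estimates.append(statistics.fmean(lines))
    return statistics.stdev(estimates)
```

Under this model, averaging 100 lines reduces the uncertainty to roughly one tenth of the single-line value, matching the order-of-magnitude reduction the abstract reports.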

  8. Study and use of an infrared camera optimized for ground based observations in the 10 micron wavelength range

    International Nuclear Information System (INIS)

    Remy, Sophie

    1991-01-01

    Astronomical observations in the 10 micron atmospheric window provide very important information for many astrophysical topics, but because of the very large terrestrial photon background at that wavelength, ground-based observations have been impeded. On the other hand, ground-based telescopes offer greater angular resolution than space-based telescopes. The recent development of detector arrays for the mid-infrared range has made it easier to build infrared cameras with detectors optimized for astronomical observations from the ground. The CAMIRAS infrared camera, built by the 'Service d'Astrophysique' in Saclay, is the instrument we have studied, and we present its performance. Its sensitivity, given for an integration time of one minute on source and a signal-to-noise ratio of 3, is 0.15 Jy for point sources and 20 mJy arcsec⁻² for extended sources. To get rid of the enormous photon background, we have to find a better way of observing, based on modulation techniques such as 'chopping' or 'nodding'. We show that a modulation at about 1 Hz is satisfactory with our detector arrays without degrading the signal-to-noise ratio. With a good instrument, and because we are able to get rid of the photon background, we can study astronomical objects. Results for a comet, dusty stellar disks, and an ultra-luminous galaxy are presented. (author) [fr

  9. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  10. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the current state of the art is described, permitting, apart from good localization, also energy discrimination. Behind the usual vacuum image intensifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  11. Angioid streaks, clinical course, complications, and current therapeutic management

    Directory of Open Access Journals (Sweden)

    Ilias Georgalas

    2008-12-01

    Ilias Georgalas¹, Dimitris Papaconstantinou², Chrysanthi Koutsandrea², George Kalantzis², Dimitris Karagiannis², Gerasimos Georgopoulos², Ioannis Ladas²; ¹Department of Ophthalmology, “G. Gennimatas” Hospital of Athens, NHS, Athens, Greece; ²Department of Ophthalmology, “G. Gennimatas” Hospital of Athens, University of Athens, Athens, Greece. Abstract: Angioid streaks are visible irregular crack-like dehiscences in Bruch’s membrane that are associated with atrophic degeneration of the overlying retinal pigmented epithelium. Angioid streaks may be associated with pseudoxanthoma elasticum, Paget’s disease, sickle-cell anemia, acromegaly, Ehlers–Danlos syndrome, and diabetes mellitus, but also appear in patients without any systemic disease. Patients with angioid streaks are generally asymptomatic, unless the lesions extend towards the foveola or develop complications such as traumatic Bruch’s membrane rupture or macular choroidal neovascularization (CNV). The visual prognosis in patients with CNV secondary to angioid streaks, if untreated, is poor, and most treatment modalities, until recently, have failed to limit the devastating impact of CNV on central vision. However, treatment with anti-vascular endothelial growth factor, especially in treatment-naive eyes, is likely to yield favorable results in the future; this has to be investigated in future studies. Keywords: angioid streaks, pseudoxanthoma elasticum, choroidal neovascularization

  12. Orientation tuning of contrast masking caused by motion streaks.

    Science.gov (United States)

    Apthorp, Deborah; Cass, John; Alais, David

    2010-08-01

    We investigated whether the oriented trails of blur left by fast-moving dots (i.e., "motion streaks") effectively mask grating targets. Using a classic overlay masking paradigm, we varied mask contrast and target orientation to reveal underlying tuning. Fast-moving Gaussian blob arrays elevated thresholds for detection of static gratings, both monoptically and dichoptically. Monoptic masking at high mask (i.e., streak) contrasts is tuned for orientation and exhibits a similar bandwidth to masking functions obtained with grating stimuli (∼30 degrees). Dichoptic masking fails to show reliable orientation-tuned masking, but dichoptic masks at very low contrast produce a narrowly tuned facilitation (∼17 degrees). For iso-oriented streak masks and grating targets, we also explored masking as a function of mask contrast. Interestingly, dichoptic masking shows a classic "dipper"-like TVC function, whereas monoptic masking shows no dip and a steeper "handle". There is a very strong unoriented component to the masking, which we attribute to transiently biased temporal frequency masking. Fourier analysis of "motion streak" images shows interesting differences between dichoptic and monoptic functions and the information in the stimulus. Our data add weight to the growing body of evidence that the oriented blur of motion streaks contributes to the processing of fast motion signals.

  13. A compact large-format streak tube for imaging lidar

    Science.gov (United States)

    Hui, Dandan; Luo, Duan; Tian, Liping; Lu, Yu; Chen, Ping; Wang, Junfeng; Sai, Xiaofeng; Wen, Wenlong; Wang, Xing; Xin, Liwei; Zhao, Wei; Tian, Jinshou

    2018-04-01

    The streak tubes with a large effective photocathode area, large effective phosphor screen area, and high photocathode radiant sensitivity are essential for improving the field of view, depth of field, and detectable range of the multiple-slit streak tube imaging lidar. In this paper, a high spatial resolution, large photocathode area, and compact meshless streak tube with a spherically curved cathode and screen is designed and tested. Its spatial resolution reaches 20 lp/mm over the entire Φ28 mm photocathode working area, and the simulated physical temporal resolution is better than 30 ps. The temporal distortion in our large-format streak tube, which is shown to be a non-negligible factor, has a minimum value as the radius of curvature of the photocathode varies. Furthermore, the photocathode radiant sensitivity and radiant power gain reach 41 mA/W and 18.4 at the wavelength of 550 nm, respectively. Most importantly, the external dimensions of our streak tube are no more than Φ60 mm × 110 mm.

  14. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    International Nuclear Information System (INIS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-01-01

    This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract the vibrations that occur when humans use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm 3 and is strong enough to suspend an image sensor. The process utilized to fabricate the IS includes inductively coupled plasma (ICP) processes, reactive ion etching (RIE) processes, and the flip-chip bonding method. The IS is designed so that the electrical signals from the suspended image sensor are successfully routed out via signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%. (paper)

  15. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    Science.gov (United States)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating breakdown in tissue structure brought by precancerous or cancerous conditions. With this data the patient can seek further testing and diagnosis as needed. Proliferation of this device will allow communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  16. Using active contour models for feature extraction in camera-based seam tracking of arc welding

    DEFF Research Database (Denmark)

    Liu, Jinchao; Fan, Zhun; Olsen, Søren

    2009-01-01

    In recent decades much research has been performed in order to allow better control of arc welding processes, but the success has been limited, and the vast majority of industrial structural welding work is therefore still done manually. Closed-loop and nearly-closed-loop control of the processes requires the extraction of characteristic parameters of the welding groove close to the molten pool, i.e. in an environment dominated by the very intense light emission from the welding arc. The typical industrial solution today is a laser-scanner containing a camera as well as a laser source illuminating the groove by a light curtain, thus allowing details of the groove geometry to be extracted by triangulation. This solution is relatively expensive and must act several centimetres ahead of the molten pool. In addition, laser-scanners often show problems when dealing with shiny surfaces…

  17. The findings of F-18 FDG camera-based coincidence PET in acute leukemia

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, S. N.; Joh, C. W.; Lee, M. H. [Ajou University School of Medicine, Suwon (Korea, Republic of)

    2002-07-01

    We evaluated the usefulness of F-18 FDG coincidence PET (CoDe-PET) using a dual-head gamma camera in the assessment of patients with acute leukemia. F-18 FDG CoDe-PET studies were performed in 8 patients with acute leukemia (6 ALL and 2 AML) before or after treatment. CoDe-PET was performed utilizing a dual-head gamma camera equipped with a 5/8 inch NaI(Tl) crystal. Image acquisition began 60 minutes after the injection of F-18 FDG in the fasting state. The whole trunk from the cervical to the inguinal regions, or a selected region, was scanned. No attenuation correction was made, and image reconstruction was done using filtered back-projection. CoDe-PET studies were evaluated visually. F-18 FDG images performed in 5 patients with ALL before therapy depicted multiple lymph node involvement and diffusely increased uptake involving the axial skeleton, pelvis, and femurs. F-18 FDG images done in 2 AML patients after chemotherapy showed only diffusely increased uptake in the sternum, ribs, spine, pelvis, and proximal femur; this may be due to a G-CSF stimulation effect in view of the drug history. However, bone marrow histology showed scattered blast cells suggesting incomplete remission in one and complete remission in the other. The F-18 image done in 1 ALL patient after therapy showed no abnormal uptake. CoDe-PET with F-18 FDG in acute lymphoblastic leukemia showed multiple lymph node and bone marrow involvement throughout the body. Therefore we conclude that CoDe-PET with F-18 FDG is useful for evaluating disease extent in acute lymphoblastic leukemia. However, there was a limitation in assessing therapy effectiveness during treatment due to reactive bone marrow.

  18. Feasibility study of a novel general purpose CZT-based digital SPECT camera: initial clinical results.

    Science.gov (United States)

    Goshen, Elinor; Beilin, Leonid; Stern, Eli; Kenig, Tal; Goldkorn, Ronen; Ben-Haim, Simona

    2018-03-14

    The performance of a prototype novel digital single-photon emission computed tomography (SPECT) camera with multiple pixelated CZT detectors and high sensitivity collimators (Digital SPECT; Valiance X12 prototype, Molecular Dynamics) was evaluated in various clinical settings. Images obtained in the prototype system were compared to images from an analog camera fitted with high-resolution collimators. Clinical feasibility, image quality, and diagnostic performance of the prototype were evaluated in 36 SPECT studies in 35 patients including bone (n = 21), brain (n = 5), lung perfusion (n = 3), and parathyroid (n = 3) and one study each of sentinel node and labeled white blood cells. Images were graded on a scale of 1-4 for sharpness, contrast, overall quality, and diagnostic confidence. Digital CZT SPECT provided a statistically significant improvement in sharpness and contrast in clinical cases (mean score of 3.79 ± 0.61 vs. 3.26 ± 0.50 and 3.92 ± 0.29 vs. 3.34 ± 0.47 respectively, p < 0.001 for both). Overall image quality was slightly higher for the digital SPECT but not statistically significant (3.74 vs. 3.66). CZT SPECT provided significantly improved image sharpness and contrast compared to the analog system in the clinical settings evaluated. Further studies will evaluate the diagnostic performance of the system in large patient cohorts in additional clinical settings.

  19. Patient positioning in radiotherapy based on surface imaging using time of flight cameras

    Energy Technology Data Exchange (ETDEWEB)

    Gilles, M., E-mail: marlene.gilles@univ-brest.fr; Fayad, H.; Clement, J. F.; Bert, J.; Visvikis, D. [INSERM, UMR 1101, LaTIM, Brest 29609 (France); Miglierini, P. [Academic Radiotherapy Department, CHRU Morvan, Brest 29200 (France); Scheib, S. [Varian Medical Systems Imaging Laboratory GmbH, Baden-Daettwil 5405 (Switzerland); Cozzi, L. [Radiotherapy and Radiosurgery Department, Instituto Clinico Humanitas, Rozzano 20089 (Italy); Boussion, N.; Schick, U.; Pradier, O. [INSERM, UMR 1101, LaTIM, Brest 29609, France and Academic Radiotherapy Department, CHRU Morvan, Brest 29200 (France)

    2016-08-15

    Purpose: To evaluate the patient positioning accuracy in radiotherapy using a stereo time-of-flight (ToF) camera system. Methods: A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measurements provided by this system were compared to the actually applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. Results: The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors, with respective means of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. Conclusions: The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could allow, in the future, not only accurate positioning but also real-time tracking of any patient intrafraction motion (translational, involuntary, and breathing).
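    In the simplest case, where the daily surface scan differs from the reference scan only by a pure translation (no rotation or deformation), the couch correction reduces to the shift between the centroids of the two point clouds. A simplified sketch under that assumption (the actual system presumably performs a full rigid registration):

```python
def estimated_translation(reference_cloud, daily_cloud):
    """Translation (in the clouds' units, e.g. mm) to reapply to the couch,
    assuming a pure translation between the reference surface scan and
    today's scan. Each cloud is a list of (x, y, z) points; the shift of
    the centroids is the least-squares translation between the clouds."""
    def centroid(cloud):
        n = float(len(cloud))
        return tuple(sum(p[i] for p in cloud) / n for i in range(3))
    ref_c = centroid(reference_cloud)
    day_c = centroid(daily_cloud)
    # moving the patient by (reference - daily) restores the reference pose
    return tuple(r - d for r, d in zip(ref_c, day_c))
```

Real intrafraction motion also includes rotations and breathing deformation, which is why the system registers full point clouds rather than centroids alone.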

  20. Cassava brown streak disease in Rwanda, the associated viruses and disease phenotypes.

    Science.gov (United States)

    Munganyinka, E; Ateka, E M; Kihurani, A W; Kanyange, M C; Tairo, F; Sseruwagi, P; Ndunguru, J

    2018-02-01

    Cassava brown streak disease (CBSD) was first observed on cassava (Manihot esculenta) in Rwanda in 2009. In 2014 eight major cassava-growing districts in the country were surveyed to determine the distribution and variability of symptom phenotypes associated with CBSD, and the genetic diversity of cassava brown streak viruses. Distribution of the CBSD symptom phenotypes and their combinations varied greatly between districts, cultivars and their associated viruses. The symptoms on leaf alone recorded the highest (32.2%) incidence, followed by roots (25.7%), leaf + stem (20.3%), leaf + root (10.4%), leaf + stem + root (5.2%), stem + root (3.7%), and stem (2.5%) symptoms. Analysis by RT-PCR showed that single infections of Ugandan cassava brown streak virus (UCBSV) were most common (74.2% of total infections) and associated with all the seven phenotypes studied. Single infections of Cassava brown streak virus (CBSV) were predominant (15.3% of total infections) in CBSD-affected plants showing symptoms on stems alone. Mixed infections (CBSV + UCBSV) comprised 10.5% of total infections and predominated in the combinations of leaf + stem + root phenotypes. Phylogenetic analysis and the estimates of evolutionary divergence, using partial sequences (210 nt) of the coat protein gene, revealed that in Rwanda there is one type of CBSV and an indication of diverse UCBSV. This study is the first to report the occurrence and distribution of both CBSV and UCBSV based on molecular techniques in Rwanda.

  1. Picosecond Streaked K-Shell Spectroscopy of Near Solid-Density Aluminum Plasmas

    Science.gov (United States)

    Stillman, C. R.; Nilson, P. M.; Ivancic, S. T.; Mileham, C.; Froula, D. H.; Golovkin, I. E.

    2016-10-01

    The thermal x-ray emission from rapidly heated solid targets containing a buried-aluminum layer was measured. The targets were driven by high-contrast 1ω or 2ω laser pulses at focused intensities up to 1 × 10^19 W/cm^2. A streaked x-ray spectrometer recorded the Al Heα and lithium-like satellite lines with 2-ps temporal resolution and moderate resolving power (E/ΔE ≈ 700). Time-integrated measurements over the same spectral range were used to correct the streaked data for variations in photocathode sensitivity. Line widths and intensity ratios from the streaked data were interpreted using a collisional-radiative atomic model to provide the average plasma conditions in the buried layer as a function of time. It was observed that the resonance line tends toward lower photon energies at high electron densities. The measured shifts will be compared to predicted shifts from Stark-operator calculations at the inferred plasma conditions. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944, the Office of Fusion Energy Sciences Award Number DE-SC0012317, and the Stewardship Science Graduate Fellowship Grant Number DE-NA0002135.

  2. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    Science.gov (United States)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties remain elusive. We measured key leaf properties, including chlorophyll concentration and leaf reflectance, on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found a mismatch between the camera-based phenological metric for canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoid concentrations and leaf mass per area during late spring/early summer: the seasonal peak of gcc occurs approximately 20 days earlier than the peak of total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can serve as ground validation for satellites. Using this high-temporal-resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths, and the potential uncertainties, of using canopy color as a proxy for ecosystem functioning.
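    The canopy greenness metric discussed above, the green chromatic coordinate, is simply the green fraction of the summed digital numbers over a region of interest. A minimal sketch (the toy pixel values are illustrative):

```python
import numpy as np

def green_chromatic_coordinate(image):
    """gcc = G / (R + G + B), computed over a canopy region of interest.

    `image` is an (H, W, 3) RGB array of camera digital numbers; the
    returned scalar is the camera-based greenness metric.
    """
    image = np.asarray(image, dtype=float)
    r = image[..., 0].sum()
    g = image[..., 1].sum()
    b = image[..., 2].sum()
    return g / (r + g + b)

# A toy canopy patch that is predominantly green
patch = np.zeros((4, 4, 3))
patch[..., 0] = 50    # red
patch[..., 1] = 100   # green
patch[..., 2] = 50    # blue
print(green_chromatic_coordinate(patch))  # 0.5
```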

  3. Introgression of chromosome segments from multiple alien species in wheat breeding lines with wheat streak mosaic virus resistance

    Science.gov (United States)

    Pyramiding of alien-derived Wheat streak mosaic virus (WSMV) resistance and resistance-enhancing genes in wheat is a cost-effective and environmentally safe strategy for disease control. PCR-based markers and cytogenetic analysis with genomic in situ hybridisation were applied to identify alien chrom...

  4. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high, which creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We show an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for the spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
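    Once the tilt angle, focal length and camera height are known, the pixel-to-meter conversion follows from a flat-ground pinhole model: a point imaged y pixels below the image centre lies at a depression angle of tilt + atan(y/f), and the ground distance is the camera height divided by the tangent of that angle. A sketch under those assumptions (the numeric values are illustrative):

```python
import math

def ground_distance(height_m, tilt_deg, y_px, focal_px):
    """Distance along the ground to a point imaged y_px below image centre.

    height_m: camera height above the ground plane; tilt_deg: downward
    tilt of the optical axis below the horizon; focal_px: focal length
    in pixels -- exactly the three intra-camera parameters estimated
    from pedestrian detections.
    """
    depression = math.radians(tilt_deg) + math.atan2(y_px, focal_px)
    return height_m / math.tan(depression)

# Camera 4 m up, tilted 20 degrees down, f = 1000 px;
# a pedestrian's feet detected 364 px below the image centre:
print(round(ground_distance(4.0, 20.0, 364.0, 1000.0), 2))  # ≈ 4.77 m
```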

  5. Streak artifacts on Kidney CT: Ionic vs nonionic contrast media

    International Nuclear Information System (INIS)

    Cho, Eun Ok; Kim, Won Hong; Jung, Myung Suk; Kim, Yong Hoon; Hur, Gham

    1993-01-01

    The authors reviewed findings of enhanced abdominal computed tomography (CT) scans to determine the differences between a higher dose of a conventional ionic contrast medium (iothalamate meglumine) and a lower dose of a new, nonionic contrast medium (ioversol). One hundred adult patients were divided into two groups of 50 patients each, and iothalamate meglumine or ioversol was administered intravenously. The male-to-female ratio was 28:22 in the former group and 29:21 in the latter. We examined the degree of renal streak artifact and measured the Hounsfield number of urine in the renal collecting system. The degree of streak artifact differed significantly depending on the osmolality of the contrast medium used, and this was related to the urine CT number (P < 0.005). We conclude that nonionic low-osmolar contrast media are more prone to cause streak artifacts and distortion of the renal image than conventional ionic high-osmolar contrast media.

  6. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    Science.gov (United States)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the scale of the consumer camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e. digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization have been released. Quiet actuators are another indispensable component. A design with little performance degradation from errors of all types is preferred, for a good balance of size, lens performance, and production yield. Decentering sensitivity of moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups, so the development of high-ratio zoom lenses with vibration reduction is confronted by the challenge of reduced performance due to decentering, making control of the decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater; a 5-group configuration makes smaller lenses more difficult but enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  7. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light-pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation-event position-coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible to obtain less distortion in the field of view and improved spatial resolution compared with conventional planar-photocathode gamma cameras.

  8. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distances between the individual photomultiplier tubes of the multiple-radioisotope camera on the one hand, and between the tube configuration and the scintillator plate on the other. For this purpose, the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means, the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de]

  9. 100ps UV/x-ray framing camera

    International Nuclear Information System (INIS)

    Eagles, R.T.; Freeman, N.J.; Allison, J.M.; Sibbett, W.; Sleat, W.E.; Walker, D.R.

    1988-01-01

    The requirement for a sensitive two-dimensional imaging diagnostic with picosecond time resolution, particularly in the study of laser-produced plasmas, has previously been discussed. A temporal sequence of framed images would provide useful supplementary information to that provided by time-resolved streak images across a spectral region of interest, from the visible to the x-ray. To fulfill this requirement, the Picoframe camera system has been developed. Results pertaining to the operation of a camera having S20 photocathode sensitivity are reviewed, and the characteristics of a UV/x-ray-sensitive version of the Picoframe system are presented.

  10. Low-cost and high-speed optical mark reader based on an intelligent line camera

    Science.gov (United States)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR reader, and that data integrity is checked and the data validated before processing. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line-scan camera: the position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper, the proposed OMR system is compared with commercially available systems on the market.
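    The mark-decision step at the heart of an OMR reader reduces to comparing the mean grey level inside each answer cell against a darkness threshold. A minimal sketch (the cell layout, threshold and image values are assumptions for illustration, not the paper's design):

```python
import numpy as np

# Hypothetical layout: (row, col, height, width) of each answer bubble.
CELLS = {"A": (0, 0, 10, 10), "B": (0, 12, 10, 10), "C": (0, 24, 10, 10)}
THRESHOLD = 128  # cells darker than this mean grey level count as marked

def read_marks(gray):
    """Return labels of cells whose mean intensity indicates a pencil mark."""
    marked = []
    for label, (r, c, h, w) in sorted(CELLS.items()):
        if gray[r:r + h, c:c + w].mean() < THRESHOLD:
            marked.append(label)
    return marked

sheet = np.full((10, 40), 255, dtype=np.uint8)  # blank white answer sheet
sheet[0:10, 12:22] = 30                         # bubble "B" filled in
print(read_marks(sheet))  # ['B']
```

    In the paper's architecture the equivalent comparison runs in the FPGA at line rate, with the micro-controller verifying the detected positions.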

  11. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.
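    The real-time distance estimate in the first stage rests on standard stereo triangulation: for a rectified camera pair, depth is focal length times baseline divided by disparity. A minimal sketch (the camera parameters below are illustrative, not the VIDA system's):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair.

    focal_px: focal length in pixels; baseline_m: separation of the two
    cameras in metres; disparity_px: horizontal pixel offset of the same
    point between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 700 px, 6 cm baseline, 21 px disparity -> object 2 m away
print(round(depth_from_disparity(700.0, 0.06, 21.0), 3))  # 2.0
```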

  13. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time under complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold-learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was more robust than the typical Kotsia method because it retains more of the structural characteristics of the data to be classified in space-time.

  14. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    Science.gov (United States)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on the production of cotton textiles: they degrade product quality and thereby the economic benefits and market competitiveness of the producer. The detection and elimination of heterogeneous fibre is therefore particularly important for improving the processing of cotton, advancing the quality of cotton textiles and reducing production cost, and the technology has favorable market value and development prospects. Optical detecting systems have found widespread application. In this system, we use a linear CCD camera to scan the running cotton; the video signals are then fed into a computer and processed according to differences in grayscale, and if heterogeneous fibre is present, the computer sends a command to a gas nozzle to eliminate it. In this paper, we adopt a monochrome LED array as the new detecting light source; its flicker, luminous-intensity stability, lumen depreciation and useful life are all superior to those of fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, finally adopting a violet LED array as the detecting light source. The whole hardware structure and the software design are introduced in this paper.

  15. Research on Deep Joints and Lode Extension Based on Digital Borehole Camera Technology

    Directory of Open Access Journals (Sweden)

    Han Zengqiang

    2015-09-01

    Structural characteristics of rock and orebody in deep boreholes are obtained by borehole camera technology. By investigating the joints and fissures in the Shapinggou molybdenum mine, the dominant orientations of joint fissures in the surrounding rock and orebody were statistically analyzed. Applying the theory of metallogeny and geostatistics, the relationship between joint fissures and the lode's extension direction is explored. The results indicate that joints in the orebody of borehole ZK61 have only one dominant orientation, SE126°∠68°, whereas the dominant orientations of joints in the surrounding rock were SE118°∠73°, SW225°∠70°, SE122°∠65° and NE79°∠63°. A preliminary conclusion is that the lode's extension direction is specific and is influenced by the joints of the surrounding rock. Results from other boreholes generally agree well with those of ZK61, suggesting that the analysis reliably reflects the lode's extension properties and that the conclusion provides an important reference for deep ore prospecting.

  16. A Novel Indoor Mobile Localization System Based on Optical Camera Communication

    Directory of Open Access Journals (Sweden)

    Md. Tanvir Hossan

    2018-01-01

    Localizing smartphones in indoor environments offers excellent opportunities for e-commerce. In this paper, we propose a localization technique for smartphones in indoor environments. This technique can calculate the coordinates of a smartphone using the existing illumination infrastructure of light-emitting diodes (LEDs), without further modification of the existing LED light infrastructure. Smartphones do not have a fixed position and may move frequently anywhere in an environment. Our algorithm uses multiple (i.e., more than two) LED lights simultaneously. The smartphone reads the LED-IDs from the LED lights within the field of view (FOV) of its camera; these LED-IDs contain the coordinate information (e.g., the x- and y-coordinates) of the LED lights. Concurrently, the pixel area of the projected image on the image sensor (IS) changes with the relative motion between the smartphone and each LED light, which allows the algorithm to calculate the distance from the smartphone to that LED. At the end of this paper, we present simulated results for predicting the next possible location of the smartphone using a Kalman filter, to minimize the time delay of the coordinate calculation. These simulated results demonstrate that the position resolution can be maintained within 10 cm.
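    The prediction step mentioned at the end of the abstract can be realized with a constant-velocity Kalman filter over the (x, y) position fixes. A minimal sketch; the time step and noise covariances below are assumptions, not values from the paper:

```python
import numpy as np

dt = 1.0  # time between camera-based position fixes (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # we observe position only
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 0.05 * np.eye(2)                  # measurement noise (assumed)

x = np.zeros(4)   # state [x, y, vx, vy]
P = np.eye(4)

def step(x, P, z):
    # Predict forward one frame, then correct with the camera fix z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Phone moving +0.1 m per step along x; feed in 19 noiseless fixes
for k in range(1, 20):
    x, P = step(x, P, np.array([0.1 * k, 0.0]))
prediction = F @ x        # one-step-ahead state estimate
print(prediction[:2])     # close to the next fix (2.0, 0.0)
```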

  17. Data fusion for improved camera-based detection of respiration in neonates

    Science.gov (United States)

    Jorge, João.; Villarroel, Mauricio; Chaichulee, Sitthichok; McCormick, Kenny; Tarassenko, Lionel

    2018-02-01

    Monitoring respiration during neonatal sleep is notoriously difficult due to the nonstationary nature of the signals and the presence of spurious noise. Current approaches rely on the use of adhesive sensors, which can damage the fragile skin of premature infants. Recently, non-contact methods using low-cost RGB cameras have been proposed to acquire this vital sign from (a) motion or (b) photoplethysmographic signals extracted from the video recordings. Recent developments in deep learning have yielded robust methods for subject detection in video data. In the analysis described here, we present a novel technique for combining respiratory information from high-level visual descriptors provided by a multi-task convolutional neural network. Using blind source separation, we find the combination of signals which best suppresses pulse and motion distortions and subsequently use this to extract a respiratory signal. Evaluation results were obtained from recordings of 5 neonatal patients nursed in the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital, Oxford, UK. We compared respiratory rates derived from this fused breathing signal against those measured using the gold standard provided by the attending clinical staff. We show that the respiratory rate (RR) can be accurately estimated over the entire range of respiratory frequencies.
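    The source-selection idea, finding the combination of channels that best isolates breathing, can be sketched by decorrelating the descriptor signals and keeping the component with the largest fraction of power in the breathing band. The sketch below substitutes a plain PCA (via SVD) for the paper's blind-source-separation step, with simulated signals and an assumed 20 Hz frame rate:

```python
import numpy as np

def best_respiratory_component(signals, fs, band=(0.5, 1.5)):
    """Return the decorrelated component with the most respiratory power.

    A simplified stand-in for blind source separation: decorrelate the
    channels with PCA (via SVD), then keep the component whose spectrum
    is most concentrated in the neonatal breathing band (Hz).
    """
    X = signals - signals.mean(axis=1, keepdims=True)  # channels x samples
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows: components
    freqs = np.fft.rfftfreq(Vt.shape[1], 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    spectra = np.abs(np.fft.rfft(Vt, axis=1)) ** 2
    ratio = spectra[:, in_band].sum(axis=1) / spectra.sum(axis=1)
    return Vt[np.argmax(ratio)]

fs = 20.0                                 # camera frame rate (assumed)
t = np.arange(0, 30, 1.0 / fs)
breathing = np.sin(2 * np.pi * 0.8 * t)   # 48 breaths/min
pulse = np.sin(2 * np.pi * 2.2 * t)       # 132 beats/min distortion
rng = np.random.default_rng(1)
channels = np.vstack([breathing + 0.3 * pulse,
                      0.5 * breathing - 0.4 * pulse,
                      0.1 * rng.standard_normal(t.size)])
resp = best_respiratory_component(channels, fs)
freqs = np.fft.rfftfreq(resp.size, 1.0 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(resp)))]
print(round(peak, 2))  # 0.8, the simulated breathing frequency
```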

  18. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded in the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method thus enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration, so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible yet not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
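    The CFA-domain embedding itself can be illustrated by alpha-blending the watermark into the raw mosaic samples before demosaicing. This is a minimal sketch of the idea, not the paper's HVS-adaptive algorithm; the blend factor and images are assumptions:

```python
import numpy as np

def embed_visible_watermark(bayer, watermark, alpha=0.3):
    """Blend a grey-scale watermark into a Bayer CFA raw image.

    Each raw sample under the watermark, regardless of its R/G/B
    position in the mosaic, is alpha-blended with the co-located
    watermark pixel, so the mark survives demosaicing and compression.
    """
    bayer = bayer.astype(float)
    mark = watermark.astype(float)
    blended = (1 - alpha) * bayer + alpha * mark
    out = np.where(mark > 0, blended, bayer)   # blend only under the mark
    return np.clip(out, 0, 255).astype(np.uint8)

raw = np.full((4, 4), 100, dtype=np.uint8)     # toy Bayer mosaic
logo = np.zeros((4, 4), dtype=np.uint8)
logo[:2, :2] = 255                             # watermark in one corner
stamped = embed_visible_watermark(raw, logo)
print(stamped[0, 0], stamped[3, 3])  # 146 100: brighter only under the logo
```

    The HVS component of the paper would modulate alpha per pixel according to local luminance and texture; here it is a constant.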

  19. The design of visualization telemetry system based on camera module of the commercial smartphone

    Science.gov (United States)

    Wang, Chao; Ye, Zhao; Wu, Bin; Yin, Huan; Cao, Qipeng; Zhu, Jun

    2017-09-01

    Satellite telemetry provides vital indicators for estimating the performance of a satellite. The telemetry data, threshold ranges and variation tendencies collected over the whole operational life of a satellite can guide and evaluate subsequent satellite designs. The rotating parts of the satellite (e.g., solar arrays, antennas and oscillating mirrors) affect the collection of solar energy and other satellite functions. Visual telemetry (pictures, video) is captured to interpret the status of the satellite qualitatively in real time, as an important supplement for troubleshooting. Mature commercial off-the-shelf (COTS) products have obvious advantages in construction, electronics, interfaces and image processing; considering also weight, power consumption and cost, they can be used directly in our application or adopted for secondary development. In this paper, simulations of the radiation characteristics of solar arrays in orbit are presented, and a suitable camera module from a commercial smartphone is adopted after precise calculation and product selection. Considering the advantages of COTS devices, which can address both fundamental and complicated satellite problems, the proposed technique is innovative for future project implementation.

  20. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    Science.gov (United States)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
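    The linear sequential-loop filtering that the system performs can be sketched as a list of single-frame filters applied in user-defined order, so only one working buffer is needed per frame. The filters and their order below are illustrative, not the system's actual filter set:

```python
import numpy as np

def grayscale(frame):
    # Collapse RGB to luminance, kept 3-channel for pipeline uniformity.
    return frame.mean(axis=2, keepdims=True).repeat(3, axis=2)

def invert(frame):
    return 255.0 - frame

def threshold(frame, level=128.0):
    return np.where(frame >= level, 255.0, 0.0)

PIPELINE = [grayscale, invert, threshold]  # any user-defined order

def process_frame(frame):
    out = frame.astype(float)
    for filt in PIPELINE:     # sequential loop: low memory and CPU load
        out = filt(out)
    return out.astype(np.uint8)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (200, 200, 200)   # one bright pixel
result = process_frame(frame)
print(result[0, 0, 0], result[1, 1, 0])  # 0 255: contrast is inverted
```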

  1. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    International Nuclear Information System (INIS)

    Robert, Charlotte; Montemont, Guillaume; Rebuffel, Veronique; Guerin, Lucie; Verger, Loick; Buvat, Irene

    2010-01-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel-hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain in efficiency, a high-aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on a fine sampling of the CZT detector and on depth-of-interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss of spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  2. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    Science.gov (United States)

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection; for instance, the CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  3. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid any infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance, along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and VHSIC Hardware Description Language (VHDL) implementation are realized without employing any Nios processor or System on a Programmable Chip (SOPC) builder-based Central Processing Unit (CPU) core, which keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system is employed to convey the micro-object towards a desired location. The devised system architecture and implementation principle are described, and its functionality is verified. The results confirm the proper functioning of the developed system, along with its outperformance of other solutions.

  4. Evaluation of arterial oxygen saturation using RGB camera-based remote photoplethysmography

    Science.gov (United States)

    Nishidate, Izumi; Nakano, Kazuya; McDuff, Daniel; Niizeki, Kyuichi; Aizu, Yoshihisa; Haneishi, Hideaki

    2018-02-01

    A plethysmogram is the periodic variation in blood volume due to the cardiac pulse traveling through the body. Photoplethysmography (PPG) has been widely used to assess the cardiovascular system, including heart rate, blood pressure, cardiac output, and vascular compliance. We have previously proposed a non-contact PPG imaging method using a digital red-green-blue (RGB) camera. In this method, a Monte Carlo simulation of light transport is used to specify the relationship among the RGB values and the concentrations of oxygenated hemoglobin (CHbO) and deoxygenated hemoglobin (CHbR). The total hemoglobin concentration (CHbT) can be calculated as the sum of CHbO and CHbR. Applying fast Fourier transform (FFT) band-pass filters to each pixel of the sequential images of CHbT along the time line, a two-dimensional plethysmogram can be reconstructed. In this study, we further extend the method to imaging the arterial oxygen saturation (SaO2). The PPG signals for both CHbO and CHbR are extracted by the FFT band-pass filter, and the pulse wave amplitudes (PWAs) of CHbO and CHbR are calculated. We assume that the PWA for CHbO decreases and that for CHbR increases as SaO2 decreases. The ratio of the PWA for CHbO to that for CHbR is associated with the reference value of SaO2 measured by a commercially available pulse oximeter, which provides an empirical formula to estimate SaO2 from the PPG signal at each pixel of the RGB image. In vivo animal experiments with rats during variation of the fraction of inspired oxygen (FiO2) demonstrated the feasibility of the proposed method.
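    The final mapping from pulse-wave amplitudes to saturation is an empirical calibration against a pulse oximeter. A hedged sketch of such a ratio-based estimate (the linear form and the constants a, b are placeholders, not the paper's fitted values):

```python
import numpy as np

def estimate_sao2(pwa_hbo, pwa_hbr, a=110.0, b=25.0):
    """Estimate SaO2 (%) from the pulse-wave amplitudes of HbO and HbR.

    Uses an assumed linear calibration SaO2 = a - b * (PWA_HbR / PWA_HbO);
    real constants would come from regression against a pulse oximeter.
    """
    ratio = pwa_hbr / pwa_hbo
    return float(np.clip(a - b * ratio, 0.0, 100.0))

# As SaO2 falls, the HbO amplitude drops and the HbR amplitude rises,
# so the ratio grows and the estimate decreases:
print(estimate_sao2(1.0, 0.5))            # 97.5  (well oxygenated)
print(round(estimate_sao2(0.6, 0.9), 1))  # 72.5  (desaturated)
```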

  5. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, in which a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm. PMID:24670718

  6. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, in which a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm.
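
    The idea of network filtering can be sketched as a greedy coverage check. This is a hypothetical illustration, not the paper's two filtering methods: images are discarded one at a time as long as every object point remains visible in at least a minimum number of images, a common proxy for preserving a pre-defined point accuracy.

```python
# Greedily thin an image network while keeping every point covered by at
# least `min_views` images. `visibility` is a hypothetical input mapping
# image id -> set of object-point ids seen in that image.

def filter_network(visibility, min_views=3):
    kept = dict(visibility)
    all_points = set().union(*visibility.values())
    # Try removing the least informative images first.
    for img in sorted(visibility, key=lambda i: len(visibility[i])):
        trial = {k: v for k, v in kept.items() if k != img}
        counts = {}
        for pts in trial.values():
            for p in pts:
                counts[p] = counts.get(p, 0) + 1
        if all(counts.get(p, 0) >= min_views for p in all_points):
            kept = trial            # removal is safe; keep the reduction
    return sorted(kept)
```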

  7. Monitoring system for isolated limb perfusion based on a portable gamma camera

    International Nuclear Information System (INIS)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J.; Vidal-Sicart, S.; Pons, F.; Roe, N.; Rull, R.; Pavon, N.; Pavia, J.

    2009-01-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-α) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-α and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10% the associated absolute error is ±1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after HILP with TNF-α and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-α and melphalan has been indicated. (orig.)
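
    The continuous leakage calculation can be illustrated with a simplified sketch. This is an assumed formulation, not the authors' exact algorithm: systemic leakage is expressed as the rise in background-corrected precordial count rate relative to the rate expected for a 100% leak, estimated from a small systemic calibration dose. Only the sensitivity value (257 cps/MBq) is taken from the record.

```python
# Simplified leakage bookkeeping for isolated limb perfusion monitoring.

SENSITIVITY_CPS_PER_MBQ = 257.0  # reported sensitivity of the detector system

def rate_from_activity(activity_mbq):
    """Expected count rate [cps] for a given systemic activity [MBq]."""
    return SENSITIVITY_CPS_PER_MBQ * activity_mbq

def leakage_percent(rate_cps, baseline_cps, full_leak_cps):
    """Leakage [%] from current, baseline and 100%-leak count rates."""
    return 100.0 * (rate_cps - baseline_cps) / full_leak_cps
```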

  8. Monitoring system for isolated limb perfusion based on a portable gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Vidal-Sicart, S.; Pons, F. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); Red Tematica de Investigacion Cooperativa en Cancer (RTICC), Barcelona (Spain); Roe, N. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Rull, R. [Servei de Cirurgia, Hospital Clinic, Barcelona (Spain); Pavon, N. [Inst. de Fisica Corpuscular, CSIC - UV, Valencia (Spain); Pavia, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain)

    2009-07-01

    Background: The treatment of malignant melanoma or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-{alpha}) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-{alpha} and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10% the associated absolute error is {+-}1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after HILP with TNF-{alpha} and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-{alpha} and melphalan has been indicated. (orig.)

  9. MR imaging of medullary streaks in osteosclerosis: a case report

    International Nuclear Information System (INIS)

    Lee, Hak Soo; Joo, Kyung Bin; Park, Tae Soo; Song, Ho Taek; Kim, Yong Soo; Park, Dong Woo; Park, Choong Ki

    2000-01-01

    We present a case of medullary sclerosis of the appendicular skeleton in a patient with chronic renal insufficiency for whom MR imaging findings were characteristic. T1- and T2-weighted MR images showed multiple vertical lines (medullary streaks) of low signal intensity in the metaphyses and diaphyses of the distal femur and proximal tibia

  10. Cassava brown streak disease effects on leaf metabolites and ...

    African Journals Online (AJOL)

    Cassava brown streak disease effects on leaf metabolites and pigment accumulation. ... Total reducing sugar and starch content also dropped significantly (-30% and -60%, respectively), although NASE 14 maintained a relatively higher amount of carbohydrates. Leaf protein levels were significantly reduced at a rate of 0.07 ...

  11. Avoiding acidic region streaking in two-dimensional gel ...

    Indian Academy of Sciences (India)

    Supplementary figure 6. 2DE gel images ... Number of acidic streaks: Fedyunin et al. 2012: 4.02, 6; Zuo et al. 2000: 2.54, 9; Valente et al. ... CE, 3rd 2009 Proteasomal protein degradation in ... Nandakumar MP, Shen J, Raman B and Marten MR.

  12. Laparoscopic Removal of Streak Gonads in Turner Syndrome.

    Science.gov (United States)

    Mandelberger, Adrienne; Mathews, Shyama; Andikyan, Vaagn; Chuang, Linus

    To demonstrate the skills necessary for complete resection of bilateral streak gonads in Turner syndrome. Video case presentation with narration highlighting the key techniques used. The video was deemed exempt from formal review by our institutional review board. Turner syndrome is a form of gonadal dysgenesis that affects 1 in 2500 live births. Patients often have streak gonads and may present with primary amenorrhea or premature ovarian failure. Patients with a mosaic karyotype that includes a Y chromosome are at increased risk for gonadoblastoma and subsequent transformation into malignancy. Gonadectomy is recommended for these patients, typically at adolescence. Streak gonads can be difficult to identify, and tissue margins are often in close proximity to critical retroperitoneal structures. Resection can be technically challenging and requires a thorough understanding of retroperitoneal anatomy and precise dissection techniques to ensure complete removal. Laparoscopic approach to bilateral salpingo-oophorectomy of streak gonads. Retroperitoneal dissection and ureterolysis are performed, with the aid of the Ethicon Harmonic Ace, to ensure complete gonadectomy. Careful and complete resection of gonadal tissue in the hands of a skilled laparoscopic surgeon is key for effective cancer risk reduction surgery in Turner syndrome mosaics. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.

  13. Significance and transmission of maize streak virus disease in Africa ...

    African Journals Online (AJOL)

    2008-12-29

    Dec 29, 2008 ... soil nutrients, altitude and temperature on the biology of maize streak virus (MSV) / vector populations is discussed. ... status of maize host plants and its effects on population dynamics of Cicadulina mbila Naudé (Homoptera: ... time necessary for the leafhopper to reach the mesophyll of the leaf and ingest ...

  14. Myocardial perfusion imaging with a cadmium zinc telluride-based gamma camera versus invasive fractional flow reserve

    Energy Technology Data Exchange (ETDEWEB)

    Mouden, Mohamed [Isala klinieken, Department of Cardiology, Zwolle (Netherlands); Isala klinieken, Department of Nuclear Medicine, Zwolle (Netherlands); Ottervanger, Jan Paul; Timmer, Jorik R. [Isala klinieken, Department of Cardiology, Zwolle (Netherlands); Knollema, Siert; Reiffers, Stoffer; Oostdijk, Ad H.J.; Jager, Pieter L. [Isala klinieken, Department of Nuclear Medicine, Zwolle (Netherlands); Boer, Menko-Jan de [University Medical Centre Nijmegen, Department of Cardiology, Nijmegen (Netherlands)

    2014-05-15

    Recently introduced ultrafast cardiac SPECT cameras with cadmium zinc telluride (CZT) detectors may provide superior image quality, allowing faster acquisition with reduced radiation doses. Although the level of concordance between conventional SPECT and invasive fractional flow reserve (FFR) measurement has been studied, that between FFR and CZT-based SPECT is not yet known. Therefore, we aimed to assess the level of concordance between CZT SPECT and FFR in a large patient group with stable coronary artery disease. Both invasive FFR and myocardial perfusion imaging with a CZT-based SPECT camera, using 99mTc-tetrofosmin as tracer, were performed in 100 patients with stable angina and intermediate-grade stenosis on invasive coronary angiography. A cut-off value of <0.75 was used to define an abnormal FFR. The mean age of the patients was 64 ± 11 years, and 64% were men. SPECT demonstrated ischaemia in 31% of the patients, and 20% had FFR <0.75. The concordance between CZT SPECT and FFR was 73% on a per-patient basis and 79% on a per-vessel basis. Discordant findings were more often seen in older patients and were mainly (19%) the result of ischaemic SPECT findings in patients with FFR ≥0.75, whereas only 8% had an abnormal FFR without ischaemia on CZT SPECT. Only 20-30% of patients with intermediate coronary stenoses had significant ischaemia as assessed by CZT SPECT or invasive FFR. CZT SPECT showed a modest degree of concordance with FFR, comparable with previous results with conventional SPECT. Further investigations are particularly necessary in patients with normal SPECT and abnormal FFR, especially to determine whether these patients should undergo revascularization. (orig.)
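
    The per-patient concordance figure can be reproduced with a short sketch. The definition is assumed (fraction of patients in whom the two tests agree, with FFR < 0.75 counted as abnormal, matching the cut-off quoted above); it is not taken verbatim from the paper.

```python
# Agreement between a binary SPECT ischaemia call and a thresholded FFR.

def concordance(spect_ischaemia, ffr_values, cutoff=0.75):
    """spect_ischaemia: booleans; ffr_values: invasive FFR per patient."""
    flags = [f < cutoff for f in ffr_values]
    agree = sum(1 for s, f in zip(spect_ischaemia, flags) if s == f)
    return agree / len(flags)
```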

  15. A compact low cost “master–slave” double crystal monochromator for x-ray cameras calibration of the Laser MégaJoule Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hubert, S., E-mail: sebastien.hubert@cea.fr; Prévot, V.

    2014-12-21

    The French Alternative Energies and Atomic Energy Commission (CEA-CESTA, France) built a specific double crystal monochromator (DCM) to perform calibration of x-ray cameras (CCD, streak and gated cameras) by means of a multiple-anode diode-type x-ray source for the MégaJoule Laser Facility. This DCM, based on a pantograph geometry, was specifically modeled to respond to the relevant engineering constraints and requirements. The major benefits are the mechanical drive of the second crystal by the first one, through a single drive motor, as well as the compactness of the entire device. Designed for flat beryl or Ge crystals, this DCM covers the 0.9–10 keV range of our High Energy X-ray Source. In this paper we present the mechanical design of the DCM, its quantitatively measured features, and its calibration, which ultimately provides monochromatized spectra with spectral purities better than 98%.
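
    The Bragg geometry underlying such a monochromator follows from E = hc / (2d sin θ), so θ = arcsin(hc / (2d·E)). The sketch below is illustrative: the 2d spacings are textbook values (beryl 10-10 ≈ 15.95 Å, Ge 111 ≈ 6.53 Å), not figures taken from the paper.

```python
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV·Å

def bragg_angle_deg(energy_kev, two_d_angstrom, order=1):
    """Bragg angle (default first order) for a photon energy and crystal 2d."""
    s = order * HC_KEV_ANGSTROM / (two_d_angstrom * energy_kev)
    if s > 1.0:
        raise ValueError("energy below the crystal's low-energy cutoff")
    return math.degrees(math.asin(s))
```

A low-energy photon on beryl diffracts at a steep angle, while a 10 keV photon on Ge(111) sits near grazing incidence, which is why a wide energy range needs several crystals and discrete geometries.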

  16. Investigation of high resolution compact gamma camera module based on a continuous scintillation crystal using a novel charge division readout method

    International Nuclear Information System (INIS)

    Dai Qiusheng; Zhao Cuilan; Qi Yujin; Zhang Hualin

    2010-01-01

    The objective of this study is to investigate a high-performance and lower-cost compact gamma camera module for a multi-head small-animal SPECT system. A compact camera module was developed using a thin lutetium oxyorthosilicate (LSO) scintillation crystal slice coupled to a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). A two-stage charge division readout board, based on a novel subtractive resistive readout with a truncated center-of-gravity (TCOG) positioning method, was developed for the camera. The performance of the camera was evaluated using a flood 99mTc source with a four-quadrant bar-mask phantom. The preliminary experimental results show that the image shrinkage problem associated with the conventional resistive readout can be effectively overcome by the novel subtractive resistive readout with an appropriate subtraction factor. The response output area (ROA) of the camera shown in the flood image was improved by up to 34%, and an intrinsic spatial resolution of the detector better than 2 mm was achieved. In conclusion, the utilization of a continuous scintillation crystal and a flat-panel PSPMT equipped with a novel subtractive resistive readout is a feasible approach for developing a high-performance and lower-cost compact gamma camera. (authors)
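
    A truncated centre-of-gravity estimate can be sketched in its generic software form (the paper's subtractive charge-division circuit realizes a related idea in hardware; this is an illustration, not their implementation): subtract a fixed fraction of the peak signal from every channel, clip negatives, then take the ordinary centre of gravity. Trimming the broad signal tails counteracts the image-shrinkage effect of a plain centre of gravity.

```python
# Generic truncated centre-of-gravity (TCOG) position estimate.

def tcog(positions, charges, fraction=0.2):
    peak = max(charges)
    trimmed = [max(q - fraction * peak, 0.0) for q in charges]
    total = sum(trimmed)
    return sum(x * q for x, q in zip(positions, trimmed)) / total
```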

  17. Detailed measurements and shaping of gate profiles for microchannel-plate-based X-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Hammel, B.A.; Bell, P.M.; Abare, A.; Bradley, D.K.; Univ. of Rochester, NY

    1994-01-01

    Gated, microchannel-plate-based (MCP) framing cameras are increasingly used worldwide for x-ray imaging of subnanosecond laser-plasma phenomena. Large dynamic range (> 1,000) measurements of gain profiles for gated microchannel plates (MCP) are presented. Temporal profiles are reconstructed for any point on the microstrip transmission line from data acquired over many shots with variable delay. No evidence for significant pulse distortion by voltage reflections at the ends of the microstrip is observed. The measured profiles compare well to predictions by a time-dependent discrete dynode model down to the 1% level. The calculations do overestimate the contrast further into the temporal wings. The role of electron transit time dispersion in limiting the minimum achievable gate duration is then investigated by using variable duration flattop gating pulses. A minimum gate duration of 50 ps is achieved with flattop gating, consistent with a fractional transit time spread of ∼ 15%

  18. PROCEDURE ENABLING SIMULATION AND IN-DEPTH ANALYSIS OF OPTICAL EFFECTS IN CAMERA-BASED TIME-OF-FLIGHT SENSORS

    Directory of Open Access Journals (Sweden)

    M. Baumgart

    2018-05-01

    Full Text Available This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help in understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach and use the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g., distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
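
    The optical-path-length bookkeeping can be illustrated with a minimal sketch (illustrative only, not the authors' Zemax/Python implementation): each traced ray carries an OPL, the simulated flight time is t = OPL / c, and a pulsed direct-ToF sensor maps the first return to a radial distance d = c·t/2 = OPL/2. Later returns (multipath) have longer OPLs and bias the depth.

```python
# Optical path length (OPL) to time of flight and direct-ToF depth.

C_M_PER_S = 299_792_458.0  # speed of light

def tof_seconds(opl_m):
    """Flight time for a ray with the given optical path length [m]."""
    return opl_m / C_M_PER_S

def depth_from_first_return(opls_m):
    """Direct-ToF depth from the shortest illumination+return path."""
    return min(opls_m) / 2.0
```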

  19. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to differences in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in the RGB subpixels. Conclusions: The authors
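
    The weighting-factor conversion amounts to a per-pixel weighted sum, Y = wR·R + wG·G + wB·B. The sketch below uses made-up placeholder weights; in the method the weights are chosen so that Y tracks the measured luminance of the colour LCD.

```python
import numpy as np

# Convert unprocessed camera RGB signals to a luminance-like gray scale
# signal with per-channel weighting factors (placeholder values).

def rgb_to_gray(rgb, weights=(0.2, 0.7, 0.1)):
    """rgb: array-like with a trailing axis of length 3 (R, G, B)."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(np.asarray(rgb, dtype=float), w, axes=([-1], [0]))
```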

  20. Streaked x-ray spectrometer having a discrete selection of Bragg geometries for Omega

    Energy Technology Data Exchange (ETDEWEB)

    Millecchia, M.; Regan, S. P.; Bahr, R. E.; Romanofsky, M.; Sorce, C. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623-1299 (United States)

    2012-10-15

    The streaked x-ray spectrometer (SXS) is used with streak cameras [D. H. Kalantar, P. M. Bell, R. L. Costa, B. A. Hammel, O. L. Landen, T. J. Orzechowski, J. D. Hares, and A. K. L. Dymoke-Bradshaw, in 22nd International Congress on High-Speed Photography and Photonics, edited by D. L. Paisley and A. M. Frank (SPIE, Bellingham, WA, 1997), Vol. 2869, p. 680] positioned with a ten-inch manipulator on OMEGA [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] and OMEGA EP [L. J. Waxer et al., Presented at CLEO/QELS 2008, San Jose, CA, 4-9 May 2008 (Paper JThB1)] for time-resolved, x-ray spectroscopy of laser-produced plasmas in the 1.4- to 20-keV photon-energy range. These experiments require measuring a portion of this photon-energy range to monitor a particular emission or absorption feature of interest. The SXS relies on a pinned mechanical reference system to create a discrete set of Bragg reflection geometries for a variety of crystals. A wide selection of spectral windows is achieved accurately and efficiently using this technique. It replaces the previous spectrometer designs that had a continuous Bragg angle adjustment and required a tedious alignment calibration procedure. The number of spectral windows needed for the SXS was determined by studying the spectral ranges selected by OMEGA users over the last decade. These selections are easily configured in the SXS using one of the 25 discrete Bragg reflection geometries and one of the six types of Bragg crystals, including two curved crystals.

  1. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which avoids the distortion of locating signals generally caused by the varying light-conducting capacities of the light conductors: the flow of light through each light conductor can be varied by means of a shutter. The light flow through the individual (or collective) light conductors can thus be balanced according to their light-conducting properties, precluding distortion of the locating signals. Each light conductor has associated with it two shutters that are adjustable independently of each other, one forming a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those which are to be balanced. The balancing of the individual light conductors can then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  2. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera produces images of the density distribution of radiation fields created by injecting or administering radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers, and computer circuits that derive from the photomultiplier outputs an analytical function dependent on the position of each scintillation in the crystal. The scintillation crystal is flat and spatially corresponds to the site where the radiation is produced. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel rows, and each row group is associated with a reference axis running perpendicular to it in the crystal plane. One computer circuit is assigned to each reference axis. In each computer circuit, every row of the group assigned to its reference axis has an adder to produce a scintillation-dependent row signal, from which the projection of the scintillation onto the reference axis is calculated. For this, the row signals of rows chosen from two neighbouring photomultiplier rows of the group are used; the scintillation must have occurred between these chosen rows, which are termed basic rows. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de]

  3. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  4. Software development and its description for Geoid determination based on Spherical-Cap-Harmonics Modelling using digital-zenith camera and gravimetric measurements hybrid data

    Science.gov (United States)

    Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.

    2017-10-01

    Over several years the Institute of Geodesy and Geoinformatics (GGI) was engaged in the design and development of a digital zenith camera. At present the camera development is finished and tests by field measurements have been done. In order to check these data and to use them for geoid model determination, the DFHRS (Digital Finite-element Height Reference Surface (HRS)) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for a GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H = h - N. The research and this publication deal with the inclusion of observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first target was to test and validate the mathematical model and software using real data from the above-mentioned zenith camera observations of deflections of the vertical. A second concern of the research was to analyze the results and the improvement of the Latvian quasi-geoid computation compared to the previous version of the HRS computed without zenith-camera-based deflections of the vertical. The further development of the mathematical model and software concerns the use of spherical cap harmonics as the designed carrier function for DFHRS v.5. It enables, in the sense of the strict integrated geodesy approach, which also holds for geodetic network adjustment, both a full gravity field and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The theoretical description of the updated version of the DFHRS software and methods is discussed in this publication.
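
    The stated height relation H = h - N can be shown with a worked example. The bilinear patch below is a toy stand-in for the DFHRS polynomial carrier functions, with made-up coefficients; only the relation H = h - N is taken from the text.

```python
# Physical height from an ellipsoidal GNSS height and a modelled geoid height.

def geoid_height(lat, lon, coeffs):
    """Toy patch N(lat, lon) = a + b*lat + c*lon + d*lat*lon [m]."""
    a, b, c, d = coeffs
    return a + b * lat + c * lon + d * lat * lon

def physical_height(h_ellipsoidal, lat, lon, coeffs):
    """H = h - N."""
    return h_ellipsoidal - geoid_height(lat, lon, coeffs)
```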

  5. Calibration of gamma cameras for the evaluation of accidental intakes of high-energy photon emitting radionuclides by humans based on urine samples

    Energy Technology Data Exchange (ETDEWEB)

    Degenhardt, A.L.; Lucena, E.A.; Reis, A.A. dos; Souza, W.O.; Dantas, A.L.A.; Dantas, B.M., E-mail: bmdantas@ird.gov.br [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Div. de Dosimetria

    2017-07-01

The prompt response to emergency situations involving suspected intakes of radionuclides requires simple and rapid methods for internal monitoring of the exposed individuals. The use of gamma cameras to estimate intakes and committed doses was investigated by the Centers for Disease Control and Prevention (CDC) of the USA in 2010. The present study aims to develop a calibration protocol for gamma cameras to be applied to internal monitoring based on urine samples, in order to evaluate the incorporation of high-energy photon emitting radionuclides in emergency situations. A gamma camera available in a public hospital located in the city of Rio de Janeiro was calibrated using a standard liquid source of {sup 152}Eu supplied by the LNMRI of the IRD. 'Efficiency vs Energy' curves at 10 and 30 cm were obtained. Calibration factors, Minimum Detectable Activities and Minimum Detectable Effective Doses of the gamma camera were calculated for {sup 137}Cs and {sup 60}Co. The gamma camera evaluated in this work presents enough sensitivity to detect activities of such radionuclides at dose levels suitable to assess suspected accidental intakes. (author)

  6. The Feasibility of Performing Particle Tracking Based Flow Measurements with Acoustic Cameras

    Science.gov (United States)

    2017-08-01

    The motion of light-reflecting tracer particles is observed, generally with a CCD or complementary metal-oxide semiconductor (CMOS) digital camera. (Dredging Operations and Environmental Research Program, ERDC/CHL SR-17-1, August 2017.)

  7. Real-time camera-based face detection using a modified LAMSTAR neural network system

    Science.gov (United States)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  8. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection

    KAUST Repository

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2016-01-01

    In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. We used the EWMA to identify features that correspond with a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. Using a subset of the original data to design classification models minimizes training time and simplifies the models. Based on features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection databases from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in fall situations and its capability for classification of detected falls. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor and naïve Bayes) showed our model to be superior.
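The EWMA monitoring step described above can be sketched in a few lines: smooth the accelerometer magnitude exponentially and flag samples whose statistic leaves a fixed control band. The smoothing factor, target level, and control limit below are illustrative assumptions, not the paper's tuned values:

```python
# Minimal sketch of EWMA-based anomaly detection on an accelerometer
# magnitude signal (in g). Samples whose EWMA statistic leaves the band
# target +/- limit are flagged as candidate fall events.

def ewma_alarms(samples, lam=0.3, target=1.0, limit=0.5):
    """Return indices where the EWMA statistic leaves the control band."""
    alarms = []
    z = target  # start the EWMA at the in-control mean (quiet standing ~1 g)
    for i, x in enumerate(samples):
        z = lam * x + (1.0 - lam) * z
        if abs(z - target) > limit:
            alarms.append(i)
    return alarms

# Quiet standing (~1 g) followed by an impact-like spike
signal = [1.0, 1.02, 0.98, 1.01, 3.5, 3.2, 1.0, 1.0]
alarms = ewma_alarms(signal)
```

In the paper's strategy, only the features around such flagged intervals would then be passed to the SVM stage that separates true falls from fall-like events.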

  9. Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array

    Science.gov (United States)

    Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.

    2018-03-01

    Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm2 CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm3 pixels that are optically coupled to 12 × 12 pixel silicon photo-multiplier (SiPM) arrays. The 144 pixels per module are read-out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable with an average energy resolution, at 662 keV, of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and modified uniformly redundant array mask. For the image reconstruction, cross correlation and maximum likelihood expectation maximization methods are used. The system shows a field of view of 45° and an angular resolution of 4.7° FWHM.
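The resistor-based charge-division readout described above reduces each module to four output signals, from which position and energy are recovered. The normalized-difference arithmetic below is the generic Anger-style decoding; the module's actual resistor network may differ:

```python
# Sketch of position/energy decoding from a four-signal charge-division
# readout: the corner charges A..D give the interaction position as
# normalized asymmetries and the deposited energy as their sum.

def decode_position(a, b, c, d):
    """Return (x, y, energy) in normalized units from four corner charges."""
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # left/right charge asymmetry
    y = ((a + b) - (c + d)) / total   # top/bottom charge asymmetry
    return x, y, total

x, y, e = decode_position(1.0, 1.0, 1.0, 1.0)   # perfectly centered event
```

In practice the (x, y) values are histogrammed into a flood map and each cluster is assigned to one of the 144 CsI(Tl) pixels before the coded-aperture image reconstruction.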

  10. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection

    KAUST Repository

    Zerrouki, Nabil

    2016-10-29

    In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. We used the EWMA to identify features that correspond with a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. Using a subset of the original data to design classification models minimizes training time and simplifies the models. Based on features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection databases from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in fall situations and its capability for classification of detected falls. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor and naïve Bayes) showed our model to be superior.

  11. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... camera control in games is discussed....

  12. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    Science.gov (United States)

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
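The ratiometric analysis described above reduces, per spot, to dividing the mean red-channel intensity (Cy3 PL) by the mean green-channel intensity (gQD PL). The sketch below works on decoded (r, g, b) pixel tuples; the pixel values and the "before/after hybridization" framing are illustrative assumptions:

```python
# Sketch of camera-based ratiometric FRET transduction: the analytical
# signal is the ratio of mean red (acceptor) to mean green (donor)
# channel intensity over a detection spot.

def rg_ratio(pixels):
    """pixels: iterable of (r, g, b) tuples; returns mean(R) / mean(G)."""
    rs = [p[0] for p in pixels]
    gs = [p[1] for p in pixels]
    return (sum(rs) / len(rs)) / (sum(gs) / len(gs))

# Hypothetical spot: FRET-sensitized red grows while donor green drops
before = [(40, 200, 10), (42, 198, 12)]
after = [(120, 150, 10), (118, 152, 12)]
increase = rg_ratio(after) / rg_ratio(before)
```

Because both channels come from the same frame, the ratio cancels much of the spot-to-spot variation in excitation intensity and paper background, which is the point of the ratiometric design.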

  13. Electron streaking in the autoionization region of H2

    International Nuclear Information System (INIS)

    Palacios, Alicia; González-Castrillo, Alberto; Martín, Fernando

    2015-01-01

    We use a UV-pump/IR-probe scheme, combining a single attosecond UV pulse and a 750 nm IR pulse, to explore laser-assisted photoionization of the hydrogen molecule in the autoionization region. The electron energy distributions exhibit unusual streaking patterns that are explored for different angles of the electron ejection with respect to the polarization vector and the molecular axis. Moreover, by controlling the time delay between the pulses, we observe that one can suppress the autoionization channel. (paper)

  14. Design and development of AXUV-based soft X-ray diagnostic camera for Aditya Tokamak

    International Nuclear Information System (INIS)

    Raval, Jayesh V.; Purohit, Shishir; Joisa, Y. Shankara

    2015-01-01

    The hot tokamak plasma emits soft X-rays (SXR) in accordance with the temperature and density, which are important to study. A silicon photodiode array (AXUV16ELG, Opto Diode, USA) based prototype SXR diagnostic was designed and developed for the ADITYA tokamak for the study of the SXR radial intensity profile, internal disruptions (sawtooth crashes) and MHD instabilities. The diagnostic has an array of 16 detectors of millimeter dimensions in a linear configuration. The Absolute eXtreme UltraViolet (AXUV) detector offers compact size and improved time response with considerably good quantum efficiency in the soft X-ray range (200 eV to 10 keV). The diagnostic is designed in compliance with the ADITYA tokamak protocol. The design geometry lets the detectors view the plasma through a slot hole (0.5 cm x 0.05 cm) and a 10 μm beryllium foil filter window, cutting off energies below 750 eV. The diagnostic was installed on the Aditya vacuum vessel at radial port no. 7, enabling it to view the core plasma. The spatial resolution of the designed configuration is 1.3 cm at the plasma centre. The signal generated from the SXR detector is acquired with a dedicated single-board-computer based data acquisition system at 50 kHz. The diagnostic recorded observations of the ohmically heated plasma. The data were then processed to construct spatial and temporal profiles of SXR intensity for the Aditya plasma. This information was complementary to that from the silicon surface barrier detector (SBD) based array for the same plasma discharge. The cross calibration between the two was considerably satisfactory under the assumptions considered. (author)

  15. Development of a gamma camera based on a multiwire proportional counter

    International Nuclear Information System (INIS)

    Anisimov, Yu.S.; Zanevskij, Yu.V.; Ivanov, A.B.

    1981-01-01

    The developed high-pressure gamma camera based on a gas multiwire detector is discussed. The main characteristics of the detector for gamma-ray energies up to 100 keV are given. The chamber can operate at pressures up to 10 atm. The detector is filled with a Xe-CH4 (90-10) mixture. The detector efficiency is about 50%, and the spatial resolution is better than 2 mm over a working region of 280x280 mm.

  16. An Intelligent Automated Door Control System Based on a Smart Camera

    Directory of Open Access Journals (Sweden)

    Jiann-Jone Chen

    2013-05-01

    This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door actions while increasing the added value for security applications. The proposed system first identifies a person in the scene, then tracks his trajectory to predict his intention to access the entrance, and finally activates the door accordingly. The experimental results show that the proposed system has the advantages of high precision, safety, reliability, and responsiveness to demands, while preserving the benefits of low cost and high added value.

  17. Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis

    International Nuclear Information System (INIS)

    Kharfi, F.; Denden, O.; Bourenane, A.; Bitam, T.; Ali, A.

    2012-01-01

    Spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objective of this work is the determination of a neutron imaging system's response in terms of spatial resolution. The proposed procedure is based on establishment of the Modulation Transfer Function (MTF). The imaging system studied is based on a high sensitivity CCD neutron camera (2x10^-5 lx at f/1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution is to the MTF determination, for which we propose an accurate edge identification method and a procedure that resolves the undersampling of the line spread function. These methods and procedures are integrated into a MatLab code and are applicable to any other neutron imaging system; they allow judging the ability of a neutron imaging system to reproduce the spatial (internal detail) properties of any object under examination. - Highlights: ► Determination of the spatial response of a neutron imaging system. ► Ability of a neutron imaging system to reproduce spatial properties of any object. ► Spatial resolution limit measurement using the MTF with the slanted-edge method. ► Accurate edge identification and line spread function sampling improvement. ► Development of a MatLab code to compute the MTF automatically.
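The ESF-to-LSF-to-MTF chain behind the slanted-edge method can be sketched in a few lines: differentiate an oversampled edge-spread function to obtain the line-spread function, then take the magnitude of its Fourier transform, normalized to 1 at zero frequency. The synthetic edge below is an assumption; the authors' MatLab code additionally handles edge identification and the undersampling problem:

```python
import math

# Sketch of MTF computation from an edge-spread function (ESF):
# LSF = dESF/dx (finite difference), MTF(k) = |DFT(LSF)(k)| / |DFT(LSF)(0)|.

def mtf_from_esf(esf):
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(lsf[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = -sum(lsf[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]  # MTF(0) = 1 by construction

# Blurred synthetic edge (smooth ramp): the MTF should fall with frequency
esf = [0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0, 1.0]
mtf = mtf_from_esf(esf)
```

The spatial resolution limit is then typically read off as the frequency at which the MTF drops below a chosen contrast threshold (often 10%).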

  18. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Ki Wan Kim

    2017-06-01

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.

  19. Imaging camera system of OYGBR-phosphor-based white LED lighting

    Science.gov (United States)

    Kobashi, Katsuya; Taguchi, Tsunemasa

    2005-03-01

    The near-ultraviolet (nUV) white LED approach is analogous to three-color fluorescent lamp technology, which is based on the conversion of nUV radiation to visible light via the photoluminescence process in phosphor materials. The nUV light is not included in the white light generated by nUV-based white LED devices. This technology can thus provide a higher quality of white light than the blue-LED-plus-YAG method. A typical device demonstrates white luminescence with Tc = 3,700 K, Ra > 93, luminous efficacy K > 40 lm/W and chromaticity (x, y) = (0.39, 0.39). The orange, yellow, green and blue (OYGB) or orange, yellow, red, green and blue (OYRGB) device shows a luminescence spectrum broader than that of an RGB white LED and a better color rendering index. Such superior luminous characteristics could be useful for several kinds of endoscope applications. We have obtained excellent pictures of the digestive organs in the stomach of a dog thanks to the strong green component and high Ra.

  20. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.

  1. Electro-optical design of a long slit streak tube

    Science.gov (United States)

    Tian, Liping; Tian, Jinshou; Wen, Wenlong; Chen, Ping; Wang, Xing; Hui, Dandan; Wang, Junfeng

    2017-11-01

    A small, long-slit streak tube with high spatial resolution was designed and optimized. A curved photocathode and screen were adopted to increase the photocathode working area and spatial resolution. High physical temporal resolution was obtained by using a slit accelerating electrode. The deflection sensitivity of the streak tube was improved by adopting two-folded deflection plates. The simulations indicate that the effective photocathode working area can reach 30 mm x 5 mm. The static spatial resolution is higher than 40 lp/mm and 12 lp/mm along the scanning and slit directions, respectively, while the physical temporal resolution is better than 60 ps. The magnification is 0.75 and 0.77 in the scanning and slit directions, and the deflection sensitivity is as high as 37 mm/kV. The external dimensions of the streak tube are only ∅74 mm x 231 mm. Thus, it can be applied in laser imaging radar systems for large field of view and high range-precision detection.

  2. Slope streaks on Mars: A new “wet” mechanism

    Science.gov (United States)

    Kreslavsky, Mikhail A.; Head, James W.

    2009-06-01

    Slope streaks are one of the most intriguing modern phenomena observed on Mars. They have mostly been interpreted as some specific type of granular flow. We propose another mechanism for slope streak formation on Mars. It involves the natural seasonal formation of a modest amount of highly concentrated chloride brines within a seasonal thermal skin, and runaway propagation of percolation fronts. Given the current state of knowledge of temperature regimes and the composition and structure of the surface layer in the slope streak regions, this mechanism is consistent with the observational constraints; it requires the assumptions that a significant part of the observed chlorine is in the form of calcium and ferric chlorides, and that a small part of the observed hydrogen is in the form of water ice. This "wet" mechanism has a number of appealing advantages in comparison to the widely accepted "dry" granular flow mechanism. Potential tests for the "wet" mechanism include better modeling of the temperature regime and observations of the seasonality of streak formation.

  3. Development of an Optical Fiber-Based MR Compatible Gamma Camera for SPECT/MRI Systems

    Science.gov (United States)

    Yamamoto, Seiichi; Watabe, Tadashi; Kanai, Yasukazu; Watabe, Hiroshi; Hatazawa, Jun

    2015-02-01

    Optical fiber is a promising material for integrated positron emission tomography (PET) and magnetic resonance imaging (MRI) PET/MRI systems. Because the material is plastic, it does not interfere with MRI. However, it was unclear whether this material can also be used for a single photon emission computed tomography (SPECT)/MRI system. For this purpose, we developed an optical fiber-based block detector for a SPECT/MRI system and tested its performance by combining 1.2 x 1.2 x 6 mm Y2SiO5 (YSO) pixels into a 15 x 15 block and coupling it to an optical fiber image guide made of 0.5-mm-diameter, 80-cm-long double-clad fibers. The image guide had a 22 x 22 mm rectangular input and an equal-size output. The input of the optical fiber image guide was bent at 90 degrees, and the output was optically coupled to a 1-in square high quantum efficiency position sensitive photomultiplier tube (HQE-PSPMT). A parallel-hole, 7-mm-thick collimator made of tungsten plastic was mounted on the YSO block. The diameter of the collimator holes was 0.8 mm, positioned in one-to-one coupling to the YSO pixels. We evaluated the intrinsic and system performance. We resolved most of the YSO pixels in a two-dimensional histogram for Co-57 gamma photons (122 keV) with an average peak-to-valley ratio of 1.5. The energy resolution was 38% full-width at half-maximum (FWHM). The system resolution was 1.7-mm FWHM at 1.5 mm from the collimator surface, and the sensitivity was 0.06%. Images of a Co-57 point source could be successfully obtained inside a 0.3 T MRI without serious interference. We conclude that the developed optical fiber-based YSO block detector is promising for SPECT/MRI systems.

  4. Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras

    OpenAIRE

    Mukaigawa, Yasuhiro; Genda, Daisuke; Yamane, Ryo; Shakunaga, Takeshi

    2003-01-01

    A color blending method for generating a high quality image of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. As each voxel is observed as different colors from different cameras, voxel color needs to be assigned appropriately from several colors. We present a color blending method, which calculates voxel color from a linear combination of the colors observed by multiple cameras. The weightings in the...
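The linear-combination rule described above can be sketched as follows: a voxel's color is a weighted average of the colors observed by several cameras. The cosine weighting between each camera's viewing direction and the surface normal is an illustrative assumption about how the weights are defined, not the paper's exact formula:

```python
import math

# Sketch of viewpoint/normal-based color blending: blend per-camera voxel
# colors with weights given by the (clamped) cosine between each camera's
# unit viewing direction and the surface normal.

def blend_color(colors, view_dirs, normal):
    """colors: per-camera (r, g, b); view_dirs: unit vectors toward cameras."""
    weights = [max(0.0, sum(v * nc for v, nc in zip(d, normal)))
               for d in view_dirs]  # cos(camera direction, normal), clamped at 0
    total = sum(weights)
    return tuple(sum(w * c[i] for w, c in zip(weights, colors)) / total
                 for i in range(3))

# Two hypothetical cameras: one head-on, one at a grazing angle of 1.4 rad
colors = [(200.0, 10.0, 10.0), (100.0, 10.0, 10.0)]
dirs = [(0.0, 0.0, 1.0), (math.sin(1.4), 0.0, math.cos(1.4))]
rgb = blend_color(colors, dirs, (0.0, 0.0, 1.0))
```

The head-on camera dominates the blend, which matches the intuition that a camera viewing a surface frontally observes its color with less distortion than one viewing it obliquely.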

  5. Two low-cost digital camera-based platforms for quantitative creatinine analysis in urine.

    Science.gov (United States)

    Debus, Bruno; Kirsanov, Dmitry; Yaroshenko, Irina; Sidorova, Alla; Piven, Alena; Legin, Andrey

    2015-10-01

    In clinical analysis, creatinine is a routine biomarker for the assessment of renal and muscular dysfunctions. Although several techniques have been proposed for a fast and accurate quantification of creatinine in human serum or urine, most of them require expensive or complex apparatus, advanced sample preparation or skilled operators. To circumvent these issues, we propose two home-made platforms based on a CD Spectroscope (CDS) and Computer Screen Photo-assisted Technique (CSPT) for the rapid assessment of creatinine level in human urine. Both systems display a linear range (r(2) = 0.9967 and 0.9972, respectively) from 160 μmol L(-1) to 1.6 mmol L(-1) for standard creatinine solutions (n = 15) with respective detection limits of 89 μmol L(-1) and 111 μmol L(-1). Good repeatability was observed for intra-day (1.7-2.9%) and inter-day (3.6-6.5%) measurements evaluated on three consecutive days. The performance of CDS and CSPT was also validated in real human urine samples (n = 26) using capillary electrophoresis data as reference. Corresponding Partial Least-Squares (PLS) regression models provided mean relative errors below 10% in creatinine quantification. Copyright © 2015 Elsevier B.V. All rights reserved.
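In the single-predictor case, the calibration idea behind the PLS models above reduces to an ordinary linear fit of signal against concentration, inverted for unknown samples. The least-squares sketch below is a simplified stand-in for the multivariate PLS used in the paper; the standards and readouts are hypothetical:

```python
# Sketch of a linear calibration curve for camera-based creatinine
# quantification: fit concentration = slope * signal + intercept on
# standards, then predict an unknown sample from its readout.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical camera signal vs creatinine standards (umol/L)
signals = [0.10, 0.30, 0.52, 0.69, 0.91]
concs = [160.0, 480.0, 800.0, 1120.0, 1440.0]
slope, intercept = fit_line(signals, concs)
predicted = slope * 0.50 + intercept   # unknown sample readout of 0.50
```

Real PLS additionally compresses many correlated color-channel variables into a few latent components, which is what makes it suitable for the full RGB data of the CDS and CSPT platforms.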

  6. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    Science.gov (United States)

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected set as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
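The traditional MLI selection that the abstract critiques can be sketched as a greedy procedure: seed with the filter of maximum ℓ2 norm, then repeatedly add the filter whose component orthogonal to the already-selected set is largest, i.e. the most linearly independent candidate. The toy transmittance vectors are assumptions for illustration:

```python
# Sketch of greedy maximum-linear-independence (MLI) filter selection.
# Filters are transmittance curves sampled at a few wavelengths; the
# residual after Gram-Schmidt projection measures linear independence.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mli_select(filters, k):
    # Traditional seeding: start from the filter with maximum l2 norm
    chosen = [max(range(len(filters)), key=lambda i: dot(filters[i], filters[i]))]
    basis = [filters[chosen[0]]]
    while len(chosen) < k:
        best, best_res, best_vec = None, -1.0, None
        for i, f in enumerate(filters):
            if i in chosen:
                continue
            r = list(f)
            for b in basis:  # remove projections onto the selected set
                coef = dot(r, b) / dot(b, b)
                r = [x - coef * y for x, y in zip(r, b)]
            res = dot(r, r)  # squared norm of the orthogonal residual
            if res > best_res:
                best, best_res, best_vec = i, res, r
        chosen.append(best)
        basis.append(best_vec)
    return chosen

# Toy transmittances sampled at 4 wavelengths: two "blue-ish", two "red-ish"
filters = [[1.0, 0.9, 0.1, 0.0],
           [0.9, 1.0, 0.2, 0.0],
           [0.0, 0.1, 1.0, 0.9],
           [0.0, 0.0, 0.2, 1.0]]
sel = mli_select(filters, 2)
```

The abstract's point is that fixing the seed by maximum norm is suboptimal; their two-step framework instead screens candidate seeds using the geometric features found in the simulation.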

  7. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence

    Directory of Open Access Journals (Sweden)

    Sui-Xian Li

    2018-05-01

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected set as the one with the maximum ℓ2 norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.

  8. Towards Remote Estimation of Radiation Use Efficiency in Maize Using UAV-Based Low-Cost Camera Imagery

    Directory of Open Access Journals (Sweden)

    Andreas Tewes

    2018-02-01

    Radiation Use Efficiency (RUE) defines the productivity with which absorbed photosynthetically active radiation (APAR) is converted to plant biomass. Readily used in crop growth models to predict dry matter accumulation, RUE is commonly determined by elaborate static sensor measurements in the field. Different definitions are used, based on total absorbed PAR (RUEtotal) or PAR absorbed by the photosynthetically active leaf tissue only (RUEgreen). Previous studies have shown that the fraction of PAR absorbed (fAPAR), which supports the assessment of RUE, can be reliably estimated via remote sensing (RS), but unfortunately at spatial resolutions too coarse for experimental agriculture. UAV-based RS offers the possibility to cover plant reflectance at very high spatial and temporal resolution, possibly covering several experimental plots in little time. We investigated whether (a) UAV-based low-cost camera imagery allowed estimating RUEs in different experimental plots where maize was cultivated in the growing season of 2016, (b) those values were different from the ones previously reported in the literature, and (c) there was a difference between RUEtotal and RUEgreen. We determined fractional cover and canopy reflectance based on the RS imagery. Our study found that RUEtotal ranges between 4.05 and 4.59, and RUEgreen between 4.11 and 4.65. These values are higher than those published in other research articles, but not outside the range of plausibility. The difference between RUEtotal and RUEgreen was minimal, possibly due to prolonged canopy greenness induced by the stay-green trait of the cultivar grown. The procedure presented here makes time-consuming APAR measurements for determining RUE, especially in large experiments, superfluous.
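The RUE definition used above reduces to accumulated dry matter divided by cumulative absorbed PAR, with APAR obtained as fAPAR times incident PAR. The daily values below are made-up illustrations, not measurements from the study:

```python
# Sketch of the RUE computation: RUE = total dry matter / sum(APAR),
# where daily APAR = fAPAR * incident PAR. Units: g/m2 and MJ/m2,
# giving RUE in g dry matter per MJ of absorbed PAR.

def rue(dry_matter_g_m2, fapar_series, par_series_mj_m2):
    apar = sum(f * p for f, p in zip(fapar_series, par_series_mj_m2))
    return dry_matter_g_m2 / apar

fapar = [0.2, 0.5, 0.8, 0.9]      # hypothetical seasonal fAPAR development
par = [10.0, 10.0, 10.0, 10.0]    # hypothetical daily incident PAR, MJ/m2
value = rue(96.0, fapar, par)     # g dry matter per MJ APAR
```

With fAPAR derived per plot from the UAV imagery, as in the study, this division is all that remains once biomass is sampled, which is why the remote fAPAR estimate replaces the static field sensors.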

  9. Time domain diffuse Raman spectrometer based on a TCSPC camera for the depth analysis of diffusive media.

    Science.gov (United States)

    Konugolu Venkata Sekar, S; Mosca, S; Tannert, S; Valentini, G; Martelli, F; Binzoni, T; Prokazov, Y; Turbin, E; Zuschratter, W; Erdmann, R; Pifferi, A

    2018-05-01

    We present a time domain diffuse Raman spectrometer for depth probing of highly scattering media. The system is based on, to the best of our knowledge, a novel time-correlated single-photon counting (TCSPC) camera that simultaneously acquires both spectral and temporal information of Raman photons. A dedicated non-contact probe was built, and time domain Raman measurements were performed on a tissue-mimicking bilayer phantom. The fluorescence contamination of the Raman signal was eliminated by early time gating (0-212 ps) of the Raman photons. Depth sensitivity is achieved by time gating Raman photons at different delays with a gate width of 106 ps. Importantly, the time domain approach provides time-dependent depth sensitivity, leading to a high contrast between the Raman signals of the two layers. As a result, an enhancement factor of 2170 was found for our bilayer phantom, which is much higher than the values obtained by spatially offset Raman spectroscopy (SORS), frequency offset Raman spectroscopy (FORS), or hybrid FORS-SORS on a similar phantom.

  10. Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors.

    Science.gov (United States)

    Lee, Kwan Woo; Yoon, Hyo Sik; Song, Jong Min; Park, Kang Ryoung

    2018-03-23

    Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can become detached from the driver's body, it is difficult to rely on such bio-signals to determine emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, when driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.

  11. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  12. 100-ps framing-camera tube

    International Nuclear Information System (INIS)

    Kalibjian, R.

    1978-01-01

    The optoelectronic framing-camera tube described is capable of recording two-dimensional image frames with high spatial resolution in the <100-ps range. Framing is performed by streaking a two-dimensional electron image across narrow slits. The resulting dissected electron line images from the slits are restored into framed images by a restorer deflector operating synchronously with the dissector deflector. The number of framed images on the tube's viewing screen equals the number of dissecting slits in the tube. Performance has been demonstrated in a prototype tube by recording 135-ps-duration framed images of 2.5-mm patterns at the cathode. The limitation in the framing speed is in the external drivers for the deflectors and not in the tube design characteristics. Faster frame speeds in the <100-ps range can be obtained by use of faster deflection drivers

  13. Measuring cues for stand-off deception detection based on full-body nonverbal features in body-worn cameras

    Science.gov (United States)

    Bouma, Henri; Burghouts, Gertjan; den Hollander, Richard; van der Zee, Sophie; Baan, Jan; ten Hove, Johan-Martijn; van Diepen, Sjaak; van den Haak, Paul; van Rest, Jeroen

    2016-10-01

    Deception detection is valuable in the security domain to distinguish truth from lies. It is desirable in many security applications, such as suspect and witness interviews and airport passenger screening. Interviewers are constantly trying to assess the credibility of a statement, usually based on intuition without objective technical support. However, psychological research has shown that humans can hardly perform better than random guessing. Deception detection is a multi-disciplinary research area with an interest from different fields, such as psychology and computer science. In the last decade, several developments have helped to improve the accuracy of lie detection (e.g., with a concealed information test, increasing the cognitive load, or measurements with motion capture suits) and relevant cues have been discovered (e.g., eye blinking or fiddling with the fingers). With an increasing presence of mobile phones and bodycams in society, a mobile, stand-off, automatic deception detection methodology based on various cues from the whole body would create new application opportunities. In this paper, we study the feasibility of measuring these visual cues automatically on different parts of the body, laying the groundwork for stand-off deception detection in more flexible and mobile deployable sensors, such as body-worn cameras. We give an extensive overview of recent developments in two communities: in the behavioral-science community, the developments that improve deception detection with special attention to the observed relevant non-verbal cues; and in the computer-vision community, the recent methods that are able to measure these cues. The cues are extracted from several body parts: the eyes, the mouth, the head and the full-body pose. We performed an experiment using several state-of-the-art video-content-analysis (VCA) techniques to assess the quality of robustly measuring these visual cues.

  14. Development of miniaturized proximity focused streak tubes for visible light and x-ray applications. Final report and progress, April-September 1977

    International Nuclear Information System (INIS)

    Cuny, J.J.; Knight, A.J.

    1978-02-01

    Research performed to develop miniaturized proximity focused streak camera tubes (PFST) for application in the visible and the x-ray modes of operation is described. The objective of this research was to provide an engineering design and to fabricate a visible and an x-ray prototype tube to be provided to LASL for test and evaluation. Materials selection and fabrication procedures, particularly the joining of beryllium to a suitable support ring for use as the x-ray window, are described in detail. The visible and x-ray PFSTs were successfully fabricated.

  15. Radiometric Cross-Calibration of GAOFEN-1 Wfv Cameras with LANDSAT-8 Oli and Modis Sensors Based on Radiation and Geometry Matching

    Science.gov (United States)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements, and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) but lack onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance and included-angle weighting method were introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate the difference between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, so a look-up table (LUT) of SMF values as a function of water vapor amount was established to estimate the water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
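As an illustration of the underlying idea (not the authors' actual processing chain), a radiometric cross-calibration ultimately reduces to fitting gain and bias coefficients that map the uncalibrated camera's digital numbers (DN) to the reference sensor's radiance over matched, spectrally adjusted samples. The numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic matched samples: WFV-style digital numbers versus reference
# top-of-atmosphere radiance derived from an OLI-style sensor.
dn = rng.uniform(100, 900, size=50)
gain_true, bias_true = 0.21, 3.5
radiance = gain_true * dn + bias_true + rng.normal(0, 0.05, size=50)

# Cross-calibration fit: radiance ~ gain * DN + bias.
gain, bias = np.polyfit(dn, radiance, 1)
print(gain, bias)
```

On real data the spectral match factor and registration corrections described above are applied before this fit, so that both sensors observe comparable radiances.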

  16. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    Science.gov (United States)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA (MASCARA) is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor the near-entire sky at each location. Once all stations have been installed, MASCARA will be able to provide nearly 24-hr coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V=4-8 magnitude range - currently not probed by space- or ground-based surveys. The bright, nearby transiting planet systems that MASCARA will discover will be the key targets for detailed planet atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras, showing that a precision of 0.3-1% per hour can be readily achieved. We plan to roll out the first MASCARA station before the end of 2013. A 5-station MASCARA can, within two years, discover up to a dozen of the brightest transiting planet systems in the sky.

  17. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    Science.gov (United States)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts, that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
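The brightness-flagging logic described above can be sketched as a simple change detector on per-frame brightness. This is an illustrative reconstruction rather than the authors' script, and the specific rule (flag when the latest frame exceeds the recent baseline by k standard deviations) is an assumption:

```python
import numpy as np

def flag_activity(history, latest, k=3.0):
    """Flag a frame whose mean brightness exceeds the recent baseline
    by more than k standard deviations."""
    baseline, spread = np.mean(history), np.std(history)
    return bool(latest > baseline + k * max(spread, 1e-6))

# Simulated mean-brightness history of recent webcam frames, then two new frames:
history = [12.0, 11.8, 12.3, 12.1, 11.9]
print(flag_activity(history, 12.2))  # quiescent frame -> False
print(flag_activity(history, 40.0))  # sudden incandescence -> True
```

In a deployment, `history` would be refreshed from streaming webcam imagery so the baseline tracks slow changes in illumination while abrupt incandescence still trips the flag.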

  18. Streaked spectrometry using multilayer x-ray-interference mirrors to investigate energy transport in laser-plasma applications

    International Nuclear Information System (INIS)

    Stradling, G.L.; Barbee, T.W. Jr.; Henke, B.L.; Campbell, E.M.; Mead, W.C.

    1981-08-01

    Transport of energy in laser-produced plasmas is scrutinized by devising spectrally and temporally identifiable characteristics in the x-ray emission history which identify the heat-front position at various times in the heating process. Measurements of the relative turn-on times of these characteristics show the rate of energy transport between various points. These measurements can in turn constrain models of energy transport phenomena. We are time-resolving spectrally distinguishable subkilovolt x-ray emissions from different layers of a disk target to examine the transport rate of energy into the target. A similar technique is used to measure the lateral expansion rate of the plasma spot. A soft x-ray streak camera with 15-psec temporal resolution is used to make the temporal measurements. Spectral discrimination of the incident signal is provided by multilayer x-ray interference mirrors

  19. Intellectual streaking: The value of teachers exposing minds (and hearts).

    Science.gov (United States)

    Bearman, Margaret; Molloy, Elizabeth

    2017-12-01

    As teachers we often ask learners to be vulnerable and yet present ourselves as high status, knowledgeable experts, often with pre-prepared scripts. This paper investigates the metaphoric notion of "intellectual streaking" - the nimble exposure of a teacher's thought processes, dilemmas, or failures - as a way of modeling both reflection-in-action and resilience. While there is a tension between credibility and vulnerability, both of which are necessary for trust, we argue that taking a few risks and revealing deficits in knowledge or performance can be illuminating and valuable for all parties.

  20. Angioid streaks in a case of Camurati–Engelmann disease

    Directory of Open Access Journals (Sweden)

    Betül Tugcu

    2017-01-01

    Full Text Available Camurati–Engelmann disease (CED is a rare autosomal dominant disease with various phenotypic expressions. The hallmark of the disease is bilateral symmetric diaphyseal hyperostosis of the long bones with progressive involvement of the metaphysis. Ocular manifestations occur rarely and mainly result from bony overgrowth of the orbit and optic canal stenosis. We report a case of CED showing angioid streaks (ASs in both fundi with no macular involvement and discuss the possible theories of the pathogenesis of AS in this disease.

  1. Mechanism for propagation of the step leader of streak lightning

    International Nuclear Information System (INIS)

    Golubev, A.I.; Zolotovskil, V.I.; Ivanovskil, A.V.

    1992-01-01

    A hypothetical scheme for the development of the step leader of streak lightning is discussed. The mathematical problem of modeling the propagation of the leader in this scheme is stated. The main parameters of the leader are estimated: the length and propagation velocity of the step, the average propagation velocity, etc. This is compared with data from observations in nature. The propagation of the leader is simulated numerically. Results of the calculation are presented for two 'flashes' of the step leader. 25 refs., 6 figs

  2. Creating personalized memories from social events: community-based support for multi-camera recordings of school concerts

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick); V. Zsombori; I. Kegel

    2011-01-01

    The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments

  3. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote controls connected to a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, this method of using an RF modem has limitations in long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV communication module system that carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in areas that need image capturing, together with software for loading and managing the smart camera. The system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  4. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

    Full Text Available Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote controls connected to a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, this method of using an RF modem has limitations in long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV communication module system that carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in areas that need image capturing, together with software for loading and managing the smart camera. The system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  5. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame-grabber, and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame-grabber (FlexRIO Solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and the frame grabber provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  6. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    Science.gov (United States)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
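For context, a standard four-step phase-shifting retrieval (one common variant; the paper's exact algorithm and its multi-frequency heterodyne unwrapping are not reproduced here) recovers the wrapped phase from four fringe images shifted by π/2:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(i4 - i2, i1 - i3)

# Simulated fringe intensities at one pixel with true phase phi:
# I_k = a + b * cos(phi + k * pi / 2), k = 0..3
phi, a, b = 0.7, 100.0, 50.0
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(four_step_phase(*frames))  # recovers ~0.7
```

The arctangent yields phase wrapped to (-π, π]; multi-frequency unwrapping, as used in the paper, then removes the 2π ambiguities before the phase map is matched between the two cameras.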

  7. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  8. Comparison of monthly nighttime cloud fraction products from MODIS and AIRS and ground-based camera over Manila Observatory (14.64N, 121.07E)

    Science.gov (United States)

    Gacal, G. F. B.; Lagrosas, N.

    2017-12-01

    Cloud detection nowadays is primarily achieved by various sensors aboard satellites, including MODIS Aqua, MODIS Terra, and AIRS, whose products include nighttime cloud fraction. Ground-based instruments (e.g., LIDARs, ceilometers, and sky cameras) are only secondary to these satellites for cloud detection; nonetheless, they offer significant datasets on a particular region's cloud cover. For nighttime cloud detection, satellite-based instruments are more reliably and prominently used than ground-based ones, so a ground-based instrument operated at night ought to produce reliable scientific datasets. The objective of this study is to compare the results of a nighttime ground-based instrument (a sky camera) with those of MODIS Aqua and MODIS Terra. A Canon PowerShot A2300 is placed on top of Manila Observatory (14.64N, 121.07E) and configured to take images of the night sky at 5-min intervals. To detect pixels with clouds, the pictures are converted to grayscale format and a thresholding technique is used to separate cloud from non-cloud pixels: a pixel with a value greater than 17 is classified as cloud, otherwise as non-cloud (Gacal et al., 2016). This algorithm is applied to the data gathered from Oct 2015 to Oct 2016. A scatter plot between the satellite cloud fraction over the area bounded by 14.2877N, 120.9869E and 14.7711N, 121.4539E and the ground-measured cloud cover is graphed to find the monthly correlation. During the wet season (June-November), satellite nighttime cloud fraction vs. ground-measured cloud cover produces acceptable R2 values (Aqua = 0.74, Terra = 0.71, AIRS = 0.76). During the dry season, however, poor R2 values are obtained (AIRS = 0.39, Aqua and Terra = 0.01). The high correlation during the wet season can be attributed to a high probability that the camera and the satellites see the same clouds
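The thresholding rule reported above is simple enough to sketch directly; the frame below is a toy array standing in for a grayscale night-sky image, not the observatory data:

```python
import numpy as np

CLOUD_THRESHOLD = 17  # grayscale cutoff reported by Gacal et al. (2016)

def cloud_fraction(gray_image):
    """Fraction of pixels classified as cloud by simple thresholding."""
    return float((gray_image > CLOUD_THRESHOLD).mean())

# Toy 4x4 nighttime frame: dark sky with a bright cloudy patch.
frame = np.array([[5, 6, 40, 42],
                  [4, 7, 38, 41],
                  [6, 5,  8,  9],
                  [5, 6,  7,  8]], dtype=np.uint8)
print(cloud_fraction(frame))  # 4 of 16 pixels -> 0.25
```

Averaging this per-image fraction over a month gives the ground-based quantity that is compared against the monthly satellite nighttime cloud fraction.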

  9. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  10. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as \(XYZ\), is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a \(3 \times 3\) matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the \(3 \times 3\) matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE \(\Delta E\) error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the \(\Delta E\) error, 7% for the S-CIELAB error and 13% for the CID error measure.
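The baseline least-squares characterization that the paper improves upon can be sketched as follows. The patch data are synthetic, and the perceptual spherical-sampling search itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic training data: camera RGB responses and reference XYZ values
# for 24 color patches (a ColorChecker-style target).
rgb = rng.random((24, 3))
M_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
xyz = rgb @ M_true.T

# Least-squares fit of the 3x3 characterization matrix M, so that
# xyz is approximated by rgb @ M.T for each patch.
M = np.linalg.lstsq(rgb, xyz, rcond=None)[0].T
print(np.allclose(M, M_true))  # True on noiseless data
```

The perceptual approach keeps this same \(3 \times 3\) mapping but replaces the plain least-squares objective with a search over candidate matrices that minimizes \(\Delta E\), S-CIELAB, or CID error instead.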

  11. Relationships between early spring wheat streak mosaic severity levels and grain yield: Implications for management decisions

    Science.gov (United States)

    Wheat streak mosaic (WSM) caused by Wheat streak mosaic virus, which is transmitted by the wheat curl mite (Aceria tosichella), is a major yield-limiting disease in the Texas High Plains. In addition to its impact on grain production, the disease reduces water-use efficiency by affecting root develo...

  12. Tests of a new CCD-camera based neutron radiography detector system at the reactor stations in Munich and Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E; Pleinert, H [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Schillinger, B [Technische Univ. Muenchen (Germany); Koerner, S [Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)

    1997-09-01

    The performance of the new neutron radiography detector designed at PSI, with a cooled, highly sensitive CCD camera, was investigated under real neutronic conditions at three beam ports of two reactor stations. Different converter screens were applied, for which the sensitivity and the modulation transfer function (MTF) could be obtained. The results are very encouraging concerning the utilization of this detector system as a standard tool at the radiography stations at the spallation source SINQ. (author) 3 figs., 5 refs.

  13. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player's point of view. Most research in automatic camera control aims to take control of this aspect from the player in order to automatically gener...

  14. A new star tracker concept for satellite attitude determination based on a multi-purpose panoramic camera

    Science.gov (United States)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare

    2017-11-01

    This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with a hyper-hemispheric lens and used as a star tracker. The sensor architecture is also original: state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field of view, while the considered sensor observes an extremely large portion of the celestial sphere, with observation capabilities limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotics research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with an accuracy better than 1° and a success rate around 98%, evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.

  15. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    Full Text Available The underground mining industry, and some above-ground operations, rely on heavy equipment that articulates to navigate corners in the tight confines of tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360-degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner compared with the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policies and procedures that would address the gains and limits of these systems prior to implementation.

  16. Betting Decision Under Break-Streak Pattern: Evidence from Casino Gaming.

    Science.gov (United States)

    Fong, Lawrence Hoc Nang; So, Amy Siu Ian; Law, Rob

    2016-03-01

    Cognitive bias is prevalent among gamblers, especially those with gambling problems. Grounded in heuristics theories, this study contributes to the literature by examining a cognitive bias triggered by the break-streak pattern in the casino setting. We postulate that gamblers tend to bet on the latest outcome when there is a break-streak pattern. Moreover, three determinants of the betting decision under the break-streak pattern, namely the streak length of the alternative outcome, the frequency of the latest outcome, and gender, were identified and examined in this study. A non-participatory observational study was conducted among Cussec gamblers in a casino in Macao. An analysis of 1229 bets confirms our postulation, particularly when the streak of the alternative outcome is long, the latest outcome is frequent, and the gamblers are female. The findings provide meaningful implications for casino management and public policymakers regarding the minimization of gambling harm.

  17. Diving-related visual loss in the setting of angioid streaks: report of two cases.

    Science.gov (United States)

    Angulo Bocco, Maria I; Spielberg, Leigh; Coppens, Greet; Catherine, Janet; Verougstraete, Claire; Leys, Anita M

    2012-01-01

    The purpose of this study was to report diving-related visual loss in the setting of angioid streaks. Observational case reports of two patients with angioid streaks suffering sudden visual loss immediately after diving. Two young adult male patients presented with visual loss after diving headfirst. Funduscopy revealed angioid streaks, peau d'orange, subretinal hemorrhages, and ruptures of Bruch membrane. Choroidal neovascularization developed during follow-up. Both patients had an otherwise uneventful personal and familial medical history. In patients with angioid streaks, diving headfirst can lead to subretinal hemorrhages and traumatic ruptures in Bruch membrane and increase the risk of maculopathy. Ophthalmologists should caution patients with angioid streaks against diving headfirst.

  18. Ratiometric fluorescence transduction by hybridization after isothermal amplification for determination of zeptomole quantities of oligonucleotide biomarkers with a paper-based platform and camera-based detection

    Energy Technology Data Exchange (ETDEWEB)

    Noor, M. Omair; Hrovat, David [Chemical Sensors Group, Department of Chemical and Physical Sciences, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Moazami-Goudarzi, Maryam [Department of Cell and Systems Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Espie, George S. [Department of Cell and Systems Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Department of Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Krull, Ulrich J., E-mail: ulrich.krull@utoronto.ca [Chemical Sensors Group, Department of Chemical and Physical Sciences, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada)

    2015-07-23

    Highlights: • Solid-phase QD-FRET transduction of isothermal tHDA amplicons on paper substrates. • Ratiometric QD-FRET transduction improves assay precision and lowers the detection limit. • Zeptomole detection limit by an iPad camera after isothermal amplification. • Tunable assay sensitivity by immobilizing different amounts of QD–probe bioconjugates. - Abstract: Paper is a promising platform for the development of decentralized diagnostic assays owing to the low cost and ease of use of paper-based analytical devices (PADs). It can be challenging to detect on PADs very low concentrations of nucleic acid biomarkers of lengths as used in clinical assays. Herein we report the use of thermophilic helicase-dependent amplification (tHDA) in combination with a paper-based platform for fluorescence detection of probe-target hybridization. Paper substrates were patterned using wax printing. The cellulosic fibers were chemically derivatized with imidazole groups for the assembly of the transduction interface that consisted of immobilized quantum dot (QD)–probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as the acceptor dye in a fluorescence resonance energy transfer (FRET)-based transduction method. After probe-target hybridization, a further hybridization event with a reporter sequence brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs, triggering a FRET sensitized emission that served as an analytical signal. Ratiometric detection was evaluated using both an epifluorescence microscope and a low-cost iPad camera as detectors. Addition of the tHDA method for target amplification to produce sequences of ∼100 base length allowed for the detection of zmol quantities of nucleic acid targets using the two detection platforms. The ratiometric QD-FRET transduction method not only offered improved assay precision, but also lowered the limit of detection of the assay when compared with the non

  19. Ratiometric fluorescence transduction by hybridization after isothermal amplification for determination of zeptomole quantities of oligonucleotide biomarkers with a paper-based platform and camera-based detection

    International Nuclear Information System (INIS)

    Noor, M. Omair; Hrovat, David; Moazami-Goudarzi, Maryam; Espie, George S.; Krull, Ulrich J.

    2015-01-01

    Highlights: • Solid-phase QD-FRET transduction of isothermal tHDA amplicons on paper substrates. • Ratiometric QD-FRET transduction improves assay precision and lowers the detection limit. • Zeptomole detection limit by an iPad camera after isothermal amplification. • Tunable assay sensitivity by immobilizing different amounts of QD–probe bioconjugates. - Abstract: Paper is a promising platform for the development of decentralized diagnostic assays owing to the low cost and ease of use of paper-based analytical devices (PADs). It can be challenging to detect on PADs very low concentrations of nucleic acid biomarkers of lengths as used in clinical assays. Herein we report the use of thermophilic helicase-dependent amplification (tHDA) in combination with a paper-based platform for fluorescence detection of probe-target hybridization. Paper substrates were patterned using wax printing. The cellulosic fibers were chemically derivatized with imidazole groups for the assembly of the transduction interface that consisted of immobilized quantum dot (QD)–probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as the acceptor dye in a fluorescence resonance energy transfer (FRET)-based transduction method. After probe-target hybridization, a further hybridization event with a reporter sequence brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs, triggering a FRET sensitized emission that served as an analytical signal. Ratiometric detection was evaluated using both an epifluorescence microscope and a low-cost iPad camera as detectors. Addition of the tHDA method for target amplification to produce sequences of ∼100 base length allowed for the detection of zmol quantities of nucleic acid targets using the two detection platforms. The ratiometric QD-FRET transduction method not only offered improved assay precision, but also lowered the limit of detection of the assay when compared with the non

  20. Detection, Occurrence, and Survey of Rice Stripe and Black-Streaked Dwarf Diseases in Zhejiang Province, China

    OpenAIRE

    Heng-mu ZHANG; Hua-di WANG; Jian YANG; Michael J ADAMS; Jian-ping CHEN

    2013-01-01

    The major viral diseases that occur on rice plants in Zhejiang Province, eastern China, are stripe and rice black-streaked dwarf diseases. Rice stripe disease is only caused by rice stripe tenuivirus (RSV), while rice black-streaked dwarf disease can be caused by rice black-streaked dwarf fijivirus (RBSDV) and/or southern rice black-streaked dwarf fijivirus (SRBSDV). Here we review the characterization of these viruses, methods for their detection, and extensive surveys showing their occurren...

  1. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted for each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of applying this technique are presented for an inspection application.
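
    A minimal linear analogue of the proposed mapping, assuming the neural network is replaced by an affine least-squares fit over paired colour-tile readings (the function names and the offset term are our own illustration, not the paper's method):

```python
import numpy as np

def fit_color_map(src_rgb, ref_rgb):
    """Fit ref ≈ [src, 1] @ coef by least squares from paired RGB
    readings of the same colour tiles taken with both cameras."""
    A = np.hstack([src_rgb, np.ones((len(src_rgb), 1))])   # affine term
    coef, *_ = np.linalg.lstsq(A, ref_rgb, rcond=None)
    return coef                                            # shape (4, 3)

def apply_color_map(rgb, coef):
    """Map new data from the source camera into the reference space."""
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ coef
```

    Once `coef` is known for a camera, all of its images are remapped before any inspection algorithm runs, exactly as the text describes; swapping cameras only requires refitting the map.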

  2. Temporal resolution technology of a soft X-ray picosecond framing camera based on Chevron micro-channel plates gated in cascade

    Energy Technology Data Exchange (ETDEWEB)

    Yang Wenzheng [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)], E-mail: ywz@opt.ac.cn; Bai Yonglin; Liu Baiyu [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Bai Xiaohong; Zhao Junping; Qin Junjun [Key Laboratory of Ultra-fast Photoelectric Diagnostics Technology, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)

    2009-09-11

    We describe a soft X-ray picosecond framing camera (XFC) based on Chevron micro-channel plates (MCPs) gated in cascade for ultra-fast process diagnostics. The micro-strip lines are deposited on both the input and the output surfaces of the Chevron MCPs and can be gated by a negative (positive) electric pulse on the first (second) MCP. The gating is controlled by the time delay T_d between the two gating pulses. By increasing T_d, the temporal resolution and the gain of the camera are greatly improved compared with a single-gated MCP-XFC. The optimal T_d, which results in the best temporal resolution, lies within the electron transit time and transit time spread of the MCP. Using 250 ps, ±2.5 kV gating pulses, the temporal resolution of the double-gated Chevron MCP camera is improved from 60 ps for the single-gated MCP-XFC to 37 ps for T_d=350 ps. The principle is presented in detail, accompanied by a theoretical simulation and experimental results.

  3. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    Science.gov (United States)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms has been observed from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high-speed videos at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm was developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing currents and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-to-air discharges will also be displayed. This indicates that although high-speed cameras at several thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-based digital cameras can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments for coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
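
    The automatic identification step can be as simple as flagging frames whose global brightness jumps well above a robust background estimate. A toy sketch of that idea follows; the threshold constant `k` and the MAD-based noise estimate are our assumptions, not the author's algorithm:

```python
import numpy as np

def flag_lightning_frames(frames, k=5.0):
    """Return indices of frames whose mean brightness exceeds the
    sequence median by k robust standard deviations (MAD-based)."""
    mean_b = frames.reshape(len(frames), -1).mean(axis=1)
    med = np.median(mean_b)
    sigma = 1.4826 * np.median(np.abs(mean_b - med)) + 1e-9
    return np.flatnonzero(mean_b > med + k * sigma)
```

    Flagged frames can then be handed to slower per-pixel analysis (leader tracking, channel segmentation) instead of scanning all 14400 images of each file.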

  4. Effective data-domain noise and streak reduction for X-ray CT

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhi; Zamyatin, Alexander A. [Toshiba Medical Research Institute USA, Inc., Vernon Hills, IL (United States); Akino, Naruomi [Toshiba Medical System Corporation, Tokyo (Japan)

    2011-07-01

    Streaks and noise caused by photon starvation can seriously impair the diagnostic value of CT imaging. Existing processing methods often have several parameters to tune, and these parameters tend to be ad hoc to the data sets. Iterative methods can achieve better results, but at the cost of more hardware resources or longer processing time. This paper reports a new scheme of adaptive Gaussian filtering based on the diffusion-derived scale-space concept. In the scale-space view, filtering with Gaussians of different sizes is akin to decomposing the data into a sequence of scales. The scale measure, which is the variance of the filter, should be linearly related to the noise standard deviation rather than to the noise variance. This is a fundamental departure from the usual way of applying such filters. The new filter has only one parameter, which remains stable once tuned. Single-pass processing can usually reach the desired results. (orig.)
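
    A one-dimensional sketch of the reported idea, assuming per-sample noise estimates are available: the Gaussian's *variance* (the scale measure) is set proportional to the local noise standard deviation, with a single proportionality constant `alpha` as the only tuning parameter (our naming and noise model, not the paper's):

```python
import numpy as np

def adaptive_gaussian_1d(proj, noise_std, alpha=1.0):
    """Smooth a 1-D projection with a per-sample Gaussian whose
    variance grows linearly with the local noise standard deviation."""
    x = np.arange(len(proj), dtype=float)
    out = np.empty_like(x)
    for i in range(len(proj)):
        var = alpha * noise_std[i]         # scale measure ∝ noise sigma
        if var <= 0:
            out[i] = proj[i]               # no smoothing where noise-free
            continue
        w = np.exp(-0.5 * (x - i) ** 2 / var)
        out[i] = np.dot(w, proj) / w.sum()
    return out
```

    Photon-starved rays (large `noise_std`) are smoothed over a wide neighbourhood while well-exposed rays pass nearly untouched, which is what suppresses streaks without a per-dataset parameter search.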

  5. Major QTL Conferring Resistance to Rice Bacterial Leaf Streak

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Bacterial leaf streak (BLS) is one of the important limiting factors for rice production in southern China and other tropical and sub-tropical areas of Asia. Resistance to BLS was found to be a quantitative trait, and no major resistance gene has been located in rice to date. In the present study, a new major quantitative trait locus (QTL) conferring resistance to BLS was identified in the highly resistant variety Dular using Dular/Balilla (DB) and Dular/IR24 (DI) segregating populations, and was designated qBLSR-11-1. This QTL is located between the simple sequence repeat (SSR) markers RM120 and RM441 on chromosome 11 and accounts for 18.1-21.7% and 36.3% of the variance in the DB and DI populations, respectively. The genetic pattern of rice resistance to BLS is discussed.

  6. Scanner calibration of a small animal PET camera based on continuous LSO crystals and flat panel PSPMTs

    International Nuclear Information System (INIS)

    Benlloch, J.M.; Carrilero, V.; Gonzalez, A.J.; Catret, J.; Lerche, Ch.W.; Abellan, D.; Garcia de Quiros, F.; Gimenez, M.; Modia, J.; Sanchez, F.; Pavon, N.; Ros, A.; Martinez, J.; Sebastia, A.

    2007-01-01

    We have constructed a small animal PET with four identical detector modules, each consisting of a continuous LYSO crystal attached to a Position Sensitive Photomultiplier Tube (PSPMT). The continuous crystal measures 50 × 50 mm² with a thickness of 10 mm. The modules are separated by 11 cm from each other in the scanner. In this paper we discuss the method used to calibrate the camera for this special system with continuous detectors. We also present preliminary values for the main performance parameters, such as spatial and energy resolution, and the sensitivity of the system.
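
    On a continuous crystal read out by a PSPMT, event positions are commonly estimated by a centre-of-gravity (Anger-logic) computation over the anode signals, which the calibration then corrects for edge compression. The sketch below illustrates only the raw centroid step; the grid pitch and naming are our assumptions, not details of the authors' calibration:

```python
import numpy as np

def centroid_position(anode, pitch_mm=1.0):
    """Raw Anger-logic (centre-of-gravity) estimate of the scintillation
    position from a 2-D array of anode signals, in mm from the corner."""
    anode = np.asarray(anode, dtype=float)
    total = anode.sum()
    ys, xs = np.indices(anode.shape)
    return ((anode * xs).sum() / total * pitch_mm,
            (anode * ys).sum() / total * pitch_mm)
```
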

  7. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated at the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras, exceeding 5 μm in size even though described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  8. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    Science.gov (United States)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications, mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement from a standing posture, and the capture of good-quality iris images in an acceptable time. The proposed system makes three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location within the large capture volume is found quickly through a 1-D vertical face search starting from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of user movement.
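
    Given a 3-D eye position estimated from the light stripe projection, aiming the PTZ camera reduces to two angles. A minimal sketch, assuming a camera-centred frame with x right, y up, z forward (the frame convention and function name are our assumptions):

```python
import numpy as np

def pan_tilt_to_target(x, y, z):
    """Pan and tilt angles (degrees) that point the optical axis of a
    camera at the origin toward the 3-D point (x, y, z)."""
    pan = np.degrees(np.arctan2(x, z))                # rotate about y-axis
    tilt = np.degrees(np.arctan2(y, np.hypot(x, z)))  # elevate toward y
    return pan, tilt
```

    Zoom and focus then follow from the Euclidean distance to the same point in that frame.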

  9. Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system

    Science.gov (United States)

    Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.

    2005-04-01

    We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of the methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability for multiple-camera, multiple-pinhole, and high-magnification systems, although mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
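
    The observer SNR used as a figure of merit can be computed from channelized image data as follows. This is a sketch assuming rotationally symmetric Laguerre-Gauss channels of width `a` and an equal weighting of the two class covariances; the parameter choices are illustrative, not those of the study:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(npix, n_channels, a):
    """Rotationally symmetric Laguerre-Gauss channel templates on an
    npix x npix grid, returned as an (n_channels, npix*npix) matrix."""
    c = np.arange(npix) - (npix - 1) / 2.0
    xx, yy = np.meshgrid(c, c)
    g = 2.0 * np.pi * (xx**2 + yy**2) / a**2
    T = []
    for j in range(n_channels):
        coef = np.zeros(j + 1)
        coef[j] = 1.0                      # select Laguerre polynomial L_j
        T.append((np.sqrt(2.0) / a * np.exp(-g / 2.0) * lagval(g, coef)).ravel())
    return np.array(T)

def cho_snr(signal_imgs, noise_imgs, T):
    """Channelized Hotelling observer SNR for signal-present vs
    signal-absent image sets (each of shape (n_images, H, W))."""
    v1 = signal_imgs.reshape(len(signal_imgs), -1) @ T.T
    v0 = noise_imgs.reshape(len(noise_imgs), -1) @ T.T
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

    Channelizing reduces each projection image to a handful of numbers, which is what makes the Hotelling covariance estimable from only 138 images.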

  10. Demonstration of Adaptive Functional Differences Seen in Kidneys Accompanying a Nonfunctioning/Hypofunctioning Partner, using Camera Based Tc 99m MAG3 Clearance Measurement Technique

    Directory of Open Access Journals (Sweden)

    Burcu Esen Akkaş

    2012-08-01

    Full Text Available Objective: The aim of this study was to demonstrate the functional compensation that occurs in kidneys which accompany a partner with total or partial loss of functioning renal mass, using the camera-based Tc 99m MAG3 clearance technique. Material and Methods: Eighty-five patients (43 M, 42 F; age: 44.8±12.6, range: 18-77 years) with normal serum creatinine levels and normal camera-based Tc 99m MAG3 clearances of the normal kidneys in each group were compared. Results: Total Tc 99m MAG3 clearances (mL/min/1.73 m²) were significantly lower in group 1 and group 2 compared to group 3 (281.5±46, 260.5±61.7 and 316.1±84, respectively). The highest isolated Tc 99m MAG3 clearances among normally functioning kidneys were observed in group 1 (281.5±45.6), followed by group 2 (204.4±55) and group 3 (157.5±44). A moderate negative correlation was detected between the Tc 99m MAG3 clearances of normal kidneys and contralateral renal function (r=-0.5, p<0.001). Conclusion: Normal kidneys can compensate for the loss of contralateral kidney function by increasing their clearances, which appears to depend on the residual function of their partner. Camera-based Tc 99m MAG3 clearance measurement is an objective method to demonstrate compensatory differences in renal function between kidneys with contralateral normofunctioning, hypofunctioning and nonfunctioning partners. (MIRT 2012;21:56-62)

  11. The in vitro and in vivo validation of a mobile non-contact camera-based digital imaging system for tooth colour measurement.

    Science.gov (United States)

    Smith, Richard N; Collins, Luisa Z; Naeeni, Mojgan; Joiner, Andrew; Philpotts, Carole J; Hopkinson, Ian; Jones, Clare; Lath, Darren L; Coxon, Thomas; Hibbard, James; Brook, Alan H

    2008-01-01

    To assess the reproducibility of a mobile non-contact camera-based digital imaging system (DIS) for measuring tooth colour under in vitro and in vivo conditions, one in vitro and two in vivo studies were performed. In vitro study: two operators used the DIS to image 10 dry tooth specimens in a randomised order on three occasions. In vivo study 1: 25 subjects with two natural, normally aligned upper central incisors had their teeth imaged using the DIS on four consecutive days by one operator to measure day-to-day variability. On one of the four test days, duplicate images were collected by three different operators to measure inter- and intra-operator variability. In vivo study 2: 11 subjects with two natural, normally aligned upper central incisors had their teeth imaged using the DIS twice daily over three days within the same week to assess day-to-day variability. Three operators collected images from subjects in a randomised order to measure inter- and intra-operator variability. Subject-to-subject variability was the largest source of variation within the data. Pairwise correlations and concordance coefficients were > 0.7 for each operator, demonstrating good precision and excellent operator agreement in each of the studies. Intraclass correlation coefficients (ICCs) indicate that day-to-day reliability was good to excellent, with all ICCs > 0.75 for each operator. The mobile non-contact camera-based digital imaging system was shown to be a reproducible means of measuring tooth colour in both in vitro and in vivo experiments.
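
    The concordance coefficient referred to is commonly Lin's concordance correlation coefficient, which penalizes both scatter and systematic offset between two paired measurement series. A minimal sketch of that statistic (our implementation, not the study's software):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements: 1 only for perfect agreement on the identity line."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

    Unlike Pearson's r, a constant colour offset between two operators lowers this coefficient, which is why it is an appropriate agreement measure here.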

  12. Earth aeolian wind streaks: Comparison to wind data from model and stations

    Science.gov (United States)

    Cohen-Zada, A. L.; Maman, S.; Blumberg, D. G.

    2017-05-01

    Wind streak is a collective term for a variety of aeolian features that display distinctive albedo surface patterns. Wind streaks have been used to map near-surface winds and to estimate atmospheric circulation patterns on Mars and Venus. However, because wind streaks have been studied mostly on Mars and Venus, much of the knowledge regarding the mechanism and time frame of their formation and their relationship to the atmospheric circulation cannot be verified. This study aims to validate previous studies' results by a comparison of real and modeled wind data with wind streak orientations as measured from remote-sensing images. Orientations of Earth wind streaks were statistically correlated to resultant drift direction (RDD) values calculated from reanalysis and wind data from 621 weather stations. The results showed good agreement between wind streak orientations and reanalysis RDD (r = 0.78). A moderate correlation was found between the wind streak orientations and the weather station data (r = 0.47); a similar trend was revealed on a regional scale when the analysis was performed by continent, with r ranging from 0.641 in North America to 0.922 in Antarctica. At sites where wind streak orientations did not correspond to the RDDs (i.e., a difference of 45°), seasonal and diurnal variations in the wind flow were found to be responsible for deviation from the global pattern. The study thus confirms that Earth wind streaks were formed by the present wind regime and they are indeed indicative of the long-term prevailing wind direction on global and regional scales.
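
    A resultant drift direction can be sketched as the direction of the vector sum of sand-transport contributions from all wind records at a station. Below we assume a Fryberger-style weighting U²(U − Ut) above a threshold speed Ut; the weighting form and the threshold value are illustrative assumptions, not the study's exact formulation:

```python
import numpy as np

def resultant_drift_direction(speed, wind_from_deg, u_t=6.0):
    """Direction (degrees clockwise from north) toward which sand drifts,
    from wind speed records and meteorological 'from' directions."""
    speed = np.asarray(speed, dtype=float)
    w = np.where(speed > u_t, speed**2 * (speed - u_t), 0.0)  # drift potential
    to = np.radians(np.asarray(wind_from_deg, dtype=float)) + np.pi  # downwind
    x = np.sum(w * np.sin(to))
    y = np.sum(w * np.cos(to))
    return np.degrees(np.arctan2(x, y)) % 360.0
```

    Comparing this angle with a streak's measured orientation gives the per-site agreement statistic described above.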

  13. Multicenter trial validation of a camera-based method to measure Tc-99m mercaptoacetyltriglycine, or Tc-99m MAG3, clearance.

    Science.gov (United States)

    Taylor, A; Manatunga, A; Morton, K; Reese, L; Prato, F S; Greenberg, E; Folks, R; Kemp, B J; Jones, M E; Corrigan, P E; Galt, J; Eshima, L

    1997-07-01

    To evaluate an improved camera-based method for calculating the clearance of technetium-99m mercaptoacetyltriglycine (MAG3) in a multicenter trial. Tc-99m MAG3 scintigraphy was performed in 49 patients at three sites in the United States and Canada. The percentage of the injected dose of Tc-99m MAG3 in the kidney at 1-2, 1.0-2.5, and 2-3 minutes after injection was correlated with the plasma-based Tc-99m MAG3 clearances. The data were combined with the results obtained in 20 additional patients in a previously published pilot study. Regression models correlating the plasma-based Tc-99m MAG3 clearance with the percentage uptake in the kidney for each time interval were developed; there was no statistically significant difference among sites in the regression equations. Correction for body surface area statistically significantly (P time interval. For the 1.0-2.5-minute interval, the body surface area-corrected correlation coefficient for the four combined sites was .87, and it improved to .93 when one outlier was omitted from the analysis. Similar results were obtained with the other time intervals. Independent processing by two observers showed no clinically important differences in the percentage dose in the kidney or in relative function. An improved camera-based method to calculate the clearance of Tc-99m MAG3 was validated in a multicenter trial.

  14. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  15. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
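
    The twice-per-rev signature described above is visible in the auto-covariance of the attitude-error series. A minimal sketch of that diagnostic (synthetic data; the amplitudes and orbital period are illustrative, not GRACE values):

```python
import numpy as np

def autocovariance(x, max_lag):
    """Biased sample autocovariance of a 1-D series, lags 0..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

# Synthetic inter-camera attitude error: a twice-per-rev sinusoid plus
# white noise (units arcsec; amplitudes invented for illustration).
orbit, dt = 5400.0, 5.0                        # rev period and sampling, s
t = np.arange(0.0, 10 * orbit, dt)
rng = np.random.default_rng(1)
err = 3.0 * np.sin(2 * np.pi * 2 * t / orbit) + 0.5 * rng.normal(size=t.size)

acov = autocovariance(err, int(orbit / dt))
k_quarter, k_half = int(orbit / 4 / dt), int(orbit / 2 / dt)
```

    A twice-per-rev error makes the autocovariance swing negative at a lag of a quarter revolution and positive again at half a revolution, which is the pattern reported in the inter-camera quaternion.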

  16. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  17. An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    OpenAIRE

    Teixidó Cairol, Mercè; Font Calafell, Davinia; Pallejà Cabrè, Tomàs; Tresánchez Ribes, Marcel; Nogués Aymamí, Miquel; Palacín Roca, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future...
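
    The 3D look-up-table approach named in the title amounts to quantizing RGB space and storing a precomputed label per cell, so classification on the Cortex-M4 costs a single memory access per pixel. A hypothetical sketch (bin count and training colours invented for illustration, not the paper's table):

```python
import numpy as np

# 16 bins per channel -> 16^3 = 4096 one-byte cells, small enough for
# a microcontroller's RAM. Each cell holds a binary "red peach" label.
BINS = 16
lut = np.zeros((BINS, BINS, BINS), dtype=np.uint8)

def train_lut(samples):
    """Mark the LUT cells covered by labelled fruit-colour samples (Nx3, 0-255)."""
    idx = (np.asarray(samples) * BINS) // 256
    lut[idx[:, 0], idx[:, 1], idx[:, 2]] = 1

def classify(image):
    """Return a binary mask: 1 where the pixel colour falls in a marked cell."""
    idx = (image.astype(np.int32) * BINS) // 256   # widen before scaling
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Toy example: train on two reddish samples, classify a 2x2 image.
train_lut([[200, 40, 40], [180, 60, 50]])
img = np.array([[[205, 45, 35], [40, 200, 40]],
                [[185, 55, 52], [10, 10, 10]]], dtype=np.uint8)
mask = classify(img)
```

    On the embedded target the same table would be a flat byte array indexed by the quantized channel values, with no floating point involved.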

  18. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  19. Stabilization of the hypersonic boundary layer by finite-amplitude streaks

    Science.gov (United States)

    Ren, Jie; Fu, Song; Hanifi, Ardeshir

    2016-02-01

    Stabilization of two-dimensional disturbances in hypersonic boundary layer flows by finite-amplitude streaks is investigated using nonlinear parabolized stability equations. The boundary-layer flows at Mach numbers 4.5 and 6.0 are studied in which both first and second modes are supported. The streaks considered here are driven either by the so-called optimal perturbations (Klebanoff-type) or the centrifugal instability (Görtler-type). When the streak amplitude is in an appropriate range, i.e., large enough to modulate the laminar boundary layer but low enough to not trigger secondary instability, both first and second modes can effectively be suppressed.

  20. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    Science.gov (United States)

    Nara, Shunsuke; Takahashi, Satoru

    In this paper, we develop an observation device to measure the working radius of a crane truck. The device has a single CCD camera, a laser range finder, and two AC servo motors. First, in order to measure the working radius, we consider an algorithm for crane hook recognition. We attach a cross mark to the crane hook and, instead of the hook itself, recognize the mark. Further, for the observation device, we construct a PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device, including the new control system for mark tracking.
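
    The tracking loop described above pairs an estimator with a PI controller driving the servo motors. The controller half can be sketched as follows (the plant model and gains are invented for illustration; the paper additionally uses an extended Kalman filter, omitted here):

```python
class PI:
    """Discrete PI controller (gains are illustrative, not the paper's)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Drive a toy first-order servo model so the camera centres the mark.
dt = 0.01
ctrl = PI(kp=2.0, ki=1.0, dt=dt)
angle, target = 0.0, 1.0          # rad; the mark starts 1 rad off-centre
for _ in range(2000):             # 20 s of simulated tracking
    u = ctrl.step(target - angle)
    angle += u * dt               # integrator plant: angle rate = command
final_error = abs(target - angle)
```

    With these gains the closed loop is critically damped, so the pointing error decays smoothly to zero; in the real device the Kalman filter would supply the error signal from noisy image measurements.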

  1. Optimal configuration of a low-dose breast-specific gamma camera based on semiconductor CdZnTe pixelated detectors

    Science.gov (United States)

    Genocchi, B.; Pickford Scienti, O.; Darambara, DG

    2017-05-01

    Breast cancer is one of the most frequent tumours in women. During the '90s, the introduction of screening programmes allowed the detection of cancer before the palpable stage, reducing its mortality by up to 50%. About 50% of women aged between 30 and 50 years present dense breast parenchyma; this percentage decreases to 30% for women between 50 and 80 years. In these women, mammography has a sensitivity of around 30%, and small tumours are covered by the dense parenchyma and missed in the mammogram. Breast-specific gamma cameras based on semiconductor CdZnTe detectors have therefore attracted great interest for early diagnosis. In fact, due to the high energy and spatial resolution and the high sensitivity of CdZnTe, molecular breast imaging has been shown to have a sensitivity of about 90% independently of the breast parenchyma. The aim of this work is to determine the optimal combination of detector pixel size, hole shape, and collimator material in a low-dose dual-head breast-specific gamma camera based on a CdZnTe pixelated detector at 140 keV, in order to achieve a high count rate and the best possible image spatial resolution. The optimal combination has been studied by modeling the system using the Monte Carlo code GATE. Six different pixel sizes from 0.85 mm to 1.6 mm, two hole shapes, hexagonal and square, and two collimator materials, lead and tungsten, were considered. It was demonstrated that the camera achieved higher count rates and a better signal-to-noise ratio when equipped with square holes and large pixels (> 1.3 mm). In these configurations, the spatial resolution was worse than with small pixel sizes (< 1.3 mm), but remained under 3.6 mm in all cases.
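
    The trade-off being optimized here, resolution versus count rate as a function of hole diameter, length, and septal thickness, is captured to first order by the textbook parallel-hole collimator formulas that a Monte Carlo study like this refines. A sketch with hypothetical dimensions (not the paper's configurations):

```python
def collimator_geometry(d, L, t, z, mu, K=0.26):
    """First-order parallel-hole collimator figures of merit (textbook
    formulas, e.g. Cherry/Sorenson/Phelps).
    d: hole diameter, L: hole length, t: septal thickness,
    z: source-to-collimator distance (all mm);
    mu: septal linear attenuation at 140 keV (1/mm);
    K: hole-shape factor (~0.26 hexagonal, ~0.28 square)."""
    L_eff = L - 2.0 / mu                            # septal-penetration correction
    R_geo = d * (L_eff + z) / L_eff                 # geometric resolution (FWHM)
    g = (K * d / L_eff) ** 2 * (d / (d + t)) ** 2   # geometric efficiency
    return R_geo, g

# A plausible (hypothetical) tungsten configuration at 140 keV,
# where mu is roughly a few per mm for tungsten septa:
R, g = collimator_geometry(d=1.2, L=25.0, t=0.2, z=30.0, mu=3.4)
```

    Widening the holes raises the efficiency g (roughly as d^4) while degrading the resolution R linearly, which is exactly the count-rate versus resolution tension the GATE simulations quantify pixel by pixel.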

  2. Accuracy assessment of digital surface models based on a small format action camera in a North-East Hungarian sample area

    Directory of Open Access Journals (Sweden)

    Barkóczi Norbert

    2017-01-01

    Full Text Available The use of small format digital action cameras has increased in the past few years in various applications, due to their low cost, flexibility and reliability. We can mount these small cameras on several devices, like unmanned air vehicles (UAV), and create 3D models with photogrammetric techniques. Whether creating or receiving these kinds of databases, one of the most important questions will always be how accurate these systems are, i.e., what accuracy can be achieved. We gathered the overlapping images, created point clouds, and then generated 21 different digital surface models (DSM). The models differed in the number of images used and in the flight height. We repeated the flights three times to compare the same models with each other. In addition, we measured 129 reference points with RTK-GPS to compare the height differences with the extracted cell values from each DSM. The results showed that higher flight heights have lower errors, and the optimal air base distance is one fourth of the flying height in both cases. The lowest median was 0.08 meter, at the 180 meter flight, 50 meter air base distance model. Raising the number of images does not increase the overall accuracy. The connection between the amount of error and the distance from the nearest GCP is not linear in every case.

  3. Invariant Observer-Based State Estimation for Micro-Aerial Vehicles in GPS-Denied Indoor Environments Using an RGB-D Camera and MEMS Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Dachuan Li

    2015-04-01

    Full Text Available This paper presents a non-linear state observer-based integrated navigation scheme for estimating the attitude, position and velocity of micro aerial vehicles (MAVs) operating in GPS-denied indoor environments, using the measurements from low-cost MEMS (micro electro-mechanical systems) inertial sensors and an RGB-D camera. A robust RGB-D visual odometry (VO) approach was developed to estimate the MAV’s relative motion by extracting and matching features captured by the RGB-D camera from the environment. The state observer of the RGB-D visual-aided inertial navigation was then designed based on the invariant observer theory for systems possessing symmetries. The motion estimates from the RGB-D VO were fused with inertial and magnetic measurements from the onboard MEMS sensors via the state observer, providing the MAV with accurate estimates of its full six degree-of-freedom states. Implementations on a quadrotor MAV and indoor flight test results demonstrate that the resulting state observer is effective in estimating the MAV’s states without relying on external navigation aids such as GPS. The properties of computational efficiency and simplicity in gain tuning make the proposed invariant observer-based navigation scheme appealing for actual MAV applications in indoor environments.

  4. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study.

    Science.gov (United States)

    Lee, Byoung-Hee

    2016-04-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials.

  5. Analyses of Twelve New Whole Genome Sequences of Cassava Brown Streak Viruses and Ugandan Cassava Brown Streak Viruses from East Africa: Diversity, Supercomputing and Evidence for Further Speciation

    Science.gov (United States)

    Ndunguru, Joseph; Sseruwagi, Peter; Tairo, Fred; Stomeo, Francesca; Maina, Solomon; Djinkeng, Appolinaire; Kehoe, Monica; Boykin, Laura M.

    2015-01-01

    Cassava brown streak disease is caused by two devastating viruses, Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV) which are frequently found infecting cassava, one of sub-Saharan Africa’s most important staple food crops. Each year these viruses cause losses of up to $100 million USD and can leave entire families without their primary food source, for an entire year. Twelve new whole genomes, including seven of CBSV and five of UCBSV were uncovered in this research, doubling the genomic sequences available in the public domain for these viruses. These new sequences disprove the assumption that the viruses are limited by agro-ecological zones, show that current diagnostic primers are insufficient to provide confident diagnosis of these viruses and give rise to the possibility that there may be as many as four distinct species of virus. Utilizing NGS sequencing technologies and proper phylogenetic practices will rapidly increase the solution to sustainable cassava production. PMID:26439260

  6. Analyses of Twelve New Whole Genome Sequences of Cassava Brown Streak Viruses and Ugandan Cassava Brown Streak Viruses from East Africa: Diversity, Supercomputing and Evidence for Further Speciation.

    Directory of Open Access Journals (Sweden)

    Joseph Ndunguru

    Full Text Available Cassava brown streak disease is caused by two devastating viruses, Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV), which are frequently found infecting cassava, one of sub-Saharan Africa's most important staple food crops. Each year these viruses cause losses of up to $100 million USD and can leave entire families without their primary food source, for an entire year. Twelve new whole genomes, including seven of CBSV and five of UCBSV were uncovered in this research, doubling the genomic sequences available in the public domain for these viruses. These new sequences disprove the assumption that the viruses are limited by agro-ecological zones, show that current diagnostic primers are insufficient to provide confident diagnosis of these viruses and give rise to the possibility that there may be as many as four distinct species of virus. Utilizing NGS sequencing technologies and proper phylogenetic practices will rapidly increase the solution to sustainable cassava production.

  7. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
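
    The surface-matching step, aligning the photo-based head model to the MRI reconstruction, is a rigid registration problem; for paired points it has the closed-form Kabsch/Procrustes solution sketched below (a generic illustration, not the janus3D implementation):

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Procrustes: rotation R and translation t minimizing
    ||R @ src_i + t - dst_i|| over paired 3-D point sets (N x 3)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: rotate/translate a point cloud, then recover the transform.
rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_align(pts, moved)
residual = np.max(np.abs(pts @ R_est.T + t_est - moved))
```

    In practice the facial surfaces are dense and unpaired, so an iterative scheme (e.g. ICP) repeatedly establishes correspondences and applies this closed-form step.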

  8. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest-growing consumer markets today is camera phones. Over the past few years total volumes have grown rapidly, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera's point of view the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  9. Quantitative trait loci for resistance to maize streak virus disease in ...

    African Journals Online (AJOL)


    2008-07-18

    African Journal of Biotechnology, 18 Jul 2008. Biotechnology Center, Kenya Agricultural Research Institute, P.O. Box 58711-00200, Nairobi. Maize streak virus disease is an important disease of maize in Kenya.

  10. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  11. Black leaf streak disease affects starch metabolism in banana fruit.

    Science.gov (United States)

    Saraiva, Lorenzo de Amorim; Castelan, Florence Polegato; Shitakubo, Renata; Hassimotto, Neuza Mariko Aymoto; Purgatto, Eduardo; Chillet, Marc; Cordenunsi, Beatriz Rosana

    2013-06-12

    Black leaf streak disease (BLSD), also known as black sigatoka, represents the main foliar disease in Brazilian banana plantations. In addition to photosynthetic leaf area losses and yield losses, this disease causes an alteration in the pre- and postharvest behavior of the fruit. The aim of this work was to investigate the starch metabolism of fruits during fruit ripening from plants infected with BLSD by evaluating carbohydrate content (i.e., starch, soluble sugars, oligosaccharides, amylose), phenolic compound content, phytohormones, enzymatic activities (i.e., starch phosphorylases, α- and β-amylase), and starch granules. The results indicated that the starch metabolism in banana fruit ripening is affected by BLSD infection. Fruit from infested plots contained unusual amounts of soluble sugars in the green stage and smaller starch granules and showed a different pattern of superficial degradation. Enzymatic activities linked to starch degradation were also altered by the disease. Moreover, the levels of indole-acetic acid and phenolic compounds indicated an advanced fruit physiological age for fruits from infested plots.

  12. Performance of the gamma-ray camera based on GSO(Ce) scintillator array and PSPMT with the ASIC readout system

    International Nuclear Information System (INIS)

    Ueno, Kazuki; Hattori, Kaori; Ida, Chihiro; Iwaki, Satoru; Kabuki, Shigeto; Kubo, Hidetoshi; Kurosawa, Shunsuke; Miuchi, Kentaro; Nagayoshi, Tsutomu; Nishimura, Hironobu; Orito, Reiko; Takada, Atsushi; Tanimori, Toru

    2008-01-01

    We have studied the performance of a readout system with ASIC chips for a gamma-ray camera based on a 64-channel multi-anode PSPMT (Hamamatsu flat-panel H8500) coupled to a GSO(Ce) scintillator array. The GSO array consists of 8x8 pixels of 6x6x13 mm^3 with the same pixel pitch as the anode of the H8500. This camera is intended to serve as an absorber of an electron tracking Compton gamma-ray camera that measures gamma rays up to ∼1 MeV. Because we need a readout system with low power consumption for a balloon-borne experiment, we adopted a 32-channel ASIC chip, IDEAS VA32HDR11, which has one of the widest dynamic ranges among commercial chips. However, in the case of using a GSO(Ce) crystal and the H8500, the dynamic range of the VA32HDR11 is narrow, and therefore the H8500 has to be operated with a low gain of about 10^5. If the H8500 is operated with a low gain, the camera has a narrow incident-energy dynamic range from 100 to 700 keV, and a poor energy resolution of 13.0% (FWHM) at 662 keV. We have therefore developed an attenuator board in order to operate the H8500 with the typical gain of 10^6, which can measure gamma rays up to ∼1 MeV. The board makes the variation of the anode gain uniform and widens the dynamic range of the H8500. The system using the new attenuator board has a good uniformity of min:max∼1:1.6, an incident-energy dynamic range from 30 to 900 keV, a position resolution of less than 6 mm, and a typical energy resolution of 10.6% (FWHM) at 662 keV with a low power consumption of about 1.7 W/64ch.

  13. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between the imaging analysis methods based on geometric optics and physical optics are also shown in simulations.
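
    Scalar-diffraction simulations of this kind typically rest on propagating a sampled complex field with the angular-spectrum method. A generic sketch (the grid, wavelength, and aperture are chosen for illustration, not taken from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z by scalar
    diffraction: FFT, multiply by the free-space transfer function,
    inverse FFT. Evanescent components are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: free-space propagation of a circular aperture.
n, dx, lam = 128, 5e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (np.hypot(X, Y) < 20 * dx).astype(complex)
out = angular_spectrum_propagate(aperture, lam, dx, 1e-3)
energy_in = np.sum(np.abs(aperture) ** 2)
energy_out = np.sum(np.abs(out) ** 2)
```

    Because the transfer function has unit modulus for propagating frequencies, free-space propagation conserves the sampled energy, a convenient check before inserting lens and microlens phase masks into such a model.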

  14. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Toan Minh Hoang

    2017-10-01

    Full Text Available Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.

  15. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
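
    As an illustration of the fuzzy-system ingredient, the core operations are membership functions over segment features and a min-style rule combination. A toy sketch (the features, thresholds, and rules are invented here, not the paper's rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lane_score(gradient, angle_deg):
    """Mamdani-style rule sketch: a segment is lane-like to the degree
    that its edge gradient is strong AND its angle is near-vertical in
    the image (min combines the two memberships)."""
    strong = tri(gradient, 30, 120, 255)        # hypothetical thresholds
    upright = tri(abs(angle_deg), 20, 90, 160)
    return min(strong, upright)

score_lane = lane_score(140, 80)     # bright, steep segment
score_shadow = lane_score(25, 5)     # weak, horizontal edge (e.g. a shadow)
```

    The appeal of the fuzzy formulation is that shadow boundaries, which produce weak or wrongly oriented segments, receive low rule activation instead of being accepted or rejected by a single hard threshold.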

  16. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Husan Vokhidov

    2016-12-01

    Full Text Available Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to insure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.

  17. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network.

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-12-16

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to insure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.

  18. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-01-01

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to insure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods. PMID:27999301
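
    A CNN of the kind named above is built from convolution, nonlinearity, and pooling stages. A from-scratch toy sketch of that pipeline on a synthetic edge image (not the paper's network or data):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the basic CNN operation."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (trailing rows/cols trimmed)."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

# A horizontal-gradient kernel fires on the vertical edge of an arrow stem.
img = np.zeros((6, 6))
img[:, 3:] = 1.0                          # step edge at column 3
edge_kernel = np.array([[-1.0, 1.0]])     # 1x2 gradient filter
fmap = max_pool(relu(conv2d(img, edge_kernel)))
```

    In a trained network, stacks of such learned kernels followed by fully connected layers map the pooled feature maps to the six arrow-marking classes; pooling gives some tolerance to the erosion and partial damage the study targets.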

  19. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    Science.gov (United States)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building the fine 3D model from outdoor to indoor is becoming a necessity for protecting the cultural tourism resources. However, the existing 3D modelling technologies mainly focus on outdoor areas. Actually, a 3D model should contain detailed descriptions of both its appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which can provide a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as the materials and textures. On the basis of the information, the 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture existing in two typical cultural tourism zones, that is, Tibetan and Qiang ethnic minority villages in Sichuan Jiuzhaigou Scenic Area and Tujia ethnic minority villages in Hubei Shennongjia Nature Reserve, providing a new method and platform for protection of minority cultural characteristics, 3D reconstruction and cultural tourism.
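
    The geometric core of any multi-camera stereo system is triangulation: recovering a 3D point from its projections in two calibrated views. A generic linear (DLT) sketch with invented camera parameters (not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    Each view contributes two rows of a homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector (homogeneous point)
    return X[:3] / X[3]

# Two hypothetical cameras: identical intrinsics, 0.2 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.1, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    A four-camera rig simply adds two more row pairs to the same system, which over-determines the point and improves robustness to matching noise.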

  20. Spatiotemporal mechanical variation reveals critical role for rho kinase during primitive streak morphogenesis.

    Science.gov (United States)

    Henkels, Julia; Oh, Jaeho; Xu, Wenwei; Owen, Drew; Sulchek, Todd; Zamir, Evan

    2013-02-01

    Large-scale morphogenetic movements during early embryo development are driven by complex changes in biochemical and biophysical factors. Current models for amniote primitive streak morphogenesis and gastrulation take into account numerous genetic pathways but largely ignore the role of mechanical forces. Here, we used atomic force microscopy (AFM) to obtain for the first time precise biomechanical properties of the early avian embryo. Our data reveal that the primitive streak is significantly stiffer than neighboring regions of the epiblast, and that it is stiffer than the pre-primitive streak epiblast. To test our hypothesis that these changes in mechanical properties are due to a localized increase of actomyosin contractility, we inhibited actomyosin contractility via the Rho kinase (ROCK) pathway using the small-molecule inhibitor Y-27632. Our results using several different assays show the following: (1) primitive streak formation was blocked; (2) the time-dependent increase in primitive streak stiffness was abolished; and (3) convergence of epiblast cells to the midline was inhibited. Taken together, our data suggest that actomyosin contractility is necessary for primitive streak morphogenesis, and specifically, ROCK plays a critical role. To better understand the underlying mechanisms of this fundamental process, future models should account for the findings presented in this study.