WorldWideScience

Sample records for wide field camera

  1. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
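The cross-correlation decoding compared in this abstract can be illustrated with a one-dimensional toy model (this sketch and all its numbers are illustrative, not the authors' code): the detector records the cyclic convolution of the sky with the mask's open/closed pattern, and correlating that record against a balanced (mean-subtracted) copy of the mask recovers a peak at each source position.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.integers(0, 2, n).astype(float)   # open (1) / closed (0) mask elements
sky = np.zeros(n)
sky[10] = 100.0                              # bright point source
sky[40] = 10.0                               # fainter point source

# Detector counts: cyclic convolution of the sky with the mask pattern.
detector = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))

# Cross-correlation decoding against the mean-subtracted ("balanced") mask
# removes the flat offset, leaving a peak at each source position; the
# residual sidelobes are the "ghosts" that Wiener filtering suppresses.
decoded = np.real(np.fft.ifft(np.fft.fft(detector) *
                              np.conj(np.fft.fft(mask - mask.mean()))))

print(int(np.argmax(decoded)))               # position of the strongest recovered source
```

The Wiener filter variant replaces the plain conjugate-mask multiplier with one that also divides by the mask power spectrum plus a noise term, which is what suppresses the incomplete-coding ghosts discussed above.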

  2. SHOK—The First Russian Wide-Field Optical Camera in Space


    Science.gov (United States)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very wide-field cameras, SHOK, are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, simultaneously with, and after the gamma-ray emission. The field of view of each camera lies within the gamma-ray burst detection area of the other devices onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission down to a limiting magnitude of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device and triggered by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  3. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single-Lens Reflex (DSLR) camera.
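The HDR radiance-map reconstruction mentioned in the abstract can be sketched for the simple case where the CRF is already known, here assumed to be a power-law (gamma) response; real CRF estimation (e.g., the Debevec-Malik approach) fits this curve from the data instead, and all constants below are illustrative.

```python
import numpy as np

def simulate_frame(radiance, exposure, gamma=2.2):
    """Apply exposure, gamma-compress and clip to [0, 1], like a simple camera."""
    return np.clip((radiance * exposure) ** (1.0 / gamma), 0.0, 1.0)

def merge_hdr(frames, exposures, gamma=2.2):
    """Invert the CRF per frame and average log-radiance estimates,
    down-weighting pixels near the clipping limits (hat weighting)."""
    log_e = np.zeros_like(frames[0])
    w_sum = np.zeros_like(frames[0])
    for z, t in zip(frames, exposures):
        w = np.maximum(1.0 - np.abs(2.0 * z - 1.0), 1e-4)  # trust mid-range pixels
        # Inverse CRF: z = (E t)^(1/gamma)  =>  log E = gamma*log z - log t
        log_e += w * (gamma * np.log(np.maximum(z, 1e-6)) - np.log(t))
        w_sum += w
    return np.exp(log_e / w_sum)

truth = np.array([0.02, 0.1, 0.5, 2.0])       # "scene radiance" test values
exposures = [0.25, 1.0, 4.0]
frames = [simulate_frame(truth, t) for t in exposures]
recovered = merge_hdr(frames, exposures)       # close to `truth` despite clipping
```

The hat weighting is what lets the bright pixel (clipped in the longer exposures) be recovered almost entirely from the shortest frame.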

  4. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even in bright ambient light. We realized a mobile demonstrator to prove the method and successfully acquired color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that no major additional technical outlay is needed to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry of the optical design that is found in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a rectangular field 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change in the paleness of the papilla.

  5. Wide field and diffraction limited array camera for SIRTF

    International Nuclear Information System (INIS)

    Fazio, G.G.; Koch, D.G.; Melnick, G.J.

    1986-01-01

    The Infrared Array Camera for the Space Infrared Telescope Facility (SIRTF/IRAC) is capable of two-dimensional photometry in either a wide field or diffraction-limited mode over the wavelength interval from 2 to 30 microns. Three different two-dimensional direct readout (DRO) array detectors are being considered: Band 1-InSb or Si:In (2-5 microns) 128 x 128 pixels, Band 2-Si:Ga (5-18 microns) 64 x 64 pixels, and Band 3-Si:Sb (18-30 microns) 64 x 64 pixels. The hybrid DRO readout architecture has the advantages of low read noise, random pixel access with individual readout rates, and nondestructive readout. The scientific goals of IRAC are discussed, which are the basis for several important requirements and capabilities of the array camera: (1) diffraction-limited resolution from 2-30 microns, (2) use of the maximum unvignetted field of view of SIRTF, (3) simultaneous observations within the three infrared spectral bands, and (4) the capability for broad and narrow bandwidth spectral resolution. A strategy has been developed to minimize the total electronic and environmental noise sources to satisfy the scientific requirements. 7 references

  6. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60- and 90-degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in 8-14 um spectral range wide-field systems. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit flushed with hydrogen to improve bulk transmission.

  7. Contributed Review: Camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor

    Science.gov (United States)

    Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander; Osterkamp, Christian; Jankuhn, Steffen; Meijer, Jan; Jelezko, Fedor; Andersen, Ulrik L.

    2018-03-01

    Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10^-2) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow achieving nanotesla-level sensitivity in 1 s of combined exposure. Finally, we demonstrate results obtained with a lock-in camera that pave the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution.
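The photoelectron-budget argument in this abstract reduces to a short shot-noise estimate. The sketch below is a back-of-the-envelope calculation with assumed example values (full-well depth, frame rate, ODMR contrast, and linewidth are all illustrative, not the paper's numbers); only the NV gyromagnetic ratio is a physical constant.

```python
import math

gamma_nv = 28.0e9        # NV gyromagnetic ratio, Hz per tesla (physical constant)
linewidth = 1.0e6        # ODMR resonance linewidth, Hz (assumed)
contrast = 0.02          # fractional fluorescence contrast of the resonance (assumed)

full_well = 2.0e6        # photoelectrons per pixel per frame (assumed sensor)
frame_rate = 1000.0      # frames per second (assumed)
t_total = 1.0            # seconds of combined exposure

n_e = full_well * frame_rate * t_total    # photoelectrons collected per pixel
frac_noise = 1.0 / math.sqrt(n_e)         # shot-noise floor on the fractional signal dF/F

# Smallest resolvable field change: fractional noise divided by the ODMR
# slope, contrast * gamma_nv / linewidth (tesla of field per unit dF/F).
delta_b = frac_noise * linewidth / (contrast * gamma_nv)
print(f"{delta_b * 1e9:.1f} nT per pixel")   # tens of nT for these assumed numbers
```

Spatial binning over many pixels, higher frame rates, or deeper wells scale the sensitivity as 1/sqrt(n_e), which is why the abstract's nanotesla level demands cameras with unusually high photoelectron throughput.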

  8. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    Science.gov (United States)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was a prime-focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mm x 61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The prime-focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mm x 92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm-thick window and a tip/tilt mechanism for the detector into 100mm of depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding-curtain mechanism to cover an aperture of 125mm x 125mm, while the filter wheel has been replaced with two peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend to use through-vacuum-wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring.

  9. Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography.

    Science.gov (United States)

    Wang, Benquan; Toslak, Devrim; Alam, Minhaj Nur; Chan, R V Paul; Yao, Xincheng

    2018-06-08

    In conventional fundus photography, trans-pupillary illumination delivers illuminating light to the interior of the eye through the peripheral area of the pupil, and only the central part of the pupil can be used for collecting imaging light. Therefore, the field of view of conventional fundus cameras is limited, and pupil dilation is required for evaluating the retinal periphery, which is frequently affected by diabetic retinopathy (DR), retinopathy of prematurity (ROP), and other chorioretinal conditions. We report here a nonmydriatic wide-field fundus camera employing trans-pars-planar illumination, which delivers illuminating light through the pars plana, an area outside of the pupil. Trans-pars-planar illumination frees the entire pupil for imaging purposes only, and thus wide-field fundus photography can be readily achieved with less pupil dilation. For proof-of-concept testing, a prototype instrument built entirely from off-the-shelf components was demonstrated that achieves 90° fundus view coverage in single-shot fundus images without the need for pharmacologic pupil dilation.

  10. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    Science.gov (United States)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
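The Z-distance trick in the second and third points above follows from the pinhole projection model: because the human iris diameter is nearly constant across individuals, its size in pixels in the WFOV image fixes the distance, which in turn selects a pre-calibrated WFOV-to-NFOV transform. The sketch below illustrates the idea; the focal length, distance bands, and affine matrices are invented placeholder values, not the paper's calibration.

```python
import numpy as np

IRIS_MM = 11.7            # anthropometric mean human iris diameter, mm
FOCAL_PX = 1400.0         # WFOV camera focal length in pixels (assumed)

def z_from_iris(diameter_px):
    """Pinhole projection: d_px = f * D / Z  =>  Z = f * D / d_px (mm)."""
    return FOCAL_PX * IRIS_MM / diameter_px

# One affine WFOV->NFOV transform per calibrated distance band (assumed values),
# standing in for the paper's multiple transformation matrices indexed by Z.
TRANSFORMS = {
    (250.0, 350.0): np.array([[8.0, 0.0, -3000.0], [0.0, 8.0, -2200.0]]),
    (350.0, 450.0): np.array([[6.5, 0.0, -2400.0], [0.0, 6.5, -1700.0]]),
}

def map_to_nfov(xy_wfov, diameter_px):
    """Pick the transform for the estimated Z and map a WFOV point into NFOV
    coordinates, shrinking the NFOV search region for iris localization."""
    z = z_from_iris(diameter_px)
    for (lo, hi), m in TRANSFORMS.items():
        if lo <= z < hi:
            return m @ np.array([xy_wfov[0], xy_wfov[1], 1.0])
    raise ValueError(f"Z = {z:.0f} mm outside calibrated range")

z = z_from_iris(41.0)     # a 41-pixel iris implies Z of roughly 400 mm here
nfov_pt = map_to_nfov((500.0, 400.0), 41.0)
```

Choosing the matrix by distance band is what replaces a full per-user calibration: each band's affine map only needs to be accurate within its own slice of working distances.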

  11. Characteristics of a single photon emission tomography system with a wide field gamma camera

    International Nuclear Information System (INIS)

    Mathonnat, F.; Soussaline, F.; Todd-Pokropek, A.E.; Kellershohn, C.

    1979-01-01

    This text summarizes a study of the imaging capabilities of a single photon emission tomography system composed of a conventional wide-field gamma camera connected to a computer. The encouraging results achieved on the various phantoms studied suggest a significant role for this technique in the clinical work of Nuclear Medicine Departments [fr]

  12. The influence of disturbing effects on the performance of a wide field coded mask X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.R.; Turner, M.J.L.; Willingale, R.

    1985-01-01

    The coded aperture telescope, or Dicke camera, is seen as an instrument suitable for many applications in X-ray and gamma-ray imaging. In this paper the effects of a partially obscuring window, mask support or collimator, a detector with limited spatial resolution, and motion of the camera during image integration are considered using a computer simulation of the performance of such a camera. Cross-correlation and the Wiener filter are used to deconvolve the data. It is shown that while these effects cause a degradation in performance, this is in no case catastrophic. Deterioration of the image is shown to be greatest where strong sources are present in the field of view and is quite small (approximately 10%) when diffuse background is the major element. A comparison between the cyclic mask camera and the single mask camera is made under various conditions, and it is shown that the single mask camera has a moderate advantage, particularly when imaging a wide field of view. (orig.)

  13. Searching for transits in the Wide Field Camera Transit Survey with difference-imaging light curves

    NARCIS (Netherlands)

    Zendejas, Dominguez J.; Koppenhoefer, J.; Saglia, R.; Birkby, J.L.; Hodgkin, S.; Kovács, G.; Pinfield, D.; Sipocz, B.; Barrado, D.; Bender, R.; Burgo, del C.; Cappetta, M.; Martín, E.; Nefs, B.; Riffeser, A.; Steele, P.

    2013-01-01

    The Wide Field Camera Transit Survey is a pioneering program aimed at searching for extra-solar planets in the near-infrared. The images from the survey are processed by a data reduction pipeline, which uses aperture photometry to construct the light curves. We produce an alternative set of light curves using difference imaging.

  14. Wide-Field Imaging of Omega Centauri with the Advanced Camera for Surveys

    Science.gov (United States)

    Haggard, D.; Dorfman, J. L.; Cool, A. M.; Anderson, J.; Bailyn, C. D.; Edmonds, P. D.; Grindlay, J. E.

    2003-12-01

    We present initial results of a wide-field imaging study of the globular cluster Omega Cen (NGC 5139) using the Advanced Camera for Surveys (ACS). We have obtained a mosaic of 3x3 pointings of the cluster using the HST/ACS Wide Field Camera covering approximately 10' x 10', roughly out to the cluster's half-mass radius. Using F435W (B435), F625W (R625) and F658N (H-alpha) filters, we are searching for optical counterparts of Chandra X-ray sources and studying the cluster's stellar populations. Here we report the discovery of an optical counterpart to the X-ray source identified by Rutledge et al. (2002) as a possible quiescent neutron star on the basis of its X-ray spectrum. The star's magnitude and color (R625 = 24.4, B435-R625 = 1.5) place it more than 1.5 magnitudes to the blue side of the main sequence. Through the H-alpha filter it is about 1.3 magnitudes brighter than cluster stars of comparable R625 magnitude. The blue color and H-alpha excess suggest the presence of an accretion disk, implying that the neutron star is a member of a quiescent low-mass X-ray binary. The object's faint absolute magnitude (M625 ˜ 10.6, M435 ˜ 11.8) implies that the system contains an unusually weak disk and that the companion, if it is a main-sequence star, is of very low mass (ACS study. This work is supported by NASA grant GO-9442 from the Space Telescope Science Institute.

  15. Lensless imaging for wide field of view

    Science.gov (United States)

    Nagahara, Hajime; Yagi, Yasushi

    2015-02-01

    It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving wide FOV, such as attaching a fisheye lens and convex mirrors, require a trade-off between optics size and the FOV. We propose camera optics that achieve a wide FOV, and are at the same time small and lightweight. The proposed optics are a completely lensless and catoptric design. They contain four mirrors, two for wide viewing, and two for focusing the image on the camera sensor. The proposed optics are simple and can be simply miniaturized, since we use only mirrors for the proposed optics and the optics are not susceptible to chromatic aberration. We have implemented the prototype optics of our lensless concept. We have attached the optics to commercial charge-coupled device/complementary metal oxide semiconductor cameras and conducted experiments to evaluate the feasibility of our proposed optics.

  16. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    Science.gov (United States)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Grogin, Norman; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan; Colbert, James W.; Ferguson, Henry C.; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J.; Lee, Kyoung-Soo; de Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N.; Wolfe, Arthur M.

    2013-12-01

    We present an overview of a 90-orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5. The number of dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.''2 radius aperture, depending on filter and observing epoch. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12534.

  17. Contributed review: camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor

    DEFF Research Database (Denmark)

    Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander

    2018-01-01

    Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10^-2) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow achieving nanotesla-level sensitivity in 1 s of combined exposure. Finally, we demonstrate results obtained with a lock-in camera that pave the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution.

  18. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized in applications ranging from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from different standpoints, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy, and up to 99.36%, under different types of spoofing attacks.

  19. OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis

    Science.gov (United States)

    Collins, Nick

    2009-01-01

    The HST/Wide Field Camera 3 (WFC3) UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence, in a small percentage of flat-field images, of a bowtie-shaped contrast spanning the width of each chip. At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high-signal flat-field or a series of lower-signal flats; a visible-light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight.

  20. Stellar photometry with the Wide Field/Planetary Camera of the Hubble Space Telescope

    International Nuclear Information System (INIS)

    Holtzman, J.A.

    1990-01-01

    Simulations of Wide Field/Planetary Camera (WF/PC) images are analyzed in order to discover the most effective techniques for stellar photometry and to evaluate the accuracy and limitations of these techniques. The capabilities and operation of the WF/PC and the simulations employed in the study are described. The basic techniques of stellar photometry and methods to improve these techniques for the WF/PC are discussed. The correct parameters for star detection, aperture photometry, and point-spread function (PSF) fitting with the DAOPHOT software of Stetson (1987) are determined. Consideration is given to undersampling of the stellar images by the detector, variations in the PSF, and crowding of the stellar images. It is noted that, with some changes, DAOPHOT is able to generate photometry almost at the level of photon statistics. 10 refs

  1. Updates to Post-Flash Calibration for the Advanced Camera for Surveys Wide Field Channel

    Science.gov (United States)

    Miles, Nathan

    2018-03-01

    This report presents a new technique for generating the post-flash calibration reference file for the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC). The new method substantially reduces, if not eliminates altogether, the presence of dark current artifacts arising from improper dark subtraction, while simultaneously preserving flat-field artifacts. The stability of the post-flash calibration reference file over time is measured using data taken yearly since 2012, and no statistically significant deviations are found. An analysis of all short-flashed darks taken every two days since January 2015 reveals a periodic modulation of the LED intensity on timescales of about one year. This effect is most readily explained by changes in the local temperature in the area surrounding the LED. However, a slight offset between the periods of the temperature and LED modulations leaves open the possibility that the effect is a chance observation of the two sinusoids at an unfortunate point in their beat cycle.

  2. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.
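The core 3D step behind this kind of multi-camera motion capture is triangulation: once each camera is calibrated, a point seen in two or more views is recovered by linear (DLT) least squares. The sketch below uses synthetic projection matrices, not any real calibration, and is illustrative of the technique rather than of the authors' software.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X through a 3x4 camera matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: each view contributes two linear constraints on X;
    the solution is the smallest-singular-vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two synthetic calibrated cameras: identity pose, and one shifted 1 unit in x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_hat = triangulate(P1, P2, x1, x2)   # recovers X_true in the noise-free case
```

With consumer-grade cameras the observations are noisy and unsynchronized, so practical pipelines add more views to the stacked system and refine the linear solution, but the DLT step above remains the backbone.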

  3. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  4. The Brazilian wide field imaging camera (WFI) for the China/Brazil earth resources satellite: CBERS 3 and 4

    Science.gov (United States)

    Scaduto, L. C. N.; Carvalho, E. G.; Modugno, R. G.; Cartolano, R.; Evangelista, S. H.; Segoria, D.; Santos, A. G.; Stefani, M. A.; Castro Neto, J. C.

    2017-11-01

    The purpose of this paper is to present the optical system developed for the Wide Field Imaging Camera (WFI) that will be integrated into the CBERS 3 and 4 satellites (China-Brazil Earth Resources Satellite). This camera will be used for remote sensing of the Earth and is designed to operate at an altitude of 778 km. The optical system is designed for four spectral bands covering the range of wavelengths from blue to near-infrared, and its field of view is +/-28.63°, which covers 866 km with a ground resolution of 64 m at nadir. WFI has been developed through a consortium formed by Opto Electrônica S. A. and Equatorial Sistemas. In particular, we present the optical analysis based on the Modulation Transfer Function (MTF) obtained during the Engineering Model (EM) phase and the optical tests performed to evaluate the requirements. Measurements of the optical system MTF were performed using an interferometer at the wavelength of 632.8nm, and global MTF tests (including the CCD and signal-processing electronics) were performed using a collimator with a slit target. The obtained results showed that the performance of the optical system meets the requirements of the project.

  5. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  6. Ultra-wide-field imaging in diabetic retinopathy.

    Science.gov (United States)

    Ghasemi Falavarjani, Khalil; Tsui, Irena; Sadda, Srinivas R

    2017-10-01

    Since 1991, 7-field images captured with 30-50 degree cameras in the Early Treatment Diabetic Retinopathy Study have been the gold standard for fundus imaging to study diabetic retinopathy. Ultra-wide-field images cover significantly more area (up to 82%) of the fundus and with ocular steering can in many cases image 100% of the fundus ("panretinal"). Recent advances in image analysis of ultra-wide-field imaging allow for precise measurements of peripheral retinal lesions. There is a growing consensus in the literature that ultra-wide-field imaging improves detection of peripheral lesions in diabetic retinopathy and leads to more accurate classification of the disease. There is discordance among studies, however, on the correlation between peripheral diabetic lesions and diabetic macular edema, and on optimal management strategies to treat diabetic retinopathy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. UVUDF: Ultraviolet imaging of the Hubble ultra deep field with wide-field camera 3

    Energy Technology Data Exchange (ETDEWEB)

    Teplitz, Harry I.; Rafelski, Marc; Colbert, James W.; Hanish, Daniel J. [Infrared Processing and Analysis Center, MS 100-22, Caltech, Pasadena, CA 91125 (United States); Kurczynski, Peter; Gawiser, Eric [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States); Bond, Nicholas A.; Gardner, Jonathan P.; De Mello, Duilia F. [Laboratory for Observational Cosmology, Astrophysics Science Division, Code 665, Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Grogin, Norman; Koekemoer, Anton M.; Brown, Thomas M.; Coe, Dan; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Atek, Hakim [Laboratoire d' Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire, CH-1290 Sauverny (Switzerland); Finkelstein, Steven L. [Department of Astronomy, The University of Texas at Austin, Austin, TX 78712 (United States); Giavalisco, Mauro [Astronomy Department, University of Massachusetts, Amherst, MA 01003 (United States); Gronwall, Caryl [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Lee, Kyoung-Soo [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Ravindranath, Swara, E-mail: hit@ipac.caltech.edu [Inter-University Centre for Astronomy and Astrophysics, Pune (India); and others

    2013-12-01

    We present an overview of a 90 orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ∼ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ∼ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which, when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a 'post-flash'. Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ∼ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28th-29th magnitude depth at 5σ in a 0.2 arcsec radius aperture, depending on filter and observing epoch.

  8. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang; Liu, Yebin; Heidrich, Wolfgang; Dai, Qionghai

    2016-01-01

    camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional

  9. THE SIZE EVOLUTION OF PASSIVE GALAXIES: OBSERVATIONS FROM THE WIDE-FIELD CAMERA 3 EARLY RELEASE SCIENCE PROGRAM

    International Nuclear Information System (INIS)

    Ryan, R. E. Jr.; McCarthy, P. J.; Cohen, S. H.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A.; Yan, H.; Hathi, N. P.; Koekemoer, A. M.; Bond, H. E.; Bushouse, H.; O'Connell, R. W.; Balick, B.; Calzetti, D.; Crockett, R. M.; Disney, M.; Dopita, M. A.; Frogel, J. A.; Hall, D. N. B.; Holtzman, J. A.

    2012-01-01

    We present the size evolution of passively evolving galaxies at z ∼ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in ∼40 arcmin² down to a limiting observed H-band magnitude, and find that the most massive galaxies (M_* ∼ 10^11 M_☉) undergo the strongest evolution from z ∼ 2 to the present. Parameterizing the size evolution as (1 + z)^(−α), we find a tentative scaling of α ≈ (−0.6 ± 0.7) + (0.9 ± 0.4) log(M_*/10^9 M_☉), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M_*–R_e relation for red galaxies.
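
    The quoted scaling can be turned into numbers directly; a minimal sketch using only the central coefficients from the abstract (uncertainties ignored, example mass arbitrary):

```python
import math

def alpha(mstar_msun):
    """Mass-dependent size-evolution exponent from the abstract:
    alpha = (-0.6) + 0.9 * log10(M*/1e9 Msun), with R_e ~ (1+z)^(-alpha)."""
    return -0.6 + 0.9 * math.log10(mstar_msun / 1e9)

def size_ratio(z, mstar_msun):
    """Ratio R_e(z)/R_e(0) implied by R_e ~ (1+z)^(-alpha)."""
    return (1.0 + z) ** (-alpha(mstar_msun))

# A 1e11 Msun galaxy: alpha = -0.6 + 0.9*2 = 1.2, so at z = 2 its
# effective radius is 3^(-1.2), roughly a quarter of its z = 0 size.
print(alpha(1e11), size_ratio(2.0, 1e11))
```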

  10. Micrometeoroid Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Ion Beam Analysis of Subtle Impactor Traces

    Science.gov (United States)

    Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V. V.; Colaux, J. L.; Kearsley, A. T.; Ross, D. K.; Anz-Meador, P.; Liou, J. C.; Opiela, J.

    2014-01-01

    Recognition of origin for particles responsible for impact damage on spacecraft such as the Hubble Space Telescope (HST) relies upon postflight analysis of returned materials. A unique opportunity arose in 2009 with collection of the Wide Field and Planetary Camera 2 (WFPC2) from HST by shuttle mission STS-125. A preliminary optical survey confirmed that there were hundreds of impact features on the radiator surface. Following extensive discussion between NASA, ESA, NHM and IBC, a collaborative research program was initiated, employing scanning electron microscopy (SEM) and ion beam analysis (IBA) to determine the nature of the impacting grains. Even though some WFPC2 impact features are large, and easily seen without the use of a microscope, impactor remnants may be hard to find.

  11. External Mask Based Depth and Light Field Camera

    Science.gov (United States)

    2013-12-08

    External mask based depth and light field camera. Dikpal Reddy, NVIDIA Research, Santa Clara, CA (dikpalr@nvidia.com); Jiamin Bai, University of California... passive depth acquisition technology is illustrated by the emergence of light field camera companies like Lytro [1], Raytrix [2] and Pelican Imaging

  12. A PANCHROMATIC CATALOG OF EARLY-TYPE GALAXIES AT INTERMEDIATE REDSHIFT IN THE HUBBLE SPACE TELESCOPE WIDE FIELD CAMERA 3 EARLY RELEASE SCIENCE FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Rutkowski, M. J.; Cohen, S. H.; Windhorst, R. A. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Kaviraj, S.; Crockett, R. M.; Silk, J. [Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); O' Connell, R. W. [Department of Astronomy, University of Virginia, P.O. Box 3818, Charlottesville, VA 22903 (United States); Hathi, N. P.; McCarthy, P. J. [Observatories of the Carnegie Institute of Washington, Pasadena, CA 91101 (United States); Ryan, R. E. Jr.; Koekemoer, A.; Bond, H. E. [Space Telescope Science Institute, Baltimore, MD 21218 (United States); Yan, H. [Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Kimble, R. A. [NASA-Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Balick, B. [Department of Astronomy, University of Washington, Seattle, WA 98195-1580 (United States); Calzetti, D. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Disney, M. J. [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Dopita, M. A. [Research School of Physics and Astronomy, The Australian National University, ACT 2611 (Australia); Frogel, J. A. [Astronomy Department, King Abdulaziz University, P.O. Box 80203, Jeddah (Saudi Arabia); Hall, D. N. B. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); and others

    2012-03-01

    In the first of a series of forthcoming publications, we present a panchromatic catalog of 102 visually selected early-type galaxies (ETGs) from observations in the Early Release Science (ERS) program with the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) of the Great Observatories Origins Deep Survey-South (GOODS-S) field. Our ETGs span a large redshift range, 0.35 ≲ z ≲ 1.5, with each redshift spectroscopically confirmed by previous published surveys of the ERS field. We combine our measured WFC3 ERS and Advanced Camera for Surveys (ACS) GOODS-S photometry to gain continuous sensitivity from the rest-frame far-UV to near-IR emission for each ETG. The superior spatial resolution of the HST over this panchromatic baseline allows us to classify the ETGs by their small-scale internal structures, as well as their local environment. By fitting stellar population spectral templates to the broadband photometry of the ETGs, we determine that the average masses of the ETGs are comparable to the characteristic stellar mass of massive galaxies, 10^11 < M_*[M_☉] < 10^12. By transforming the observed photometry into the Galaxy Evolution Explorer FUV and NUV, Johnson V, and Sloan Digital Sky Survey g' and r' bandpasses we identify a noteworthy diversity in the rest-frame UV-optical colors and find the mean rest-frame (FUV-V) = 3.5 and (NUV-V) = 3.3, with 1σ standard deviations ≈ 1.0. The blue rest-frame UV-optical colors observed for most of the ETGs are evidence for star formation during the preceding gigayear, but no systems exhibit UV-optical photometry consistent with major recent (≲50 Myr) starbursts. Future publications which address the diversity of stellar populations likely to be present in these ETGs, and the potential mechanisms by which recent star formation episodes are activated, are discussed.

  13. Dynamic Artificial Potential Fields for Autonomous Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    the implementation and evaluation of Artificial Potential Fields for automatic camera placement. We first describe the recasting of the frame-composition problem as a solution for two particles suspended in an Artificial Potential Field. We demonstrate the application of this technique to control both camera

  14. Astronomical Orientation Method Based on Lunar Observations Utilizing Super Wide Field of View

    Directory of Open Access Journals (Sweden)

    PU Junyu

    2018-04-01

    In this paper, astronomical orientation is achieved by observing the moon with a camera with a super wide field of view, and the formulae are derived in detail. An experiment based on real observations verified the stability of the method. In this experiment, after 15 minutes of tracking shots, the internal precision was better than ±7.5" and the external precision reached approximately ±20". This camera-based method for astronomical orientation can replace the traditional mode (aiming by human eye through a theodolite), lowering the skill requirements on the operator to some extent. Furthermore, a camera with a super wide field of view can continuously track the moon without complicated servo-control devices. Given that gravity exists on the moon as on the earth, and that the earth shows phase changes when observed from the moon, once self-leveling technology is developed this method can be extended to orienting a lunar rover by imaging the earth.

  15. X-ray powder diffraction camera for high-field experiments

    International Nuclear Information System (INIS)

    Koyama, K; Mitsui, Y; Takahashi, K; Watanabe, K

    2009-01-01

    We have designed a high-field X-ray diffraction (HF-XRD) camera to be inserted into the experimental room-temperature bore (100 mm) of a conventional solenoid-type cryocooled superconducting magnet (10T-CSM). Using a prototype camera that is the same size as the HF-XRD camera, an XRD pattern of Si was taken at room temperature in zero magnetic field. From the obtained results, the expected performance of the designed HF-XRD camera is presented.

  16. Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Microanalysis and Recognition of Micrometeoroid Compositions

    Science.gov (United States)

    Kearsley, A. T.; Ross, D. K.; Anz-Meador, P.; Liou, J. C.; Opiela, J.; Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V. V.; Colaux, J. L.

    2014-01-01

    Postflight surveys of the Wide Field and Planetary Camera 2 (WFPC2) on the Hubble Space Telescope have located hundreds of features on the 2.2 by 0.8 m curved plate, evidence of hypervelocity impact by small particles during 16 years of exposure to space in low Earth orbit (LEO). The radiator has a 100 - 200 micron surface layer of white paint, overlying 4 mm thick Al alloy, which was not fully penetrated by any impact. Over 460 WFPC2 samples were extracted by coring at JSC. About half were sent to NHM in a collaborative program with NASA, ESA and IBC. The structural and compositional heterogeneity at micrometer scale required microanalysis by electron and ion beam microscopes to determine the nature of the impactors (artificial orbital debris, or natural micrometeoroids, MM). Examples of MM impacts are described elsewhere. Here we describe the development of novel electron beam analysis protocols, required to recognize the subtle traces of MM residues.

  17. A PANCHROMATIC CATALOG OF EARLY-TYPE GALAXIES AT INTERMEDIATE REDSHIFT IN THE HUBBLE SPACE TELESCOPE WIDE FIELD CAMERA 3 EARLY RELEASE SCIENCE FIELD

    International Nuclear Information System (INIS)

    Rutkowski, M. J.; Cohen, S. H.; Windhorst, R. A.; Kaviraj, S.; Crockett, R. M.; Silk, J.; O'Connell, R. W.; Hathi, N. P.; McCarthy, P. J.; Ryan, R. E. Jr.; Koekemoer, A.; Bond, H. E.; Yan, H.; Kimble, R. A.; Balick, B.; Calzetti, D.; Disney, M. J.; Dopita, M. A.; Frogel, J. A.; Hall, D. N. B.

    2012-01-01

    In the first of a series of forthcoming publications, we present a panchromatic catalog of 102 visually selected early-type galaxies (ETGs) from observations in the Early Release Science (ERS) program with the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) of the Great Observatories Origins Deep Survey-South (GOODS-S) field. Our ETGs span a large redshift range, 0.35 ≲ z ≲ 1.5, and their average masses are comparable to the characteristic stellar mass of massive galaxies, 10^11 < M_*[M_☉] < 10^12. By transforming the observed photometry into the Galaxy Evolution Explorer FUV and NUV, Johnson V, and Sloan Digital Sky Survey g' and r' bandpasses we identify a noteworthy diversity in the rest-frame UV-optical colors and find the mean rest-frame (FUV–V) = 3.5 and (NUV–V) = 3.3, with 1σ standard deviations ≅1.0. The blue rest-frame UV-optical colors observed for most of the ETGs are evidence for star formation during the preceding gigayear, but no systems exhibit UV-optical photometry consistent with major recent (≲50 Myr) starbursts. Future publications which address the diversity of stellar populations likely to be present in these ETGs, and the potential mechanisms by which recent star formation episodes are activated, are discussed.

  18. Measuring metallicities with Hubble space telescope/wide-field camera 3 photometry

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Teresa L.; Holtzman, Jon A. [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Anthony-Twarog, Barbara J.; Twarog, Bruce [Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045-7582 (United States); Bond, Howard E. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Saha, Abhijit [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Walker, Alistair, E-mail: rosst@nmsu.edu, E-mail: holtz@nmsu.edu, E-mail: bjat@ku.edu, E-mail: btwarog@ku.edu, E-mail: heb11@psu.edu, E-mail: awalker@ctio.noao.edu [Cerro Tololo Inter-American Observatory (CTIO), National Optical Astronomy Observatory, Casilla 603, La Serena (Chile)

    2014-01-01

    We quantified and calibrated the metallicity and temperature sensitivities of colors derived from nine Wide-Field Camera 3 filters on board the Hubble Space Telescope using Dartmouth isochrones and Kurucz atmosphere models. The theoretical isochrone colors were tested and calibrated against observations of five well-studied Galactic clusters, M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791, all of which have spectroscopically determined metallicities spanning −2.30 < [Fe/H] < +0.4. We found empirical corrections to the Dartmouth isochrone grid for each of the following color-magnitude diagrams (CMDs): (F555W-F814W, F814W), (F336W-F555W, F814W), (F390M-F555W, F814W), and (F390W-F555W, F814W). Using empirical corrections, we tested the accuracy and spread of the photometric metallicities assigned from CMDs and color-color diagrams (which are necessary to break the age-metallicity degeneracy). Testing three color-color diagrams [(F336W-F555W), (F390M-F555W), and (F390W-F555W), versus (F555W-F814W)], we found the colors (F390M-F555W) and (F390W-F555W) to be the best suited to measure photometric metallicities. The color (F390W-F555W) requires much less integration time, but generally produces wider metallicity distributions and, at very low metallicity, the metallicity distribution function (MDF) from (F390W-F555W) is ∼60% wider than that from (F390M-F555W). Using the calibrated isochrones, we recovered the overall cluster metallicity to within ∼0.1 dex in [Fe/H] when using CMDs (i.e., when the distance, reddening, and ages are approximately known). The measured MDF from color-color diagrams shows that this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of ∼0.2-0.5 dex using F336W-F555W, ∼0.15-0.25 dex using F390M-F555W, and ∼0.2-0.4 dex with F390W-F555W, with the larger uncertainty pertaining to the lowest metallicity range.
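
    The core idea of assigning a photometric metallicity from a calibrated isochrone grid can be sketched as a one-dimensional interpolation of color versus [Fe/H] at fixed magnitude. The grid values below are invented for illustration; the real work interpolates a full, empirically corrected Dartmouth isochrone grid, not this toy table.

```python
# Hypothetical grid: (F390M - F555W) color at a fixed F814W magnitude
# for isochrones of known [Fe/H]. All numbers are made up for the sketch.
feh_grid   = [-2.3, -1.5, -0.7,  0.0,  0.4]
color_grid = [0.55, 0.78, 1.05, 1.32, 1.50]  # assumed monotonic in [Fe/H]

def photometric_feh(color):
    """Linearly interpolate [Fe/H] from an observed color, clamping
    to the grid edges outside the calibrated range."""
    if color <= color_grid[0]:
        return feh_grid[0]
    if color >= color_grid[-1]:
        return feh_grid[-1]
    for i in range(len(color_grid) - 1):
        c0, c1 = color_grid[i], color_grid[i + 1]
        if c0 <= color <= c1:
            t = (color - c0) / (c1 - c0)
            return feh_grid[i] + t * (feh_grid[i + 1] - feh_grid[i])

print(photometric_feh(0.78))  # recovers -1.5 exactly on a grid point
```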

  19. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    Energy Technology Data Exchange (ETDEWEB)

    Schuette, Bernd

    2011-09-15

    In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a source of high-order harmonic generation (HHG) in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both the HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional streak cameras. (orig.)
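
    The streaking principle lends itself to a back-of-the-envelope sketch: near a zero crossing of the THz field the photoelectron energy shift is roughly linear in ionization time, so an unknown XUV pulse duration can be read off from the broadening of the streaked spectrum. All numbers below are illustrative assumptions, not values from the thesis.

```python
import math

# Near the zero crossing of the THz field, dE ~ s * t with streaking
# speed s. A pulse of duration tau broadens the photoelectron line by
# ~s * tau, added in quadrature with the field-free linewidth.
s_eV_per_fs = 0.05          # assumed streaking speed
width_unstreaked_eV = 0.5   # assumed field-free photoline width
width_streaked_eV = 1.3     # assumed measured streaked width

broadening = math.sqrt(width_streaked_eV**2 - width_unstreaked_eV**2)
tau_fs = broadening / s_eV_per_fs
print(round(tau_fs, 1))  # ~24 fs XUV pulse duration for these inputs
```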

  20. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    International Nuclear Information System (INIS)

    Schuette, Bernd

    2011-09-01

    In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a source of high-order harmonic generation (HHG) in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both the HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional streak cameras. (orig.)

  1. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    Science.gov (United States)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

    In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, whereas our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. To fabricate the critical microlens array, we used inkjet printing to control the surface shape of each microlens, achieving different focal lengths, and used a replication method to form the curved hexagonal microlens array.

  2. SUPERNOVA REMNANTS AND THE INTERSTELLAR MEDIUM OF M83: IMAGING AND PHOTOMETRY WITH THE WIDE FIELD CAMERA 3 ON THE HUBBLE SPACE TELESCOPE

    International Nuclear Information System (INIS)

    Dopita, Michael A.; Blair, William P.; Kuntz, Kip D.; Long, Knox S.; Mutchler, Max; Whitmore, Bradley C.; Bond, Howard E.; MacKenty, John; Balick, Bruce; Calzetti, Daniela; Carollo, Marcella; Disney, Michael; Frogel, Jay A.; O'Connell, Robert; Hall, Donald; Holtzman, Jon A.; Kimble, Randy A.; McCarthy, Patrick; Paresce, Francesco; Saha, Abhijit

    2010-01-01

    We present Wide Field Camera 3 images taken with the Hubble Space Telescope within a single field in the southern grand-design star-forming galaxy M83. Based on their size, morphology, and photometry in continuum-subtracted Hα, [S II], Hβ, [O III], and [O II] filters, we have identified 60 supernova remnant (SNR) candidates, as well as a handful of young ejecta-dominated candidates. A catalog of these remnants, their sizes and, where possible, their Hα fluxes are given. Radiative ages and pre-shock densities are derived from those SNRs that have good photometry. The radiative ages span log(τ_rad/yr) values upward of 2.62; pre-shock densities (n_0, in cm^−3) are derived for the SNRs with good photometry, and the inferred minimum progenitor mass is M_min = 16 +7/−5 M_☉. Finally, we give evidence for the likely detection of the remnant of the historical supernova SN1968L.

  3. Imaging design of the wide field x-ray monitor onboard the HETE satellite

    International Nuclear Information System (INIS)

    Zand, J.J.M. In'T; Fenimore, E.E.; Kawai, N.; Yoshida, A.; Matsuoka, M.; Yamauchi, M.

    1994-01-01

    The High Energy Transient Experiment (HETE), to be launched in 1995, will study gamma-ray bursts in an unprecedented wide wavelength range from gamma- and X-rays to UV wavelengths. The X-ray range (2 to 25 keV) will be covered by two perpendicularly oriented one-dimensional coded aperture cameras. These instruments cover a wide field of view of 2 sr and thus have a relatively large potential to locate GRBs to a fraction of a degree, which is an order of magnitude better than BATSE. The imaging design of these coded aperture cameras concerns the design of the coded apertures and the decoding algorithm. The aperture pattern is to a large extent determined by the high background in this wide-field application and the low number of pattern elements (∼100) in each direction. The result is a random pattern with an open fraction of 33%. The onboard decoding algorithm is dedicated to the localization of a single point source.
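
    The decoding step for such a camera is classically a cross-correlation of the detector counts with a balanced (zero-mean) version of the mask pattern. Below is a self-contained toy sketch: a 1-D cyclic geometry with ∼100 elements and a 33% open fraction, matching the record's numbers, but with an idealized noise-free cyclic-convolution detector model and a single synthetic source; none of this is the actual HETE pattern or algorithm.

```python
import random

random.seed(1)
N = 100                                                    # 1-D pattern elements
mask = [1 if random.random() < 0.33 else 0 for _ in range(N)]  # ~33% open

# Synthetic sky: one point source at element 30 on a flat background.
sky = [1.0] * N
sky[30] += 50.0

# Detector counts: cyclic convolution of the sky with the mask
# (idealized cyclic-mask geometry, no noise).
detector = [sum(mask[(i - j) % N] * sky[j] for j in range(N)) for i in range(N)]

# Decode by cross-correlating with the balanced mask; the flat
# background cancels exactly because the decoding weights sum to zero.
open_frac = sum(mask) / N
decode = [m - open_frac for m in mask]
image = [sum(detector[(k + j) % N] * decode[j] for j in range(N)) for k in range(N)]

print(image.index(max(image)))  # peaks at the source position (element 30)
```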

  4. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the differences between the two techniques by varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field-to-tomographic-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  5. An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras

    Science.gov (United States)

    Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.

    2014-07-01

    The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.
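
    A cone search of the kind the front-end supports reduces to an angular-separation cut around a sky position; a minimal sketch using the standard haversine formula (the tuple schema and catalogue values are hypothetical, not the Skycam database layout):

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation of two sky positions in degrees (haversine)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def cone_search(sources, ra0, dec0, radius_deg):
    """Return catalogue rows within radius_deg of (ra0, dec0).
    `sources` is a list of (id, ra, dec) tuples (hypothetical schema)."""
    return [s for s in sources
            if angular_separation_deg(s[1], s[2], ra0, dec0) <= radius_deg]

catalogue = [(1, 10.00, 20.00), (2, 10.05, 20.02), (3, 12.00, 21.00)]
print(cone_search(catalogue, 10.0, 20.0, 0.1))  # sources 1 and 2 match
```

    In a real PostgreSQL back-end this cut would normally be pushed into SQL (often via a dedicated spatial index) rather than done in Python, but the geometry is the same.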

  6. The ArTéMiS wide-field sub-millimeter camera: preliminary on-sky performance at 350 microns

    Science.gov (United States)

    Revéret, Vincent; André, Philippe; Le Pennec, Jean; Talvard, Michel; Agnèse, Patrick; Arnaud, Agnès.; Clerc, Laurent; de Breuck, Carlos; Cigna, Jean-Charles; Delisle, Cyrille; Doumayrou, Eric; Duband, Lionel; Dubreuil, Didier; Dumaye, Luc; Ercolani, Eric; Gallais, Pascal; Groult, Elodie; Jourdan, Thierry; Leriche, Bernadette; Maffei, Bruno; Lortholary, Michel; Martignac, Jérôme; Rabaud, Wilfried; Relland, Johan; Rodriguez, Louis; Vandeneynde, Aurélie; Visticot, François

    2014-07-01

    ArTéMiS is a wide-field submillimeter camera operating at three wavelengths simultaneously (200, 350 and 450 μm). A preliminary version of the instrument, equipped with the 350 μm focal plane, was successfully installed and tested on the APEX telescope in Chile during the 2013 and 2014 austral winters. The instrument is developed by CEA (Saclay and Grenoble, France), IAS (France) and the University of Manchester (UK) in collaboration with ESO. We introduce the mechanical and optical design, as well as the cryogenics and electronics of the ArTéMiS camera. The ArTéMiS detectors consist of Si:P:B bolometers arranged in 16×18 sub-arrays operating at 300 mK. These detectors are similar to the ones developed for the Herschel PACS photometer, but adapted to the high optical load encountered at the APEX site. Ultimately, ArTéMiS will contain 4 sub-arrays at 200 μm and 2×8 sub-arrays at 350 and 450 μm. We show preliminary lab measurements, such as the responsivity of the instrument to hot and cold load illumination and the calculated NEP. Details of the on-sky commissioning runs made in 2013 and 2014 at APEX are given. We used planets (Mars, Saturn, Uranus) to determine the flat field and the flux calibration. A pointing model was established in the first days of the runs; the average relative pointing accuracy is 3 arcsec. The beam at 350 μm has been estimated to be 8.5 arcsec, in good agreement with the beam of the 12 m APEX dish. Several observing modes have been tested, like "On-The-Fly" for beam maps or large maps, and spirals or rasters of spirals for compact sources. With this preliminary version of ArTéMiS, we conclude that the mapping speed is already more than 5 times better than that of the previous 350 μm instrument at APEX. The median NEFD at 350 μm is 600 mJy·s^1/2, with best values of 300 mJy·s^1/2. The complete instrument with 5760 pixels and optimized settings will be installed during the first half of 2015.
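
    NEFD figures translate directly into integration-time estimates: reaching an rms point-source noise σ requires t = (NEFD/σ)². A quick sketch with the quoted numbers (the 60 mJy target is an arbitrary example):

```python
def integration_time_s(nefd_mjy_rt_s, sigma_mjy):
    """Seconds needed to reach rms point-source noise sigma (mJy),
    given a noise-equivalent flux density in mJy*s^(1/2)."""
    return (nefd_mjy_rt_s / sigma_mjy) ** 2

# Median NEFD of 600 mJy*s^1/2: reaching 60 mJy rms takes 100 s;
# the best-case 300 mJy*s^1/2 gets there four times faster.
print(integration_time_s(600.0, 60.0), integration_time_s(300.0, 60.0))
```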

  7. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage on flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  8. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage on flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  9. The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data

    Science.gov (United States)

    Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex

    2017-06-01

    The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, which comprises two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~175 TB filesystem, which stores the entire WFC3 archive on disk; (2) a MySQL database, which stores image header data; (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts; (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management; and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as monitoring of the health and stability of the WFC3 instrument, the measurement of ~20 million WFC3/UVIS point spread functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.
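The architecture the record describes (image header keywords stored in a database, queried by Python monitoring scripts) can be sketched in a few lines. This is a toy reconstruction, not the project's actual code: sqlite3 stands in for the MySQL database, and the table, column, and file names are invented.

```python
import sqlite3

# Toy stand-in for the Quicklook header database (MySQL in the real system).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE uvis_headers (filename TEXT, dateobs TEXT, exptime REAL, filt TEXT)")
rows = [
    ("img_001_raw.fits", "2017-06-01", 350.0, "F606W"),
    ("img_002_raw.fits", "2017-06-01", 700.0, "F814W"),
    ("img_003_raw.fits", "2017-06-02", 350.0, "F606W"),
]
conn.executemany("INSERT INTO uvis_headers VALUES (?, ?, ?, ?)", rows)

# A monitoring script might, for example, total the exposure time per filter:
totals = dict(conn.execute(
    "SELECT filt, SUM(exptime) FROM uvis_headers GROUP BY filt ORDER BY filt"))
print(totals)  # -> {'F606W': 700.0, 'F814W': 700.0}
```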

  10. Hubble Space Telescope Wide Field Planetary Camera 2 Observations of Neptune

    Science.gov (United States)

    1995-01-01

    Two groups have recently used the Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC-2) to acquire new high-resolution images of the planet Neptune. Members of the WFPC-2 Science Team, led by John Trauger, acquired the first series of images on 27 through 29 June 1994. These were the highest resolution images of Neptune taken since the Voyager-2 flyby in August of 1989. A more comprehensive program is currently being conducted by Heidi Hammel and Wes Lockwood. These two sets of observations are providing a wealth of new information about the structure, composition, and meteorology of this distant planet's atmosphere. Neptune is currently the most distant planet from the Sun, with an orbital radius of 4.5 billion kilometers (2.8 billion miles, or 30 Astronomical Units). Even though its diameter is about four times that of the Earth (49,420 vs. 12,742 km), from ground-based telescopes it appears as a tiny blue disk that subtends less than 1/1200 of a degree (2.3 arc-seconds). Neptune has therefore been a particularly challenging object to study from the ground, because its disk is badly blurred by the Earth's atmosphere. In spite of this, ground-based astronomers had learned a great deal about this planet since its position was first predicted by John C. Adams and Urbain Le Verrier in 1845. For example, they had determined that Neptune was composed primarily of hydrogen and helium gas, and that its blue color is caused by the presence of trace amounts of the gas methane, which absorbs red light. They had also detected bright cloud features whose brightness changed with time, and tracked these clouds to infer a rotation period between 17 and 22 hours. When the Voyager-2 spacecraft flew past Neptune in 1989, its instruments revealed a surprising array of meteorological phenomena, including strong winds; bright, high-altitude clouds; and two large dark spots attributed to long-lived giant storm systems. These bright clouds and dark spots were tracked as they moved

  11. The Wide Field Imager instrument for Athena

    Science.gov (United States)

    Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang

    2017-08-01

    ESA's next large X-ray mission, ATHENA, is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to two key astrophysical questions: how does ordinary matter assemble into the large-scale structures we see today, and how do black holes grow and shape the Universe? The ATHENA spacecraft will be equipped with two focal-plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 arcmin x 40 arcmin and high count rates, up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral-resolution imaging. The two cameras alternately share a mirror system based on silicon pore optics, with a focal length of 12 m and a large effective area of about 2 m2 at an energy of 1 keV. Although the mission is still in phase A, i.e. studying the feasibility and developing the necessary technology, the definition and development of the instrumentation have already made significant progress. The WFI focal-plane camera described here covers the energy band from 0.2 keV to 15 keV with 450 μm thick, fully depleted, back-illuminated silicon active pixel sensors of DEPFET type. Spatial resolution will be provided by one million pixels, each 130 μm x 130 μm in size. The time resolution requirement is 5 ms for the WFI large detector array and 80 μs for the WFI fast detector. The large effective area of the mirror system will be complemented by a high quantum efficiency, above 90% for medium and higher energies. The status of the various WFI subsystems needed to achieve this performance is described, and recent changes are explained here.

  12. ON THE BINARY FREQUENCY OF THE LOWEST MASS MEMBERS OF THE PLEIADES WITH HUBBLE SPACE TELESCOPE WIDE FIELD CAMERA 3

    International Nuclear Information System (INIS)

    Garcia, E. V.; Dupuy, Trent J.; Allers, Katelyn N.; Liu, Michael C.; Deacon, Niall R.

    2015-01-01

    We present the results of a Hubble Space Telescope Wide Field Camera 3 (WFC3) imaging survey of 11 of the lowest-mass brown dwarfs known in the Pleiades (25-40 M_Jup). These objects represent the predecessors to T dwarfs in the field. Using a semi-empirical binary point-spread function (PSF) fitting technique, we are able to probe to 0.03″ (0.75 pixel), better than 2x the WFC3/UVIS diffraction limit. We did not find any companions to our targets. From extensive testing of our PSF-fitting method on simulated binaries, we compute detection limits which rule out companions to our targets with mass ratios of ≳0.7 and separations ≳4 AU. Thus, our survey is the first to attain the high angular resolution needed to resolve brown dwarf binaries in the Pleiades at the separations that are most common in the field population. We constrain the binary frequency over this range of separation and mass ratio for 25-40 M_Jup Pleiades brown dwarfs to be <11% at 1σ (<26% at 2σ). This binary frequency is consistent with that of both younger and older brown dwarfs in this mass range.
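The quoted upper limits behave roughly like a simple binomial confidence bound for zero detections among 11 targets. The sketch below shows that arithmetic only; it ignores the survey-completeness corrections the authors fold in, so the numbers land near, not on, the published limits.

```python
def binary_frequency_upper_limit(n_targets: int, confidence: float) -> float:
    """Upper limit f such that P(0 binaries | f, n) = (1 - f)**n equals 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_targets)

# 11 targets, zero companions detected (completeness ignored in this sketch):
print(round(binary_frequency_upper_limit(11, 0.683), 3))  # ~0.10, near the quoted <11% at 1 sigma
print(round(binary_frequency_upper_limit(11, 0.954), 3))  # ~0.24, near the quoted <26% at 2 sigma
```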

  13. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor (CMOS) sensor modules covering an FOV of over 160° × 160°. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have also been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that the resulting panoramas reflect the objective luminance more faithfully. This overcomes a limitation of conventional stitching, which produces realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that together cover a large field of view; in this system the dynamic range is expanded 48-fold. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
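For context on the reported 48-fold dynamic range expansion, the conversion below is generic (not from the paper): a linear factor maps to photographic stops via log2, and to decibels via 20·log10 when the quantity is treated as an amplitude-style ratio, as is conventional for image sensors.

```python
import math

def dr_gain_stops(linear_factor: float) -> float:
    """Dynamic-range gain in photographic stops for a given linear expansion factor."""
    return math.log2(linear_factor)

def dr_gain_db(linear_factor: float) -> float:
    """The same gain in decibels (20*log10, the sensor-industry convention)."""
    return 20.0 * math.log10(linear_factor)

# The 48-fold expansion of the seven-camera system corresponds to:
print(round(dr_gain_stops(48.0), 2))  # ~5.58 stops
print(round(dr_gain_db(48.0), 1))     # ~33.6 dB
```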

  14. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  15. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage on flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  16. O-6 Optical Property Degradation of the Hubble Space Telescope's Wide Field Camera-2 Pick Off Mirror

    Science.gov (United States)

    McNamara, Karen M.; Hughes, D. W.; Lauer, H. V.; Burkett, P. J.; Reed, B. B.

    2011-01-01

    Degradation in the performance of optical components can be greatly affected by exposure to the space environment. Many factors can contribute to such degradation, including surface contaminants; outgassing; vacuum, UV, and atomic oxygen exposure; temperature cycling; or combinations of these parameters. In-situ observations give important clues to degradation processes, but there are relatively few opportunities to correlate those observations with post-flight ground analyses. The return of instruments from the Hubble Space Telescope (HST) after its final servicing mission in May 2009 provided such an opportunity. Among the instruments returned from HST was the Wide-Field Planetary Camera-2 (WFPC-2), which had been exposed to the space environment for 16 years. This work focuses on identifying the sources of degradation in the performance of the pick-off mirror (POM) from WFPC-2. Techniques including surface reflectivity measurements, spectroscopic ellipsometry, FTIR (and ATR-FTIR) analyses, SEM/EDS, X-ray photoelectron spectroscopy (XPS) with and without ion milling, and wet and dry physical surface sampling were performed. Destructive and contact analyses took place only after completion of the non-destructive measurements. Spectroscopic ellipsometry was then repeated to determine the extent of contaminant removal by the destructive techniques, providing insight into the nature and extent of polymerization of the contaminant layer.

  17. Constrained optimization for position calibration of an NMR field camera.

    Science.gov (United States)

    Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke

    2018-07-01

    Knowledge of the positions of the field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is to switch on gradients of known strength and calculate the positions from the phases of the FIDs. We investigated improving the accuracy of the probe position estimates and analyzed the effect of inaccurate estimates on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient-recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using the B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
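The "typical method" the abstract refers to, computing each probe's position from the FID phase accrued while a gradient of known strength is switched on, can be sketched as follows. The numbers are illustrative only; a real system must also unwrap the phase and repeat the measurement per gradient axis.

```python
# Gyromagnetic ratio of 1H, rad/(s*T)
GAMMA_1H = 2.675e8

def probe_position(phase_rad: float, gradient_t_per_m: float, duration_s: float,
                   gamma: float = GAMMA_1H) -> float:
    """Position (m) along the gradient axis from the FID phase accrued under a
    constant switched gradient: phase = gamma * G * x * tau  =>  x = phase / (gamma * G * tau)."""
    return phase_rad / (gamma * gradient_t_per_m * duration_s)

# Round trip: a probe at x = 10 mm under a 10 mT/m gradient applied for 1 ms
phase = GAMMA_1H * 0.010 * 0.010 * 0.001
print(probe_position(phase, 0.010, 0.001))  # ≈ 0.01 m
```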

  18. EVALUATION OF THE QUALITY OF ACTION CAMERAS WITH WIDE-ANGLE LENSES IN UAV PHOTOGRAMMETRY

    OpenAIRE

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-01-01

    The application of light-weight cameras in UAV photogrammetry is required due to payload restrictions. In general, consumer cameras with normal lenses are applied on UAV systems. The availability of action cameras, like the GoPro Hero4 Black, with a wide-angle (fish-eye) lens offers new perspectives in UAV projects. In these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry....

  19. High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking

    Science.gov (United States)

    Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.

    2016-12-01

    wide dynamic range camera that provides a high precision solar position tracking signal as well as an image of the sky in the 45° field of view around the solar axis, which can be of great assistance in flagging data for cloud effects or other factors that might impact data quality.

  20. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang

    2016-01-01

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312

  1. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches.

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C

    2016-10-05

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. © 2016 The Authors.

  2. VizieR Online Data Catalog: Isaac Newton Telescope Wide Field Survey (CASU 2002)

    Science.gov (United States)

    Cambridge Astronomical Survey Unit

    2002-04-01

    The INT Wide Field Survey (WFS) uses the Wide Field Camera (~0.3 square degrees) on the 2.5m Isaac Newton Telescope (INT). The project was initiated in August 1998 and is expected to have a duration of up to five years. Multicolour data will be obtained over 200+ square degrees to a typical depth of ~25 mag (u' through z'). The data are publicly accessible via the Cambridge Astronomical Survey Unit to the UK and NL communities from day one, with access for the rest of the world after one year. This observation log lists all observations older than the one-year proprietary period. (1 data file).

  3. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm

  4. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  5. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity, due to the limitations imposed by the computing technology of the time. With the rapid advancement of computer technology over the last decade, these limitations have been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, traditional optical simulation approaches can only present light energy distributions and typically lack the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to link a virtual scene with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, computational resources needed, etc., associated with this newly developed light field camera technique are presented in detail.

  6. THE FLAT TRANSMISSION SPECTRUM OF THE SUPER-EARTH GJ1214b FROM WIDE FIELD CAMERA 3 ON THE HUBBLE SPACE TELESCOPE

    International Nuclear Information System (INIS)

    Berta, Zachory K.; Charbonneau, David; Désert, Jean-Michel; Irwin, Jonathan; Miller-Ricci Kempton, Eliza; Fortney, Jonathan J.; Nutzman, Philip; McCullough, Peter R.; Burke, Christopher J.; Homeier, Derek

    2012-01-01

    Capitalizing on the observational advantage offered by its tiny M dwarf host, we present Hubble Space Telescope/Wide Field Camera 3 (WFC3) grism measurements of the transmission spectrum of the super-Earth exoplanet GJ1214b. These are the first published WFC3 observations of a transiting exoplanet atmosphere. After correcting for a ramp-like instrumental systematic, we achieve nearly photon-limited precision in these observations, finding the transmission spectrum of GJ1214b to be flat between 1.1 and 1.7 μm. Inconsistent with a cloud-free solar composition atmosphere at 8.2σ, the measured achromatic transit depth most likely implies a large mean molecular weight for GJ1214b's outer envelope. A dense atmosphere rules out bulk compositions for GJ1214b that explain its large radius by the presence of a very low density gas layer surrounding the planet. High-altitude clouds can alternatively explain the flat transmission spectrum, but they would need to be optically thick up to 10 mbar or consist of particles with a range of sizes approaching 1 μm in diameter.
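The link the abstract draws between a flat spectrum and a large mean molecular weight comes from the atmospheric scale height H = kT/(μ m_H g), since the amplitude of transmission features scales with H. A rough sketch with illustrative GJ1214b-like numbers (T ≈ 550 K, g ≈ 8.9 m/s²; these values are assumptions of this sketch, not taken from the record):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735575e-27  # hydrogen atom mass, kg

def scale_height_km(t_kelvin: float, mu: float, g_ms2: float) -> float:
    """Atmospheric scale height H = k*T / (mu * m_H * g), returned in km."""
    return K_B * t_kelvin / (mu * M_H * g_ms2) / 1000.0

# H2-dominated (mu ~ 2.3) vs. water-dominated (mu ~ 18) envelope:
print(round(scale_height_km(550.0, 2.3, 8.9)))   # large H -> strong spectral features
print(round(scale_height_km(550.0, 18.0, 8.9)))  # ~8x smaller H -> flat spectrum
```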

  7. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Murray, D.W.

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. The results of these tests as well as a description of the test equipment, test sites, and procedures are presented in this report

  8. CONFIRMATION OF THE COMPACTNESS OF A z = 1.91 QUIESCENT GALAXY WITH HUBBLE SPACE TELESCOPE'S WIDE FIELD CAMERA 3

    International Nuclear Information System (INIS)

    Szomoru, Daniel; Franx, Marijn; Bouwens, Rychard J.; Van Dokkum, Pieter G.; Trenti, Michele; Illingworth, Garth D.; Labbe, Ivo; Oesch, Pascal A.; Carollo, C. Marcella

    2010-01-01

    We present very deep Wide Field Camera 3 (WFC3) photometry of a massive, compact galaxy located in the Hubble Ultra Deep Field. This quiescent galaxy has a spectroscopic redshift z = 1.91 and has been identified as an extremely compact galaxy by Daddi et al. We use new H_F160W imaging data obtained with Hubble Space Telescope/WFC3 to measure the deconvolved surface brightness profile to H ∼ 28 mag arcsec^-2. We find that the surface brightness profile is well approximated by an n = 3.7 Sersic profile. Our deconvolved profile is constructed by a new technique which corrects the best-fit Sersic profile with the residual of the fit to the observed image. This allows for galaxy profiles which deviate from a Sersic profile. We determine the effective radius of this galaxy: r_e = 0.42 ± 0.14 kpc in the observed H_F160W band. We show that this result is robust to deviations from the Sersic model used in the fit. We test the sensitivity of our analysis to faint 'wings' in the profile using simulated galaxy images consisting of a bright compact component and a faint extended component. We find that, due to the combination of the WFC3 imaging depth and our method's sensitivity to extended faint emission, we can accurately trace the intrinsic surface brightness profile, and we can therefore confidently rule out the existence of a faint extended envelope around the observed galaxy down to our surface brightness limit. These results confirm that the galaxy lies a factor of ∼10 off the local mass-size relation.

  9. THE SIZE EVOLUTION OF PASSIVE GALAXIES: OBSERVATIONS FROM THE WIDE-FIELD CAMERA 3 EARLY RELEASE SCIENCE PROGRAM

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, R. E. Jr. [Physics Department, University of California, Davis, CA 95616 (United States); McCarthy, P. J. [Observatories of the Carnegie Institute of Washington, Pasadena, CA 91101 (United States); Cohen, S. H.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Yan, H. [Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Hathi, N. P. [Department of Physics and Astronomy, University of California, Riverside, CA 92521 (United States); Koekemoer, A. M.; Bond, H. E.; Bushouse, H. [Space Telescope Science Institute, Baltimore, MD 21218 (United States); O'Connell, R. W. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States); Balick, B. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Calzetti, D. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Crockett, R. M. [Department of Physics, University of Oxford, Oxford OX1 3PU (United Kingdom); Disney, M. [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Dopita, M. A. [Research School of Astronomy and Astrophysics, The Australian National University, Weston Creek, ACT 2611 (Australia); Frogel, J. A. [Galaxies Unlimited, Lutherville, MD 21093 (United States); Hall, D. N. B. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); Holtzman, J. A., E-mail: rryan@physics.ucdavis.edu [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States); and others

    2012-04-10

    We present the size evolution of passively evolving galaxies at z ∼ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in ∼40 arcmin² to H < 25 mag. By fitting the 10-band Hubble Space Telescope photometry from 0.22 μm ≲ λ_obs ≲ 1.6 μm with stellar population synthesis models, we simultaneously determine photometric redshift, stellar mass, and a bevy of other population parameters. Based on the six galaxies with published spectroscopic redshifts, we estimate a typical redshift uncertainty of ∼0.033(1 + z). We determine effective radii from Sersic profile fits to the H-band image using an empirical point-spread function. By supplementing our data with published samples, we propose a mass-dependent size evolution model for passively evolving galaxies, where the most massive galaxies (M* ∼ 10¹¹ M_⊙) undergo the strongest evolution from z ∼ 2 to the present. Parameterizing the size evolution as (1 + z)^(−α), we find a tentative scaling of α ≈ (−0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10⁹ M_⊙), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*-R_e relation for red galaxies.
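The quoted mass-dependent scaling can be evaluated directly; a small illustration using only the central values of the fitted coefficients (the ±0.7 and ±0.4 uncertainties are ignored here):

```python
import math

def alpha(m_star, a=-0.6, b=0.9, m0=1e9):
    """Central value of the mass-dependent evolution exponent
    alpha ~ a + b * log10(M*/1e9 Msun) quoted in the abstract
    (uncertainties on a and b omitted)."""
    return a + b * math.log10(m_star / m0)

def size_growth(m_star, z):
    """Factor by which R_e grows from redshift z to z = 0,
    assuming R_e(z) = R_e(0) * (1 + z)**(-alpha)."""
    return (1.0 + z) ** alpha(m_star)

a11 = alpha(1e11)                       # exponent for the most massive galaxies
print(a11, size_growth(1e11, z=2.0))    # ~1.2, ~3.74
```

For M* ∼ 10¹¹ M_⊙ the central values give α ≈ 1.2, i.e. roughly a factor of 3.7 growth in effective radius from z = 2 to the present, consistent with the "strongest evolution at the highest masses" statement above.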

  10. Wide-field spectrally resolved quantitative fluorescence imaging system: toward neurosurgical guidance in glioma resection

    Science.gov (United States)

    Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom

    2017-11-01

    In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.
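Total variation regularization of the kind mentioned above can be sketched with a plain gradient-descent solver (an illustrative toy implementation, not the authors' reconstruction code; lam, step, and iteration count are arbitrary choices):

```python
import numpy as np

def tv_denoise(y, lam=0.1, n_iter=200, step=0.2, eps=1e-6):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x),
    using a smoothed (differentiable) total-variation penalty."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # forward x-difference
        gy = np.diff(x, axis=0, append=x[-1:, :])   # forward y-difference
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalized gradient field (adjoint of the diffs)
        div = (np.diff(gx / mag, axis=1, prepend=0.0) +
               np.diff(gy / mag, axis=0, prepend=0.0))
        x -= step * ((x - y) - lam * div)
    return x
```

On a noisy frame this suppresses pixel-to-pixel noise while preserving edges, which is the mechanism behind the roughly twofold contrast-to-noise improvement reported for the reconstructed concentration map.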

  11. Galaxy formation in the reionization epoch as hinted by Wide Field Camera 3 observations of the Hubble Ultra Deep Field

    International Nuclear Information System (INIS)

    Yan Haojing; Windhorst, Rogier A.; Cohen, Seth H.; Hathi, Nimish P.; Ryan, Russell E.; O'Connell, Robert W.; McCarthy, Patrick J.

    2010-01-01

    We present a large sample of candidate galaxies at z ∼ 7-10, selected in the Hubble Ultra Deep Field using the new observations of the Wide Field Camera 3 that was recently installed on the Hubble Space Telescope. Our sample is composed of 20 z_850-dropouts (four new discoveries), 15 Y_105-dropouts (nine new discoveries) and 20 J_125-dropouts (all new discoveries). The surface densities of the z_850-dropouts are close to what was predicted by earlier studies; however, those of the Y_105- and J_125-dropouts are quite unexpected. While no Y_105- or J_125-dropouts have been found at AB ≤ 28.0 mag, their surface densities seem to increase sharply at fainter levels. While some of these candidates seem to be close to foreground galaxies and thus could possibly be gravitationally lensed, the overall surface densities after excluding such cases are still much higher than what would be expected if the luminosity function does not evolve from z ∼ 7 to 10. Motivated by such steep increases, we tentatively propose a set of Schechter function parameters to describe the luminosity functions at z ∼ 8 and 10. As compared to their counterpart at z ∼ 7, here L* decreases by a factor of ∼6.5 and φ* increases by a factor of 17-90. Although such parameters are not yet demanded by the existing observations, they are allowed and seem to agree with the data better than other alternatives. If these luminosity functions are still valid beyond our current detection limit, this would imply a sudden emergence of a large number of low-luminosity galaxies when looking back in time to z ∼ 10, which, while seemingly exotic, would naturally fit in the picture of the cosmic hydrogen reionization. These early galaxies could easily account for the ionizing photon budget required by the reionization, and they would imply that the global star formation rate density might start from a very high value at z ∼ 10, rapidly reach the minimum at z ∼ 7, and start to rise again.

  12. Wide-field fluorescent microscopy and fluorescent imaging flow cytometry on a cell-phone.

    Science.gov (United States)

    Zhu, Hongying; Ozcan, Aydogan

    2013-04-11

    Fluorescent microscopy and flow cytometry are widely used tools in biomedical research and clinical diagnosis. However, these devices are in general relatively bulky and costly, making them less effective in resource-limited settings. To potentially address these limitations, we have recently demonstrated the integration of wide-field fluorescent microscopy and imaging flow cytometry tools on cell-phones using compact, light-weight, and cost-effective opto-fluidic attachments. In our flow cytometry design, fluorescently labeled cells are flushed through a microfluidic channel that is positioned above the existing cell-phone camera unit. Battery-powered light-emitting diodes (LEDs) are butt-coupled to the side of this microfluidic chip, which effectively acts as a multi-mode slab waveguide, where the excitation light is guided to uniformly excite the fluorescent targets. The cell-phone camera records a time-lapse movie of the fluorescent cells flowing through the microfluidic channel, where the digital frames of this movie are processed to count the number of the labeled cells within the target solution of interest. Using a similar opto-fluidic design, we can also image these fluorescently labeled cells in static mode by, e.g., sandwiching the fluorescent particles between two glass slides and capturing their fluorescent images using the cell-phone camera, which can achieve a spatial resolution of ∼10 μm over a very large field-of-view of ∼81 mm². This cell-phone based fluorescent imaging flow cytometry and microscopy platform might be useful especially in resource-limited settings, e.g., for counting of CD4+ T cells toward monitoring of HIV+ patients or for detection of water-borne parasites in drinking water.
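Counting labeled cells in the digital frames, as described above, reduces to thresholding followed by connected-component counting; a toy sketch (the threshold rule and min_pixels are hypothetical tuning choices, not values from the paper):

```python
import numpy as np

def count_fluorescent_cells(frame, threshold=None, min_pixels=4):
    """Count bright blobs in one fluorescence frame by thresholding
    and 4-connected flood-fill labeling (illustrative sketch)."""
    if threshold is None:
        threshold = frame.mean() + 3 * frame.std()   # assumed rule
    binary = frame > threshold
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                # flood-fill one connected component, measuring its area
                stack, size = [(i, j)], 0
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_pixels:   # reject single-pixel noise hits
                    count += 1
    return count
```

In the real system a count like this would be accumulated over the frames of the time-lapse movie as cells transit the channel.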

  13. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

    With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into 3D target estimation. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera along with the target models are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.

  14. Atmospheric Characterization of Five Hot Jupiters with the Wide Field Camera 3 on the Hubble Space Telescope

    Science.gov (United States)

    Ranjan, Sukrit; Charbonneau, David; Desert, Jean-Michel; Madhusudhan, Nikku; Deming, Drake; Wilkins, Ashlee; Mandell, Avi M.

    2014-01-01

    We probe the structure and composition of the atmospheres of five hot Jupiter exoplanets using the Hubble Space Telescope Wide Field Camera 3 (WFC3) instrument. We use the G141 grism (1.1-1.7 micrometers) to study TrES-2b, TrES-4b, and CoRoT-1b in transit; TrES-3b in secondary eclipse; and WASP-4b in both. This wavelength region includes a predicted absorption feature from water at 1.4 micrometers, which we expect to be nondegenerate with the other molecules that are likely to be abundant for hydrocarbon-poor (e.g., solar composition) hot Jupiter atmospheres. We divide our wavelength regions into 10 bins. For each bin we produce a spectrophotometric light curve spanning the time of transit or eclipse. We correct these light curves for instrumental systematics without reference to an instrument model. For our transmission spectra, our mean 1σ precision per bin corresponds to variations of 2.1, 2.8, and 3.0 atmospheric scale heights for TrES-2b, TrES-4b, and CoRoT-1b, respectively. We find featureless spectra for these three planets. We are unable to extract a robust transmission spectrum for WASP-4b. For our dayside emission spectra, our mean 1σ precision per bin corresponds to a planet-to-star flux ratio of 1.5 × 10⁻⁴ and 2.1 × 10⁻⁴ for WASP-4b and TrES-3b, respectively. We combine these estimates with previous broadband measurements and conclude that for both planets isothermal atmospheres are disfavored. We find no signs of features due to water. We confirm that WFC3 is suitable for studies of transiting exoplanets, but in staring mode multivisit campaigns are necessary to place strong constraints on water abundance.

  15. Atmospheric characterization of five hot Jupiters with the wide field Camera 3 on the Hubble space telescope

    Energy Technology Data Exchange (ETDEWEB)

    Ranjan, Sukrit; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Désert, Jean-Michel [Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309 (United States); Madhusudhan, Nikku [Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06511 (United States); Deming, Drake; Wilkins, Ashlee [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Mandell, Avi M., E-mail: sranjan@cfa.harvard.edu [NASA's Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2014-04-20

    We probe the structure and composition of the atmospheres of five hot Jupiter exoplanets using the Hubble Space Telescope Wide Field Camera 3 (WFC3) instrument. We use the G141 grism (1.1-1.7 μm) to study TrES-2b, TrES-4b, and CoRoT-1b in transit; TrES-3b in secondary eclipse; and WASP-4b in both. This wavelength region includes a predicted absorption feature from water at 1.4 μm, which we expect to be nondegenerate with the other molecules that are likely to be abundant for hydrocarbon-poor (e.g., solar composition) hot Jupiter atmospheres. We divide our wavelength regions into 10 bins. For each bin we produce a spectrophotometric light curve spanning the time of transit or eclipse. We correct these light curves for instrumental systematics without reference to an instrument model. For our transmission spectra, our mean 1σ precision per bin corresponds to variations of 2.1, 2.8, and 3.0 atmospheric scale heights for TrES-2b, TrES-4b, and CoRoT-1b, respectively. We find featureless spectra for these three planets. We are unable to extract a robust transmission spectrum for WASP-4b. For our dayside emission spectra, our mean 1σ precision per bin corresponds to a planet-to-star flux ratio of 1.5 × 10⁻⁴ and 2.1 × 10⁻⁴ for WASP-4b and TrES-3b, respectively. We combine these estimates with previous broadband measurements and conclude that for both planets isothermal atmospheres are disfavored. We find no signs of features due to water. We confirm that WFC3 is suitable for studies of transiting exoplanets, but in staring mode multivisit campaigns are necessary to place strong constraints on water abundance.

  16. KMTNET: A Network of 1.6 m Wide-Field Optical Telescopes Installed at Three Southern Observatories

    Science.gov (United States)

    Kim, Seung-Lee; Lee, Chung-Uk; Park, Byeong-Gon; Kim, Dong-Jin; Cha, Sang-Mok; Lee, Yongseok; Han, Cheongho; Chun, Moo-Young; Yuk, Insoo

    2016-02-01

    The Korea Microlensing Telescope Network (KMTNet) is a wide-field photometric system installed by the Korea Astronomy and Space Science Institute (KASI). Here, we present the overall technical specifications of the KMTNet observation system, test observation results, data transfer and image processing procedure, and finally, the KMTNet science programs. The system consists of three 1.6 m wide-field optical telescopes equipped with mosaic CCD cameras of 18k by 18k pixels. Each telescope provides a 2.0 by 2.0 square degree field of view. We have finished installing all three telescopes and cameras sequentially at the Cerro-Tololo Inter-American Observatory (CTIO) in Chile, the South African Astronomical Observatory (SAAO) in South Africa, and the Siding Spring Observatory (SSO) in Australia. This network of telescopes, which is spread over three different continents at a similar latitude of about -30 degrees, enables 24-hour continuous monitoring of targets observable in the Southern Hemisphere. The test observations showed good image quality that meets the seeing requirement of less than 1.0 arcsec in I-band. All of the observation data are transferred to the KMTNet data center at KASI via the international network communication and are processed with the KMTNet data pipeline. The primary scientific goal of the KMTNet is to discover numerous extrasolar planets toward the Galactic bulge by using the gravitational microlensing technique, especially earth-mass planets in the habitable zone. During the non-bulge season, the system is used for wide-field photometric survey science on supernovae, asteroids, and external galaxies.

  17. SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.

    Science.gov (United States)

    Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi

    2018-03-01

    Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.

  18. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment. Testing such a system with a traditional infrared camera test system and a separate visible CCD test system requires two rounds of installation and alignment. The large-aperture test system for infrared and visible CCD cameras instead shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, giving it good market prospects.
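The multiple-frame averaging step mentioned above is straightforward; a minimal sketch (frame count and noise level are invented for illustration):

```python
import numpy as np

def average_frames(frames):
    """Average N captures of a static target to suppress temporally
    uncorrelated noise; the noise standard deviation drops
    roughly as 1/sqrt(N)."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

# Illustration: 64 noisy captures of the same static target
rng = np.random.default_rng(0)
target = np.zeros((32, 32))
frames = [target + rng.normal(0.0, 1.0, target.shape) for _ in range(64)]
avg = average_frames(frames)
print(np.std(frames[0]), np.std(avg))   # residual noise ~1/8 of one frame's
```

Averaging N uncorrelated frames reduces the noise standard deviation by a factor of √N, here a factor of 8 for N = 64.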

  19. Optical camera system for radiation field

    International Nuclear Information System (INIS)

    Maki, Koichi; Senoo, Makoto; Takahashi, Fuminobu; Shibata, Keiichiro; Honda, Takuro.

    1995-01-01

    An infrared-ray camera comprises a filter transmitting only infrared rays of a specific wavelength, such as far-infrared rays, and a lens used exclusively for infrared rays. A photoelectric image converter incorporating an infrared-ray emitting device, a focusing lens, and a semiconductor image pick-up plate is disposed at a place of low gamma-ray dose rate. Infrared rays emitted from an objective member pass through the lens system of the camera, and real images are formed by way of the filter. They are transferred by image fibers to the photoelectric image converter and focused on the image pick-up plate by the image-forming lens, where they are converted into electric signals, sent to a display, and monitored. With such a constitution, an optical material used exclusively for infrared rays, for example ZnSe, can be used for the lens system and the optical transmission system. Accordingly, the camera can be used in a radiation field of high gamma-ray dose rate, such as around the periphery of the reactor container. (I.N.)

  20. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with a light field camera such as the Lytro Illum. This Lytro depth map contains much correct depth information and can be used for a higher-quality estimate. In this paper, we present a novel, simple, and principled algorithm that computes dense depth estimates by combining the defocus, correspondence, and Lytro depth estimates. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field display.
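The two EPI cues described above can be sketched in a few lines (a schematic illustration of the cue computation only, not the full depth solver from the paper):

```python
import numpy as np

def epi_depth_cues(epi):
    """Given a 2D epipolar image epi[angle, space], return the two
    per-pixel cue responses used in defocus/correspondence depth
    estimation: the spatial gradient of the angular average
    (defocus cue) and the variance across the angular dimension
    (correspondence cue)."""
    angular_mean = epi.mean(axis=0)               # integrate over angle
    defocus = np.abs(np.gradient(angular_mean))   # spatial gradient
    correspondence = epi.var(axis=0)              # angular variance
    return defocus, correspondence
```

For an in-focus scene point the EPI lines are vertical, so the angular variance vanishes; shear in the EPI raises the correspondence response and blurs the angular average, which is what the combination step exploits when picking the best depth hypothesis per pixel.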

  1. Rapid wide-field Mueller matrix polarimetry imaging based on four photoelastic modulators with no moving parts.

    Science.gov (United States)

    Alali, Sanaz; Gribble, Adam; Vitkin, I Alex

    2016-03-01

    A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.

  2. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185 ● OCT 2017. US Army Research Laboratory technical report: "Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras," by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  3. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    Science.gov (United States)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes.

  4. Examination of the "Ultra-wide-angle Compton camera" in Fukushima

    International Nuclear Information System (INIS)

    Takeda, Shin'ichiro; Watanabe, Shin; Takahashi, Tadayuki

    2012-01-01

    The Japan Aerospace Exploration Agency (JAXA) has developed the camera named in the title, which can visualize gamma-ray-emitting radioactive substances over a wide-angle view of almost 180 degrees (a hemisphere); this paper explains its technological details and an actual field examination in Iitatemura Village, Fukushima Prefecture. The camera has a detector module consisting of a 5-layer stack of 2 silicon double-sided strip detector (Si-DSD) layers and 3 CdTe-DSD layers at 4 mm pitch; their device size and electrode pitch are made identical, which allows the detector tray and the analog application-specific integrated circuit (ASIC) to share common read-out circuits and reduces cost. Two modules are placed side by side to increase sensitivity and were car-mounted, operating at -5 degrees for the examination. The CdTe-DSD uses a Pt cathode and an Al anode (Pt/CdTe/Al) to reduce leakage current and improve the energy resolution for the ¹³⁷Cs gamma ray (662 keV). Data from the detector are digital pulse-height values, which are converted into hit information giving the detected position and energy. Hit events in which Compton scattering in Si is followed by photoelectric absorption in CdTe are selected and back-projected onto the celestial hemisphere; each event yields a torus that depends on the direction of the gamma ray, and the accumulation of many tori localizes the source. In the village, at an ambient dose rate of 2-3 μSv/h, locally accumulated radioactive substances (30 μSv/h) were successfully visualized. With the soft gamma-ray detector under development at JAXA for the ASTRO-H satellite, an improved camera could be made more sensitive and might be useful, for example, for monitoring decontamination results in real time. (T.T.)

  5. Flat-field response and geometric distortion measurements of optical streak cameras

    International Nuclear Information System (INIS)

    Montgomery, D.S.; Drake, R.P.; Jones, B.A.; Wiedwald, J.D.

    1987-01-01

    To accurately measure pulse amplitude, shape, and relative time histories of optical signals with an optical streak camera, it is necessary to correct each recorded image for spatially-dependent gain nonuniformity and geometric distortion. Gain nonuniformities arise from sensitivity variations in the streak-tube photocathode, phosphor screen, image-intensifier tube, and image recording system. By using a 1.053-μm, long-pulse, high-power laser to generate a spatially and temporally uniform source as input to the streak camera, the combined effects of flat-field response and geometric distortion can be measured under the normal dynamic operation of cameras with S-1 photocathodes. Additionally, by using the same laser system to generate a train of short pulses that can be spatially modulated at the input of the streak camera, the authors can create a two-dimensional grid of equally-spaced pulses. This allows a dynamic measurement of the geometric distortion of the streak camera. The authors discuss the techniques involved in performing these calibrations, present some of the measured results for LLNL optical streak cameras, and discuss software methods to correct for these effects.
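The flat-field correction described above amounts to dividing each recorded image by a unit-mean gain map measured with the uniform source; a minimal sketch (synthetic numbers, not LLNL calibration data):

```python
import numpy as np

def flat_field_correct(image, flat, eps=1e-12):
    """Divide a recorded streak image by the normalized flat-field
    response so that spatially-dependent gain nonuniformity is removed."""
    gain = flat / flat.mean()              # unit-mean gain map
    return image / np.maximum(gain, eps)   # eps guards against dead pixels

# A uniform input recorded through a nonuniform gain is restored
gain_true = np.linspace(0.5, 1.5, 16)[None, :] * np.ones((16, 16))
recorded = 100.0 * gain_true               # uniform scene, distorted by gain
corrected = flat_field_correct(recorded, flat=gain_true)
print(corrected.std())                     # ~0: uniformity restored
```

Geometric distortion would be corrected separately, by resampling the image onto the grid measured with the spatially modulated pulse train.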

  6. Observing GRBs with the LOFT Wide Field Monitor

    DEFF Research Database (Denmark)

    Brandt, Søren; Hernanz, M.; Feroci, M.

    2013-01-01

    The Large Observatory For X-ray Timing (LOFT) combines a Large Area Detector (LAD) with a Wide Field Monitor (WFM) instrument. The WFM is based on the coded mask principle, and 5 camera units will provide coverage of more than 1/3 of the sky. The prime goal of the WFM is to detect transient sources to be observed by the LAD. With its wide...

  7. Wide-Field Imaging Using Nitrogen Vacancies

    Science.gov (United States)

    Englund, Dirk Robert (Inventor); Trusheim, Matthew Edwin (Inventor)

    2017-01-01

    Nitrogen vacancies in bulk diamonds and nanodiamonds can be used to sense temperature, pressure, electromagnetic fields, and pH. Unfortunately, conventional sensing techniques use gated detection and confocal imaging, limiting the measurement sensitivity and precluding wide-field imaging. Conversely, the present sensing techniques do not require gated detection or confocal imaging and can therefore be used to image temperature, pressure, electromagnetic fields, and pH over wide fields of view. In some cases, wide-field imaging supports spatial localization of the NVs to precisions at or below the diffraction limit. Moreover, the measurement range can extend over an extremely wide dynamic range at very high sensitivity.

  8. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Fraser, Wesley C.; Brown, Michael E.; Glass, Florian

    2015-01-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in the optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a difference in color between the two epochs broad enough to span the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  9. Wide-field time-correlated single photon counting (TCSPC) microscopy with time resolution below the frame exposure time

    Energy Technology Data Exchange (ETDEWEB)

    Hirvonen, Liisa M. [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom); Petrášek, Zdeněk [Max Planck Institute of Biochemistry, Department of Cellular and Molecular Biophysics, Am Klopferspitz 18, D-82152 Martinsried (Germany); Suhling, Klaus, E-mail: klaus.suhling@kcl.ac.uk [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)

    2015-07-01

    Fast frame rate CMOS cameras in combination with photon counting intensifiers can be used for fluorescence imaging with single photon sensitivity at kHz frame rates. We show here how the phosphor decay of the image intensifier can be exploited for accurate timing of photon arrival well below the camera exposure time. This is achieved by taking ratios of the intensity of the photon events in two subsequent frames, and effectively allows wide-field TCSPC. This technique was used for measuring decays of ruthenium compound Ru(dpp) with lifetimes as low as 1 μs with 18.5 μs frame exposure time, including in living HeLa cells, using around 0.1 μW excitation power. We speculate that by using an image intensifier with a faster phosphor decay to match a higher camera frame rate, photon arrival time measurements on the nanosecond time scale could well be possible.
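    The two-frame ratio timing described above can be sketched as follows, assuming an ideal single-exponential phosphor decay and that all light remaining after the arrival frame lands in the next frame (reasonable when the decay completes before a third frame). Function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def split_intensities(t_a, T, tau):
    """Fractions of a phosphor pulse (single-exponential decay, time
    constant tau) collected in the arrival frame [0, T] and in the
    following frame, for a photon event at time t_a within [0, T].
    Assumption: all light remaining after the frame boundary is
    collected in the next frame."""
    tail = np.exp(-(T - t_a) / tau)  # fraction of light left at the boundary
    return 1.0 - tail, tail          # (frame 1, frame 2)

def arrival_time(I1, I2, T, tau):
    """Invert the two-frame intensity ratio to recover the photon
    arrival time within the exposure."""
    frac = I2 / (I1 + I2)            # fraction spilling past the boundary
    return T + tau * np.log(frac)
```

    For example, with the paper's 18.5 μs exposure and a few-microsecond phosphor decay, an event mid-frame splits its light across both frames in a ratio that uniquely encodes its arrival time.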

  10. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and analysis of biomedical signals and images for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for HMA in biomedical applications.

  11. Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system

    Science.gov (United States)

    Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong

    2018-01-01

    We introduce Wide-Field Imaging Telescope-0 (WIT0), with an automatic observing system. It is developed for monitoring the variabilities of many sources at a time, e.g. young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as supernovae or gamma-ray bursts. In 2017 February, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field-of-view with a 4k × 4k CCD camera (FLI ML16803). To improve the observational efficiency of the system, we developed new automatic observing software, KAOS30 (KHU Automatic Observing Software for McDonald 30-inch telescope), written in Visual C++ for the Windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports instruments that use the ASCOM driver, additional hardware installation is considerably simplified. We commissioned KAOS30 in 2017 August and are in the process of testing it. Based on the WIT0 experiences, we will extend KAOS30 to control multiple telescopes in future projects.

  12. Sky light polarization detection with linear polarizer triplet in light field camera inspired by insect vision.

    Science.gov (United States)

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin

    2015-10-20

    Stable information from the sky light polarization pattern can be used for navigation, with advantages such as better anti-interference performance and no "error cumulative effect." But existing methods of sky light polarization measurement either have weak real-time performance or require a complex system. Inspired by the navigational capability of Cataglyphis with its compound eyes, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Not only the real-time detection but also the simple, low-cost architecture demonstrates the superiority of the approach proposed in this paper.
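    For illustration, three linear polarizers at 0°, 60° and 120° determine the linear Stokes parameters of skylight in closed form by inverting the intensity relation I(θ) = ½(S0 + S1 cos 2θ + S2 sin 2θ). The following minimal sketch is a textbook inversion, not the authors' pipeline; all names are ours.

```python
import numpy as np

def stokes_from_triplet(I0, I60, I120):
    """Recover linear Stokes parameters from intensities behind linear
    polarizers at 0, 60 and 120 degrees, using
    I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    S0 = (2.0 / 3.0) * (I0 + I60 + I120)
    S1 = (2.0 / 3.0) * (2.0 * I0 - I60 - I120)
    S2 = (2.0 / np.sqrt(3.0)) * (I60 - I120)
    dolp = np.hypot(S1, S2) / S0        # degree of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)      # angle of polarization (radians)
    return S0, S1, S2, dolp, aop
```

    Applied per light-field sub-image, such a formula yields the degree and angle of polarization across the sky in a single snapshot.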

  13. Michelson wide-field stellar interferometry

    NARCIS (Netherlands)

    Montilla, I.

    2004-01-01

    The main goal of this thesis is to develop a system to permit wide field operation of Michelson Interferometers. A wide field of view is very important in applications such as the observation of extended or multiple objects, the fringe acquisition and/ or tracking on a nearby unresolved object, and

  14. Flat-field response and geometric distortion measurements of optical streak cameras

    International Nuclear Information System (INIS)

    Montgomery, D.S.; Drake, R.P.; Jones, B.A.; Wiedwald, J.D.

    1987-08-01

    To accurately measure pulse amplitude, shape, and relative time histories of optical signals with an optical streak camera, it is necessary to correct each recorded image for spatially-dependent gain nonuniformity and geometric distortion. Gain nonuniformities arise from sensitivity variations in the streak-tube photocathode, phosphor screen, image-intensifier tube, and image recording system. These nonuniformities may be severe, and have been observed to be on the order of 100% for some LLNL optical streak cameras. Geometric distortion due to optical couplings, electron-optics, and sweep nonlinearity not only affects pulse position and timing measurements, but affects pulse amplitude and shape measurements as well. By using a 1.053-μm, long-pulse, high-power laser to generate a spatially and temporally uniform source as input to the streak camera, the combined effects of flat-field response and geometric distortion can be measured under the normal dynamic operation of cameras with S-1 photocathodes. Additionally, by using the same laser system to generate a train of short pulses that can be spatially modulated at the input of the streak camera, we can effectively create a two-dimensional grid of equally-spaced pulses. This allows a dynamic measurement of the geometric distortion of the streak camera. We will discuss the techniques involved in performing these calibrations, will present some of the measured results for LLNL optical streak cameras, and will discuss software methods to correct for these effects. 6 refs., 6 figs

  15. Cost-effective and compact wide-field fluorescent imaging on a cell-phone.

    Science.gov (United States)

    Zhu, Hongying; Yaglidere, Oguzhan; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan

    2011-01-21

    We demonstrate wide-field fluorescent and darkfield imaging on a cell-phone with compact, light-weight and cost-effective optical components that are mechanically attached to the existing camera unit of the cell-phone. For this purpose, we used battery powered light-emitting diodes (LEDs) to pump the sample of interest from the side using butt-coupling, where the pump light was guided within the sample cuvette to uniformly excite the specimen. The fluorescent emission from the sample was then imaged using an additional lens that was positioned right in front of the existing lens of the cell-phone camera. Because the excitation occurs through guided waves that propagate perpendicular to our detection path, an inexpensive plastic colour filter was sufficient to create the dark-field background required for fluorescent imaging, without the need for a thin-film interference filter. We validate the performance of this platform by imaging various fluorescent micro-objects in 2 colours (i.e., red and green) over a large field-of-view (FOV) of ∼81 mm² with a raw spatial resolution of ∼20 μm. With additional digital processing of the captured cell-phone images, through the use of compressive sampling theory, we demonstrate ∼2 fold improvement in our resolving power, achieving ∼10 μm resolution without a trade-off in our FOV. Further, we also demonstrate darkfield imaging of non-fluorescent specimen using the same interface, where this time the scattered light from the objects is detected without the use of any filters. The capability of imaging a wide FOV would be exceedingly important to probe large sample volumes (e.g., >0.1 mL) of, e.g., blood, urine, sputum or water, and for this end we also demonstrate fluorescent imaging of labeled white-blood cells from whole blood samples, as well as water-borne pathogenic protozoan parasites such as Giardia lamblia cysts. Weighing only ∼28 g (∼1 ounce), this compact and cost-effective fluorescent imaging platform

  16. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    Science.gov (United States)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method that sets the discrimination level lower, a high sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  17. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  18. The Fifteen-Year Attitude History of the Wide Field Planetary Camera 2 Radiator and Collection Efficiencies for Micrometeoroids and Orbital Debris

    Science.gov (United States)

    Anz-Meador, Phillip D.; Liou, Jer-Chyi; Cooke, William J.; Koehler, H.

    2010-01-01

    An examination of the Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC-2) radiator assembly was conducted at NASA Goddard Space Flight Center (GSFC) during the summer of 2009. Immediately apparent was a distinct biasing of the largest 45 impact features towards one side of the radiator, in contrast to an approximately uniform distribution of smaller impacts. Such a distribution may be a consequence of the HST's attitude history and pointing requirements for the cold radiator, or of environmental effects, such as an anisotropic distribution of the responsible population in that size regime. Understanding the size-dependent spatial distribution of impact features is essential to the general analysis of these features. We have obtained from GSFC a 15 minute temporal resolution record of the state vector (Earth Centered Inertial position and velocity) and HST attitude, consisting of the orientation of the velocity and HST-sun vectors in HST body coordinates. This paper reviews the actual state vector and attitude history of the radiator in the context of the randomly tumbling plate assumption and assesses the statistical likelihood (or collection efficiency) of the radiator for the micrometeoroid and orbital debris environments. The NASA Marshall Space Flight Center's Meteoroid Environment Model is used to assess the micrometeoroid component. The NASA Orbital Debris Engineering Model (ORDEM) is used to model the orbital debris component. Modeling results are compared with observations of the impact feature spatial distribution, and the relative contribution of each environmental component is examined in detail.

  19. Wide-field fundus autofluorescence corresponds to visual fields in chorioretinitis patients

    Directory of Open Access Journals (Sweden)

    Seidensticker F

    2011-11-01

    Full Text Available Florian Seidensticker1, Aljoscha S Neubauer1, Tamer Wasfy1,2, Carmen Stumpf1, Stephan R Thurau1,*, Anselm Kampik1, Marcus Kernt1,* (1Department of Ophthalmology, Ludwig-Maximilians-University, Munich, Germany; 2Department of Ophthalmology, Tanta University, Tanta, Egypt; *both authors contributed equally to this work).
    Background and objectives: Detection of peripheral fundus autofluorescence (FAF) using conventional scanning laser ophthalmoscopes (SLOs) is difficult and requires pupil dilation. Here we evaluated the diagnostic properties of wide-field FAF detected by a two-laser-wavelength wide-field SLO in uveitis patients.
    Study design/materials and methods: Observational case series of four patients suffering from different types of posterior uveitis/chorioretinitis. Wide-field FAF images were compared to visual fields. Panretinal FAF was detected by a newly developed SLO, which allows FAF imaging of up to 200° of the retina in one scan without the need for pupil dilation. Visual fields were obtained by Goldmann manual perimetry.
    Results: Findings from wide-field FAF imaging showed correspondence to visual field defects in all cases.
    Conclusion: Wide-field FAF allowed the detection of visual field defect-related alterations of the retinal pigment epithelium in all four uveitis cases.
    Keywords: fundus autofluorescence (FAF), Optomap, wide-field scanning laser ophthalmoscopy, imaging, uveitis, visual field

  20. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
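    The mapping step rests on intersecting bearing rays observed from two platform positions. A minimal least-squares two-ray triangulation, a deliberate simplification that ignores the authors' non-central mirror model, can be sketched as:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two rays x = c_i + t_i * d_i.
    Minimises ||(c1 + t1*d1) - (c2 + t2*d2)|| and returns the midpoint
    of the closest approach, a standard two-view mapping primitive."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 | -d2] @ [t1, t2] = c2 - c1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return 0.5 * (p1 + p2)
```

    Iterating localization and triangulation of new points, as the abstract describes, then builds up the map of the indoor environment.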

  1. Speckle correlation resolution enhancement of wide-field fluorescence imaging (Conference Presentation)

    Science.gov (United States)

    Yilmaz, Hasan

    2016-03-01

    Structured illumination enables high-resolution fluorescence imaging of nanostructures [1]. We demonstrate a new high-resolution fluorescence imaging method that uses a scattering layer with a high-index substrate as a solid immersion lens [2]. Random scattering of coherent light creates a speckle pattern with a very fine structure that illuminates the fluorescent nanospheres on the back surface of the high-index substrate. The speckle pattern is raster-scanned over the fluorescent nanospheres using a speckle correlation effect known as the optical memory effect. A series of standard-resolution fluorescence images for each speckle pattern displacement are recorded by an electron-multiplying CCD camera using a commercial microscope objective. We have developed a new phase-retrieval algorithm to reconstruct a high-resolution, wide-field image from several standard-resolution wide-field images. We have introduced phase information of the Fourier components of the standard-resolution images as a new constraint in our algorithm, which discards ambiguities and therefore ensures convergence to a unique solution. We demonstrate two-dimensional fluorescence images of a collection of nanospheres with a deconvolved Abbe resolution of 116 nm and a field of view of 10 µm × 10 µm. Our method is robust against optical aberrations and stage drifts, and is therefore excellent for imaging nanostructures under ambient conditions. [1] M. G. L. Gustafsson, J. Microsc. 198, 82-87 (2000). [2] H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, Optica 2, 424-429 (2015).

  2. EVALUATION OF THE QUALITY OF ACTION CAMERAS WITH WIDE-ANGLE LENSES IN UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    H. Hastedt

    2016-06-01

    Full Text Available The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. Herewith the GoPro Hero4 is evaluated using different acquisition modes. It is investigated to which extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore different calibration setups and processing procedures are assessed and discussed. Additionally a pre-correction of the initial distortion by GoPro Studio and its application to the photogrammetric purposes will be evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. Herewith it is analysed to which extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.

  3. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry

    Science.gov (United States)

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-06-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. Herewith the GoPro Hero4 is evaluated using different acquisition modes. It is investigated to which extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore different calibration setups and processing procedures are assessed and discussed. Additionally a pre-correction of the initial distortion by GoPro Studio and its application to the photogrammetric purposes will be evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. Herewith it is analysed to which extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.
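    For reference, the fish-eye model calibrated by OpenCV's fisheye module maps the angle θ between a ray and the optical axis to a distorted angle θ_d = θ(1 + k1θ² + k2θ⁴ + k3θ⁶ + k4θ⁸) before scaling by the focal length. A numpy-only sketch of this projection follows; the focal length, principal point and coefficients are illustrative values, not GoPro Hero4 calibration results.

```python
import numpy as np

def project_fisheye(X, f, cx, cy, k):
    """Project a 3D point (camera frame) with the equidistant fish-eye
    model used by OpenCV's fisheye module:
    theta_d = theta * (1 + k1*th^2 + k2*th^4 + k3*th^6 + k4*th^8)."""
    x, y, z = X
    r = np.hypot(x, y)
    theta = np.arctan2(r, z)          # angle from the optical axis
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 0 else 0.0
    return cx + f * scale * x, cy + f * scale * y
```

    Unlike the perspective model r = f·tan θ, the equidistant model stays finite out to (and beyond) 90° off-axis, which is why the standard pinhole-plus-polynomial calibration breaks down for such lenses.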

  4. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  5. Characterization of a direct detection device imaging camera for transmission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Milazzo, Anna-Clare, E-mail: amilazzo@ncmir.ucsd.edu [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Moldovan, Grigore [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Lanman, Jason [Department of Molecular Biology, The Scripps Research Institute, La Jolla, CA 92037 (United States); Jin, Liang; Bouwer, James C. [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Klienfelder, Stuart [University of California at Irvine, Irvine, CA 92697 (United States); Peltier, Steven T.; Ellisman, Mark H. [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Kirkland, Angus I. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Xuong, Nguyen-Huu [University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States)

    2010-06-15

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras.

  6. Characterization of a direct detection device imaging camera for transmission electron microscopy

    International Nuclear Information System (INIS)

    Milazzo, Anna-Clare; Moldovan, Grigore; Lanman, Jason; Jin, Liang; Bouwer, James C.; Klienfelder, Stuart; Peltier, Steven T.; Ellisman, Mark H.; Kirkland, Angus I.; Xuong, Nguyen-Huu

    2010-01-01

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras.

  7. IOT Overview: Wide-Field Imaging

    Science.gov (United States)

    Selman, F. J.

    The Wide Field Imager (WFI) instrument at La Silla has been the workhorse of wide-field imaging instruments at ESO for several years. In this contribution I will summarize the issues relating to its productivity for the community both in terms of the quality and quantity of data that has come out of it. Although only surveys of limited scope have been completed using WFI, it is ESO's stepping-stone to the new generation of survey telescopes.

  8. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    Science.gov (United States)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g. to sequester carbon is largely driven by the phenological cycle of vegetation. Timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted on 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect phenological development of birches (Betula spp.) along the latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our cameras they often appear in smaller quantities within dominant species in the images. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network from the same region. Automatically extracted season start dates based on the change of green color fraction in the spring corresponded well with the visually interpreted start of season, and field observed budburst dates. During the declining season, red color fraction turned out to be superior over green color based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that already small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species specific analyses of phenological timing will be useful for
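    The green- and red-fraction indices used in such phenocam analyses are simple chromatic coordinates of an image region. A minimal sketch follows; the ROI handling is ours, not the network's processing chain.

```python
import numpy as np

def chromatic_coordinates(roi):
    """Green and red chromatic coordinates of an RGB region of interest
    (H x W x 3 array): the per-channel mean divided by total brightness.
    GCC tracks spring green-up; RCC tracks autumn leaf colouring."""
    r, g, b = (roi[..., i].astype(float).mean() for i in range(3))
    total = r + g + b
    return g / total, r / total      # (GCC, RCC)
```

    Applied to the scattered birch image elements over a season, the rise of GCC marks the start of season and the rise of RCC marks leaf yellowing and fall.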

  9. Wide-Field Optic for Autonomous Acquisition of Laser Link

    Science.gov (United States)

    Page, Norman A.; Charles, Jeffrey R.; Biswas, Abhijit

    2011-01-01

    An innovation reported in Two-Camera Acquisition and Tracking of a Flying Target, NASA Tech Briefs, Vol. 32, No. 8 (August 2008), p. 20, used a commercial fish-eye lens and an electronic imaging camera for initially locating objects with subsequent handover to an actuated narrow-field camera. But this operated against a dark-sky background. An improved solution involves an optical design based on custom optical components for the wide-field optical system that directly addresses the key limitations in acquiring a laser signal from a moving source such as an aircraft or a spacecraft. The first challenge was to increase the light collection entrance aperture diameter, which was approximately 1 mm in the first prototype. The new design presented here increases this entrance aperture diameter to 4.2 mm, which is equivalent to a more than 16 times larger collection area. One of the trades made in realizing this improvement was to restrict the field-of-view to +80 deg. elevation and 360° azimuth. This trade stems from practical considerations where laser beam propagation over the excessively high air mass, which is in the line of sight (LOS) at low elevation angles, results in vulnerability to severe atmospheric turbulence and attenuation. An additional benefit of the new design is that the large entrance aperture is maintained even at large off-axis angles when the optic is pointed at zenith. The second critical limitation for implementing spectral filtering in the design was tackled by collimating the light prior to focusing it onto the focal plane. This allows the placement of the narrow spectral filter in the collimated portion of the beam. For the narrow band spectral filter to function properly, it is necessary to adequately control the range of incident angles at which received light intercepts the filter. When this angle is restricted via collimation, narrower spectral filtering can be implemented.
The collimated beam (and the filter) must be relatively large to

  10. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have a large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide angle fish-eye lenses with a viewing angle not less than 180°. This comparison proves that distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open source toolkits for omnidirectional optoelectronic systems. A geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera. This field of view is characterized by an array of 3D points in the object space. At the second step, the array of corresponding pixels for these three-dimensional points is calculated. Then we calculate the projection function that expresses the relation between a given 3D point in the object space and a corresponding pixel point. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. At the last step, the final image is formed pixel-by-pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm gives the possibility of obtaining an image for a part of the field of view of an omnidirectional optoelectronic system with corrected distortion from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses.
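    The three steps of the algorithm can be sketched end to end, assuming an ideal equidistant fish-eye projection r = f·θ in place of the per-camera calibrated projection function the paper uses; all names and parameters here are illustrative.

```python
import numpy as np

def undistort_to_pinhole(omni, f_omni, cx, cy, f_pin, out_w, out_h):
    """Render a virtual pinhole view from an omnidirectional image.
    Step 1: cast a 3D ray for every virtual-camera pixel.
    Step 2: map each ray to a source pixel with the (assumed
            equidistant, r = f_omni * theta) projection function.
    Step 3: form the output image pixel-by-pixel (nearest neighbour)."""
    out = np.zeros((out_h, out_w), dtype=omni.dtype)
    for v in range(out_h):
        for u in range(out_w):
            # Step 1: ray direction in the virtual pinhole camera
            x = (u - out_w / 2) / f_pin
            y = (v - out_h / 2) / f_pin
            z = 1.0
            # Step 2: equidistant fish-eye projection of that ray
            r = np.hypot(x, y)
            theta = np.arctan2(r, z)
            if r > 0:
                su = cx + f_omni * theta * x / r
                sv = cy + f_omni * theta * y / r
            else:
                su, sv = cx, cy
            # Step 3: nearest-neighbour lookup in the source image
            si, sj = int(round(sv)), int(round(su))
            if 0 <= si < omni.shape[0] and 0 <= sj < omni.shape[1]:
                out[v, u] = omni[si, sj]
    return out
```

    A production version would replace the ideal projection with the calibrated function and use interpolated sampling, but the three-step structure is the same.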

  11. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage combined with high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each managed by its own processor. Such an array can capture the entire field of view continuously, but collecting all of the data and running the back-end detection algorithms consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames from each, and process them through a modified target detection algorithm; during this time, the other sensors remain powered down, which reduces the required hardware and the power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
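
    The duty-cycling scheme (power one sensor, grab a fixed burst of frames, process, power down, move on) can be sketched as below; the `Camera` class and the `detect` callback are hypothetical stand-ins for the real sensor interface and detection algorithm.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Hypothetical sensor interface; real hardware would replace this."""
    cam_id: int
    powered: bool = False

    def power_up(self):
        self.powered = True

    def power_down(self):
        self.powered = False

    def grab(self, n):
        assert self.powered, "sensor must be powered to capture"
        return ["cam%d-frame%d" % (self.cam_id, i) for i in range(n)]

def survey_cycle(cameras, frames_per_dwell, detect):
    """One pass over the array: only one sensor draws power at a time,
    which is where the up-to-N-fold power reduction comes from."""
    detections = []
    for cam in cameras:
        cam.power_up()
        burst = cam.grab(frames_per_dwell)   # fixed number of frames
        cam.power_down()                     # sensor sleeps until next pass
        detections.extend(detect(burst))
    return detections

cams = [Camera(i) for i in range(4)]
# toy "detector": flag the first frame of every burst
hits = survey_cycle(cams, 3,
                    detect=lambda frames: [f for f in frames if f.endswith("frame0")])
```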

  12. ROBUST PERSON TRACKING WITH MULTIPLE NON-OVERLAPPING CAMERAS IN AN OUTDOOR ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    S. Hellwig

    2012-07-01

    Full Text Available The aim of our work is to combine multiple cameras for robust tracking of persons in an outdoor environment. Although surveillance is a well-established field, many algorithms impose constraints such as overlapping fields of view or precise calibration of the cameras to improve results, which makes applying these systems in a realistic outdoor environment difficult. Our aim is to be largely independent of the camera setup and the observed scene, in order to use existing cameras; our algorithm therefore needs to be capable of working with both overlapping and non-overlapping fields of view. We propose an algorithm that allows flexible combination of different static cameras with varying properties. Another requirement for practical application is that the algorithm works online: our system processes the data during runtime and provides results immediately. In addition to seeking flexibility in the camera setup, we present a specific approach that combines state-of-the-art algorithms in order to be robust to environmental influences. We present results that indicate good performance of the introduced algorithm in different scenarios and show its robustness to different types of image artifacts. In addition, we demonstrate that our algorithm is able to match persons between cameras in a non-overlapping scenario.

  13. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  14. AWARE Wide Field View

    Science.gov (United States)

    2016-04-29

    G. Anderson, S. D. Feller, E. M. Vera, H. S. Son, S.-H. Youn, J. Kim, M. E. Gehm, D. J. Brady, J. M. Nichols, K. P. Judd, M. D. Duncan, J. R...scale in monocentric gigapixel cameras." Applied Optics 50(30): 5824-5833. Tremblay, E. J., et al. (2012). "Design and scaling of monocentric...cameras. Optomechanical Engineering 2013. A. E. Hatheway. 8836. Youn, S. H., et al. (2013). Efficient testing methodologies for microcameras in a

  15. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed glare areas, brightening dark areas, and accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It gives the medical user the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors and can search for stored images by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can fill the whole screen area. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  16. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and concluded that consumer-grade digital cameras are expected to become a useful photogrammetric device for various close-range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we face the epoch-making question of whether mobile phone cameras are able to take the place of consumer-grade digital cameras in close-range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close-range photogrammetry, this paper presents a comparative evaluation of mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of a mobile phone camera for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.

  17. A method of camera calibration with adaptive thresholding

    Science.gov (United States)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images; generally speaking, they are points of high curvature that lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When applying the SUSAN corner detection algorithm, we propose an approach to set the gray-difference threshold adaptively. That makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results based on this method proved it to be feasible.
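
    A minimal sketch of the SUSAN response with an adaptively chosen gray-difference threshold follows; the 0.2·std rule for the threshold is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def susan_corners(img, t=None, radius=3):
    """Minimal SUSAN corner response (illustrative, not the full detector).

    When t is None, the gray-difference threshold is picked adaptively from
    the image contrast (0.2 * std is an assumed rule, not the paper's).
    """
    img = img.astype(float)
    if t is None:
        t = max(0.2 * img.std(), 1.0)
    # offsets of a circular mask around the nucleus pixel
    offs = [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius and (dy, dx) != (0, 0)]
    usan = np.zeros_like(img)
    for dy, dx in offs:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        usan += (np.abs(shifted - img) < t).astype(float)  # "similar" neighbours
    g = 0.5 * len(offs)            # geometric threshold: half the mask area
    resp = np.where(usan < g, g - usan, 0.0)
    resp[:radius, :] = 0.0         # discard wrap-around borders from np.roll
    resp[-radius:, :] = 0.0
    resp[:, :radius] = 0.0
    resp[:, -radius:] = 0.0
    return resp
```

    Local maxima of this response would then be taken as corner candidates.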

  18. A 3D technique for simulation of irregular electron treatment fields using a digital camera

    International Nuclear Information System (INIS)

    Bassalow, Roustem; Sidhu, Narinder P.

    2003-01-01

    Cerrobend inserts, which define electron field apertures, are manufactured at our institution using perspex templates. Contours are reproduced manually on these templates at the simulator from the field outlines drawn on the skin or mask of a patient. A previously reported technique for simulation of electron treatment fields uses a digital camera to eliminate the need for such templates. However, avoidance of the image distortions introduced by non-flat surfaces on which the electron field outlines were drawn could only be achieved by limiting the application of this technique to surfaces which were flat or near flat. We present a technique that employs a digital camera and allows simulation of electron treatment fields contoured on an anatomical surface of an arbitrary three-dimensional (3D) shape, such as that of the neck, extremities, face, or breast. The procedure is fast, accurate, and easy to perform

  19. Robust sky light polarization detection with an S-wave plate in a light field camera.

    Science.gov (United States)

    Zhang, Wenjing; Zhang, Xuanzhe; Cao, Yu; Liu, Haibo; Liu, Zejin

    2016-05-01

    The sky light polarization navigator has many advantages, such as low cost, no decrease in accuracy with continuous operation, etc. However, current celestial polarization measurement methods often suffer from low performance when the sky is covered by clouds, which reduce the accuracy of navigation. In this paper we introduce a new method and structure based on a handheld light field camera and a radial polarizer, composed of an S-wave plate and a linear polarizer, to detect the sky light polarization pattern across a wide field of view in a single snapshot. Each micro-subimage has a special intensity distribution. After extracting the texture feature of these subimages, stable distribution information of the angle of polarization under a cloudy sky can be obtained. Our experimental results match well with the predicted properties of the theory. Because the polarization pattern is obtained through image processing, rather than traditional methods based on mathematical computation, this method is less sensitive to errors of pixel gray value and thus has better anti-interference performance.

  20. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital terrain models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and by unknown interior orientation parameters; their use therefore requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration test fields, and various software. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and a description of the different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that were used. The Sony NEX-5 camera calibration was done using the software Image Master Calib, the Matlab Camera Calibrator application, and Agisoft Lens. 2D test fields were used for the study. As part of the research, a comparative analysis of the results was done.
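
    The optical distortions such calibration tools model are commonly expressed with the Brown model; a minimal sketch (two radial coefficients k1, k2 and two tangential coefficients p1, p2 acting on normalized image coordinates) is:

```python
def distort(x, y, k1, k2, p1, p2):
    """Brown distortion model on normalized image coordinates (x, y):
    radial terms scale the point outward/inward, tangential (decentering)
    terms shift it off-axis."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

    Calibration estimates k1, k2, p1, p2 (together with the interior orientation) so that this mapping can be inverted to undistort measured image points.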

  1. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made in human tracking techniques over camera networks.

  2. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    Science.gov (United States)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  3. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements of various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and literature. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and literature, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes a set of combined benchmarking metrics, which includes both quality and speed parameters.
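
    A combined quality-plus-speed score of the kind proposed could look like the following sketch; the min-max normalization and the 0.6/0.4 weighting are illustrative assumptions, not the paper's metric.

```python
def benchmark(phones, w_quality=0.6):
    """Rank phones by a weighted mix of image quality and shutter lag.

    phones: {name: (quality_score, shutter_lag_seconds)}. Quality is
    better-high, lag is better-low; both are min-max normalized before
    the weighted combination (the weighting is an assumption).
    """
    qs = [q for q, _ in phones.values()]
    lags = [l for _, l in phones.values()]
    qlo, qhi = min(qs), max(qs)
    llo, lhi = min(lags), max(lags)
    scores = {}
    for name, (q, lag) in phones.items():
        qn = (q - qlo) / (qhi - qlo) if qhi > qlo else 1.0
        sn = (lhi - lag) / (lhi - llo) if lhi > llo else 1.0  # faster -> higher
        scores[name] = w_quality * qn + (1.0 - w_quality) * sn
    return sorted(scores, key=scores.get, reverse=True)
```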

  4. Optical registration of spaceborne low light remote sensing camera

    Science.gov (United States)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision registration requirements of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. A scheme covering system-integration optical registration and registration accuracy for a spaceborne low-light remote sensing camera with short focal depth and wide field of view is proposed in this paper, together with an analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show that the images are clear and that the MTF and registration accuracy meet requirements, providing an important guarantee of high-quality image data in orbit.

  5. PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems

    Directory of Open Access Journals (Sweden)

    Petr Janout

    2017-02-01

    Full Text Available Ultra-wide field of view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. Describing UWFOV imaging systems, such as microscopy optics, security camera systems, and other special space-variant imaging systems, is a difficult task that can be achieved by estimating the Point Spread Function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using the Zernike polynomial wavefront description. The PSF estimation algorithm obtains field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system with an iterative approach, starting from an initial estimate of the fitting parameters to ensure convergence robustness. The method is promising as an alternative to the standard approach based on Shack-Hartmann wavefront sensing, since the aberration coefficients are estimated directly in the image plane. The approach is tested on simulated and laboratory-acquired image data, which generally show good agreement, and the resulting data are compared with the results of other modeling methods. The proposed PSF estimation method provides around 5% accuracy of the optical system model.
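
    The link between Zernike coefficients and the PSF can be illustrated with a single defocus term: build the pupil wavefront from the coefficient, then take the squared magnitude of its Fourier transform. This is a generic Fraunhofer-propagation sketch, not the paper's space-variant estimator.

```python
import numpy as np

def zernike_defocus(rho):
    """Z_2^0 (defocus), sqrt(3)*(2*rho**2 - 1); one term stands in for the
    field-dependent expansion used in the paper."""
    return np.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)

def psf_from_wavefront(coeff, n=128):
    """PSF of a circular pupil carrying `coeff` waves of defocus:
    |FFT of the pupil function|^2, normalized to unit energy."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho = np.hypot(x, y)
    pupil = (rho <= 1.0).astype(float)
    wavefront = coeff * zernike_defocus(rho)           # wavefront error in waves
    field = pupil * np.exp(2j * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()
```

    Fitting in the image plane amounts to adjusting the coefficients until such a model PSF matches the measured one at each field angle.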

  6. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes, employing non-browning faceplate glass, for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  7. Deepest Wide-Field Colour Image in the Southern Sky

    Science.gov (United States)

    2003-01-01

    LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH ESO PR Photo 02a/03 Caption: PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S), obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin²; North is up and East is left. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (The Oven), have enabled them to construct a very deep, true-colour image, opening an exceptionally clear view towards the distant universe. The image (PR Photo 02a/03) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the ESO La Silla Observatory (Chile), many of them extracted from the ESO Science Data Archive. The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory, launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S). The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields

  8. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case, because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Various methods have therefore been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments, including calibrations, reconstructions, and measurements, are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of GNIM is slightly lower compared with the three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  9. Hydra phantom applicability for carrying out tests of field uniformity in gamma cameras

    International Nuclear Information System (INIS)

    Aragao Filho, Geraldo L.; Oliveira, Alex C.H.

    2014-01-01

    Nuclear Medicine is a medical modality that makes use of radioactive material 'in vivo' in humans, making them a temporary radioactive source. The radiation emitted by the patient's body is detected by specific equipment, called a gamma camera, which creates an image showing the spatial and temporal biodistribution of the radioactive material administered to the patient. A number of specific measures, collectively called quality control, are therefore of fundamental importance to ensure that the procedure is satisfactory. In Nuclear Medicine, quality control of the gamma camera has the purpose of ensuring accurate, truthful, and reliable scintigraphic imaging for diagnosis, guaranteeing visibility and clarity of structural details, and also determining the frequency of, and need for, preventive maintenance of the equipment. Quality control of the gamma camera requires simulators, called phantoms, which are used in Nuclear Medicine to evaluate system performance, calibrate the system, and simulate injuries. The goal of this study was to validate a new simulator for Nuclear Medicine, the Hydra phantom. The phantom was initially built for the construction of calibration curves used in radiotherapy planning and for quality control in CT. It has characteristics similar to phantoms specific to Nuclear Medicine, containing inserts and a water area. The inserts are made of regionally sourced materials, many of them already used in the literature, chosen on the basis of information about density and the interaction of radiation with matter. To verify its efficiency for quality control in Nuclear Medicine, a field uniformity test was performed, one of the main tests performed daily, to verify the ability of the gamma camera to reproduce a uniform distribution of the activity administered into the phantom, analyzed qualitatively, through the image, and quantitatively, through established values for the Central Field Of View (CFOV) and Useful Field Of View (UFOV).
    Also evaluated were their

  10. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    The Japanese Quince 2 robot system uses 7 CCD/CMOS cameras. Two CCD cameras are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras monitor the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics monitors the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the refueling floor of the unit 2 reactor building of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate conditions on the unit 2 reactor building refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via a VDSL communication line, where the radiation conditions on the refueling floor can be assessed by monitoring the image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to numerical values. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.

  11. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The Japanese Quince 2 robot system uses 7 CCD/CMOS cameras. Two CCD cameras are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras monitor the status of the front-end and back-end motion mechanics, such as the flippers and crawlers. A CCD camera with wide-field-of-view optics monitors the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the refueling floor of the unit 2 reactor building of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate conditions on the unit 2 reactor building refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via a VDSL communication line, where the radiation conditions on the refueling floor can be assessed by monitoring the image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image must be converted to numerical values. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
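
    The decoding stage of such an approach can be illustrated with a seven-segment lookup; the segment encodings are standard, but the input (per-digit lit-segment strings) is a stand-in for what real thresholding and segmentation of the indicator region in the camera frame would produce.

```python
# standard seven-segment encodings: a=top, b=top-right, c=bottom-right,
# d=bottom, e=bottom-left, f=top-left, g=middle
SEGMENTS = {
    0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
}

def read_dose(lit_segments, decimals=0):
    """Decode per-digit lit-segment strings into a numerical dose-rate
    reading; `decimals` places the display's fixed decimal point."""
    lut = {frozenset(v): k for k, v in SEGMENTS.items()}
    value = 0
    for s in lit_segments:
        value = value * 10 + lut[frozenset(s)]
    return value / 10 ** decimals
```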

  12. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    Science.gov (United States)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations of an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive, and flexible detector for local investigation of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was successfully investigated.

  13. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing-water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  14. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is a key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
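    As a rough illustration of the backward-projection step, the sketch below resamples a single-camera strip onto a virtual detector. A 3×3 homography stands in for the rigorous imaging model (which in the paper involves orbit, attitude, and calibration data), and nearest-neighbour sampling replaces proper interpolation; the function names are hypothetical.

```python
import numpy as np

def backward_project(H, uv):
    """Map pixel coordinates through a 3x3 homography (a stand-in for the
    rigorous imaging model's forward/backward projection)."""
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])
    xyw = uv1 @ H.T
    return xyw[:, :2] / xyw[:, 2:3]

def resample_to_virtual(strip, H_virt_to_cam, out_shape):
    """Fill the virtual-detector image by backward-projecting each virtual
    pixel into the single-camera strip (nearest-neighbour sampling)."""
    h, w = out_shape
    vv, uu = np.mgrid[0:h, 0:w]
    uv = np.stack([uu.ravel(), vv.ravel()], axis=1).astype(float)
    src = np.rint(backward_project(H_virt_to_cam, uv)).astype(int)
    inside = ((src[:, 0] >= 0) & (src[:, 0] < strip.shape[1]) &
              (src[:, 1] >= 0) & (src[:, 1] < strip.shape[0]))
    out = np.zeros(out_shape, dtype=strip.dtype)
    flat = out.ravel()                      # view onto `out`
    flat[inside] = strip[src[inside, 1], src[inside, 0]]
    return out
```

    With the identity homography the virtual image reproduces the strip; a translation shifts it, with out-of-strip virtual pixels left empty, which is where the stitching of adjacent strips comes in.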

  15. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    Science.gov (United States)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle Helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixels InSb Aladdin III Quadrant focal plane array (FPA) having sensitivity to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ∼86.5″×86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ∼0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available world-wide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λ_cen ∼ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer-Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.

  16. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
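    The inter-camera quaternion analysis can be illustrated with a minimal sketch: given simultaneous attitude quaternions from the two star cameras, the inter-camera quaternion is q12 = conj(q1) · q2, and for small misalignments its vector part gives the per-axis rotation, here scaled to arcseconds. This is a toy under the small-angle assumption, not the GRACE Level-1B processing; the function names are invented.

```python
import numpy as np

def q_conj(q):
    # quaternion as [w, x, y, z]
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def q_mul(a, b):
    # Hamilton product; works for single quaternions of shape (4,)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def inter_camera_angles_arcsec(q_head1, q_head2):
    """Small rotation between the two star-camera frames, in arcsec.
    For q12 = conj(q1) * q2, the vector part approximates half the
    rotation angle about each axis; 206265 arcsec per radian."""
    q12 = q_mul(q_conj(q_head1), q_head2)
    return 2.0 * q12[1:] * 206265.0
```

    Tracking this quantity over time (as the paper does from 2003 to 2015) exposes thermally driven bias drifts and periodic errors in its auto-covariance.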

  17. Cosmological implication of wide field Sunyaev-Zel'dovich galaxy clusters survey: exploration by simulation

    International Nuclear Information System (INIS)

    Juin, Jean-Baptiste

    2005-01-01

    The goal of my PhD research is to prepare the data analysis of the near-future wide-field observations of galaxy clusters detected via the Sunyaev-Zel'dovich effect. I set up a complete chain of original tools to carry out this study. These tools allow me to highlight critical selection effects that have to be taken into account in future analyses. The analysis chain is composed of: a simulation of the observed millimeter sky, state-of-the-art algorithms for extracting SZ galaxy clusters from observed maps, a statistical model of the selection effects of the whole detection chain and, finally, tools to constrain the cosmological parameters from the catalog of detected SZ sources. I focus on multi-channel experiments equipped with large bolometer cameras and use these tools for a prospective study of the Olimpo experiment. (author) [fr

  18. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
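    The camera-to-camera transform sought above can be seeded in closed form before the Levenberg-Marquardt refinement. The sketch below uses the Kabsch/SVD method to recover R and t from target feature points expressed in both cameras' coordinate systems; it is a simplified stand-in for the paper's reprojection-error optimization, not its actual implementation.

```python
import numpy as np

def rigid_transform(P_a, P_b):
    """Closed-form (Kabsch/SVD) estimate of R, t with P_b ~ R @ p + t for
    corresponding 3D points. In the paper's pipeline such an estimate
    would seed the Levenberg-Marquardt minimisation of reprojection error."""
    ca, cb = P_a.mean(axis=0), P_b.mean(axis=0)
    H = (P_a - ca).T @ (P_b - cb)           # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

    With noisy points the same routine gives a least-squares fit; the residuals after this step are what the bundle-style LM refinement then drives down.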

  19. Wide-field schematic eye models with gradient-index lens.

    Science.gov (United States)

    Goncharov, Alexander V; Dainty, Chris

    2007-08-01

    We propose a wide-field schematic eye model, which provides a more realistic description of the optical system of the eye in relation to its anatomical structure. The wide-field model incorporates a gradient-index (GRIN) lens, which enables it to fulfill properties of two well-known schematic eye models, namely, Navarro's model for off-axis aberrations and Thibos's chromatic on-axis model (the Indiana eye). These two models are based on extensive experimental data, which makes the derived wide-field eye model also consistent with that data. A mathematical method to construct a GRIN lens with its iso-indicial contours following the optical surfaces of given asphericity is presented. The efficiency of the method is demonstrated with three variants related to different age groups. The role of the GRIN structure in relation to the lens paradox is analyzed. The wide-field model with a GRIN lens can be used as a starting design for the eye inverse problem, i.e., reconstructing the optical structure of the eye from off-axis wavefront measurements. Anatomically more accurate age-dependent optical models of the eye could ultimately help an optical designer to improve wide-field retinal imaging.

  20. New design for the UCO/Lick Observatory CCD guide camera

    Science.gov (United States)

    Wei, Mingzhi; Stover, Richard J.

    1996-03-01

    A new CCD based field acquisition and telescope guiding camera is being designed and built at UCO/Lick Observatory. Our goal is a camera which is fully computer controllable, compact in size, versatile enough to provide a wide variety of image acquisition modes, and able to operate with a wide variety of CCD detectors. The camera will improve our remote-observing capabilities since it will be easy to control the camera and obtain images over the Observatory computer network. To achieve the desired level of operating flexibility, the design incorporates state-of-the-art technologies such as high density, high speed programmable logic devices and non-volatile static memory. Various types of CCDs can be used in this system without major modification of the hardware or software. Though fully computer controllable, the camera can be operated as a stand-alone unit with most operating parameters set locally. A stand-alone display subsystem is also available. A thermoelectric device is used to cool the CCD to about -45°C. Integration times can be varied over a range of 0.1 to 1000 seconds. High speed pixel skipping in both horizontal and vertical directions allows us to quickly access a selected subarea of the detector. Three different readout speeds allow the astronomer to select between high-speed/high-noise and low-speed/low-noise operation. On-chip pixel binning and MPP operation are also selectable options. This system can provide automatic sky level measurement and subtraction to accommodate dynamically changing background levels.

  1. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, so incident light at different wavelengths is refracted along different outgoing paths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
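    The four-step phase-shifting step can be sketched as follows. With fringe intensities I_k = A + B·cos(φ + kπ/2), the wrapped phase is φ = atan2(I3 − I1, I0 − I2); the per-pixel deviation between two colour channels then follows from their phase difference and the fringe period on the sensor. The helper names and the known-period assumption are illustrative, not the paper's exact pipeline (which also unwraps the phase via optimum fringe number selection).

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images shifted by pi/2:
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

def pixel_deviation(phi_ref, phi_ch, period_px):
    """Chromatic-aberration shift (in pixels) between two colour channels,
    assuming a known fringe period on the sensor."""
    dphi = np.angle(np.exp(1j * (phi_ch - phi_ref)))  # wrap to (-pi, pi]
    return dphi / (2 * np.pi) * period_px
```

    Applying `pixel_deviation` to the red/green and blue/green phase maps yields the full-field CA deviation maps that the calibration then corrects.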

  2. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  3. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  4. THE SPITZER DEEP, WIDE-FIELD SURVEY

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Brodwin, M.; Stern, D.; Griffith, R.; Eisenhardt, P.; Gorjian, V.; Kozlowski, S.; Kochanek, C. S.; Bock, J. J.; Borys, C.; Brand, K.; Grogin, N. A.; Brown, M. J. I.; Cool, R.; Cooray, A.; Croft, S.; Dey, A.; Eisenstein, D.; Gonzalez, A. H.; Ivison, R. J.

    2009-01-01

    The Spitzer Deep, Wide-Field Survey (SDWFS) is a four-epoch infrared survey of 10 deg² in the Boötes field of the NOAO Deep Wide-Field Survey using the IRAC instrument on the Spitzer Space Telescope. SDWFS, a Spitzer Cycle 4 Legacy project, occupies a unique position in the area-depth survey space defined by other Spitzer surveys. The four epochs that make up SDWFS permit, for the first time, the selection of infrared-variable and high proper motion objects over a wide field on timescales of years. Because of its large survey volume, SDWFS is sensitive to galaxies out to z ∼ 3 with relatively little impact from cosmic variance for all but the richest systems. The SDWFS data sets will thus be especially useful for characterizing galaxy evolution beyond z ∼ 1.5. This paper explains the SDWFS observing strategy and data processing, presents the SDWFS mosaics and source catalogs, and discusses some early scientific findings. The publicly released, full-depth catalogs contain 6.78, 5.23, 1.20, and 0.96 × 10⁵ distinct sources detected to the average 5σ, 4″-diameter, aperture-corrected limits of 19.77, 18.83, 16.50, and 15.82 Vega mag at 3.6, 4.5, 5.8, and 8.0 μm, respectively. The SDWFS number counts and color-color distribution are consistent with other, earlier Spitzer surveys. At the 6 minute integration time of the SDWFS IRAC imaging, >50% of isolated Faint Images of the Radio Sky at Twenty cm radio sources and >80% of on-axis XBoötes sources are detected out to 8.0 μm. Finally, we present the four highest proper motion IRAC-selected sources identified from the multi-epoch imaging, two of which are likely field brown dwarfs of mid-T spectral class.

  5. Lesion detection in ultra-wide field retinal images for diabetic retinopathy diagnosis

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2018-02-01

    Diabetic retinopathy (DR) leads to irreversible vision loss. Diagnosis and staging of DR is usually based on the presence, number, location and type of retinal lesions. Ultra-wide field (UWF) digital scanning laser technology provides an opportunity for computer-aided DR lesion detection. High-resolution UWF images (3078×2702 pixels) may allow detection of more clinically relevant retinopathy in comparison with conventional retinal images, as UWF imaging covers a 200° retinal area, versus 45° for conventional cameras. Current approaches to DR diagnosis that analyze 7-field Early Treatment Diabetic Retinopathy Study (ETDRS) retinal images provide similar results to UWF imaging. However, in 40% of cases, more retinopathy was found outside the 7-field ETDRS fields by UWF and in 10% of cases, retinopathy was reclassified as more severe. The reason is that UWF images examine both the central retina and more peripheral regions. We propose an algorithm for automatic detection and classification of DR lesions such as cotton wool spots, exudates, microaneurysms and haemorrhages in UWF images. The algorithm uses a convolutional neural network (CNN) as a feature extractor and classifies the feature vectors extracted from colour-composite UWF images using a support vector machine (SVM). The main contribution includes detection of four types of DR lesions in the peripheral retina for diagnostic purposes. The evaluation dataset contains 146 UWF images. The proposed method for detection of DR lesion subtypes in UWF images using two scenarios for transfer learning achieved AUC ≈ 80%. Data were split at the patient level to validate the proposed algorithm.
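    A minimal sketch of the classification stage, assuming the CNN feature vectors have already been extracted: an SVM is trained on the features, with the train/test split made at the patient level so that no patient's images appear in both subsets. It uses scikit-learn; the function name and synthetic setup are hypothetical, not the paper's code.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.svm import SVC

def train_lesion_classifier(features, labels, patient_ids, seed=0):
    """Train an RBF-kernel SVM on (CNN-extracted) feature vectors, holding
    out 30% of *patients* for testing (patient-level split)."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=seed)
    train_idx, test_idx = next(
        splitter.split(features, labels, groups=patient_ids))
    # sanity check: no patient spans both subsets
    assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])
    clf = SVC(kernel="rbf")
    clf.fit(features[train_idx], labels[train_idx])
    return clf, clf.score(features[test_idx], labels[test_idx])
```

    Splitting by patient rather than by image is what prevents the optimistic bias of near-duplicate images from one eye landing on both sides of the split.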

  6. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measuring camera noise characteristics, standard-based methods (for example, EMVA Standard 1288) are the most widely used. They allow precise measurement of shot and dark temporal noise but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT); with the modified method, only two frames are sufficient for noise measurement. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time needed to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed within a fraction of a second to several seconds.
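    The two-frame idea can be sketched as follows: for pixels grouped by mean signal, half the variance of the frame difference estimates the temporal noise variance, and a straight-line fit of variance versus signal separates the Poisson (shot) slope from the dark-noise intercept. This is a simplified stand-in for the ASNT method, which additionally segments a nonuniform target automatically; the function name is invented.

```python
import numpy as np

def shot_noise_from_two_frames(frame_a, frame_b, n_bins=32):
    """Two-frame temporal-noise estimate: for pixels binned by mean signal,
    E[(a - b)^2] / 2 is the temporal variance, which for a photosensor
    grows linearly with signal (Poisson shot noise) plus a constant
    dark-noise term."""
    a = frame_a.astype(float).ravel()
    b = frame_b.astype(float).ravel()
    mean = 0.5 * (a + b)
    half_sq_diff = 0.5 * (a - b) ** 2           # per-pixel variance sample
    edges = np.linspace(mean.min(), mean.max(), n_bins + 1)
    idx = np.digitize(mean, edges) - 1
    sig, var = [], []
    for k in range(n_bins):
        sel = idx == k
        if sel.sum() > 50:                      # skip sparsely populated bins
            sig.append(mean[sel].mean())
            var.append(half_sq_diff[sel].mean())
    slope, intercept = np.polyfit(sig, var, 1)  # Var ~ slope*S + dark_var
    return slope, intercept
```

    On a simulated Poisson ramp with added Gaussian read noise, the fitted slope recovers the photon-transfer gain and the intercept the dark temporal variance, mirroring the Poisson agreement reported above.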

  7. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-Of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and marked reference points are often unavailable in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras.

  8. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...

  9. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  10. HUBBLE SPACE TELESCOPE SPECTROSCOPY OF BROWN DWARFS DISCOVERED WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER

    International Nuclear Information System (INIS)

    Schneider, Adam C.; Cushing, Michael C.; Kirkpatrick, J. Davy; Gelino, Christopher R.; Mace, Gregory N.; Wright, Edward L.; Eisenhardt, Peter R.; Skrutskie, M. F.; Griffith, Roger L.; Marsh, Kenneth A.

    2015-01-01

    We present a sample of brown dwarfs identified with the Wide-field Infrared Survey Explorer (WISE) for which we have obtained Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) near-infrared grism spectroscopy. The sample (22 in total) was observed with the G141 grism covering 1.10–1.70 μm, while 15 were also observed with the G102 grism, which covers 0.90–1.10 μm. The additional wavelength coverage provided by the G102 grism allows us to (1) search for spectroscopic features predicted to emerge at low effective temperatures (e.g., ammonia bands) and (2) construct a smooth spectral sequence across the T/Y boundary. We find no evidence of absorption due to ammonia in the G102 spectra. Six of these brown dwarfs are new discoveries, three of which are found to have spectral types of T8 or T9. The remaining three, WISE J082507.35+280548.5 (Y0.5), WISE J120604.38+840110.6 (Y0), and WISE J235402.77+024015.0 (Y1), are the 19th, 20th, and 21st spectroscopically confirmed Y dwarfs to date. We also present HST grism spectroscopy and reevaluate the spectral types of five brown dwarfs for which spectral types have been determined previously using other instruments.

  11. HUBBLE SPACE TELESCOPE SPECTROSCOPY OF BROWN DWARFS DISCOVERED WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Adam C.; Cushing, Michael C. [Department of Physics and Astronomy, University of Toledo, 2801 W. Bancroft St., Toledo, OH 43606 (United States); Kirkpatrick, J. Davy; Gelino, Christopher R. [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, Pasadena, CA 91125 (United States); Mace, Gregory N.; Wright, Edward L. [Department of Physics and Astronomy, UCLA, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States); Eisenhardt, Peter R. [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109 (United States); Skrutskie, M. F. [Department of Astronomy, University of Virginia, 530 McCormick Road, Charlottesville, VA 22904 (United States); Griffith, Roger L. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Marsh, Kenneth A., E-mail: Adam.Schneider@Utoledo.edu [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom)

    2015-05-10

    We present a sample of brown dwarfs identified with the Wide-field Infrared Survey Explorer (WISE) for which we have obtained Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) near-infrared grism spectroscopy. The sample (22 in total) was observed with the G141 grism covering 1.10–1.70 μm, while 15 were also observed with the G102 grism, which covers 0.90–1.10 μm. The additional wavelength coverage provided by the G102 grism allows us to (1) search for spectroscopic features predicted to emerge at low effective temperatures (e.g., ammonia bands) and (2) construct a smooth spectral sequence across the T/Y boundary. We find no evidence of absorption due to ammonia in the G102 spectra. Six of these brown dwarfs are new discoveries, three of which are found to have spectral types of T8 or T9. The remaining three, WISE J082507.35+280548.5 (Y0.5), WISE J120604.38+840110.6 (Y0), and WISE J235402.77+024015.0 (Y1), are the 19th, 20th, and 21st spectroscopically confirmed Y dwarfs to date. We also present HST grism spectroscopy and reevaluate the spectral types of five brown dwarfs for which spectral types have been determined previously using other instruments.

  12. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  13. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kabuki, Shigeto, E-mail: kabuki@cr.scphys.kyoto-u.ac.j [Department of Physics, Gradulate School of Science, Kyoto University, Kyoto 606-8502 (Japan); Kimura, Hiroyuki; Amano, Hiroo [Department of Patho-functional Bioanalysis, Graduate School of Pharmaceutical Sciences, Kyoto University, Kyoto 606-8501 (Japan); Nakamoto, Yuji [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki [Department of Physics, Gradulate School of Science, Kyoto University, Kyoto 606-8502 (Japan); Kawashima, Hidekazu [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Ueda, Masashi [Radioisotopes Research Labaoratory, Kyoto University Hospital, Kyoto 606-8507 (Japan); Okada, Tomohisa [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Kyoto 606-8507 (Japan); Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki [Department of Radiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji [Application Development Office, Hitachi Medical Corporation, Chiba 277-0804 (Japan); Ogawa, Koichi [Department of Electronic Informatics, Faculty of Engineering, Hosei University, Tokyo 184-8584 (Japan)

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  14. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  15. Michelson wide-field stellar interferometry : Principles and experimental verification

    NARCIS (Netherlands)

    Montilla, I.; Pereira, S.F.; Braat, J.J.M.

    2005-01-01

    A new interferometric technique for Michelson wide-field interferometry is presented that consists of a Michelson pupil-plane combination scheme in which a wide field of view can be achieved in one shot. This technique uses a stair-shaped mirror in the intermediate image plane of each telescope in

  16. THE LUMINOSITY, MASS, AND AGE DISTRIBUTIONS OF COMPACT STAR CLUSTERS IN M83 BASED ON HUBBLE SPACE TELESCOPE/WIDE FIELD CAMERA 3 OBSERVATIONS

    International Nuclear Information System (INIS)

    Chandar, Rupali; Whitmore, Bradley C.; Mutchler, Max; Bond, Howard; Kim, Hwihyun; Kaleida, Catherine; Calzetti, Daniela; Saha, Abhijit; O'Connell, Robert; Balick, Bruce; Carollo, Marcella; Disney, Michael; Dopita, Michael A.; Frogel, Jay A.; Hall, Donald; Holtzman, Jon A.; Kimble, Randy A.; McCarthy, Patrick; Paresce, Francesco; Silk, Joe

    2010-01-01

    The newly installed Wide Field Camera 3 (WFC3) on the Hubble Space Telescope has been used to obtain multi-band images of the nearby spiral galaxy M83. These new observations are the deepest and highest resolution images ever taken of a grand-design spiral, particularly in the near-ultraviolet, and allow us to better differentiate compact star clusters from individual stars and to measure the luminosities of even faint clusters in the U band. We find that the luminosity function (LF) for clusters outside of the very crowded starburst nucleus can be approximated by a power law, dN/dL ∝ L^α, with α = -2.04 ± 0.08, down to M_V ≈ -5.5. We test the sensitivity of the LF to different selection techniques, filters, binning, and aperture correction determinations, and find that none of these contribute significantly to uncertainties in α. We estimate ages and masses for the clusters by comparing their measured UBVI, Hα colors with predictions from single stellar population models. The age distribution of the clusters can be approximated by a power law, dN/dτ ∝ τ^γ, with γ = -0.9 ± 0.2, for M ≳ few × 10^3 M_⊙ and τ ≲ 10^8 yr. This indicates that clusters are disrupted quickly, with ∼80%-90% disrupted each decade in age over this time. The mass function of clusters over the same M-τ range is a power law, dN/dM ∝ M^β, with β = -1.94 ± 0.16, and does not have bends or show curvature at either high or low masses. Therefore, we do not find evidence for a physical upper mass limit, M_C, or for the earlier disruption of lower mass clusters when compared with higher mass clusters, i.e., mass-dependent disruption. We briefly discuss these implications for the formation and disruption of the clusters.
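
    A power-law slope like the α quoted above can be recovered from a luminosity sample by a least-squares fit in log-log space. The sketch below uses synthetic data with a known slope (it is an illustration of the fitting idea, not the paper's photometry or selection procedure):

    ```python
    import numpy as np

    def power_law_slope(values, bins=10):
        """Estimate the slope alpha of dN/dL ~ L^alpha from a sample of
        luminosities via a least-squares fit in log-log space."""
        counts, edges = np.histogram(values, bins=np.logspace(
            np.log10(values.min()), np.log10(values.max()), bins + 1))
        centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
        widths = np.diff(edges)
        mask = counts > 0                          # avoid log(0)
        # dN/dL is the count per unit luminosity in each bin
        logx = np.log10(centers[mask])
        logy = np.log10(counts[mask] / widths[mask])
        slope, _ = np.polyfit(logx, logy, 1)
        return slope

    # Draw a synthetic sample with known slope alpha = -2 and recover it.
    rng = np.random.default_rng(42)
    u = rng.uniform(size=200_000)
    # Inverse-transform sampling for p(L) ~ L^-2 on [1, 100]:
    # CDF(L) = (1 - 1/L) / (1 - 1/100)  =>  L = 1 / (1 - 0.99 * u)
    sample = 1.0 / (1.0 - 0.99 * u)
    alpha = power_law_slope(sample)
    ```

    With geometric bin centers the estimator is unbiased for a pure power law, so the recovered slope should land close to -2.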

  17. Small Field of View Scintimammography Gamma Camera Integrated to a Stereotactic Core Biopsy Digital X-ray System

    Energy Technology Data Exchange (ETDEWEB)

    Andrew Weisenberger; Fernando Barbosa; T. D. Green; R. Hoefer; Cynthia Keppel; Brian Kross; Stanislaw Majewski; Vladimir Popov; Randolph Wojcik

    2002-10-01

    A small field of view gamma camera has been developed for integration with a commercial stereotactic core biopsy system. The goal is to develop and implement a dual-modality imaging system utilizing scintimammography and digital radiography to evaluate the reliability of scintimammography in predicting the malignancy of suspected breast lesions from conventional X-ray mammography. The scintimammography gamma camera is a custom-built mini gamma camera with an active area of 5.3 cm × 5.3 cm and is based on a 2 × 2 array of Hamamatsu R7600-C8 position-sensitive photomultiplier tubes. The spatial resolution of the gamma camera at the collimator surface is < 4 mm full-width at half-maximum, with a sensitivity of ∼4000 Hz/mCi. The system is also capable of acquiring dynamic scintimammographic data to allow for dynamic uptake studies. Sample images of preliminary clinical results are presented to demonstrate the performance of the system.

  18. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. This work has several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updated with the latest mobile phone versions.
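
    The abstract does not specify how the single benchmarking score is assembled; as a hedged illustration only, one common approach is a weighted mean over normalized metrics. The metric names and weights below are invented for the example, not taken from the paper:

    ```python
    def combined_score(metrics, weights):
        """Combine normalized camera metrics (0-1, higher is better) into
        a single benchmarking score as a weighted arithmetic mean."""
        assert set(metrics) == set(weights)
        total_w = sum(weights.values())
        return sum(weights[k] * metrics[k] for k in metrics) / total_w

    # Example: a camera strong on image quality but slow shot-to-shot.
    metrics = {
        "sharpness": 0.85,     # e.g. a normalized MTF score
        "visual_noise": 0.70,  # inverted so that higher is better
        "shot_to_shot": 0.40,  # speed: normalized inverse latency
        "autofocus": 0.55,
    }
    weights = {"sharpness": 3, "visual_noise": 2,
               "shot_to_shot": 2, "autofocus": 1}
    score = combined_score(metrics, weights)
    ```

    The weighting lets a benchmark designer trade quality against speed explicitly; the score here evaluates to 0.6625.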

  19. Review The Ooty Wide Field Array

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. DOI 10.1007/s12036-017-9430-4. A review of the Ooty Wide Field Array, describing the salient features of the upgrade as well as its main science drivers.

  20. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development continues on advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  1. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    possess versatile and unique readout capabilities that have established their utility in scientific and especially radiation-field applications. A detector for neutron radiography based on a cooled CID camera offers some capabilities, as follows: - Extended linear dynamic range up to 10^9 without blooming or streaking; - Arbitrary pixel selection and nondestructive readout makes it possible to introduce a high degree of exposure control to low-light viewing of static scenes; - Read multiple areas of interest of an image within a given frame at higher rates; - Wide spectral response (185 nm - 1100 nm); - CIDs tolerate high radiation environments up to 3 Mrad integrated dose; - The contiguous pixel structure of CID arrays contributes to accurate imaging because there are virtually no opaque areas between pixels. (author)

  2. WFIRST: Astrometry with the Wide-Field Imager

    Science.gov (United States)

    Bellini, Andrea; WFIRST Astrometry Working Group

    2018-01-01

    The wide field of view and stable, sharp images delivered by WFIRST's Wide-Field Imager make it an excellent instrument for astrometry, one of five major discovery areas identified in the 2010 Decadal Survey. Compared to the Hubble Space Telescope, WFIRST's wider field of view with similar image quality will provide hundreds more astrometric targets per image as well as background galaxies and stars with precise positions in the Gaia catalog. In addition, WFIRST will operate in the infrared, a wavelength regime where the most precise astrometry has so far been achieved with adaptive optics images from large ground-based telescopes. WFIRST will provide at least a factor of three improvement in astrometry over the current state of the art in this wavelength range, while spanning a field of view thousands of times larger. WFIRST is thus poised to make major contributions to multiple science topics in which astrometry plays an important role, without major alterations to the planned mission or instrument. We summarize a few of the most compelling science cases where WFIRST astrometry could prove transformational.

  3. Be Foil ''Filter Knee Imaging'' NSTX Plasma with Fast Soft X-ray Camera

    International Nuclear Information System (INIS)

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-01-01

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of a m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  4. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
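
    At the core of such a camera control model is a mapping between pan/tilt commands and directions on the spherical panoramic viewspace. The sketch below (axis conventions assumed for illustration; a real PTZ model also needs zoom and calibration, as the abstract notes) shows the forward and inverse mapping:

    ```python
    import math

    def pan_tilt_to_unit_vector(pan_deg, tilt_deg):
        """Map PTZ pan/tilt angles to a unit direction vector in the
        camera's spherical viewspace (pan about the vertical axis,
        tilt measured up from the horizon)."""
        pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
        x = math.cos(tilt) * math.sin(pan)
        y = math.sin(tilt)
        z = math.cos(tilt) * math.cos(pan)
        return (x, y, z)

    def unit_vector_to_pan_tilt(v):
        """Inverse mapping, useful for repositioning the camera toward
        a target located on the spherical panorama."""
        x, y, z = v
        return (math.degrees(math.atan2(x, z)),
                math.degrees(math.asin(y)))

    v = pan_tilt_to_unit_vector(30.0, 10.0)
    pan, tilt = unit_vector_to_pan_tilt(v)
    ```

    Composing this per-camera mapping with a registration of each viewsphere onto the aerial orthophotograph is what yields the unified geo-referenced representation described above.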

  5. Stray-field-induced Faraday contributions in wide-field Kerr microscopy and -magnetometry

    International Nuclear Information System (INIS)

    Markó, D.; Soldatov, I.; Tekielak, M.; Schäfer, R.

    2015-01-01

    The magnetic domain contrast in wide-field Kerr microscopy on bulk specimens can be substantially distorted by non-linear, field-dependent Faraday rotations in the objective lens that are caused by stray-field components emerging from the specimen. These Faraday contributions, which were detected by Kerr-magnetometry on grain-oriented iron–silicon steel samples, are thoroughly elaborated and characterized. They express themselves as a field-dependent gray-scale offset to the domain contrast and in highly distorted surface magnetization curves if optically measured in a wide field Kerr microscope. An experimental method to avoid such distortions is suggested. In the course of these studies, a low-permeability part in the surface magnetization loop of slightly misoriented (110)-surfaces in iron–silicon sheets was discovered that is attributed to demagnetization effects in direction perpendicular to the sheet surface. - Highlights: • Magnetizing a finite sample in a Kerr microscope leads to sample-generated stray-fields. • They cause non-linear, field- and position-dependent Faraday rotations in the objective. • This leads to a modulation of the Kerr contrast and to distorted MOKE loops. • A method to compensate these Faraday rotations is presented

  6. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  7. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest growing consumer markets today. During the past few years total volumes have grown quickly, and millions of mobile phones with cameras are now sold. At the same time the resolution and functionality of the cameras have grown from CIF towards DSC level. From the camera point of view, the mobile world is an extremely challenging field. Cameras should deliver good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  8. Advanced MOKE magnetometry in wide-field Kerr-microscopy

    Science.gov (United States)

    Soldatov, I. V.; Schäfer, R.

    2017-10-01

    The measurement of MOKE (Magneto-Optical Kerr Effect) magnetization loops in a wide-field Kerr microscope offers the advantage that the relevant domain images along the loop can be readily recorded. As the microscope's objective lens is exposed to the magnetic field, the loops are usually strongly distorted by non-linear Faraday rotations of the polarized light that occur in the objective lens and are superimposed on the MOKE signal. In this paper, an experimental method based on a motorized analyzer is introduced which allows the Faraday contributions to be compensated, leading to pure MOKE loops. A wide-field Kerr microscope equipped with this technology works well as a laser-based MOKE magnetometer, additionally offering domain images and thus providing the basis for loop interpretation.

  9. On-Line High Dose-Rate Gamma Ray Irradiation Test of the CCD/CMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In this paper, test results of gamma ray irradiation of CCD/CMOS cameras are described. From the CAMS (containment atmospheric monitoring system) data of the Fukushima Dai-ichi nuclear power plant station, we found that the gamma ray dose-rate when the hydrogen explosions occurred in nuclear reactors 1-3 was about 160 Gy/h. If it is assumed that an emergency response robot for the management of a severe accident at the nuclear power plant has been sent into the reactor area to grasp the situation inside the reactor building and to take precautionary measures against the release of radioactive materials, the CCD/CMOS cameras mounted on the robot serve as the eyes of the emergency response robot. In the case of the Japanese Quince robot system, which was sent to investigate the situation on the unit 2 reactor building refueling floor, 7 CCD/CMOS cameras are used. 2 CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation. And 2 CCD (or CMOS) cameras are used for monitoring the status of front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field of view optics is used for monitoring the status of the communication (VDSL) cable reel. And another 2 CCD cameras are assigned for reading the indication values of the radiation dosimeter and the instrument. Under these assumptions, a major problem which arises when dealing with CCD/CMOS cameras in severe accident situations at the nuclear power plant is the presence of high dose-rate gamma irradiation fields. In the case of DBA (design basis accident) situations at the nuclear power plant, in order to use a CCD/CMOS camera as an ad-hoc monitoring unit in the vicinity of high radioactivity structures and components of the nuclear reactor area, a robust survivability of this camera in such intense gamma-radiation fields therefore should be verified.
The CCD/CMOS cameras of various types were gamma irradiated at a

  10. A Compton camera application for the GAMOS GEANT4-based framework

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J., E-mail: ljh@ns.ph.liv.ac.uk [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Arce, P. [Department of Basic Research, CIEMAT, Madrid (Spain); Judson, D.S.; Boston, A.J.; Boston, H.C.; Cresswell, J.R.; Dormand, J.; Jones, M.; Nolan, P.J.; Sampson, J.A.; Scraggs, D.P.; Sweeney, A. [Oliver Lodge Laboratory, The University of Liverpool, Liverpool L69 7ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD (United Kingdom)

    2012-04-11

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
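
    The electronic collimation described above rests on Compton kinematics: the energies deposited in the scatterer and absorber determine the scattering angle, and hence a cone on which the source must lie. A minimal sketch of that reconstruction step (assuming full absorption of the scattered photon; this is the textbook relation, not the GAMOS implementation):

    ```python
    import math

    ELECTRON_REST_ENERGY_KEV = 511.0

    def compton_cone_angle_deg(e_scatter_kev, e_absorb_kev):
        """Opening angle of the Compton cone from the energy deposited in
        the scatterer (e_scatter) and absorber (e_absorb):
        cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0),
        where E' is the scattered photon energy (fully absorbed) and
        E0 = e_scatter + e_absorb is the incident photon energy."""
        e0 = e_scatter_kev + e_absorb_kev
        cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (
            1.0 / e_absorb_kev - 1.0 / e0)
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("energies are not kinematically consistent")
        return math.degrees(math.acos(cos_theta))

    # A 662 keV (137Cs) photon depositing 200 keV in the scatterer:
    theta = compton_cone_angle_deg(200.0, 462.0)
    ```

    Intersecting many such cones from successive events is what localizes the source within the wide field of view.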

  11. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical derivation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  12. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for number of nodes depicted, their intensity and localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  13. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    International Nuclear Information System (INIS)

    Vidal-Sicart, Sergi; Paredes, Pilar; Vermeeren, Lenka; Valdes-Olmos, Renato A.; Sola, Oriol

    2011-01-01

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for number of nodes depicted, their intensity and localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  14. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt inside the tank vapor space on loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system

  15. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  16. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
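
    The plane-induced homography at the heart of this tracker can be fitted from corresponding feet-point pairs with a standard Direct Linear Transform. The sketch below fabricates the correspondences from a known ground-truth homography (in the paper they come from FOV-line associations of live targets), so it illustrates only the estimation step:

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Direct Linear Transform: find H (3x3, defined up to scale)
        such that dst ~ H @ src for four or more ground-plane points."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The solution is the right singular vector of the smallest
        # singular value of the stacked constraint matrix.
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)

    def transfer(h, pt):
        """Map a point (e.g. a target's feet location) through H."""
        x, y, w = h @ np.array([pt[0], pt[1], 1.0])
        return (x / w, y / w)

    # Feet locations of the same targets as seen in camera A:
    cam_a = [(10, 10), (200, 15), (210, 180), (15, 170), (100, 90)]
    # Ground-truth homography, used here only to fabricate camera B views:
    H_true = np.array([[0.9, 0.05, 5.0],
                       [-0.02, 1.1, -3.0],
                       [1e-4, 2e-4, 1.0]])
    cam_b = [transfer(H_true, p) for p in cam_a]
    H = fit_homography(cam_a, cam_b)
    mapped = transfer(H, (50.0, 50.0))
    expected = transfer(H_true, (50.0, 50.0))
    ```

    The degeneracy check the paper mentions corresponds to rejecting point sets (e.g. near-collinear ones) for which this system is ill-conditioned.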

  17. 1-Million droplet array with wide-field fluorescence imaging for digital PCR.

    Science.gov (United States)

    Hatch, Andrew C; Fisher, Jeffrey S; Tovar, Armando R; Hsieh, Albert T; Lin, Robert; Pentoney, Stephen L; Yang, David L; Lee, Abraham P

    2011-11-21

    Digital droplet reactors are useful as chemical and biological containers to discretize reagents into picolitre or nanolitre volumes for analysis of single cells, organisms, or molecules. However, most DNA-based assays require processing of samples on the order of tens of microlitres and contain as few as one to as many as millions of fragments to be detected. Presented in this work is a droplet microfluidic platform and fluorescence imaging setup designed to better meet these high-throughput and high-dynamic-range needs by integrating multiple high-throughput droplet processing schemes on the chip. The design is capable of generating over 1-million, monodisperse, 50 picolitre droplets in 2-7 minutes that then self-assemble into high density 3-dimensional sphere-packing configurations in a large viewing chamber for visualization and analysis. This device then undergoes on-chip polymerase chain reaction (PCR) amplification and fluorescence detection to digitally quantify the sample's nucleic acid contents. Wide-field fluorescence images are captured using a low cost 21-megapixel digital camera and macro-lens with an 8-12 cm² field-of-view at 1× to 0.85× magnification, respectively. We demonstrate both end-point and real-time imaging ability to perform on-chip quantitative digital PCR analysis of the entire droplet array. Compared to previous work, this highly integrated design yields a 100-fold increase in the number of on-chip digitized reactors with simultaneous fluorescence imaging for digital PCR based assays.
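
    The digital quantification step relies on Poisson statistics: because template molecules partition randomly, the fraction p of positive droplets gives a mean occupancy of λ = -ln(1 - p) copies per droplet. A minimal sketch (the 50 pL droplet volume matches the device described above; the helper name is ours):

    ```python
    import math

    def droplets_to_copies(n_positive, n_total, droplet_volume_pl=50.0):
        """Poisson correction for digital PCR: convert a positive-droplet
        count into an estimated total copy number and a concentration in
        copies per microlitre."""
        p = n_positive / n_total
        if not 0 <= p < 1:
            raise ValueError("need at least one negative droplet")
        lam = -math.log(1.0 - p)          # mean copies per droplet
        total_copies = lam * n_total
        # 1 pL = 1e-6 uL, so divide by the droplet volume in microlitres.
        conc_per_ul = lam / (droplet_volume_pl * 1e-6)
        return total_copies, conc_per_ul

    # Half of one million droplets fluorescing positive:
    copies, conc = droplets_to_copies(500_000, 1_000_000)
    ```

    The correction matters at high occupancy: 50% positive droplets corresponds to ~0.69 copies per droplet on average, not 0.5, because some positive droplets hold multiple copies.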

  18. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°) against 45° in conventional cameras [3], allowing more clinically relevant retinopathy to be detected [4]. UWF images also provide a high resolution of 3078 × 2702 pixels. Current DR screening uses 7 overlapping conventional fundus images, and UWF images provide similar results [1,4]. However, in 40% of cases more retinopathy was found outside the 7 standard ETDRS fields by UWF, and in 10% of cases retinopathy was reclassified as more severe [4]. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR [6]. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages), in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Pattern features. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.
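    One of the texture descriptors named above, the local binary pattern, can be sketched for a single pixel as follows. This is a minimal 3×3, 8-neighbour variant; the paper's exact LBP configuration (radius, sampling, uniformity mapping) may differ.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre pixel of a
    3x3 patch: each neighbour whose intensity is >= the centre's
    contributes one bit, read clockwise from the top-left."""
    c = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i, j] >= c:
            code |= 1 << bit
    return code
```

    Histogramming these codes over a window yields the texture feature vector that, together with intensity and gradient histograms, feeds the pixel-based classifiers.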

  19. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence about, and distinguishing characteristics of, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. However, in real-world applications these approaches face many challenges, because the large set of multimedia data publicly available through photo-sharing and social-network sites is captured under uncontrolled conditions and undergoes a variety of hardware and software post-processing operations. Moreover, the legal system only accepts forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a classification of ongoing developments within the specified area. The classification of existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics

  20. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having an accurate intrinsic calibration are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package produces large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
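    The two uses of accurate intrinsics mentioned above, back-projecting a pixel through the intrinsic matrix K to a 3D ray and then intersecting rays from two cameras, can be sketched as follows. The matrix K and camera poses are illustrative; this is not code from the thesis or from the camera_calibration package.

```python
import numpy as np

def pixel_to_ray(K, pixel):
    """Back-project a pixel through intrinsic matrix K to a unit-length
    viewing ray in the camera frame."""
    d = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return d / np.linalg.norm(d)

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3D rays (origin o_i,
    unit direction d_i): solve
        sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

    Any error in K bends every back-projected ray, which is exactly why variance in the calibrated intrinsics propagates into triangulated 3D positions.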

  1. The LOFT wide field monitor simulator

    DEFF Research Database (Denmark)

    Donnarumma, I.; Evangelista, Y.; Campana, R.

    2012-01-01

    We present the simulator we developed for the Wide Field Monitor (WFM) aboard the Large Observatory For Xray Timing (LOFT) mission, one of the four ESA M3 candidate missions considered for launch in the 2022–2024 timeframe. The WFM is designed to cover a large FoV in the same bandpass as the Large...

  2. Streak electronic camera with slow-scanning storage tube used in the field of high-speed cineradiography

    International Nuclear Information System (INIS)

    Marilleau, J.; Bonnet, L.; Garcin, G.; Guix, R.; Loichot, R.

    The cineradiographic machine designed for measurements in the field of detonics consists of a linear accelerator associated with a braking target, a scintillator and a remote controlled electronic camera. The quantum factor of X-ray detection and the energetic efficiency of the scintillator are given. The electronic camera is built upon a deflection-converter tube (RCA C. 73 435 AJ) coupled by optical fibres to a photosensitive storage tube (TH-CSF Esicon) used in a slow-scanning process with electronic recording of the information. The different parts of the device are described. Some capabilities such as data processing numerical outputs, measurements and display are outlined. A streak cineradiogram of a typical implosion experiment is given [fr

  3. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    Science.gov (United States)

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate interface for selecting an ROI, surgeons can also obtain a detailed local view, as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features. The proposed interface thus supports solo surgeries without a camera assistant.

  4. Radiometric Cross-Calibration of GAOFEN-1 Wfv Cameras with LANDSAT-8 Oli and Modis Sensors Based on Radiation and Geometry Matching

    Science.gov (United States)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) without onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance- and included-angle weighting method were introduced to correct misregistration of WFV-OLI image pairs. A radiative transfer model was used to eliminate the difference between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, so a Look-Up Table (LUT) of SMF as a function of water vapor amount was established to estimate water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
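    A spectral match factor of the kind described can be sketched as a ratio of band-averaged radiances computed over the two sensors' relative spectral responses. The response curves and wavelength grid below are illustrative stand-ins, not the actual WFV or OLI response functions.

```python
import numpy as np

def _trapz(y, x):
    """Plain trapezoidal integration (avoids NumPy version differences
    between np.trapz and np.trapezoid)."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def spectral_match_factor(wl, radiance, resp_target, resp_ref):
    """Ratio of the band-averaged radiances that a target band (e.g. a
    WFV band) and a reference band (e.g. the matching OLI band) would
    record for the same at-sensor spectrum `radiance`.  All arrays are
    sampled on the common wavelength grid `wl` (nm)."""
    def band_avg(resp):
        return _trapz(radiance * resp, wl) / _trapz(resp, wl)
    return band_avg(resp_target) / band_avg(resp_ref)
```

    With identical responses the factor is exactly 1; it departs from 1 as the two bands sample different parts of a sloped or absorption-structured spectrum, which is why the water-vapor-dependent LUT is needed in the near infrared.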

  5. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and are widely used in military and civilian applications. Conventional bulk-semiconductor IR cameras suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for a fundamental understanding of the processes induced by the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and a parylene top packing blocked humid environmental factors. The fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, compressive sensing algorithms were employed for light-field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, are extracted and

  6. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  7. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  8. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  9. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
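    The zero-point and transparency measurements described here follow the standard photometric relations, sketched below in generic form (these are textbook formulas, not the network's actual pipeline code; the numbers in the usage are illustrative).

```python
import math

def zero_point(catalog_mag, counts, exptime_s):
    """Instantaneous photometric zero-point from one calibrator star:
    the magnitude of a source that would produce 1 count per second.
    zp = m_catalog + 2.5 * log10(counts / exptime)."""
    return catalog_mag + 2.5 * math.log10(counts / exptime_s)

def transparency(zp_now, zp_photometric):
    """Fractional throughput relative to the best (photometric-night)
    zero-point; clouds depress zp_now below zp_photometric.
    A drop of 0.75 mag corresponds to roughly 50% transparency."""
    return 10 ** (-0.4 * (zp_photometric - zp_now))
```

    Averaging such per-star zero-points over the many Tycho2 or APASS calibrators in a wide context-camera field is what makes the instantaneous calibration robust even through thin cloud.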

  10. Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles

    Science.gov (United States)

    Kazemzadeh, Farnoud; Wong, Alexander

    2016-12-01

    Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.

  11. Transverse electric fields' effects in the Dark Energy Camera CCDs

    International Nuclear Information System (INIS)

    Plazas, A A; Sheldon, E S; Bernstein, G M

    2014-01-01

    Spurious electric fields transverse to the surface of thick CCDs displace the photo-generated charges, effectively modifying the pixel area and producing noticeable signals in astrometric and photometric measurements. We use data from the science verification period of the Dark Energy Survey (DES) to characterize these effects in the Dark Energy Camera (DECam) CCDs, where the transverse fields manifest as concentric rings (impurity gradients or ''tree rings'') and bright stripes near the boundaries of the detectors (''edge distortions'') with relative amplitudes of about 1% and 10%, respectively. Using flat-field images, we derive templates in the five DES photometric bands (grizY) for the tree rings and the edge distortions as a function of their position on each DECam detector. Comparison of the astrometric and photometric residuals confirms their nature as pixel-size variations. The templates are directly incorporated into the derivation of photometric and astrometric residuals. The results presented in these proceedings are a partial report of analysis performed before the workshop ''Precision Astronomy with Fully Depleted CCDs'' at Brookhaven National Laboratory. Additional work is underway, and the final results and analysis will be published elsewhere (Plazas, Bernstein and Sheldon 2014, in prep.)

  12. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)] as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX on axis approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with ''particle dumps'' at the axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed

  13. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  14. SFR test fixture for hemispherical and hyperhemispherical camera systems

    Science.gov (United States)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  15. Development of field-wide risk based remediation objectives for an aging oil field : Devon Canada Swan Hills Field

    Energy Technology Data Exchange (ETDEWEB)

    Brewster, M.; North, C.; Leighton-Boyce, G. [WorleyParsons Komex, Calgary, AB (Canada); Moore, D. [Devon Canada Corp., Calgary, AB (Canada)

    2006-07-01

    The development of field-wide risk based remediation objectives for the aging Devon Canada Swan Hills oil field was examined along with the key components of the closure strategy. These included source removal to the extent practical, long term monitoring, and achievable risk-based remedial objectives that were appropriate to the remote boreal forest setting of the Swan Hills field. A two stage approach was presented. The first stage involved a field wide background framework which included defining areas of common physical and ecological setting and developing appropriate exposure scenarios. The second stage involved site-specific risk assessments which included adjusting for site-specific conditions and an early demonstration project to prove the concept. A GIS approach was used to identify areas of common physical and ecological setting including: physiography; surface water; land use; vegetation ecozones; surficial and bedrock geology; and water well use. Species lists were compiled for vegetation, terrestrial wildlife (mammals, birds, amphibians), and aquatic species (fish and invertebrates). Major contaminant sources, problem formulation, vegetation bioassays, invertebrate bioassays, black spruce emergence, and guideline development were other topics covered during the presentation. Last, a summary of progress was presented. A field-wide review and development of risk zones and site-specific risk assessment has been completed. A regulatory review is underway. tabs., figs.

  16. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for on-site multiple-camera systems without a common field of view.
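    Once each camera has reconstructed at least three sphere centers that the auxiliary camera also reconstructs, relating the two camera frames is a rigid point-set registration problem. A common closed-form solution is the Kabsch/Procrustes fit sketched below; this is a generic method, not necessarily the paper's exact formulation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch algorithm: least-squares rotation R and translation t
    with Q ~ P @ R.T + t, for matched 3D point sets P, Q of shape
    (N, 3) (e.g. sphere centres in one camera frame and in the
    auxiliary camera frame).  Needs N >= 3 non-collinear points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # the diagonal correction guards against an improper (reflected) fit
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    Chaining such transforms through the auxiliary camera places every camera in one global frame even when the cameras share no common field of view.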

  17. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  18. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  19. Wide-field absolute transverse blood flow velocity mapping in vessel centerline

    Science.gov (United States)

    Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang

    2018-02-01

    We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.
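    The cross-correlation step can be sketched as follows, assuming two modulation-depth signals sampled at centerline points a known distance apart; the sampling rate, separation and signal shapes are illustrative, not the paper's experimental values.

```python
import numpy as np

def transit_velocity(sig_up, sig_down, fs_hz, separation_m):
    """Estimate transverse flow speed from two intensity-modulation
    signals recorded `separation_m` apart along a vessel centreline.
    The downstream signal is approximately a delayed copy of the
    upstream one; the delay is the lag at the peak of their
    cross-correlation."""
    a = sig_up - sig_up.mean()
    b = sig_down - sig_down.mean()
    xc = np.correlate(b, a, mode="full")
    # with mode="full", zero lag sits at index len(a) - 1
    lag_samples = int(np.argmax(xc)) - (len(a) - 1)
    delay_s = lag_samples / fs_hz
    return separation_m / delay_s
```

    A practical implementation would interpolate around the correlation peak for sub-sample delays and reject estimates whose peak is not well above the correlation noise floor.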

  20. Colour-magnitude diagrams of star clusters in the Magellanic Clouds from wide-field electronography

    International Nuclear Information System (INIS)

    Andersen, J.; Walker, M.F.

    1984-01-01

    Utilizing the good image quality and large field available with the 9-cm McMullan electronographic camera when attached to the Danish 1.54-m Ritchey-Chretien reflector at La Silla, Chile, a number of star clusters in the Magellanic Clouds have been observed in order to determine their colour-magnitude diagrams with proper correction for the field star contribution. In Hodge 11, the first cluster to be reported from this programme, good measurements have been obtained of 180 stars in the annular field 34 <= R <= 71 arcsec of the cluster itself, and of 154 stars in a nearby control field of similar area, to a limit of V of the order of 22. (author)

  1. Self-supervised Traversability Assessment in Field Environments with Lidar and Camera

    DEFF Research Database (Denmark)

    Hansen, Mikkel Kragh; Underwood, James; Karstoft, Henrik

    Introduction: The application of robotic automation within agriculture is increasing. There is a high demand for fully autonomous robots that are efficient, reliable and affordable. In order to ensure safety, autonomous agricultural vehicles must perceive the environment and detect potential obstacles and threats across a variety of environmental conditions. In this paper, a self-supervised framework is proposed, combining laser range sensing from a lidar with images from a monocular camera to reliably assess terrain traversability/navigability. Methods: The method uses a near-to-far approach; the visual classifier detects non-traversable image patches as outliers from a Gaussian Mixture Model that maintains the appearance of only traversable ground. Results: Our method is evaluated using a diverse dataset of agricultural fields and orchards gathered with a perception research robot developed
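    The outlier test at the heart of such a self-supervised visual classifier can be sketched with a single Gaussian standing in for the paper's Gaussian Mixture Model: fit an appearance model on patches the lidar labels traversable, then flag patches far from it. Feature values and the threshold below are illustrative.

```python
import numpy as np

class TraversableAppearance:
    """Single-Gaussian stand-in for a Gaussian Mixture Model of
    traversable-ground appearance.  Fit on feature vectors of patches
    known (e.g. from lidar geometry) to be traversable; a new patch is
    flagged non-traversable when its squared Mahalanobis distance to
    the model exceeds a threshold."""

    def fit(self, X):
        self.mu = X.mean(axis=0)
        # small ridge keeps the covariance invertible
        self.cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.prec = np.linalg.inv(self.cov)
        return self

    def is_traversable(self, x, thresh=9.0):
        d = x - self.mu
        return float(d @ self.prec @ d) <= thresh
```

    In the near-to-far scheme, the model is refit continuously from nearby lidar-verified ground, so the appearance of "traversable" adapts as lighting and terrain change.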

  2. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    Science.gov (United States)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing rapidly thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and circular target location. Target location with subpixel accuracy has been investigated as a star-tracker problem, and many target location algorithms have been proposed. It is widely accepted that least-squares ellipse fitting is the most accurate algorithm. However, there are still problems for efficient digital close-range photogrammetry: reconfirmation of the subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, empirical tests of several subpixel target location algorithms and an indicator for estimating the accuracy are presented in this paper, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
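    The simplest member of the subpixel target-location family compared in studies like this is the intensity-weighted centroid (least-squares ellipse fitting, judged most accurate above, is more elaborate). A minimal sketch, with the target window assumed background-subtracted:

```python
import numpy as np

def weighted_centroid(img):
    """Intensity-weighted centroid of a (background-subtracted) target
    window, giving a subpixel (x, y) estimate of the target centre."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return float((xs * img).sum() / total), float((ys * img).sum() / total)
```

    The centroid is unbiased only when the window fully contains a symmetric target on a flat background, which is one reason edge-point counts and ellipse fitting matter for distorted or partially saturated targets.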

  3. In vivo calcium imaging from dentate granule cells with wide-field fluorescence microscopy.

    Directory of Open Access Journals (Sweden)

    Yuichiro Hayashi

    Full Text Available A combination of genetically-encoded calcium indicators and micro-optics has enabled monitoring of large-scale dynamics of neuronal activity from behaving animals. In these studies, wide-field microscopy is often used to visualize neural activity. However, this method lacks optical sectioning capability, and therefore its axial resolution is generally poor. At present, it is unclear whether wide-field microscopy can visualize activity of densely packed small neurons at cellular resolution. To examine the applicability of wide-field microscopy to small-sized neurons, we recorded calcium activity of dentate granule cells, which have a small soma diameter of approximately 10 micrometers. Using a combination of a high numerical aperture (0.8) objective lens and an independent component analysis-based image segmentation technique, the activity of putative single granule cells was separated from wide-field calcium imaging data. The result encourages wider application of wide-field microscopy in in vivo neurophysiology.

  4. Estimating tiger abundance from camera trap data: Field surveys and analytical issues

    Science.gov (United States)

    Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Automated photography of tigers Panthera tigris for purely illustrative purposes was pioneered by British forester Fred Champion (1927, 1933) in India in the early part of the Twentieth Century. However, it was McDougal (1977) in Nepal who first used camera traps, equipped with single-lens reflex cameras activated by pressure pads, to identify individual tigers and study their social and predatory behaviors. These attempts involved a small number of expensive, cumbersome camera traps, and were not, in any formal sense, directed at “sampling” tiger populations.

  5. Portraiture lens concept in a mobile phone camera

    Science.gov (United States)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within smartphone casing. The current general requirement of mobile cameras having good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly-demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With a main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with double ray path; hence, field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.

  6. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration, with higher precision than MATLAB and no need for manual intervention, and that it can be widely used in various computer vision systems.
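    The quantity a calibration routine of this kind minimizes can be illustrated with a short sketch of the pinhole model and its reprojection error. The intrinsic matrix `K`, the `project` helper, and the synthetic chessboard below are illustrative assumptions, not the paper's implementation (which presumably uses OpenCV's calibration routines):

```python
import numpy as np

# Calibration estimates the intrinsic matrix K and per-view pose (R, t)
# by minimizing the reprojection error computed below.  K's values
# (focal length 800 px, principal point (320, 240)) are illustrative.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t        # world frame -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

# A planar "chessboard" of points on z = 0, viewed head-on from 1 m away.
obj = np.array([[x, y, 0.0] for y in range(4) for x in range(5)], float) * 0.03
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])

# Reprojection error: RMS distance between (noisy) detected corner
# positions and where the model reprojects them.
detected = project(obj, K, R, t) + np.random.default_rng(0).normal(0.0, 0.2, (20, 2))
rms = np.sqrt(np.mean(np.sum((detected - project(obj, K, R, t)) ** 2, axis=1)))
print(rms)   # sub-pixel, since only the simulated detection noise remains
```

    With real images the pose and intrinsics are unknown and are solved for jointly; this sketch only shows the error metric being minimized.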

  7. Ultra-wide-field angiography improves the detection and classification of diabetic retinopathy.

    Science.gov (United States)

    Wessel, Matthew M; Aaker, Grant D; Parlitsis, George; Cho, Minhee; D'Amico, Donald J; Kiss, Szilárd

    2012-04-01

    To evaluate patients with diabetic retinopathy using ultra-wide-field fluorescein angiography and to compare the visualized retinal pathology with that seen on an overlay of conventional 7 standard field (7SF) imaging. Two hundred and eighteen eyes of 118 diabetic patients who underwent diagnostic fluorescein angiography using the Optos Optomap Panoramic 200A imaging system were included. The visualized area of the retina, retinal nonperfusion, retinal neovascularization, and panretinal photocoagulation were quantified by two independent masked graders. The respective areas identified on the ultra-wide-field fluorescein angiography image were compared with an overlay of a modified 7SF image as outlined in the Early Treatment Diabetic Retinopathy Study. Ultra-wide-field fluorescein angiography, on average, demonstrated 3.2 times more total retinal surface area than 7SF. When compared with 7SF, ultra-wide-field fluorescein angiography showed 3.9 times more nonperfusion in eyes with diabetic retinopathy. Improved retinal visualization may alter the classification of diabetic retinopathy and may therefore influence follow-up and treatment of these patients.

  8. Wide-field surveys from the SNAP mission

    International Nuclear Information System (INIS)

    2002-01-01

    The Supernova/Acceleration Probe (SNAP) is a proposed space-borne observatory that will survey the sky with a wide-field optical/NIR imager. The images produced by SNAP will have an unprecedented combination of depth, solid angle, angular resolution, and temporal sampling. Two 7.5 square-degree fields will be observed every four days over 16 months to a magnitude depth of AB = 27.7 in each of nine filters. Co-adding images over all epochs will give a depth of AB = 30.3 per filter. A 300 square-degree field will be surveyed with no repeat visits to AB = 28 per filter. The nine filters span 3500-17000 angstroms. Although the survey strategy is tailored for supernova and weak gravitational lensing observations, the resulting data support a broad range of auxiliary science programs.
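    The quoted single-epoch and co-added depths are consistent with the sqrt(N) scaling of point-source sensitivity. A minimal check, where the ~122-epoch count is our own estimate from the stated 16-month duration and 4-day cadence:

```python
import math

# Co-adding N equal-depth epochs deepens the point-source limit by
# 2.5 * log10(sqrt(N)) magnitudes.  The single-epoch depth AB = 27.7
# is from the abstract; the epoch count is an assumption.
m_single = 27.7
n_epochs = 16 * 30.4 / 4                      # about 122 visits
m_coadd = m_single + 2.5 * math.log10(math.sqrt(n_epochs))
print(round(m_coadd, 1))                      # close to the quoted AB = 30.3
```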

  9. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
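    The epipolar constraint that the 5-point method solves for can be checked numerically for intrinsically calibrated cameras. The toy rig below (identity rotation, pure sideways baseline) is an illustrative assumption, not the field-trial setup:

```python
import numpy as np

# For intrinsically calibrated cameras the essential matrix is E = [t]_x R,
# and normalized image correspondences satisfy x2^T E x1 = 0.
def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

R = np.eye(3)                       # second camera: no rotation,
t = np.array([0.2, 0.0, 0.0])       # 20 cm sideways baseline
E = skew(t) @ R

rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], (5, 3))   # 3D points in front of rig
x1 = X / X[:, 2:3]                                # normalized homogeneous coords, cam 1
Xc2 = X @ R.T + t
x2 = Xc2 / Xc2[:, 2:3]                            # normalized homogeneous coords, cam 2

residuals = [abs(x2[i] @ E @ x1[i]) for i in range(5)]
print(max(residuals))   # zero up to floating-point error for noise-free points
```

    The 5-point method runs this logic in reverse: given five such correspondences, it recovers an E whose residuals vanish, from which the relative pose (R, t) is extracted.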

  10. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  11. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang, Zhong; Scanlon, Andrew; Yin, Weihong; Yu, Li; Venetianer, Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  12. Study and Monitoring of Itinerant Tourism along the Francigena Route, by Camera Trapping System

    Directory of Open Access Journals (Sweden)

    Gianluca Bambi

    2017-01-01

    Full Text Available Tourism along the Via Francigena is a growing phenomenon. It is important to develop a direct survey of the path's users (pilgrims, tourists, day-trippers, etc.) able to define user profiles, the extent of the phenomenon, and its evolution over time, in order to develop possible actions to promote the socio-economic impact on the rural areas concerned. With this research, we propose the creation of a monitoring network based on a camera trapping system to estimate the number of tourists in a simple and expeditious way. Recently, camera trapping, beyond the faunal field, has found wide use in population surveys; an innovative field of application is the tourist sector, where it can become the basis of statistical and planning analysis. To carry out a survey of pilgrims/tourists, we applied this type of sampling method. It is an interesting method since it provides data on the type and number of users. The application of camera trapping along the Francigena yields information on user profiles, such as sex, age, average length of pilgrimage, and type of journey (on foot, on horseback, or by bike), over a continuous period covering the tourist months of 2014.

  13. A filtered backprojection reconstruction algorithm for Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Lojacono, Xavier; Maxim, Voichita; Peyrin, Francoise; Prost, Remy [Lyon Univ., Villeurbanne (France). CNRS, Inserm, INSA-Lyon, CREATIS, UMR5220; Zoglauer, Andreas [California Univ., Berkeley, CA (United States). Space Sciences Lab.

    2011-07-01

    In this paper we present a filtered backprojection reconstruction algorithm for Compton camera detectors. Compared to iterative methods, widely used for the reconstruction of images from Compton camera data, analytical methods are fast, easy to implement, and avoid convergence issues. The method we propose is exact for an idealized Compton camera composed of two parallel plates of infinite dimension. We show that it copes well with the low number of detected photons simulated from a realistic device. Images reconstructed from both synthetic data and realistic data obtained with Monte Carlo simulations demonstrate the efficiency of the algorithm. (orig.)

  14. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    Science.gov (United States)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  15. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents a method of calibrating a high-resolution digital camera based on different configurations, comprising stereo and convergent setups. Both are performed in laboratory and field calibration. Laboratory calibration is based on a 3D test field where a calibration plate of dimension 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over a 9 m × 9 m area. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate the behavior of the camera's interior orientation, i.e., whether parameters such as the focal length, principal point, and others remain the same or change. In the laboratory, a scale bar is placed in the test field for scaling the images, and approximate coordinates were used for the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations. The accuracy of the results is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application. Finally, for most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  16. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    Science.gov (United States)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  17. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
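    One stage of such a pipeline, gamma correction, can be sketched in a few lines. The 1/2.2 exponent and 8-bit range below are conventional assumptions, not details of the paper's FPGA implementation:

```python
import numpy as np

# Raw sensor values are roughly linear in scene luminance, while displays
# expect a power-law encoding; gamma correction maps between the two.
def gamma_correct(pixels, gamma=2.2, max_val=255):
    norm = pixels.astype(np.float64) / max_val
    return np.round(norm ** (1.0 / gamma) * max_val).astype(np.uint8)

# Hardware pipelines typically realize this as a precomputed 256-entry
# lookup table indexed by the raw 8-bit value, cheap in FPGA block RAM:
lut = gamma_correct(np.arange(256))
print(lut[[0, 64, 128, 255]])   # dark values are brightened, extremes fixed
```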

  18. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (in either the northern or southern hemisphere) and the polar alignment of the full mount. All mechanical elements and electronics are designed within our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  19. Novel X-ray telescopes for wide-field X-ray monitoring

    International Nuclear Information System (INIS)

    Hudec, R.; Inneman, A.; Pina, L.; Sveda, L.

    2005-01-01

    We report on fully innovative, very wide-field X-ray telescopes combining high sensitivity with a large field of view. The prototypes are very promising, allowing proposals for space projects with very wide-field Lobster-eye X-ray optics to be considered. The novel telescopes will monitor the sky with unprecedented sensitivity and an angular resolution of order 1 arcmin. They are expected to contribute essentially to the study and understanding of various astrophysical objects such as AGN, SNe, gamma-ray bursts (GRBs), X-ray flashes (XRFs), galactic binary sources, stars, CVs, X-ray novae, and various transient sources. The Lobster-optics-based X-ray All Sky Monitor is capable of detecting around 20 GRBs and 8 XRFs yearly, which will significantly contribute to the related science.

  20. Multispectral calibration to enhance the metrology performance of C-mount camera systems

    Directory of Open Access Journals (Sweden)

    S. Robson

    2014-06-01

    Full Text Available Low cost monochrome camera systems based on CMOS sensors and C-mount lenses have been successfully applied to a wide variety of metrology tasks. For high accuracy work such cameras are typically equipped with ring lights to image retro-reflective targets as high contrast image features. Whilst algorithms for target image measurement and lens modelling are highly advanced, including separate RGB channel lens distortion correction, target image circularity compensation and a wide variety of detection and centroiding approaches, less effort has been directed towards optimising physical target image quality by considering optical performance in narrow wavelength bands. This paper describes an initial investigation to assess the effect of wavelength on camera calibration parameters for two different camera bodies and the same ‘C-mount’ wide angle lens. Results demonstrate the expected strong influence on principal distance, radial and tangential distortion, and also highlight possible trends in principal point, orthogonality and affinity parameters which are close to the parameter estimation noise level from the strong convergent self-calibrating image networks.

  1. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    Science.gov (United States)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its use in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  2. The use of a Micromegas as a detector for gamma camera

    International Nuclear Information System (INIS)

    Barbouchi, Asma; Trabelsi, Adel

    2008-01-01

    Micromegas (Micro-Mesh Gaseous Structure) is a gas detector developed by I. Giomataris and G. Charpak for applications in experimental particle physics. The versatility of this detector allows it to be used in several other areas, such as medical imaging. The detector has an X-Y readout capability with a resolution of less than 100 μm, an energy resolution down to 14% in the 1-10 keV energy range, and an overall efficiency of 70%. Monte Carlo simulation is widely used in nuclear medicine, as it allows the behaviour of a system to be predicted. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation dedicated to PET/SPECT (Positron Emission Tomography / Single Photon Emission Computed Tomography) applications. Our goal is to model a gamma camera that uses a Micromegas as a detector and to compare its performance (energy resolution, point spread function, ...) with that of a scintillator-based gamma camera using GATE.

  3. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    Science.gov (United States)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields; however, the technological level of measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has allowed the use of infrared thermography in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for the diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, digital camera, and GPS sensor, was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  4. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  5. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam

    Science.gov (United States)

    Ormston, T.; Denis, M.; Scuka, D.; Griebel, H.

    2011-09-01

    The European Space Agency's Mars Express mission was launched in 2003 and was Europe's first mission to Mars. On-board was a small camera designed to provide ‘visual telemetry' of the separation of the Beagle-2 lander. After achieving its goal it was shut down while the primary science mission of Mars Express got underway. In 2007 this camera was reactivated by the flight control team of Mars Express for the purpose of providing public education and outreach—turning it into the ‘Mars Webcam'. The camera is a small, 640×480 pixel colour CMOS camera with a wide-angle 30°×40° field of view. This makes it very similar in almost every way to the average home PC webcam. The major difference is that this webcam is not in an average location but is instead in orbit around Mars. On a strict basis of non-interference with the primary science activities, the camera is turned on to provide unique wide-angle views of the planet below. A highly automated process ensures that the observations are scheduled on the spacecraft and then uploaded to the internet as rapidly as possible. There is no intermediate stage, so that visitors to the Mars Webcam blog serve as ‘citizen scientists'. Full raw datasets and processing instructions are provided along with a mechanism to allow visitors to comment on the blog. Members of the public are encouraged to use this in either a personal or an educational context and work with the images. We then take their excellent work and showcase it back on the blog. We even apply techniques developed by them to improve the data and webcam experience for others. The accessibility and simplicity of the images also make the data ideal for educational use, especially as educational projects can then be showcased on the site as inspiration for others. The oft-neglected target audience of space enthusiasts is also important as this allows them to participate as part of an interplanetary instrument team. This paper will cover the history of the

  6. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  7. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter passive AF method. This method is widely used to realize AF in the camera industry: a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results obtained with three different prototype cameras are presented.
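The passive AF scheme the dissertation builds on can be illustrated with a minimal sketch (the function names, the simple squared-difference sharpness measure, and the coarse-to-fine step sizes are illustrative assumptions, not the dissertation's Filter-Switching parameters): a sharpness measure is extracted from a high-frequency band of each frame, and a search algorithm moves the lens to the position that maximizes it.

```python
def sharpness(image_rows):
    """Band-pass sharpness measure: sum of squared horizontal
    first differences (a simple high-frequency proxy)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image_rows
               for i in range(len(row) - 1))

def autofocus(capture, positions, coarse_step=8, fine_step=1):
    """Coarse-to-fine search for the lens position maximizing sharpness.
    `capture(pos)` returns an image (list of rows) at lens position `pos`."""
    # Coarse pass over the full travel range.
    coarse = range(positions[0], positions[-1] + 1, coarse_step)
    best = max(coarse, key=lambda p: sharpness(capture(p)))
    # Fine pass around the coarse maximum.
    lo = max(positions[0], best - coarse_step)
    hi = min(positions[-1], best + coarse_step)
    fine = range(lo, hi + 1, fine_step)
    return max(fine, key=lambda p: sharpness(capture(p)))
```

Here `capture(pos)` stands in for grabbing a frame at a given lens position; choosing the pass-band and the step sizes per lighting condition is exactly the parameter-selection problem the dissertation automates.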

  8. Research on Wide-field Imaging Technologies for Low-frequency Radio Array

    Science.gov (United States)

    Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.

    2017-09-01

    Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. One particularly pernicious problem is the non-coplanar baseline effect: ignoring the phase term in the w direction (the w-term) distorts the final image, and the degradation is amplified for telescopes with a wide field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles, and compares their advantages and disadvantages in terms of computational cost and complexity. We conduct simulations with two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array. The resulting images are also compared with the two-dimensional Fourier transform method. The results show that both faceting and w-projection yield better image quality and correctness than the two-dimensional Fourier transform method in wide-field imaging. The dependence of image quality and run time on the number of facets and w-steps has been evaluated; the results indicate that these numbers must be chosen judiciously. Finally, we analyze the effect of data size on the run time of faceting and w-projection, and find that both need to be optimized before processing massive amounts of data. This work initiates the analysis of wide-field imaging techniques and their application in existing and future low-frequency arrays, and fosters their application in much broader fields.
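The w-term itself can be written down directly. A minimal sketch (function names are illustrative, not from the paper's pipeline) of the extra phase a non-coplanar baseline contributes toward direction cosines (l, m), and the per-direction correction that w-projection and faceting approximate in bulk:

```python
import cmath
import math

def w_term_phase(l, m, w):
    """Extra phase (radians) picked up by a baseline with non-coplanar
    component w toward direction cosines (l, m):
    2*pi*w*(sqrt(1 - l^2 - m^2) - 1)."""
    n = math.sqrt(1.0 - l * l - m * m)
    return 2.0 * math.pi * w * (n - 1.0)

def correct_visibility(vis, l, m, w):
    """Per-direction w-term correction: multiply the visibility by the
    conjugate of the w-term phase so that a plain 2-D Fourier inversion
    becomes valid for the direction (l, m)."""
    return vis * cmath.exp(1j * w_term_phase(l, m, w))
```

At the field center (l = m = 0) the phase vanishes, which is why the two-dimensional Fourier transform method only fails away from the center, i.e., for wide fields.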

  9. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to approach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices tailored to specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  10. X-ray astronomy 2000: Wide field X-ray monitoring with lobster-eye telescopes

    International Nuclear Information System (INIS)

    Inneman, A.; Hudec, R.; Pina, L.; Gorenstein, P.

    2001-01-01

    The recently available first prototypes of innovative very wide field X-ray telescopes of the Lobster-Eye type confirm the feasibility of developing such flight instruments in the near future. These devices are expected to allow very wide field (more than 1000 square degrees) monitoring of the sky in X-rays (up to 10 keV and perhaps even more) to faint limits. We discuss the recent status of the development of very wide field X-ray telescopes as well as related scientific questions, including expected major contributions such as monitoring and study of X-ray afterglows of Gamma-Ray Bursts.

  11. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  12. An operative gamma camera for sentinel lymph node procedure in case of breast cancer

    CERN Document Server

    Salvador, S; Mathelin, C; Guyonne, J; Huss, D

    2007-01-01

    Large field of view gamma cameras are widely used to perform lymphoscintigraphy in the sentinel lymph node (SLN) procedure in case of breast cancer. However, they are not specified for this application, and their size prevents their use in the operating room to control the excision of all the SLNs. We present the results obtained with a prototype of a new mini gamma camera developed especially for operative lymphoscintigraphy of the axillary area in case of breast cancer. This prototype is composed of a 10 mm thick parallel lead collimator, a 2 mm thick GSO:Ce inorganic scintillating crystal from Hitachi, and a Hamamatsu H8500 flat-panel multianode (64 channels) photomultiplier tube (MAPMT) equipped with dedicated electronics. Its actual field of view is 50 × 50 mm². The gamma interaction position in the GSO scintillating plate is obtained by calculating the center of gravity of the fired MAPMT channels. The measurements performed with this prototype demonstrate the usefulness of this mini gamma camera...
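The center-of-gravity position estimate described above amounts to a first-moment computation over the anode grid; a minimal sketch (the 8×8 signal layout and units are assumptions for illustration):

```python
def centroid(signals):
    """Center of gravity of a grid of MAPMT channel amplitudes.
    `signals[row][col]` is the charge on that anode; returns (x, y)
    in channel units."""
    total = sum(sum(row) for row in signals)
    x = sum(c * q for row in signals for c, q in enumerate(row)) / total
    y = sum(r * q for r, row in enumerate(signals) for q in row) / total
    return x, y
```

Because charge from one scintillation event spreads over several neighboring anodes, the centroid localizes the interaction to a fraction of the 64-channel pitch.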

  13. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
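The pixels-to-meters conversion enabled by the estimated tilt angle, focal length, and camera height can be sketched for a simple pinhole camera viewing a flat ground plane (an illustrative model with hypothetical names, not the paper's estimator):

```python
import math

def pixel_to_ground_distance(y_px, focal_px, tilt_rad, cam_height_m):
    """Back-project a pixel row to a ground-plane distance for a pinhole
    camera mounted `cam_height_m` above the ground, looking down at angle
    `tilt_rad` from horizontal. `y_px` is the pixel offset below the
    principal point (positive = lower in the image)."""
    ray_angle = tilt_rad + math.atan2(y_px, focal_px)  # angle below horizon
    return cam_height_m / math.tan(ray_angle)
```

Combined with the inter-camera distances from the topology inference, such per-camera mappings let pedestrian speeds and travel times be expressed in world units for multi-camera tracking.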

  14. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope ’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yifan; Apai, Dániel; Schneider, Glenn [Department of Astronomy/Steward Observatory, The University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Lew, Ben W. P., E-mail: yzhou@as.arizona.edu [Department of Planetary Science/Lunar and Planetary Laboratory, The University of Arizona, 1640 E. University Boulevard, Tucson, AZ 85718 (United States)

    2017-06-01

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, limiting its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon-noise-limited corrections for observations of transiting exoplanets made with both staring and scanning modes, as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from an extension of this model if similar systematic profiles are observed.
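The charge-trapping picture behind the ramp can be sketched with first-order trap-filling kinetics: traps capture charge at a rate proportional to the number still empty, so early frames lose more signal than later ones, producing the characteristic upward ramp. The parameter values and function names below are arbitrary illustrations, not the paper's best-fit detector parameters.

```python
import math

def ramp_profile(n_frames, dt, f_true, n_traps, tau):
    """Simulate a ramp: traps fill with time constant tau, stealing charge
    from early frames; the observed per-frame signal climbs toward the
    true level f_true*dt as traps saturate."""
    obs = []
    filled = 0.0
    for _ in range(n_frames):
        # Charge captured during this frame (first-order kinetics).
        capture = (n_traps - filled) * (1.0 - math.exp(-dt / tau))
        filled += capture
        obs.append(f_true * dt - capture)
    return obs

def correct_ramp(obs, dt, n_traps, tau):
    """Invert the same trap model to restore the flat light curve."""
    out, filled = [], 0.0
    for o in obs:
        capture = (n_traps - filled) * (1.0 - math.exp(-dt / tau))
        filled += capture
        out.append(o + capture)
    return out
```

Because the correction depends only on detector-intrinsic parameters (trap count and time constant here), it can be applied from the first orbit onward, which is why first-orbit data need not be discarded.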

  15. Hubble Space Telescope  Wide Field Camera 3 Observations of Escaping Lyman Continuum Radiation from Galaxies and Weak AGN at Redshifts z ∼ 2.3–4.1

    Science.gov (United States)

    Smith, Brent M.; Windhorst, Rogier A.; Jansen, Rolf A.; Cohen, Seth H.; Jiang, Linhua; Dijkstra, Mark; Koekemoer, Anton M.; Bielby, Richard; Inoue, Akio K.; MacKenty, John W.; O’Connell, Robert W.; Silk, Joseph I.

    2018-02-01

    We present observations of escaping Lyman continuum (LyC) radiation from 34 massive star-forming galaxies (SFGs) and 12 weak AGN with reliably measured spectroscopic redshifts at z ≃ 2.3–4.1. We analyzed Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) mosaics of the Early Release Science (ERS) field in three UVIS filters to sample the rest-frame LyC over this redshift range. With our best current assessment of the WFC3 systematics, we provide 1σ upper limits for the average LyC emission of galaxies at ⟨z⟩ = 2.35, 2.75, and 3.60 to ∼28.5, 28.1, and 30.7 mag in image stacks of 11–15 galaxies in the WFC3/UVIS F225W, F275W, and F336W, respectively. The LyC flux of weak AGN at ⟨z⟩ = 2.62 and 3.32 is detected at 28.3 and 27.4 mag with S/Ns of ∼2.7 and 2.5 in F275W and F336W for stacks of 7 and 3 AGN, respectively, while AGN at ⟨z⟩ = 2.37 are constrained to ≳27.9 mag at 1σ in a stack of 2 AGN. The stacked AGN LyC light profiles are flatter than their corresponding non-ionizing UV continuum profiles out to radii of r ≲ 0.″9, which may indicate a radial dependence of porosity in the ISM. With synthetic stellar SEDs fit to UV continuum measurements longward of Lyα and IGM transmission models, we constrain the absolute LyC escape fractions to f_esc(abs) ≃ 22 (−22/+44)% at ⟨z⟩ = 2.35 and ≲55% at ⟨z⟩ = 2.75 and 3.60, respectively. All available data for galaxies, including published work, suggest a more sudden increase of f_esc with redshift at z ≃ 2. Dust accumulating in (massive) galaxies over cosmic time correlates with increased H I column density, which may lead to a more sudden reduction of f_esc at z ≲ 2. This may suggest that SFGs collectively contributed to maintaining cosmic reionization at redshifts z ≳ 2–4, while AGN likely dominated reionization at z ≲ 2.

  16. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose were based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation, and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  17. Design drivers for a wide-field multi-object spectrograph for the William Herschel Telescope

    NARCIS (Netherlands)

    Balcells, Marc; Benn, Chris R.; Carter, David; Dalton, Gavin B.; Trager, Scott C.; Feltzing, Sofia; Verheijen, M.A.W.; Jarvis, Matt; Percival, Will; Abrams, Don C.; Agocs, Tibor; Brown, Anthony G. A.; Cano, Diego; Evans, Chris; Helmi, Amina; Lewis, Ian J.; McLure, Ross; Peletier, Reynier F.; Pérez-Fournon, Ismael; Sharples, Ray M.; Tosh, Ian A. J.; Trujillo, Ignacio; Walton, Nic; Westhall, Kyle B.

    Wide-field multi-object spectroscopy is a high priority for European astronomy over the next decade. Most 8-10m telescopes have a small field of view, making 4-m class telescopes a particularly attractive option for wide-field instruments. We present a science case and design drivers for a

  18. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  19. Europe's space camera unmasks a cosmic gamma-ray machine

    Science.gov (United States)

    1996-11-01

    The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star: the Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce

  20. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Science.gov (United States)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function (ESF), the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image-filtering and noise-suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image-processing algorithms, simplifying the system structure and the redesign process. Image gray-gradient sharpness, edge contrast, and mid-to-high-frequency content were enhanced. The SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.

  1. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 

  2. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  3. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near-infrared (NIR) bands. The camera operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts, including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  4. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across the cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, for a real-time performance computational efficiency is essential. In this paper, we propose a low complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon transform like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
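A minimal sketch of such projection-profile signatures and their comparison (the paper's actual signature normalization and matching criterion may differ): the signature is just the row and column sums of the vehicle image, and two signatures are compared with normalized correlation.

```python
def projection_signature(image):
    """Vehicle signature: horizontal and vertical projection profiles
    (row sums and column sums), a Radon-transform-like descriptor that
    is far smaller than the image itself."""
    rows = [sum(r) for r in image]
    cols = [sum(c) for c in zip(*image)]
    return rows + cols

def match_score(sig_a, sig_b):
    """Normalized correlation between two signatures; 1.0 means identical
    up to an overall brightness scale factor."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    na = sum(a * a for a in sig_a) ** 0.5
    nb = sum(b * b for b in sig_b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

For an H×W image the signature has only H+W values, which illustrates the drastic data reduction that relaxes the data-link capacity requirements mentioned above.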

  5. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  6. Design of wide flat-topped low transverse field solenoid magnet

    International Nuclear Information System (INIS)

    Jing Xiaobing; Chen Nan; Li Qin

    2010-01-01

    A wide flat-topped low transverse error field solenoid magnet design for linear induction accelerator is presented. The design features non-uniform winding to reduce field fluctuation due to the magnets' gap, and homogenizer rings within the solenoid to greatly reduce the effects of winding errors. Numerical modeling of several designs for 12 MeV linear induction accelerator (LIA) in China Academy of Engineering Physics has demonstrated that by using these two techniques the magnetic field fluctuations in the accelerator gap can be reduced by 70% and the transverse error field can be reduced by 96.5%. (authors)

  7. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  8. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Dominguez, A.; Siana, B.; Masters, D. [Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521 (United States); Henry, A. L.; Martin, C. L. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Scarlata, C.; Bedregal, A. G. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States); Malkan, M.; Ross, N. R. [Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095 (United States); Atek, H.; Colbert, J. W. [Spitzer Science Center, Caltech, Pasadena, CA 91125 (United States); Teplitz, H. I.; Rafelski, M. [Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125 (United States); McCarthy, P.; Hathi, N. P.; Dressler, A. [Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Bunker, A., E-mail: albertod@ucr.edu [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom)

    2013-02-15

    Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10⁴¹ erg s⁻¹), galaxy stellar mass (reaching 4 × 10⁸ M☉), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity, where galaxies with L(Hα) < 5 × 10⁴¹ erg s⁻¹ are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.

  9. Mitigating fluorescence spectral overlap in wide-field endoscopic imaging

    Science.gov (United States)

    Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.

    2013-01-01

    The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with a cross-talk-ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk can be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early-stage cancer detection and localization in vivo. Furthermore, a means to enhance the exogenous fluorescence target-to-background ratio by reducing the tissue autofluorescence background is demonstrated. PMID:23966226
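The cross-talk-ratio subtraction idea in option (2) can be sketched as follows, assuming fluorophore 1 leaks into channel 2 with a fixed ratio r measured once from a single-fluorophore calibration sample, and fluorophore 2 does not leak into channel 1 (a simplification; the paper's algorithm and calibration procedure may differ):

```python
def crosstalk_ratio(donor_only_ch1, donor_only_ch2):
    """Calibrate the spill-over ratio r from a sample containing only
    fluorophore 1: the fraction of its signal leaking into channel 2."""
    return donor_only_ch2 / donor_only_ch1

def unmix(ch1, ch2, r):
    """Ratio-subtraction unmixing: remove fluorophore 1's leakage from
    the concurrently acquired channel-2 pixels."""
    return [b - r * a for a, b in zip(ch1, ch2)]
```

Because the subtraction works pixel-by-pixel on concurrently acquired frames, it preserves the video rate needed for in-vivo wide-field imaging.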

  10. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
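Classical flat-fielding, the baseline the network is compared against, is a per-pixel linear correction; a minimal sketch with synthetic dark and flat frames (all values invented for illustration):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Classical radiometric correction: subtract the dark frame,
    then divide by the normalized flat-field (sensitivity) map."""
    flat_norm = (flat - dark) / np.mean(flat - dark)
    return (raw - dark) / flat_norm

gains = np.array([[1.0, 0.5], [2.0, 1.0]])  # per-pixel sensitivity
dark = np.full((2, 2), 10.0)                # dark-current offset
flat = dark + gains * 100.0                 # flat frame, uniform light
raw = dark + gains * 40.0                   # scene at 40% of that level
corrected = flat_field_correct(raw, dark, flat)  # uniform output
```

Because the correction is linear per pixel, it cannot capture the temperature-dependent dark current or nonlinear sensitivities that motivate the network approach above.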

  11. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    Science.gov (United States)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  12. Wide field-of-view dual-band multispectral muzzle flash detection

    Science.gov (United States)

    Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.

    2013-06-01

    Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field-of-view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment related to limitations of the system to address multispectral detection requirements was performed. This characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitation of key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.

  13. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
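The dominant error such a self-calibration must estimate on a wide-angle lens is radial distortion; the radial part of the Brown-Conrady model that OpenCV's calibration routines fit can be sketched in plain NumPy-free Python (the coefficients below are invented for illustration, not GoPro Hero 3 values):

```python
def distort(xn, yn, k1, k2):
    """Apply the radial part of the Brown-Conrady model to
    normalized image coordinates (what a wide-angle lens does)."""
    r2 = xn**2 + yn**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return xn * factor, yn * factor

def undistort(xd, yd, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration, the same
    general scheme OpenCV uses when undistorting points."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn**2 + yn**2
        factor = 1 + k1 * r2 + k2 * r2**2
        xn, yn = xd / factor, yd / factor
    return xn, yn

# round trip with hypothetical barrel-distortion coefficients
xd, yd = distort(0.3, 0.2, k1=-0.25, k2=0.05)
xn, yn = undistort(xd, yd, k1=-0.25, k2=0.05)
```

Calibration estimates k1, k2 (plus focal length, principal point and tangential terms) from images of a known target; once known, every frame can be resampled to the undistorted geometry.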

  14. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  15. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
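The quoted IFOVs can be checked directly from the fields of view and the stated 1600-pixel image width:

```python
import math

def ifov_microrad(fov_deg, n_pixels):
    """Instantaneous field of view per pixel, in microradians."""
    return math.radians(fov_deg) / n_pixels * 1e6

m34 = ifov_microrad(20.0, 1600)   # M-34: 20 deg across 1600 px
m100 = ifov_microrad(6.8, 1600)   # M-100: 6.8 deg across 1600 px
```

Both agree with the 218 μrad and 74 μrad figures given above.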

  16. Projecting range-wide sun bear population trends using tree cover and camera-trap bycatch data.

    Directory of Open Access Journals (Sweden)

    Lorraine Scotson

    Full Text Available Monitoring population trends of threatened species requires standardized techniques that can be applied over broad areas and repeated through time. Sun bears (Helarctos malayanus) are a forest-dependent tropical bear found throughout most of Southeast Asia. Previous estimates of global population trends have relied on expert opinion and cannot be systematically replicated. We combined data from 1,463 camera traps within 31 field sites across sun bear range to model the relationship between photo catch rates of sun bears and tree cover. Sun bears were detected in all levels of tree cover above 20%, and the probability of presence was positively associated with the amount of tree cover within a 6-km² buffer of the camera traps. We used the relationship between catch rates and tree cover across space to infer temporal trends in sun bear abundance in response to tree cover loss at country and global scales. Our model-based projections based on this "space for time" substitution suggested that sun bear population declines associated with tree cover loss between 2000-2014 in mainland Southeast Asia were ~9%, with declines highest in Cambodia and lowest in Myanmar. During the same period, sun bear populations in insular Southeast Asia (Malaysia, Indonesia and Brunei) were projected to have declined at a much higher rate (22%). Cast forward over 30 years from the year 2000, by assuming a constant rate of change in tree cover, we projected population declines in the insular region that surpassed 50%, meeting the IUCN criteria for endangered if sun bears were listed on the population level. Although this approach requires several assumptions, most notably that trends in abundance across space can be used to infer temporal trends, population projections using remotely sensed tree cover data may serve as a useful alternative (or supplement) to expert opinion. The advantages of this approach are that it is objective, data-driven, repeatable, and it requires that
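The cast-forward step can be illustrated with a constant-rate geometric extrapolation; a simplified sketch of the arithmetic only (the study's projections are model-based, driven by tree-cover change, so its >50% figure need not match this simple population-level compounding):

```python
def project_decline(decline_observed, years_observed, years_projected):
    """Extrapolate a population decline at a constant annual rate.

    If a fraction `decline_observed` was lost over `years_observed`,
    compounding the same per-year rate over `years_projected` gives
    the projected total loss."""
    annual_retention = (1 - decline_observed) ** (1 / years_observed)
    return 1 - annual_retention ** years_projected

# e.g. a 22% decline over 14 years, compounded over 30 years
loss_30yr = project_decline(0.22, 14, 30)  # ~0.41 under pure compounding
```

Pure compounding of the 2000-2014 insular rate yields roughly a 41% loss over 30 years; the larger published figure reflects the nonlinear response of projected abundance to continued tree-cover loss.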

  17. Lock-in thermography using a cellphone attachment infrared camera

    Science.gov (United States)

    Razani, Marjan; Parkhimchyk, Artur; Tabatabaei, Nima

    2018-03-01

    Lock-in thermography (LIT) is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adoption of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone-attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study will pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.
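The LIT processing itself is a per-pixel demodulation of the thermal frame stack at the excitation frequency; a minimal sketch, assuming the stack spans an integer number of modulation periods at a uniform frame rate:

```python
import numpy as np

def lockin_demodulate(frames, f_mod, fps):
    """Per-pixel lock-in amplitude and phase from a thermal frame stack.

    frames: array of shape (n_frames, H, W)
    f_mod:  excitation (lock-in) frequency in Hz
    fps:    camera frame rate in Hz
    """
    n = frames.shape[0]
    t = np.arange(n) / fps
    ref_cos = np.cos(2 * np.pi * f_mod * t)
    ref_sin = np.sin(2 * np.pi * f_mod * t)
    # quadrature sums over time, evaluated for every pixel at once
    i_comp = np.tensordot(ref_cos, frames, axes=(0, 0)) * 2 / n
    q_comp = np.tensordot(ref_sin, frames, axes=(0, 0)) * 2 / n
    return np.hypot(i_comp, q_comp), np.arctan2(q_comp, i_comp)
```

Subsurface defects appear as local anomalies in the resulting amplitude and, especially, phase images, which is what makes the technique robust to non-uniform illumination and surface emissivity.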

  18. Lock-in thermography using a cellphone attachment infrared camera

    Directory of Open Access Journals (Sweden)

    Marjan Razani

    2018-03-01

    Full Text Available Lock-in thermography (LIT) is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adoption of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone-attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study will pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.

  19. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field-programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  20. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    Full Text Available A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  1. Twelve Years of the HST Advanced Camera for Surveys : Calibration Update

    Science.gov (United States)

    Grogin, Norman A.

    2014-06-01

    The Advanced Camera for Surveys (ACS) has been a workhorse HST imager for over twelve years, subsequent to its Servicing Mission 3B installation. The once defunct ACS Wide Field Channel (WFC) has now been operating longer since its Servicing Mission 4 repair than it had originally operated prior to its 2007 failure. Despite the accumulating radiation damage to the WFC CCDs during their long stay in low Earth orbit, ACS continues to be heavily exploited by the HST community as both a prime and a parallel detector. Conspicuous examples include the recently completed HST Multi-cycle Treasury programs, and the ongoing HST Frontier Fields (HFF) program. We review recent developments in ACS calibration that enable the continued high performance of this instrument, with particular attention to the Wide Field Channel. Highlights include: 1) the refinement of the WFC geometric distortion solution and its time dependency; 2) the efficacy of both pixel-based and catalog-based corrections for the worsening WFC charge-transfer efficiency (CTE); 3) the extension of pixel-based CTE correction to the WFC 2K subarray mode; and 4) a novel "self-calibration" technique appropriate for large-number stacks of deep WFC exposures (such as the HFF targets) that provides superior reductions compared to the standard CALACS reduction pipeline.

  2. Development of a wide-field fluorescence imaging system for evaluation of wound re-epithelialization

    Science.gov (United States)

    Franco, Walfre; Gutierrez-Herrera, Enoch; Purschke, Martin; Wang, Ying; Tam, Josh; Anderson, R. Rox; Doukas, Apostolos

    2013-03-01

    Normal skin barrier function depends on having a viable epidermis, an epithelial layer formed by keratinocytes. The transparent epidermis, which is less than 100 μm thick, is nearly impossible to see. Thus, the clinical evaluation of re-epithelialization is difficult, which hinders selecting appropriate therapy for promoting wound healing. An imaging system was developed to evaluate epithelialization by detecting endogenous fluorescence emissions of cellular proliferation over a wide field of view. A custom-made 295 nm ultraviolet (UV) light source was used for excitation. Detection was done by integrating a near-UV camera with sensitivity down to 300 nm, a 12 mm quartz lens with iris and focus lock for the UV regime, and a fluorescence bandpass filter with 340 nm center wavelength. To demonstrate that changes in fluorescence are related to cellular processes, the epithelialization of a skin substitute was monitored in vitro. The skin substitute or construct was made by embedding microscopic live human skin tissue columns, 1 mm in diameter and spaced 1 mm apart, in acellular porcine dermis. Fluorescence emissions clearly delineate the extent of lateral surface migration of keratinocytes and the total surface covered by the new epithelium. The fluorescence image of new epidermis spatially correlates with the corresponding color image. A simple, user-friendly way of imaging the presence of skin epithelium would improve wound care in civilian burns, ulcers and surgeries.

  3. Super-resolution in plenoptic cameras using FPGAs.

    Science.gov (United States)

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  4. Super-Resolution in Plenoptic Cameras Using FPGAs

    Directory of Open Access Journals (Sweden)

    Joel Pérez

    2014-05-01

    Full Text Available Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  5. Reliable and repeatable characterization of optical streak cameras

    International Nuclear Information System (INIS)

    Charest, Michael R. Jr.; Torres, Peter III; Silbernagel, Christopher T.; Kalantar, Daniel H.

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility. To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  6. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  7. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Michael Charest Jr., Peter Torres III, Christopher Silbernagel, and Daniel Kalantar

    2008-10-31

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  8. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-05-06

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  9. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Michael R. Charest, Peter Torres III, Christopher Silbernagel

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser performance verification experiments at the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electronic components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases the characterization data is used to 'correct' data images, to remove some of the nonlinearities. In order to obtain these camera characterizations, a specific data set is collected where the response to specific known inputs is recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, temporal resolution, etc., from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  10. Underwater television camera for monitoring inner side of pressure vessel

    International Nuclear Information System (INIS)

    Takayama, Kazuhiko.

    1997-01-01

    An underwater television support device, equipped with a rotatable and vertically movable underwater television camera, and a camera controlling device, which monitors the images of the reactor core photographed by the camera and controls the positions of the camera and the underwater light, are disposed on the upper lattice plate of a reactor pressure vessel. The two are electrically connected by a cable so that the inside of the reactor core can be observed rapidly. Because the camera position and the image information are managed together during inspection and observation, reproducibility is excellent. As a result, the number of steps in a periodical inspection can be reduced, shortening its duration. Since fuel assemblies need not be withdrawn over a wide reactor core region and the device can be used with the fuel assemblies left in place, it is suitable for inspection of nuclear instrumentation detectors. (N.H.)

  11. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator, which consist of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout as well. Hardware-level trigger signals will be generated by real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and computationally complex; we plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small, compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs the image processing algorithms that generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
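The quoted data volume follows directly from the frame geometry and rate; a quick check, counting 12-bit samples at their raw bit width:

```python
def raw_data_volume_tib(width, height, bits, fps, seconds):
    """Uncompressed sensor data volume in tebibytes (2**40 bytes)."""
    bytes_total = width * height * bits / 8 * fps * seconds
    return bytes_total / 2**40

half_hour = raw_data_volume_tib(1280, 1024, 12, 444, 1800)  # ~1.43
```

So the 1.43 figure corresponds to tebibytes of raw, unpacked sensor data per half hour of full-resolution readout.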

  12. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  13. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    Science.gov (United States)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulation intensified digital video camera. Sub-millimetre precision out to beyond 5 m, and 2 mm precision out to 12 m, have been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail and review the aspects that have been instrumental in achieving high-precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
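
    The indirect time-of-flight principle above reduces to a simple phase-to-range relation: range is proportional to the measured phase of the modulation envelope, modulo an ambiguity interval. The modulation frequency below is an assumed example value, not the authors' configuration:

```python
# Illustrative indirect time-of-flight range calculation.
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, f_mod_hz):
    """Range in metres from modulation phase; light travels out and back."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def ambiguity_interval(f_mod_hz):
    """Ranges separated by this distance produce identical phases."""
    return C / (2 * f_mod_hz)

f_mod = 30e6  # 30 MHz modulation, an assumed example value
print(ambiguity_interval(f_mod))         # ~5.0 m
print(range_from_phase(math.pi, f_mod))  # ~2.5 m, half the interval
```

    Because ranges separated by the ambiguity interval are indistinguishable, one common remedy for the ambiguity problem mentioned above is to combine measurements taken at two different modulation frequencies.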

  14. A fast parallel encoding scheme for the Anger camera

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1983-01-01

    An Anger camera is a position-sensitive scintillation detector with a continuous scintillator and a relatively small number of photomultipliers. Light from any one event disperses through a coupling plate to strike several photomultipliers. An air gap between the scintillator and the disperser limits the divergence of the photons by total internal reflection, and the radius of the distribution is proportional to the thickness of the disperser. The camera layout is illustrated and described. The basic unit for two-dimensional position determination is a ''receptive field'' of seven photomultipliers, the detector illustrated has three overlapping fields. In the standard Anger camera, position is determined by finding the centroid of the photomultiplier signals from weighted sums over all tubes of the array. The simplest case (a single field of seven tubes) is described first and then it is shown how this can be expanded to arbitrary size by combining simple circuits. Attention is drawn to the close analogy of this circuit to the structure (and function) of vertebrate visual cortex. (author)
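
    The weighted-sum position estimate described above can be sketched as a centroid over the photomultiplier signals; the seven-tube "receptive field" layout and the signal values below are invented for illustration:

```python
# Minimal sketch of Anger-camera position estimation: the event
# coordinate is the centroid of the PMT signals, weighted by tube position.
import math

def anger_centroid(tube_positions, signals):
    """Return the (x, y) centroid of the photomultiplier signals."""
    total = sum(signals)
    x = sum(p[0] * s for p, s in zip(tube_positions, signals)) / total
    y = sum(p[1] * s for p, s in zip(tube_positions, signals)) / total
    return x, y

# A seven-tube "receptive field": one centre tube and six neighbours
# on a unit hexagon (illustrative geometry).
positions = [(0.0, 0.0)] + [
    (math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)
]
signals = [10.0, 4.0, 2.0, 1.0, 1.0, 1.0, 2.0]  # brightest near the centre
print(anger_centroid(positions, signals))  # pulled toward the bright neighbour
```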

  15. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of their better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favourable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines for MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values, from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be calibrated anew for the SWIR range, because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and to the results for a multi-band in-house designed Vis-SWIR camera.
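
    As a minimal illustration of why target contrasts must be re-calibrated, the sketch below applies one common contrast definition (Michelson contrast) to assumed radiance values; a target manufactured for a given contrast in the visible band can present a different contrast in SWIR because material reflectances differ between the bands:

```python
# Michelson contrast of a periodic bar target (illustrative values only).
def michelson_contrast(l_max, l_min):
    """Contrast from the bright and dark radiances of the target."""
    return (l_max - l_min) / (l_max + l_min)

# Assumed radiances: 10% contrast in the visible band...
print(michelson_contrast(1.10, 0.90))
# ...but a different value in SWIR, if the dark bars reflect differently there.
print(michelson_contrast(1.25, 0.90))
```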

  16. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    Science.gov (United States)

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor with a wide range of spectral sensitivity. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted: one to evaluate the function of the ultrahigh-sensitivity camera, the other to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was very similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescent images under high illumination, in the field of laparoscopic surgery.

  17. Region of Interest Selection Interface for Wide-Angle Arthroscope

    Directory of Open Access Journals (Sweden)

    Jung Kyunghwa

    2015-01-01

    We have proposed a new interface for a wide-angle endoscope for solo surgery. The wide-angle arthroscopic view and a magnified region of interest (ROI) within the wide view are shown simultaneously. With a camera affixed to the surgical instrument, the position of the ROI can be determined by manipulating the instrument. Image features acquired by the A-KAZE approach were used to estimate the change of position of the surgical instrument, by tracking the features every time the camera moved. We examined the accuracy of ROI selection using three different images, consisting of square arrays of different sizes, and carried out phantom experiments. The success rate was best when the number of ROIs was twelve, and the rate diminished as the size of the ROIs decreased. The experimental results showed that this method, using a camera without additional sensors, satisfies the accuracy required for ROI selection, and that the interface is helpful for performing surgery with fewer assistants.

  18. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    Science.gov (United States)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  19. The Receiver System for the Ooty Wide Field Array

    Indian Academy of Sciences (India)

    The legacy Ooty Radio Telescope (ORT) is being reconfigured as a 264-element synthesis telescope, called the Ooty Wide Field Array (OWFA). Its antenna elements are the contiguous 1.92 m sections of the parabolic cylinder. It will operate in a 38-MHz frequency band centred at 326.5 MHz and will be equipped with a ...

  20. Airborne multispectral identification of individual cotton plants using consumer-grade cameras

    Science.gov (United States)

    Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...

  1. Wide-field optical coherence tomography based microangiography for retinal imaging

    Science.gov (United States)

    Zhang, Qinqin; Lee, Cecilia S.; Chao, Jennifer; Chen, Chieh-Li; Zhang, Thomas; Sharma, Utkarsh; Zhang, Anqi; Liu, Jin; Rezaei, Kasra; Pepple, Kathryn L.; Munsen, Richard; Kinyoun, James; Johnstone, Murray; van Gelder, Russell N.; Wang, Ruikang K.

    2016-02-01

    Optical coherence tomography angiography (OCTA) allows for the evaluation of functional retinal vascular networks without a need for contrast dyes. For sophisticated monitoring and diagnosis of retinal diseases, OCTA capable of providing wide-field and high definition images of retinal vasculature in a single image is desirable. We report OCTA with motion tracking through an auxiliary real-time line scan ophthalmoscope that is clinically feasible to image functional retinal vasculature in patients, with a coverage of more than 60 degrees of retina while still maintaining high definition and resolution. We demonstrate six illustrative cases with unprecedented details of vascular involvement in retinal diseases. In each case, OCTA yields images of the normal and diseased microvasculature at all levels of the retina, with higher resolution than observed with fluorescein angiography. Wide-field OCTA technology will be an important next step in augmenting the utility of OCT technology in clinical practice.

  2. The Wide Field Imager of the International X-ray Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Stefanescu, A., E-mail: astefan@hll.mpg.d [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Johannes Gutenberg-Universitaet, Inst. f. anorganische und analytische Chemie, 55099 Mainz (Germany); Bautz, M.W. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139-4307 (United States); Burrows, D.N. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Bombelli, L.; Fiorini, C. [Politecnico di Milano, Dipartimento di Elettronica e Informazione, Milano (Italy); INFN Sezione di Milano, Milano (Italy); Fraser, G. [Space Research Centre, Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH (United Kingdom); Heinzinger, K. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Herrmann, S. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Kuster, M. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Schlossgartenstr. 9, 64289 Darmstadt (Germany); Lauf, T. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Lechner, P. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Lutz, G. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Majewski, P. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Meuris, A. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Murray, S.S. [Harvard/Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States)

    2010-12-11

    The International X-ray Observatory (IXO) will be a joint X-ray observatory mission by ESA, NASA and JAXA. It will have a large effective area (3 m^2 at 1.25 keV) grazing-incidence mirror system with good angular resolution (5 arcsec at 0.1-10 keV) and will feature a comprehensive suite of scientific instruments: an X-ray Microcalorimeter Spectrometer, a High Time Resolution Spectrometer, an X-ray Polarimeter, an X-ray Grating Spectrometer, a Hard X-ray Imager and a Wide-Field Imager. The Wide Field Imager (WFI) has a field of view of 18 arcmin x 18 arcmin. It will be sensitive between 0.1 and 15 keV, offer the full angular resolution of the mirrors and good energy resolution. The WFI will be implemented as a 6-inch wafer-scale monolithic array of 1024 x 1024 pixels of 100 x 100 μm^2 size. The DEpleted P-channel Field-Effect Transistors (DEPFETs) forming the individual pixels are devices combining the functionalities of both detector and amplifier. Signal electrons are collected in a potential well below the transistor's gate, modulating the transistor current. Even when the device is powered off, the signal charge is collected and kept in the potential well below the gate until it is explicitly cleared. This makes flexible and fast readout modes possible.

  3. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar; Heide, Felix; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design

  4. First Light with a 67-Million-Pixel WFI Camera

    Science.gov (United States)

    1999-01-01

    The newest astronomical instrument at the La Silla observatory is a super-camera with no less than sixty-seven million image elements. It represents the outcome of a joint project between the European Southern Observatory (ESO) , the Max-Planck-Institut für Astronomie (MPI-A) in Heidelberg (Germany) and the Osservatorio Astronomico di Capodimonte (OAC) near Naples (Italy), and was installed at the 2.2-m MPG/ESO telescope in December 1998. Following careful adjustment and testing, it has now produced the first spectacular test images. With a field size larger than the Full Moon, the new digital Wide Field Imager is able to obtain detailed views of extended celestial objects to very faint magnitudes. It is the first of a new generation of survey facilities at ESO with which a variety of large-scale searches will soon be made over extended regions of the southern sky. These programmes will lead to the discovery of particularly interesting and unusual (rare) celestial objects that may then be studied with large telescopes like the VLT at Paranal. This will in turn allow astronomers to penetrate deeper and deeper into the many secrets of the Universe. More light + larger fields = more information! The larger a telescope is, the more light - and hence information about the Universe and its constituents - it can collect. This simple truth represents the main reason for building ESO's Very Large Telescope (VLT) at the Paranal Observatory. However, the information-gathering power of astronomical equipment can also be increased by using a larger detector with more image elements (pixels) , thus permitting the simultaneous recording of images of larger sky fields (or more details in the same field). It is for similar reasons that many professional photographers prefer larger-format cameras and/or wide-angle lenses to the more conventional ones. The Wide Field Imager at the 2.2-m telescope Because of technological limitations, the sizes of detectors most commonly in use in

  5. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
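
    The fallspeed calculation described above is a single division: the vertical separation of the trigger arrays over the traversal time. The 32 mm separation is from the instrument description; the trigger times below are invented for illustration:

```python
# Fallspeed from the MASC trigger geometry: two near-IR trigger arrays
# separated vertically by 32 mm; speed is that distance over the
# time taken to traverse it.
ARRAY_SEPARATION_M = 0.032  # 32 mm, from the instrument description

def fallspeed(t_upper_s, t_lower_s):
    """Fallspeed in m/s from upper- and lower-array trigger times."""
    return ARRAY_SEPARATION_M / (t_lower_s - t_upper_s)

print(fallspeed(0.000, 0.032))  # 1.0 m/s, a typical snowflake fallspeed
```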

  6. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation-event position-coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible. This yields less distortion in the field of view and improved spatial resolution compared to conventional planar-photocathode gamma cameras.

  7. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. The resulting underwater camera has two lights, dimensions of 370 x 400 x 328 mm and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mm diameter pin-hole can be recognized at 1.5 m distance with 20x zoom. Noise caused by radiation below 15 Gy/h did not affect the images. (author)

  8. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research

  9. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  10. Camera-Model Identification Using Markovian Transition Probability Matrix

    Science.gov (United States)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of the proposed statistical model is demonstrated by large-scale experimental results.
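
    A minimal sketch of the feature construction described above: a horizontal difference array is computed, values are clipped to [-T, T], and a first-order Markov transition probability matrix is estimated from horizontally adjacent difference pairs. The tiny block of values is invented; real camera-model work runs this on JPEG-domain 2-D arrays of Y and Cb, in four directions:

```python
# Thresholded Markov transition probability matrix of a difference array.
T = 2  # threshold; differences outside [-T, T] are clipped

def difference_array(block):
    """Horizontal difference array of a 2-D block of values."""
    return [[row[j] - row[j + 1] for j in range(len(row) - 1)] for row in block]

def clip(v):
    return max(-T, min(T, v))

def transition_matrix(diff):
    """P(next difference | current difference) over horizontal neighbours."""
    size = 2 * T + 1
    counts = [[0] * size for _ in range(size)]
    for row in diff:
        for a, b in zip(row, row[1:]):
            counts[clip(a) + T][clip(b) + T] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

block = [
    [52, 55, 61, 59, 79],
    [62, 59, 55, 104, 94],
]
m = transition_matrix(difference_array(block))
print(m[0])  # transition probabilities from "current difference = -T"
```

    In a classifier, all (2T+1)^2 matrix entries from each direction would be concatenated into one feature vector per image.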

  11. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on the camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in the research work, there are few discussions on the comparison of them. Therefore, we make a detailed analysis about their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array in our work that can be used as a parallel camera array, as well as a converged camera array and take some images and videos with it to identify the threshold.
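
    For the parallel-camera case, horizontal parallax follows the standard disparity relation d = f * b / Z for a point at depth Z, with focal length f and baseline b; since disparity falls off as 1/Z, parallax differences become negligible beyond some shooting distance (the paper finds about 7 m for its converged rig). The numbers below are illustrative, not the paper's:

```python
# Horizontal parallax of a parallel stereo pair vs. shooting distance.
def disparity_m(focal_m, baseline_m, depth_m):
    """On-sensor horizontal parallax of a point at the given depth."""
    return focal_m * baseline_m / depth_m

f = 0.005  # 5 mm focal length (assumed)
b = 0.065  # 65 mm baseline, roughly the human interocular distance
for z in (1.0, 7.0, 20.0):
    print(f"depth {z:5.1f} m -> disparity {disparity_m(f, b, z) * 1e6:.1f} um")
```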

  12. Short-channel field-effect transistors with 9-atom and 13-atom wide graphene nanoribbons.

    Science.gov (United States)

    Llinas, Juan Pablo; Fairbrother, Andrew; Borin Barin, Gabriela; Shi, Wu; Lee, Kyunghoon; Wu, Shuang; Yong Choi, Byung; Braganza, Rohit; Lear, Jordan; Kau, Nicholas; Choi, Wonwoo; Chen, Chen; Pedramrazi, Zahra; Dumslaff, Tim; Narita, Akimitsu; Feng, Xinliang; Müllen, Klaus; Fischer, Felix; Zettl, Alex; Ruffieux, Pascal; Yablonovitch, Eli; Crommie, Michael; Fasel, Roman; Bokor, Jeffrey

    2017-09-21

    Bottom-up synthesized graphene nanoribbons and graphene nanoribbon heterostructures have promising electronic properties for high-performance field-effect transistors and ultra-low-power devices such as tunneling field-effect transistors. However, the short length and wide band gap of these graphene nanoribbons have prevented the fabrication of devices with the desired performance and switching behavior. Here, by fabricating short-channel (L_ch ~ 20 nm) devices with a thin, high-κ gate dielectric and a 9-atom wide (0.95 nm) armchair graphene nanoribbon as the channel material, we demonstrate field-effect transistors with high on-current (I_on > 1 μA at V_d = -1 V) and high I_on/I_off ~ 10^5 at room temperature. We find that the performance of these devices is limited by tunneling through the Schottky barrier at the contacts, and we observe an increase in the transparency of the barrier by increasing the gate field near the contacts. Our results thus demonstrate successful fabrication of high-performance short-channel field-effect transistors with bottom-up synthesized armchair graphene nanoribbons.
    Graphene nanoribbons show promise for high-performance field-effect transistors; however, they often suffer from short lengths and wide band gaps. Here, the authors use a bottom-up synthesis approach to fabricate 9- and 13-atom wide ribbons, enabling short-channel transistors with a 10^5 on-off current ratio.

  13. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as heart rate variability (HRV). HRV is presumed to arise from interactions between the parasympathetic and sympathetic nervous systems; therefore, it is sometimes used as an indicator of an individual's stress level. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital-sign measurements possible using just a video recording of any exposed skin (such as a person's face). Current signal processing methods that extract HRV by peak detection perform well for contact-based systems but poorly for iPPG signals, mainly because they are sensitive to the large noise sources often present in iPPG data and are not robust to the motion artifacts common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combining and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal, and outperforms other current methods of HRV estimation. Ground-truth data for validation were obtained from an FDA-approved pulse oximeter. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying skin-tone scenarios, an improvement in error of 14%. In high-motion scenarios such as reading, watching and talking, the error was 10 milliseconds.
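
    Once inter-beat intervals (IBIs) have been extracted, whether from a contact device or from a camera-based pipeline like the one described, HRV is summarized with standard time-domain statistics such as SDNN and RMSSD. The IBI values below are invented for illustration:

```python
# Standard time-domain HRV statistics computed from inter-beat intervals.
import math

def sdnn(ibis_ms):
    """Standard deviation of the inter-beat intervals (SDNN)."""
    mean = sum(ibis_ms) / len(ibis_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in ibis_ms) / len(ibis_ms))

def rmssd(ibis_ms):
    """Root mean square of successive IBI differences (RMSSD)."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

ibis = [812, 790, 805, 821, 798, 810]  # illustrative IBIs in milliseconds
print(round(sdnn(ibis), 1), "ms SDNN,", round(rmssd(ibis), 1), "ms RMSSD")
```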

  14. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  15. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    Science.gov (United States)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with a film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with a film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full frame (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  16. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    Science.gov (United States)

    Verveer, P. J.; Gemkow, M. J.; Jovin, T. M.

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum-likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
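
    As a concrete instance of the maximum-likelihood case in this classification, the Richardson-Lucy iteration is the standard unregularized estimator under the Poisson noise model. The 1-D sketch below uses synthetic data (two point sources blurred by a Gaussian PSF) and illustrates the multiplicative update only; it is not the authors' implementation.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution: the maximum-likelihood estimate
    under a Poisson noise model, 1-D for brevity."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)          # data / model
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic example: two point sources blurred by a Gaussian PSF.
truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=200)
```

    In practice a regularization term (e.g. Good's roughness, as the paper recommends) is added to the update to suppress the noise amplification that plagues the unregularized iteration.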

  17. Design of Microwave Camera for Breast Cancer Detection

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy

    2008-01-01

    is then used to reconstruct an image, which consists of a spatial distribution of the complex permittivity in the imaging domain. Using this image the cancer tissue can be detected due to its dielectric property contrast compared to normal tissue. The instrument employs a multichannel high sensitive...... superheterodyne architecture, enabling parallel coherent measurements. In this way, mechanical scanning, which is commonly used in measurements of an electromagnetic field distribution, is avoided. The system presented is the first reported 3D microwave breast imaging camera with parallel signal detection....... The hardware operates in the frequency range 0.3 – 3 GHz. The noise floor is below -140 dBm over the bandwidth of the system. The dynamic range depends on the available incident power range and is limited by the channel to channel isolation of 140 dB. The work presented in this thesis encompasses a wide range...

  18. A portable Si/CdTe Compton camera and its applications to the visualization of radioactive substances

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shin'ichiro, E-mail: takeda@astro.isas.jaxa.jp [Institute of Space and Astronautical Science (ISAS)/JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Harayama, Atsushi [Institute of Space and Astronautical Science (ISAS)/JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Ichinohe, Yuto [Institute of Space and Astronautical Science (ISAS)/JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Odaka, Hirokazu [Institute of Space and Astronautical Science (ISAS)/JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Watanabe, Shin; Takahashi, Tadayuki [Institute of Space and Astronautical Science (ISAS)/JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Tajima, Hiroyasu [Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601 (Japan); Genba, Kei; Matsuura, Daisuke; Ikebuchi, Hiroshi; Kuroda, Yoshikatsu [Mitsubishi Heavy Industries, 1200 Higashi-Tanaka, Komaki, Aichi 485-8561 (Japan); Tomonaka, Tetsuya [Mitsubishi Heavy Industry, 2-1-1 Shinhama, Arai-cho, Takasago, Hyogo 676-8686 (Japan)

    2015-07-01

    Gamma-ray imagers with the potential for visualizing the distribution of radioactive materials are required in the fields of astrophysics, medicine, nuclear applications, and homeland security. Based on the technology of the Si/CdTe Compton camera, we have manufactured the first commercial Compton camera for practical use. Through field tests in Fukushima, we demonstrated that the camera is capable of hot spot detection and the evaluation of radioactive decontamination.

  19. A portable Si/CdTe Compton camera and its applications to the visualization of radioactive substances

    International Nuclear Information System (INIS)

    Takeda, Shin'ichiro; Harayama, Atsushi; Ichinohe, Yuto; Odaka, Hirokazu; Watanabe, Shin; Takahashi, Tadayuki; Tajima, Hiroyasu; Genba, Kei; Matsuura, Daisuke; Ikebuchi, Hiroshi; Kuroda, Yoshikatsu; Tomonaka, Tetsuya

    2015-01-01

    Gamma-ray imagers with the potential for visualizing the distribution of radioactive materials are required in the fields of astrophysics, medicine, nuclear applications, and homeland security. Based on the technology of the Si/CdTe Compton camera, we have manufactured the first commercial Compton camera for practical use. Through field tests in Fukushima, we demonstrated that the camera is capable of hot spot detection and the evaluation of radioactive decontamination.

  20. Design and tests of a portable mini gamma camera

    International Nuclear Information System (INIS)

    Sanchez, F.; Benlloch, J.M.; Escat, B.; Pavon, N.; Porras, E.; Kadi-Hanifi, D.; Ruiz, J.A.; Mora, F.J.; Sebastia, A.

    2004-01-01

    Design optimization, manufacturing, and tests, both laboratory and clinical, of a portable gamma camera for medical applications are presented. This camera, based on a continuous scintillation crystal and a position-sensitive photomultiplier tube, has an intrinsic spatial resolution of ≅2 mm, an energy resolution of 13% at 140 keV, and linearities of 0.28 mm (absolute) and 0.15 mm (differential), with a useful field of view of 4.6 cm diameter. Our camera can image small organs with high efficiency and can thus address the demand for devices for specific clinical applications such as thyroid and sentinel node scintigraphy, as well as scintimammography and radio-guided surgery. The main advantages of this gamma camera with respect to those previously reported in the literature are high portability, low cost, and low weight (2 kg), with no significant loss of sensitivity or spatial resolution. All the electronic components are packed inside the mini gamma camera, and no external electronic devices are required. The camera is connected only through the universal serial bus port to a portable personal computer (PC), where specific software allows control of both the camera parameters and the measuring process, displaying the acquired image on the PC in real time. In this article, we present the camera and describe the procedures that have led us to choose its configuration. Laboratory and clinical tests are presented together with the diagnostic capabilities of the gamma camera.

  1. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype picosecond X-ray streak camera has been developed and tested by the Commissariat à l’Énergie Atomique et aux Énergies Alternatives to meet the specific needs of the Laser MegaJoule. The dynamic range of this instrument was measured with picosecond X-ray pulses generated by the interaction of a laser beam with a copper target. The required value of 100 is reached only in configurations combining the slowest sweep speed with optimization of the streak-tube electron throughput through an appropriate choice of the high voltages applied to its electrodes.

  2. Deep Rapid Optical Follow-Up of Gravitational Wave Sources with the Dark Energy Camera

    Science.gov (United States)

    Cowperthwaite, Philip

    2018-01-01

    The detection of an electromagnetic counterpart associated with a gravitational wave detection by the Advanced LIGO and Virgo interferometers is one of the great observational challenges of our time. The large localization regions and potentially faint counterparts require the use of wide-field, large-aperture telescopes. As a result, the Dark Energy Camera, a 3.3 sq deg CCD imager on the 4-m Blanco telescope at CTIO in Chile, is the most powerful instrument for this task in the Southern Hemisphere. I will report on the results from our joint program between the community and members of the Dark Energy Survey to conduct rapid and efficient follow-up of gravitational wave sources. This includes systematic searches for optical counterparts, as well as developing an understanding of contaminating sources on timescales not normally probed by traditional untargeted supernova surveys. I will additionally comment on the immense science gains to be made by a joint detection and discuss future prospects from the standpoint of both next-generation wide-field telescopes and next-generation gravitational wave detectors.

  3. Performance tests of two portable mini gamma cameras for medical applications

    International Nuclear Information System (INIS)

    Sanchez, F.; Fernandez, M. M.; Gimenez, M.; Benlloch, J. M.; Rodriguez-Alvarez, M. J.; Garcia de Quiros, F.; Lerche, Ch. W.; Pavon, N.; Palazon, J. A.; Martinez, J.; Sebastia, A.

    2006-01-01

    We have developed two prototypes of portable gamma cameras for medical applications based on a previous prototype designed and tested by our group. These cameras use a CsI(Na) continuous scintillation crystal coupled to the new flat-panel-type multianode position-sensitive photomultiplier tube, H8500 from Hamamatsu Photonics. One of the prototypes, mainly intended for intrasurgical use, has a field of view of 44×44 mm², and weighs 1.2 kg. Its intrinsic resolution is better than 1.5 mm and its energy resolution is about 13% at 140 keV. The second prototype, mainly intended for osteological, renal, mammary, and endocrine (thyroid, parathyroid, and suprarenal) scintigraphies, weighs a total of 2 kg. Its average spatial resolution is 2 mm; it has a field of view of 95×95 mm², with an energy resolution of about 15% at 140 keV. The main advantages of these gamma camera prototypes with respect to those previously reported in the literature are high portability and low weight, with no significant loss of sensitivity and spatial resolution. All the electronic components are packed inside the mini gamma cameras, and no external electronic devices are required. The cameras are only connected through the universal serial bus port to a portable PC. In this paper, we present the design of the cameras and describe the procedures that have led us to choose their configuration together with the most important performance features of the cameras. For one of the prototypes, clinical tests on melanoma patients are presented and images are compared with those obtained with a conventional camera.

  4. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Biewer, T.; Johnson, D.; Zweben, S.J.; Nishino, N.; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence-driven particle fluxes. To visualize the turbulence and the associated impurity line emission near the lower x-point region, a new tangential observation port has recently been installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R = 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/s Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. The edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  5. An assessment of the effectiveness of high definition cameras as remote monitoring tools for dolphin ecology studies.

    Directory of Open Access Journals (Sweden)

    Estênio Guimarães Paiva

    Research involving marine mammals often requires costly field programs. This paper assessed whether the benefits of using cameras outweigh the implications of having personnel performing marine mammal detection in the field. The efficacy of video and still cameras to detect Indo-Pacific bottlenose dolphins (Tursiops aduncus) in Fremantle Harbour (Western Australia) was evaluated, with consideration of how environmental conditions affect detectability. The cameras were set on a tower in the Fremantle Port channel and videos were perused at 1.75 times normal speed. Images from the cameras were used to estimate the position of dolphins at the water's surface. Dolphin detections ranged from 5.6 m to 463.3 m for the video camera, and from 10.8 m to 347.8 m for the still camera. Detection range proved satisfactory when compared to the distances at which dolphins would be detected by field observers. The relative effect of environmental conditions on detectability was considered by fitting a Generalised Estimating Equations (GEE) model with Beaufort sea state, level of glare and their interactions as predictors and a temporal auto-correlation structure. The best-fit model indicated that level of glare had an effect, with more intense periods of glare corresponding to lower occurrences of observed dolphins. However, this effect was not large (-0.264) and the parameter estimate was associated with a large standard error (0.113). The limited field of view was the main constraint, in that cameras can only be applied to detections of animals observed rather than counts of individuals. However, the use of cameras was effective for long-term monitoring of the occurrence of dolphins, outweighing the costs and reducing the health and safety risks to field personnel. This study showed that cameras could be effectively implemented onshore for research such as studying changes in habitat use in response to development and construction activities.
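
    For intuition on the fitted glare effect: a GEE with a binomial link and an independence working correlation reduces to ordinary logistic regression, which the sketch below fits by iteratively reweighted least squares (IRLS) on simulated detections. The data and the -0.8 slope are illustrative stand-ins; the study's actual model also included Beaufort, interactions, and a temporal auto-correlation structure.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Logistic regression via iteratively reweighted least squares;
    equivalent to a binomial GEE with independence working correlation."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # predicted detection prob.
        w = mu * (1.0 - mu)                        # IRLS weights
        z = eta + (y - mu) / np.maximum(w, 1e-9)   # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Synthetic detections: higher glare lowers detection probability.
rng = np.random.default_rng(0)
glare = rng.uniform(0.0, 3.0, 2000)
p_true = 1.0 / (1.0 + np.exp(-(1.0 - 0.8 * glare)))
detected = (rng.uniform(size=2000) < p_true).astype(float)
X = np.column_stack([np.ones_like(glare), glare])
beta = fit_logistic_irls(X, detected)   # beta[1] recovers a negative glare effect
```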

  6. Force Limited Random Vibration Test of TESS Camera Mass Model

    Science.gov (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
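
    The semi-empirical force spectral density mentioned above is commonly taken (following NASA-HDBK-7004-style practice) as S_FF(f) = C² M0² S_AA(f) up to a break frequency near the fundamental mode, rolled off as (f0/f)² above it. The sketch below uses made-up numbers for the mass, C², and the acceleration spec; it illustrates the recipe, not the TESS values.

```python
import numpy as np

# Hypothetical inputs (not TESS values): total load mass M0,
# semi-empirical constant C^2, a flat acceleration spectral density
# S_AA, and a break frequency f0 near the fundamental mode.
M0 = 25.0        # kg
C2 = 4.0         # dimensionless "fuzzy factor" C^2
S_AA = 0.04      # g^2/Hz, flat input acceleration spec
f0 = 80.0        # Hz

f = np.array([20.0, 80.0, 160.0, 320.0])  # Hz, sample frequencies

# Semi-empirical force limit, in (kg*g)^2/Hz:
# flat C^2 * M0^2 * S_AA below f0, rolled off as (f0/f)^2 above it.
S_FF = C2 * M0**2 * S_AA * np.minimum(1.0, (f0 / f) ** 2)
```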

  7. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Human detection and tracking has been a prominent research area for many scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose can affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  8. Multi‐angular observations of vegetation indices from UAV cameras

    DEFF Research Database (Denmark)

    Sobejano-Paz, Veronica; Wang, Sheng; Jakobsen, Jakob

    Unmanned aerial vehicles (UAVs) are found as an alternative to the classical manned aerial photogrammetry, which can be used to obtain environmental data or as a complementary solution to other methods (Nex and Remondino, 2014). Although UAVs have coverage limitations, they have better resolution...... (Berni et al., 2009), hyper spectral camera (Burkart et al., 2015) and photometric elevation mapping sensor (Shahbazi et al., 2015) among others. Therefore, UAVs can be used in many fields such as agriculture, forestry, archeology, architecture, environment and traffic monitoring (Nex and Remondino, 2014......). In this study, the UAV used is a hexacopter s900 equipped with a Global Positioning System (GPS) and two cameras; a digital RGB photo camera and a multispectral camera (MCA), with a resolution of 5472 x 3648 pixels and 1280 x 1024 pixels, respectively. In terms of applications, traditional methods using...

  9. Analysis of filament statistics in fast camera data on MAST

    Science.gov (United States)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape-off layer (SOL) density profiles and thus control first-wall erosion, impurity flushing and the coupling of radio-frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved, from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time-averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.

  10. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup, where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (target) in new frames. A colour pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object's pattern changes due to its motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.
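
    The essence of a block search over epipolar geometry can be sketched in a few lines: given the fundamental matrix F relating two views, the match for a block in camera 1 is sought by SSD comparison only along the corresponding epipolar line in camera 2. The toy example below uses the rectified-stereo F (horizontal epipolar lines) and synthetic images; it is an illustration of the general technique, not the paper's implementation.

```python
import numpy as np

def epipolar_block_search(img1, img2, p1, F, half=2):
    """Find the best match for the block centred at p1=(x, y) in img1
    by SSD search along the epipolar line l' = F @ [x, y, 1] in img2."""
    x, y = p1
    template = img1[y - half:y + half + 1, x - half:x + half + 1]
    a, b, c = F @ np.array([x, y, 1.0])       # line: a*u + b*v + c = 0
    best, best_uv = np.inf, None
    for u in range(half, img2.shape[1] - half):
        v = int(round(-(a * u + c) / b))      # row on the epipolar line
        if v < half or v >= img2.shape[0] - half:
            continue
        block = img2[v - half:v + half + 1, u - half:u + half + 1]
        ssd = np.sum((template - block) ** 2)
        if ssd < best:
            best, best_uv = ssd, (u, v)
    return best_uv

# Rectified-stereo fundamental matrix: epipolar lines are image rows.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

rng = np.random.default_rng(1)
img1 = rng.uniform(size=(40, 60))
img2 = rng.uniform(size=(40, 60))
# Copy the 5x5 block centred at (x=20, y=15) in img1 to (x=33, y=15)
# in img2, i.e. a purely horizontal disparity of 13 pixels.
img2[13:18, 31:36] = img1[13:18, 18:23]
match = epipolar_block_search(img1, img2, (20, 15), F)
```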

  11. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
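
    One common way to turn a pedestrian detection into the real-world position such a module reports is to intersect the ray through the detection's foot point with the ground plane, given the camera's height and tilt. The sketch below assumes a simple pinhole model with hypothetical intrinsics; the abstract does not describe the SAPIENT module's actual method at this level of detail.

```python
import numpy as np

def foot_point_to_ground(u, v, K, cam_height, tilt_deg):
    """Map the image foot point (u, v) of a detection to world (x, y) on
    the ground plane z = 0. Camera sits at (0, 0, cam_height), looking
    along world +y, pitched down by tilt_deg."""
    t = np.radians(tilt_deg)
    # Columns: camera x (right), y (down), z (forward) axes in world frame.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, -np.sin(t), np.cos(t)],
                  [0.0, -np.cos(t), -np.sin(t)]])
    ray = R @ np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction in world
    s = -cam_height / ray[2]        # scale at which the ray hits z = 0
    return (s * ray)[:2]

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Optical axis (principal point) at 45 deg tilt from 10 m up
# hits the ground 10 m out along y.
ground_xy = foot_point_to_ground(320.0, 240.0, K, 10.0, 45.0)
```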

  12. INTRODUCING NOVEL GENERATION OF HIGH ACCURACY CAMERA OPTICAL-TESTING AND CALIBRATION TEST-STANDS FEASIBLE FOR SERIES PRODUCTION OF CAMERAS

    Directory of Open Access Journals (Sweden)

    M. Nekouei Shahraki

    2015-12-01

    The recent advances in the field of computer vision have opened the door to many opportunities for taking advantage of these techniques and technologies in many fields and applications. The high demand for such systems in today's and future vehicles implies a high production volume of video cameras. These criteria make it critical to design test systems that deliver fast and accurate calibration and optical-testing capabilities. In this paper we introduce a new generation of test-stands delivering high calibration quality in single-shot calibration of fisheye surround-view cameras. This incorporates important geometric features from bundle-block calibration, delivers very high (sub-pixel) calibration accuracy, makes possible a very fast calibration procedure (a few seconds), and realizes autonomous calibration via machines. We have used the geometrical shape of a spherical helix (type: 3D spherical spiral) with special geometrical characteristics, having a uniform radius corresponding to uniform motion. This geometrical feature was mechanically realized using three-dimensional truncated icosahedrons, which practically allow the implementation of a spherical helix on multiple surfaces. Furthermore, the test-stand enables us to perform many other important optical tests, such as stray-light testing, enabling us to evaluate certain qualities of the camera optical module.
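
    The spherical-helix target geometry can be parametrized directly: points keep a constant radius while the azimuth winds uniformly as the polar angle sweeps from pole to pole. A minimal sketch with arbitrary illustrative radius and winding count (not the test-stand's actual parameters):

```python
import numpy as np

def spherical_helix(radius, turns, n_points):
    """Points on a spherical helix (3D spherical spiral): constant
    radius, azimuth winding `turns` times while the polar angle
    sweeps 0..pi."""
    theta = np.linspace(0.0, np.pi, n_points)      # polar angle
    phi = turns * 2.0 * np.pi * theta / np.pi      # azimuth winds uniformly
    return radius * np.stack([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)], axis=1)

pts = spherical_helix(radius=1.0, turns=8, n_points=500)
```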

  13. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm² pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  14. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    Science.gov (United States)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications and the development of the next satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras in the Beijing area. The maximum, minimum, average and standard deviation of each longitudinal overlap of PMS1 and PMS2 were then calculated to evaluate each camera's dynamic range consistency, and the same four statistics of each latitudinal overlap of PMS1 and PMS2 were calculated to evaluate the dynamic range consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values, containing rich information on ground objects. In general, the dynamic range consistency between images from a single camera is close, though with small differences, and the same holds for the dual cameras; the consistency between a single camera's images is better than that between the two cameras.
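
    The evaluation described above amounts to four reductions (maximum, minimum, mean, standard deviation) computed over whole images and over overlap regions. A minimal sketch with random stand-in data; the strip slice is illustrative and not the actual Gaofen-2 overlap geometry:

```python
import numpy as np

def dn_statistics(a):
    """Maximum, minimum, mean and standard deviation of a DN array."""
    return {"max": int(a.max()), "min": int(a.min()),
            "mean": float(a.mean()), "std": float(a.std())}

rng = np.random.default_rng(42)
image = rng.integers(0, 1024, size=(600, 800))   # stand-in 10-bit image

whole = dn_statistics(image)
# Illustrative "longitudinal overlap": a vertical strip of the image.
overlap = dn_statistics(image[:, 700:800])
```

    Comparing `whole` and `overlap` dictionaries across the two cameras' overlapping regions gives exactly the kind of consistency check the paper performs.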

  15. Calibration and verification of thermographic cameras for geometric measurements

    Science.gov (United States)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data; black bodies are used for these purposes. Moreover, the geometric information is important for many fields, such as architecture, civil engineering and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was chosen as the material property because both thermographic and visible cameras are able to detect it. Two thermographic cameras from the manufacturers Flir and Nec, and one visible camera from Jai, are calibrated, verified and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As is to be expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better
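
    Geometric verification against sphere targets like those on the artefact typically reduces to fitting sphere centres to measured surface points and comparing the centre-to-centre distances against the CMM reference. A linear least-squares sphere fit is sketched below on synthetic points; the centre, radius and point count are hypothetical, not the paper's data.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: from |p - c|^2 = r^2 one gets
    |p|^2 = 2 c.p + (r^2 - |c|^2), linear in c and t = r^2 - |c|^2."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic measurements on a 20 mm radius sphere centred at (5, -3, 12).
rng = np.random.default_rng(7)
d = rng.normal(size=(400, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)   # random unit directions
pts = np.array([5.0, -3.0, 12.0]) + 20.0 * d
centre, radius = fit_sphere(pts)
```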

  16. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. Altogether, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.
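The quoted geometry follows directly from the pinhole layout; a minimal sketch of the two angles (the 25 cm image-plane half-width is our assumption, chosen to reproduce the quoted ±38°, and is not stated in the abstract):

```python
import math

PINHOLE_TO_PLANE_CM = 32.0   # image-plane distance behind the aperture (quoted)
DETECTOR_DIAM_CM = 7.62      # NaI(Tl) scintillator diameter (quoted)
HALF_PLANE_CM = 25.0         # assumed image-plane half-width (matches the quoted +/-38 deg)

# Half field of view: angle subtended by the image-plane edge at the pinhole.
half_fov_deg = math.degrees(math.atan(HALF_PLANE_CM / PINHOLE_TO_PLANE_CM))

# Angular resolution: angle subtended by one detector at the pinhole (quoted ~14 deg).
ang_res_deg = math.degrees(math.atan(DETECTOR_DIAM_CM / PINHOLE_TO_PLANE_CM))
```

With these numbers, half_fov_deg comes out near 38° and ang_res_deg near 13-14°, consistent with the abstract.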

  17. Deployment of the Hobby-Eberly Telescope wide-field upgrade

    Science.gov (United States)

    Hill, Gary J.; Drory, Niv; Good, John M.; Lee, Hanshin; Vattiat, Brian L.; Kriel, Herman; Ramsey, Jason; Bryant, Randy; Elliot, Linda; Fowler, Jim; Häuser, Marco; Landiau, Martin; Leck, Ron; Odewahn, Stephen; Perry, Dave; Savage, Richard; Schroeder Mrozinski, Emily; Shetrone, Matthew; DePoy, D. L.; Prochaska, Travis; Marshall, J. L.; Damm, George; Gebhardt, Karl; MacQueen, Phillip J.; Martin, Jerry; Armandroff, Taft; Ramsey, Lawrence W.

    2016-07-01

    The Hobby-Eberly Telescope (HET) is an innovative large telescope, located in West Texas at the McDonald Observatory. The HET operates with a fixed segmented primary and has a tracker, which moves the four-mirror corrector and prime focus instrument package to track the sidereal and non-sidereal motions of objects. We have completed a major multi-year upgrade of the HET that has substantially increased the pupil size to 10 meters and the field of view to 22 arcminutes by replacing the corrector, tracker, and prime focus instrument package. The new wide field HET will feed the revolutionary integral field spectrograph called VIRUS, in support of the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), a new low resolution spectrograph (LRS2), an upgraded high resolution spectrograph (HRS2), and later the Habitable Zone Planet Finder (HPF). The upgrade is being commissioned and this paper discusses the completion of the installation, the commissioning process and the performance of the new HET.

  18. Common aperture multispectral spotter camera: Spectro XR

    Science.gov (United States)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XRTM is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer / illuminator and a laser spot tracker. A rigid structure, vibration damping and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's features include a multi-target video tracker, precise boresight, a strap-on IMU, an embedded moving map, a geodetic calculation suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR and MWIR band channels. Both the EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common-aperture design facilitates superior DRI performance in EO and SWIR, in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as a see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full-HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  19. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high speed dimensional measuring system capable of scanning a thin film network, and determining if there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage, and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal

  20. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which shows gamma-ray distribution utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energy. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd₃Al₂Ga₃O₁₂ (Ce:GAGG) scintillator and multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a ¹³⁷Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources (²²Na [511 keV], ¹³⁷Cs [662 keV], and ⁵⁴Mn [834 keV]).
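The "kinematics of Compton scattering" the record refers to reconstruct the scattering angle from the two energy deposits; a minimal sketch of the standard formula (function name is ours):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_angle_deg(e_scatterer_keV, e_absorber_keV):
    """Scattering angle reconstructed from the energies deposited in the
    scatterer and absorber layers, via standard Compton kinematics:
    cos(theta) = 1 - me*c^2 * (1/E_scattered - 1/E_incident)."""
    e0 = e_scatterer_keV + e_absorber_keV          # incident photon energy
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_absorber_keV - 1.0 / e0)
    return math.degrees(math.acos(cos_theta))
```

For example, a 662 keV photon scattering through about 30° deposits roughly 97.9 keV in the scatterer and 564.1 keV in the absorber, and the formula recovers the 30° angle.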

  1. A New CCD Camera at the Molėtai Observatory

    Directory of Open Access Journals (Sweden)

    Zdanavičius J.

    2003-12-01

    Full Text Available The results of the first testing of a new CCD camera of the Molėtai Observatory are given. The linearity and the flat field corrections of good accuracy are determined by using shifted star field exposures.
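The flat-field correction mentioned in the record is conventionally applied by normalising with a dark-subtracted flat; a minimal sketch (helper name and reduction recipe are the standard CCD convention, not details from this record):

```python
import numpy as np

def apply_flat_field(raw, dark, flat):
    """Standard CCD reduction: dark-subtract the science frame, then divide
    by the dark-subtracted flat, rescaled to the flat's mean level so a
    uniformly illuminated scene maps to a uniform image."""
    flat_c = flat - dark
    return (raw - dark) * flat_c.mean() / flat_c
```

Applied to a frame of a uniform source seen through the same pixel-gain pattern as the flat, the output is constant across the detector.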

  2. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that have the potential to change our lives. There are many attractive applications in fields such as security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum; the reason was the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited to clothing penetration, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technological development in this band has produced a few interesting devices. Yet even though the development of THz cameras is an active topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image-quality enhancement and image fusion applied to images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under common types of clothing.

  3. Development of stable monolithic wide-field Michelson interferometers.

    Science.gov (United States)

    Wan, Xiaoke; Ge, Jian; Chen, Zhiping

    2011-07-20

    Bulk wide-field Michelson interferometers are very useful for high-precision applications in remote sensing and astronomy. A stable monolithic Michelson interferometer is a key element in high-precision radial velocity (RV) measurements for extrasolar planet searches and studies. Thermal stress analysis shows that matching coefficients of thermal expansion (CTEs) is a critical requirement for ensuring interferometer stability. This requirement leads to a novel design using BK7 and LAK7 materials, such that the monolithic interferometer is free from thermal distortion. The processes of design, fabrication, and testing of the interferometers are described in detail. In performance evaluations, the field angle is typically 23.8° and the thermal sensitivity is typically -2.6×10⁻⁶/°C near 550 nm, which corresponds to ∼800 m/s/°C on the RV scale. Low-cost interferometer products have been commissioned in multiple RV instruments, and they are delivering high stability over long-term operations. © 2011 Optical Society of America
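The conversion from the fractional thermal drift to the RV scale is just multiplication by the speed of light, since a fractional change in interferometer delay mimics a Doppler shift of the same fractional size; a one-line sketch using the quoted numbers:

```python
C_M_S = 2.998e8          # speed of light, m/s
FRAC_PER_DEG_C = 2.6e-6  # fractional optical-path drift per degC near 550 nm (quoted)

# RV-scale drift per degree: c times the fractional delay change.
rv_drift_m_s_per_C = FRAC_PER_DEG_C * C_M_S  # ~780 m/s/degC, consistent with the quoted ~800
```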

  4. Infrared detectors and test technology of cryogenic camera

    Science.gov (United States)

    Yang, Xiaole; Liu, Xingxin; Xing, Mailing; Ling, Long

    2016-10-01

    Cryogenic cameras, which are widely used in deep-space detection, cool down the optical system and support structure by cryogenic refrigeration technology, thereby improving sensitivity. The characteristics and design points of the infrared detector are discussed in combination with the camera's characteristics. At the same time, cryogenic-background test systems for the chip and for the detector assembly are established: the chip test system is based on a variable-temperature multilayer Dewar, and the assembly test system is based on a target and background simulator in a thermal vacuum environment. The core of the tests is to establish a cryogenic background. The non-uniformity, dead-pixel ratio and noise obtained from the tests are given finally. The established test systems support the design and calculation of infrared systems.
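The figures of merit named in the record (non-uniformity, dead-pixel ratio) can be estimated from a single flat-illumination frame; a crude sketch, where the 3-sigma dead-pixel cut is our assumption rather than the paper's criterion:

```python
import numpy as np

def frame_stats(frame, dead_sigma=3.0):
    """Fractional RMS non-uniformity and dead/hot-pixel ratio for one
    flat-illumination frame. The dead_sigma outlier cut is illustrative."""
    mean, std = frame.mean(), frame.std()
    nonuniformity = std / mean                                   # fractional RMS
    dead_ratio = float(np.mean(np.abs(frame - mean) > dead_sigma * std))
    return nonuniformity, dead_ratio
```

On a 10×10 frame that is uniform except for one dead (zero) pixel, the dead-pixel ratio comes out as 1%.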

  5. The first GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    De Franco, A.; Allan, D.; Armstrong, T.; Ashton, T.; Balzer, A.; Berge, D.; Bose, R.; Brown, A.M.; Buckley, J.; Chadwick, P.M.; Cooke, P.; Cotter, G.; Daniel, M.K.; Funk, S.; Greenshaw, T.; Hinton, J.; Kraus, M.; Lapington, J.; Molyneux, P.; Moore, P.; Nolan, S.; Okumura, A.; Ross, D.; Rulten, C.; Schmoll, J.; Schoorlemmer, H.; Stephan, M.; Sutcliffe, P.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Varner, G.; Watson, J.; Zink, A.

    2015-01-01

    The Gamma Cherenkov Telescope (GCT) is proposed to be part of the Small Size Telescope (SST) array of the Cherenkov Telescope Array (CTA). The GCT dual-mirror optical design allows the use of a compact camera of diameter roughly 0.4 m. The curved focal plane is equipped with 2048 pixels of ~0.2° angular size, resulting in a field of view of ~9°. The GCT camera is designed to record the flashes of Cherenkov light from electromagnetic cascades, which last only a few tens of nanoseconds. Modules based on custom ASICs provide the required fast electronics, facilitating sampling and digitisation as well as first level of triggering. The first GCT camera prototype is currently being commissioned in the UK. On-telescope tests are planned later this year. Here we give a detailed description of the camera prototype and present recent progress with testing and commissioning.

  6. Design of comprehensive general maintenance service system of aerial reconnaissance camera

    Directory of Open Access Journals (Sweden)

    Li Xu

    2016-01-01

    Full Text Available Aiming at the lack of support equipment for airborne reconnaissance cameras and the differences between depot- and field-level maintenance and between camera models, a design scheme for a comprehensive general maintenance service system based on a PC-104 bus architecture and an ARM wireless test module is proposed, following the ATE design approach. The scheme uses embedded technology to design the system, which meets the stated requirements. By using a classified-switching technique, the hardware resources are reasonably extended, and general support for the various types of aerial reconnaissance cameras is realized. Using the concept of “wireless test”, the test interface is extended to realize comprehensive support of the aerial reconnaissance camera in the field. Application shows that the system works stably, has good generality and practicability, and has broad application prospects.

  7. Initial inflight calibration for Hayabusa2 optical navigation camera (ONC) for science observations of asteroid Ryugu

    Science.gov (United States)

    Suzuki, H.; Yamada, M.; Kouyama, T.; Tatsumi, E.; Kameda, S.; Honda, R.; Sawada, H.; Ogawa, N.; Morota, T.; Honda, C.; Sakatani, N.; Hayakawa, M.; Yokota, Y.; Yamamoto, Y.; Sugita, S.

    2018-01-01

    Hayabusa2, the first sample return mission to a C-type asteroid, was launched by the Japan Aerospace Exploration Agency (JAXA) on December 3, 2014 and will arrive at the asteroid in the middle of 2018 to collect samples from its surface, which may contain both hydrated minerals and organics. The optical navigation camera (ONC) system on board Hayabusa2 consists of three individual framing CCD cameras: ONC-T for a telescopic nadir view, ONC-W1 for a wide-angle nadir view, and ONC-W2 for a wide-angle slant view; these will be used to observe the surface of Ryugu. The cameras will be used to measure the global asteroid shape, local morphologies, and visible spectroscopic properties. Thus, image data obtained by the ONC will provide essential information for selecting landing (sampling) sites on the asteroid. This study reports the results of the initial inflight calibration, based on observations of the Earth, Mars, the Moon, and stars, to verify and characterize the optical performance of the ONC, such as the flat-field sensitivity, spectral sensitivity, point-spread function (PSF), distortion, and stray light of ONC-T, and the distortion of ONC-W1 and W2. We found some potential problems that may influence our science observations. These include changes in the flat-field sensitivity for all bands from the values measured in the pre-flight calibration, and the existence of stray light that arises under certain conditions of spacecraft attitude with respect to the sun. Countermeasures for these problems were evaluated using data obtained during the initial in-flight calibration. The results of our inflight calibration indicate that the error of spectroscopic measurements around 0.7 μm using the 0.55, 0.70, and 0.86 μm bands of ONC-T can be lower than 0.7% after these countermeasures and pixel binning. This result suggests that our ONC-T should be able to detect the typical strength (∼3%) of the serpentine absorption band often found on CM chondrites and low albedo asteroids with ≥ 4
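The three-band measurement of the 0.7 μm absorption described above reduces to comparing the 0.70 μm reflectance against a linear continuum interpolated between the 0.55 and 0.86 μm bands; a minimal sketch of this standard band-depth convention (function name and example values are ours):

```python
def band_depth_07um(r055, r070, r086):
    """Absorption depth at 0.70 um relative to a linear continuum drawn
    between the 0.55 and 0.86 um band reflectances."""
    w = (0.70 - 0.55) / (0.86 - 0.55)          # interpolation weight toward 0.86 um
    continuum = (1.0 - w) * r055 + w * r086    # continuum reflectance at 0.70 um
    return 1.0 - r070 / continuum
```

A ~3% serpentine band on a flat continuum gives band_depth_07um(1.0, 0.97, 1.0) ≈ 0.03, comfortably above the quoted 0.7% measurement error.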

  8. THE COSMIC INFRARED BACKGROUND EXPERIMENT (CIBER): THE WIDE-FIELD IMAGERS

    Energy Technology Data Exchange (ETDEWEB)

    Bock, J.; Battle, J. [Jet Propulsion Laboratory (JPL), National Aeronautics and Space Administration (NASA), Pasadena, CA 91109 (United States); Sullivan, I. [Department of Physics, University of Washington, Seattle, WA 98195 (United States); Arai, T.; Matsumoto, T.; Matsuura, S.; Tsumura, K. [Department of Space Astronomy and Astrophysics, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Cooray, A.; Mitchell-Wynne, K.; Smidt, J. [Center for Cosmology, University of California, Irvine, CA 92697 (United States); Hristov, V.; Lam, A. C.; Levenson, L. R.; Mason, P. [Department of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Keating, B.; Renbarger, T. [Department of Physics, University of California, San Diego, San Diego, CA 92093 (United States); Kim, M. G. [Department of Physics and Astronomy, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, D. H. [Institute of Astronomy and Astrophysics, Academia Sinica, National Taiwan University, Taipei 10617, Taiwan (China); Nam, U. W. [Korea Astronomy and Space Science Institute (KASI), Daejeon 305-348 (Korea, Republic of); Suzuki, K. [Instrument Development Group of Technical Center, Nagoya University, Nagoya, Aichi 464-8602 (Japan); and others

    2013-08-15

    We have developed and characterized an imaging instrument to measure the spatial properties of the diffuse near-infrared extragalactic background light (EBL) in a search for fluctuations from z > 6 galaxies during the epoch of reionization. The instrument is part of the Cosmic Infrared Background Experiment (CIBER), designed to observe the EBL above Earth's atmosphere during a suborbital sounding rocket flight. The imaging instrument incorporates a 2° × 2° field of view to measure fluctuations over the predicted peak of the spatial power spectrum at 10 arcmin, and 7″ × 7″ pixels, to remove lower redshift galaxies to a depth sufficient to reduce the low-redshift galaxy clustering foreground below instrumental sensitivity. The imaging instrument employs two cameras with Δλ/λ ≈ 0.5 bandpasses centered at 1.1 μm and 1.6 μm to spectrally discriminate reionization extragalactic background fluctuations from local foreground fluctuations. CIBER operates at wavelengths where the electromagnetic spectrum of the reionization extragalactic background is thought to peak, and complements fluctuation measurements by AKARI and Spitzer at longer wavelengths. We have characterized the instrument in the laboratory, including measurements of the sensitivity, flat-field response, stray light performance, and noise properties. Several modifications were made to the instrument following a first flight in 2009 February. The instrument performed to specifications in three subsequent flights, and the scientific data are now being analyzed.

  9. Performance analysis for automated gait extraction and recognition in multi-camera surveillance

    OpenAIRE

    Goffredo, Michela; Bouchrika, Imed; Carter, John N.; Nixon, Mark S.

    2010-01-01

    Many studies have confirmed that gait analysis can be used as a new biometrics. In this research, gait analysis is deployed for people identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and thei...

  10. Phenocams bridge the gap between field and satellite observations in an arid grassland ecosystem

    Science.gov (United States)

    Near surface (i.e., camera) and satellite remote sensing metrics have become widely used indicators of plant growing seasons. While robust linkages have been established between field metrics and ecosystem exchange in many land cover types, assessment of how well remotely-derived season start and en...

  11. INFN Camera demonstrator for the Cherenkov Telescope Array

    CERN Document Server

    Ambrosi, G; Aramo, C.; Bertucci, B.; Bissaldi, E.; Bitossi, M.; Brasolin, S.; Busetto, G.; Carosi, R.; Catalanotti, S.; Ciocci, M.A.; Consoletti, R.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Palma, F.; Desiante, R.; Di Girolamo, T.; Di Giulio, C.; Doro, M.; D'Urso, D.; Ferraro, G.; Ferrarotto, F.; Gargano, F.; Giglietto, N.; Giordano, F.; Giraudo, G.; Iacovacci, M.; Ionica, M.; Iori, M.; Longo, F.; Mariotti, M.; Mastroianni, S.; Minuti, M.; Morselli, A.; Paoletti, R.; Pauletta, G.; Rando, R.; Fernandez, G. Rodriguez; Rugliancich, A.; Simone, D.; Stella, C.; Tonachini, A.; Vallania, P.; Valore, L.; Vagelli, V.; Verzi, V.; Vigorito, C.

    2015-01-01

    The Cherenkov Telescope Array is a world-wide project for a new generation of ground-based Cherenkov telescopes of the Imaging class with the aim of exploring the highest energy region of the electromagnetic spectrum. With two planned arrays, one for each hemisphere, it will guarantee a good sky coverage in the energy range from a few tens of GeV to hundreds of TeV, with improved angular resolution and a sensitivity in the TeV energy region better by one order of magnitude than the currently operating arrays. In order to cover this wide energy range, three different telescope types are envisaged, with different mirror sizes and focal plane features. In particular, for the highest energies a possible design is a dual-mirror Schwarzschild-Couder optical scheme, with a compact focal plane. A silicon photomultiplier (SiPM) based camera is being proposed as a solution to match the dimensions of the pixel (angular size of ~ 0.17 degrees). INFN is developing a camera demonstrator made by 9 Photo Sensor Modules (PSMs...

  12. Wide Field Radio Transient Surveys

    Science.gov (United States)

    Bower, Geoffrey

    2011-04-01

    The time domain of the radio wavelength sky has been only sparsely explored. Nevertheless, serendipitous discovery and results from limited surveys indicate that there is much to be found on timescales from nanoseconds to years and at wavelengths from meters to millimeters. These observations have revealed unexpected phenomena such as rotating radio transients and coherent pulses from brown dwarfs. Additionally, archival studies have revealed an unknown class of radio transients without radio, optical, or high-energy hosts. The new generation of centimeter-wave radio telescopes such as the Allen Telescope Array (ATA) will exploit wide fields of view and flexible digital signal processing to systematically explore radio transient parameter space, as well as lay the scientific and technical foundation for the Square Kilometer Array. Known unknowns that will be the target of future transient surveys include orphan gamma-ray burst afterglows, radio supernovae, tidally-disrupted stars, flare stars, and magnetars. While probing the variable sky, these surveys will also provide unprecedented information on the static radio sky. I will present results from three large ATA surveys (the Fly's Eye survey, the ATA Twenty CM Survey (ATATS), and the Pi GHz Survey (PiGSS)) and several small ATA transient searches. Finally, I will discuss the landscape and opportunities for future instruments at centimeter wavelengths.

  13. Distributed FPGA-based smart camera architecture for computer vision applications

    OpenAIRE

    Bourrasset, Cédric; Maggiani, Luca; Sérot, Jocelyn; Berry, François; Pagano, Paolo

    2013-01-01

    International audience; Smart camera networks (SCN) raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms or power management. Furthermore, application logic in SCN is not centralized but spread among network nodes meaning that each node must have to process images to extract significant features, and aggregate data to understand the surrounding environment. In this context, smart camera have first embedded general pu...

  14. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.
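The orthogonal-frequency idea above relies on the fact that modulation signals at distinct integer numbers of cycles per exposure window average to zero against each other, so one camera's light source does not bias another's correlation measurement. A toy demonstration (the frequencies are illustrative, not the paper's actual modulation rates):

```python
import numpy as np

# One exposure window, normalised to [0, 1), sampled uniformly.
t = np.linspace(0.0, 1.0, 100000, endpoint=False)
cam_a = np.sin(2 * np.pi * 3 * t)  # camera A: 3 cycles per window
cam_b = np.sin(2 * np.pi * 5 * t)  # camera B: 5 cycles per window

cross = np.mean(cam_a * cam_b)  # ~0: B contributes no bias to A's correlation
auto = np.mean(cam_a * cam_a)   # 0.5: A still correlates with its own reference
```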

  15. The PETRRA positron camera: design, characterization and results of a physical evaluation

    International Nuclear Information System (INIS)

    Divoli, A; Flower, M A; Erlandsson, K; Reader, A J; Evans, N; Meriaux, S; Ott, R J; Stephenson, R; Bateman, J E; Duxbury, D M; Spill, E J

    2005-01-01

    The PETRRA positron camera is a large-area (600 mm x 400 mm sensitive area) prototype system that has been developed through a collaboration between the Rutherford Appleton Laboratory and the Institute of Cancer Research/Royal Marsden Hospital. The camera uses novel technology involving the coupling of 10 mm thick barium fluoride scintillating crystals to multi-wire proportional chambers filled with a photosensitive gas. The performance of the camera is reported here and shows that the present system has a 3D spatial resolution of ∼7.5 mm full-width-half-maximum (FWHM), a timing resolution of ∼3.5 ns (FWHM), a total coincidence count-rate performance of at least 80-90 kcps and a randoms-corrected sensitivity of ∼8-10 kcps kBq⁻¹ ml. For an average concentration of 3 kBq ml⁻¹ as expected in a patient it is shown that, for the present prototype, ∼20% of the data would be true events. The count-rate performance is presently limited by the obsolete off-camera read-out electronics and computer system and the sensitivity by the use of thin (10 mm thick) crystals. The prototype camera has limited scatter rejection and no intrinsic shielding and is, therefore, susceptible to high levels of scatter and out-of-field activity when imaging patients. All these factors are being addressed to improve the performance of the camera. The large axial field-of-view of 400 mm makes the camera ideally suited to whole-body PET imaging. We present examples of preliminary clinical images taken with the prototype camera. Overall, the results show the potential for this alternative technology justifying further development

  16. Adaptation Computing Parameters of Pan-Tilt-Zoom Cameras for Traffic Monitoring

    Directory of Open Access Journals (Sweden)

    Ya Lin WU

    2014-01-01

    Full Text Available Closed-circuit television (CCTV) cameras have been widely used in recent years for traffic monitoring and surveillance applications. CCTV cameras can be used to automatically extract real-time traffic parameters by means of image processing and tracking technologies. In particular, pan-tilt-zoom (PTZ) cameras provide flexible view selection as well as a wider observation range, which allows traffic parameters to be calculated accurately. Calibrating the parameters of PTZ cameras therefore plays an important role in vision-based traffic applications. However, in a specific traffic task such as locating the license plate of an illegally parked car, the parameters of the PTZ camera have to be updated according to the position and distance of the vehicle. The proposed traffic monitoring system uses an ordinary webcam together with a PTZ camera. The vanishing point of the traffic lane lines in the pixel coordinate system is obtained from the fixed webcam. The parameters of the PTZ camera are initialized from the monitoring distance, the specific objective, and the vanishing point. The pixel coordinates of the illegally parked car are then used to update the PTZ camera parameters, recover the real-world position of the car, and compute its distance. The results show that the error between the measured and true distance is only 0.2064 meters.
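The distance computation sketched in the abstract, from the lane-line vanishing point and the camera geometry, can be illustrated with a flat-ground pinhole model (all parameter names and numbers below are ours, not the paper's):

```python
import math

def ground_distance_m(cam_height_m, focal_px, horizon_row_px, target_row_px):
    """Distance along a flat road to a ground point imaged at target_row_px,
    for a camera mounted cam_height_m above the road whose road-plane
    vanishing line (horizon) projects to horizon_row_px. Simple pinhole
    model; image rows increase downward."""
    # Depression angle from the horizon direction down to the target point.
    depression = math.atan((target_row_px - horizon_row_px) / focal_px)
    return cam_height_m / math.tan(depression)
```

For a camera 5 m above the road with a 1000 px focal length and the horizon at row 200, a ground point imaged at row 700 lies 10 m away.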

  17. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and the direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of the refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
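The refractive projection model described above hinges on tracing rays through the glass with the vector form of Snell's law; a minimal sketch of that single refraction step (indices, normals and the helper name are illustrative, not the paper's implementation):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector Snell's law: refract direction d crossing a surface with unit
    normal n (pointing toward the incident side), going from refractive
    index n1 to n2. Returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -float(np.dot(n, d))                 # cosine of incidence angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)   # sin^2 of transmission angle
    if sin2_t > 1.0:
        return None                              # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

A ray entering glass (index 1.5) at 45° refracts so that the transmitted sine is sin(45°)/1.5, and the returned direction remains a unit vector; chaining two such steps (air→glass, glass→air) traces a ray through the checkerboard.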

  18. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  19. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  20. What about getting physiological information into dynamic gamma camera studies

    International Nuclear Information System (INIS)

    Kiuru, A.; Nickles, R. J.; Holden, J. E.; Polcyn, R. E.

    1976-01-01

    A general technique has been developed for the multiplexing of time-dependent analog signals into the individual frames of a gamma camera dynamic function study. A pulse train, frequency-modulated by the physiological signal, is capacitively coupled to the preamplifier servicing any one of the outer phototubes of the camera head. These negative tail pulses imitate photoevents occurring at a point outside of the camera field of view, chosen to occupy a data cell in an unused corner of the computer-stored square image. By defining a region of interest around this cell, the resulting time-activity curve displays the physiological variable in temporal synchrony with the radiotracer distribution. (author)
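A minimal numerical sketch of the idea, with illustrative rates and function names of my own: the physiological signal modulates the count rate deposited in the reserved corner cell, and the ROI time-activity curve inverts the modulation to recover the signal.

```python
import numpy as np

def encode_frames(signal, f0, df, frame_time, rng=None):
    """For each sample of a physiological signal (normalized to
    [0, 1]), deposit a Poisson number of counts in a reserved
    'corner' data cell, with a mean rate frequency-modulated
    between f0 and f0 + df counts per second."""
    if rng is None:
        rng = np.random.default_rng(0)
    rates = f0 + df * np.asarray(signal, dtype=float)
    return rng.poisson(rates * frame_time)

def decode_frames(counts, f0, df, frame_time):
    """Recover the signal from the corner-cell time-activity
    curve by inverting the rate modulation."""
    return (np.asarray(counts) / frame_time - f0) / df
```

With rates well above the frame rate, Poisson counting noise is a few percent and the decoded curve tracks the original signal frame by frame.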

  1. The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope

    Science.gov (United States)

    Monfardini, Alessandro

    2018-01-01

    We have constructed and deployed a multi-thousand-pixel dual-band (150 and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera to image an instantaneous field of view of 6.5 arcmin, configurable to map the linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal plane arrays based on Kinetic Inductance Detectors (KID), and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institut de Radioastronomie Millimétrique) telescope at Pico Veleta, together with preliminary science-grade results.

  2. Mapping the Tidal Destruction of the Hercules Dwarf: A Wide-field DECam Imaging Search for RR Lyrae Stars

    Science.gov (United States)

    Garling, Christopher; Willman, Beth; Sand, David J.; Hargis, Jonathan; Crnojević, Denija; Bechtol, Keith; Carlin, Jeffrey L.; Strader, Jay; Zou, Hu; Zhou, Xu; Nie, Jundan; Zhang, Tianmeng; Zhou, Zhimin; Peng, Xiyan

    2018-01-01

    We investigate the hypothesized tidal disruption of the Hercules ultra-faint dwarf galaxy (UFD). Previous tidal disruption studies of the Hercules UFD have been hindered by the high degree of foreground contamination in the direction of the dwarf. We bypass this issue by using RR Lyrae stars, which are standard candles with a very low field-volume density at the distance of Hercules. We use wide-field imaging from the Dark Energy Camera on CTIO to identify candidate RR Lyrae stars, supplemented with observations taken in coordination with the Beijing–Arizona Sky Survey on the Bok Telescope. Combining color, magnitude, and light-curve information, we identify three new RR Lyrae stars associated with Hercules. All three of these new RR Lyrae stars lie outside its published tidal radius. When considered with the nine RR Lyrae stars already known within the tidal radius, these results suggest that a substantial fraction of Hercules’ stellar content has been stripped. With this degree of tidal disruption, Hercules is an interesting case between a visibly disrupted dwarf (such as the Sagittarius dwarf spheroidal galaxy) and one in dynamic equilibrium. The degree of disruption also shows that we must be more careful with the ways we determine object membership when estimating dwarf masses in the future. One of the three discovered RR Lyrae stars sits along the minor axis of Hercules, but over two tidal radii away. This type of debris is consistent with recent models that suggest Hercules’ orbit is aligned with its minor axis.

  3. Exact optics - III. Schwarzschild's spectrograph camera revised

    Science.gov (United States)

    Willstrop, R. V.

    2004-03-01

    Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.

  4. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  5. Breast-specific gamma-imaging: molecular imaging of the breast using 99mTc-sestamibi and a small-field-of-view gamma-camera.

    Science.gov (United States)

    Jones, Elizabeth A; Phan, Trinh D; Blanchard, Deborah A; Miley, Abbe

    2009-12-01

    Breast-specific gamma-imaging (BSGI), also known as molecular breast imaging, is breast scintigraphy using a small-field-of-view gamma-camera and (99m)Tc-sestamibi. There are many different types of breast cancer, and many have characteristics making them challenging to detect by mammography and ultrasound. BSGI is a cost-effective, highly sensitive and specific technique that complements other imaging modalities currently being used to identify malignant lesions in the breast. Using the current Society of Nuclear Medicine guidelines for breast scintigraphy, Legacy Good Samaritan Hospital began conducting BSGI, breast scintigraphy with a breast-optimized gamma-camera. In our experience, optimal imaging has been conducted in the Breast Center by a nuclear medicine technologist. In addition, the breast radiologists read the BSGI images in correlation with the mammograms, ultrasounds, and other imaging studies performed. By modifying the current Society of Nuclear Medicine protocol to adapt it to the practice of breast scintigraphy with these new systems and by providing image interpretation in conjunction with the other breast imaging studies, our center has found BSGI to be a valuable adjunctive procedure in the diagnosis of breast cancer. The development of a small-field-of-view gamma-camera, designed to optimize breast imaging, has resulted in improved detection capabilities, particularly for lesions less than 1 cm. Our experience with this procedure has proven to aid in the clinical work-up of many of our breast patients. After reading this article, the reader should understand the history of breast scintigraphy, the pharmaceutical used, patient preparation and positioning, imaging protocol guidelines, clinical indications, and the role of breast scintigraphy in breast cancer diagnosis.

  6. Wide-field Spatio-Spectral Interferometry: Bringing High Resolution to the Far- Infrared

    Science.gov (United States)

    Leisawitz, David

    Wide-field spatio-spectral interferometry combines spatial and spectral interferometric data to provide integral field spectroscopic information over a wide field of view. This technology breaks through a mission cost barrier that stands in the way of resolving spatially and measuring spectroscopically at far-infrared wavelengths objects that will lead to a deep understanding of planetary system and galaxy formation processes. A space-based far-IR interferometer will combine Spitzer's superb sensitivity with a two order of magnitude gain in angular resolution, and with spectral resolution in the thousands. With the possible exception of detector technology, which is advancing with support from other research programs, the greatest challenge for far-IR interferometry is to demonstrate that the interferometer will actually produce the images and spectra needed to satisfy mission science requirements. With past APRA support, our team has already developed the highly specialized hardware testbed, image projector, computational model, and image construction software required for the proposed effort, and we have access to an ideal test facility.

  7. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

    In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view without initial training. To effectively exchange the tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve the broadcast effectiveness in a dense saturated camera network. Experimental results for the multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost approaches the theoretical performance upper bound if cameras are within each other's transmission range. In more general scenarios, e.g., in case of hidden node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially when the number of competing nodes increases.
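The abstract does not give iPro's update rule, but the motivation for adapting broadcast probability to node density can be seen in the classic slotted-broadcast analysis: with n contending nodes each transmitting with probability p, a slot carries a successful (collision-free) broadcast only when exactly one node transmits, and that probability peaks near p = 1/n.

```python
def success_prob(n, p):
    """Probability that exactly one of n contending nodes
    broadcasts in a slot, i.e. a collision-free broadcast:
    n * p * (1 - p)^(n - 1)."""
    return n * p * (1 - p) ** (n - 1)
```

For n = 10, a fixed aggressive probability like 0.5 wastes almost every slot on collisions, while p near 0.1 maximizes useful slots, which is why the broadcast probability should shrink as the network gets denser.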

  8. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Directory of Open Access Journals (Sweden)

    Neil A Switz

    Full Text Available The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  9. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Science.gov (United States)

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
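The resolution versus field-of-view point can be made concrete with a little arithmetic. A reversed, identical phone lens yields roughly unit magnification, so the object-space field of view is about the sensor size; the specific numbers below (sensor dimensions, pixel pitch) are illustrative, not taken from the paper.

```python
def fov_and_sampling(sensor_w_mm, sensor_h_mm, pixel_um, mag):
    """Object-space field of view and pixel-limited sampling for a
    microscope of magnification `mag`. A reversed, identical phone
    lens gives mag ~ 1, so the FOV equals the sensor size."""
    fov = (sensor_w_mm / mag, sensor_h_mm / mag)
    sample_um = pixel_um / mag        # object-space pixel size
    nyquist_um = 2 * sample_um        # smallest Nyquist-sampled period
    return fov, nyquist_um
```

For an assumed 4.6 mm x 3.5 mm sensor with 1.4 um pixels at unit magnification, the whole ~16 mm^2 sensor area images the sample with 2.8 um pixel-limited sampling, a far larger field than a conventional microscope at comparable sampling.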

  10. Design, manufacturing and testing of a four-mirror telescope with a wide field of view

    Science.gov (United States)

    Gloesener, P.; Wolfs, F.; Lemagne, F.; Cola, M.; Flebus, C.; Blanchard, G.; Kirschner, V.

    2017-11-01

    Regarding Earth observation missions, it has become unnecessary to point out the importance of making available wide field of view optical instruments for the purpose of spectral imaging. Taking advantage of the pushbroom instrument concept with its linear field across the on-ground track, it is in particular relevant to consider front-end optical configurations that involve an all-reflective system presenting inherent and dedicated advantages such as achromaticity, unobscuration and compactness, while ensuring the required image quality over the whole field. The attractiveness of the concept must be balanced with respect to the state-of-the-art mirror manufacturing technologies as the need for fast, broadband and wide field systems increases the constraints put on the feasibility of each individual component. As part of an ESTEC contract, AMOS designed, manufactured and tested a breadboard of a four-mirror wide field telescope for typical Earth observation superspectral missions. The initial purpose of the development was to assess the feasibility of a telecentric spaceborne three-mirror system covering an unobscured rectangular field of view of 26 degrees across track (ACT) by 6 degrees along track (ALT) with a f-number of 3.5 and a focal length of 500 mm and presenting an overall image quality better than 100 nm RMS wavefront error within the whole field.

  11. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.

  12. A Wide Field Auroral Imager (WFAI) for low Earth orbit missions

    Directory of Open Access Journals (Sweden)

    N. P. Bannister

    2007-03-01

    Full Text Available A comprehensive understanding of the solar wind interaction with Earth's coupled magnetosphere-ionosphere system requires an ability to observe the charged particle environment and auroral activity from the same platform, generating particle and photon image data which are matched in time and location. While unambiguous identification of the particles giving rise to the aurora requires a Low Earth Orbit satellite, obtaining adequate spatial coverage of aurorae with the relatively limited field of view of current spaceborne auroral imaging systems requires much higher orbits. A goal for future satellite missions, therefore, is the development of compact, wide field-of-view optics permitting high spatial and temporal resolution ultraviolet imaging of the aurora from small spacecraft in low polar orbit. Microchannel plate optics offer a method of achieving the required performance. We describe a new, compact instrument design which can observe a wide field of view with the required spatial resolution. We report the focusing of 121.6 nm radiation using a spherically-slumped, square-pore microchannel plate with a focal length of 32 mm and an F number of 0.7. Measurements are compared with detailed ray-trace simulations of imaging performance. The angular resolution is 2.7±0.2° for the prototype, corresponding to a footprint ~33 km in diameter for an aurora altitude of 110 km and a spacecraft altitude of 800 km. In preliminary analysis, a more recent optic has demonstrated a full width at half maximum of 5.0±0.3 arcminutes, corresponding to a footprint of ~1 km from the same spacecraft altitude. We further report the imaging properties of a convex microchannel plate detector with planar resistive anode readout; this detector, whose active surface has a radius of curvature of only 100 mm, is shown to meet the spatial resolution and sensitivity requirements of the new wide field auroral imager (WFAI).
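The quoted footprints can be checked with small-angle geometry, assuming nadir viewing: the footprint at the emission layer is the range from the spacecraft down to the aurora times the angular resolution in radians.

```python
import math

def footprint_km(angular_res_deg, sat_alt_km, aurora_alt_km):
    """Footprint diameter at the aurora altitude for a nadir-viewing
    imager: range to the emission layer times the angular resolution
    (small-angle approximation)."""
    range_km = sat_alt_km - aurora_alt_km
    return range_km * math.radians(angular_res_deg)
```

With a 690 km range (800 km orbit, 110 km aurora), 2.7° gives about 33 km and 5.0 arcmin gives about 1 km, matching the figures in the abstract.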

  13. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  14. Nondestructive evaluation using dipole model analysis with a scan type magnetic camera

    Science.gov (United States)

    Lee, Jinyi; Hwang, Jiseong

    2005-12-01

    Large structures such as nuclear power, thermal power, and chemical and petroleum refining plants are drawing interest with regard to extending component life economically, given the harsh environment created by high pressure, high temperature, and fatigue, the need to secure safety against corrosion, and operation beyond the designated life span. Therefore, technology that accurately calculates and predicts the degradation and defects of aging materials is extremely important. Among the available methods, nondestructive testing using magnetic methods is effective for predicting and evaluating defects on or near the surface of ferromagnetic structures. It is important to estimate the distribution of magnetic field intensity for magnetic methods applicable to industrial nondestructive evaluation. A magnetic camera provides the distribution of a quantitative magnetic field with a homogeneous lift-off and spatial resolution. The distribution of the magnetic field can be interpreted when a dipole model is introduced. This study proposes an algorithm for nondestructive evaluation using dipole model analysis with a scan-type magnetic camera. Numerical and experimental considerations for the quantitative evaluation of cracks of several sizes and shapes using magnetic field images from the magnetic camera are examined.
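The dipole model can be sketched as the standard point-dipole field evaluated on a constant lift-off plane, mimicking the magnetic camera's scan; the moment value and geometry below are illustrative, not from the study.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_bz(x, y, lift_off, m=(0.0, 0.0, 1e-6)):
    """Vertical field component of a point magnetic dipole at the
    origin, sampled on the scan plane z = lift_off (the magnetic
    camera's constant sensor height):
    B = mu0/(4*pi) * (3(m.rhat)rhat - m) / |r|^3."""
    m = np.asarray(m)
    r = np.stack([x, y, np.full_like(x, lift_off)], axis=-1)
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    rhat = r / rn
    b = MU0 / (4 * np.pi) * (
        3 * rhat * np.sum(m * rhat, axis=-1, keepdims=True) - m
    ) / rn ** 3
    return b[..., 2]
```

Directly above the dipole the field falls off as the cube of the lift-off, which is why a homogeneous, small lift-off is essential for quantitative crack evaluation.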

  15. A wide angle view imaging diagnostic with all reflective, in-vessel optics at JET

    Energy Technology Data Exchange (ETDEWEB)

    Clever, M. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Jülich GmbH, Association EURATOM-FZJ, 52425 Jülich (Germany); Arnoux, G.; Balshaw, N. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Garcia-Sanchez, P. [Laboratorio Nacional de Fusion, Asociacion EURATOM-CIEMAT, Madrid (Spain); Patel, K. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Sergienko, G. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Jülich GmbH, Association EURATOM-FZJ, 52425 Jülich (Germany); Soler, D. [Winlight System, 135 rue Benjamin Franklin, ZA Saint Martin, F-84120 Pertuis (France); Stamp, M.F.; Williams, J.; Zastrow, K.-D. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► A new wide angle view camera system has been installed at JET. ► The system helps to protect the ITER-like wall plasma facing components from damage. ► The coverage of the vessel by camera observation systems was increased. ► The system comprises an in-vessel part with parabolic and flat mirrors. ► The required image quality for plasma monitoring and wall protection was delivered. -- Abstract: A new wide angle view camera system has been installed at JET in preparation for the ITER-like wall campaigns. It considerably increases the coverage of the vessel by camera observation systems and thereby helps to protect the plasma facing components, which are more fragile than carbon, from damage. The system comprises an in-vessel part with parabolic and flat mirrors and an ex-vessel part with beam splitters, lenses and cameras. The system delivered the image quality required for plasma monitoring and wall protection.

  16. INVESTIGATING THE SUITABILITY OF MIRRORLESS CAMERAS IN TERRESTRIAL PHOTOGRAMMETRIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    A. H. Incekara

    2017-11-01

    Full Text Available Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the beam coming through the lens reaches the sensor differently. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on the rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera was more self-consistent with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  17. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the beam coming through the lens reaches the sensor differently. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on the rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera was more self-consistent with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  18. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
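The LED difference trick is straightforward to sketch. Assuming one LED per image quadrant (the four extreme corners), the bright pixels of the on/off difference image yield the corner locations automatically; this is my own minimal reconstruction, not the authors' code.

```python
import numpy as np

def four_corner_leds(img_on, img_off, thresh=50):
    """Automatic replacement for the four manual corner clicks:
    difference the LED-on and LED-off frames, threshold, and take
    the intensity-weighted centroid of the bright pixels in each
    image quadrant (one LED per quadrant assumed). Returns four
    (x, y) centroids."""
    diff = img_on.astype(float) - img_off.astype(float)
    diff[diff < thresh] = 0.0
    h, w = diff.shape
    corners = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            q = diff[rows, cols]
            ys, xs = np.nonzero(q)
            wts = q[ys, xs]
            cy = (ys * wts).sum() / wts.sum() + rows.start
            cx = (xs * wts).sum() / wts.sum() + cols.start
            corners.append((cx, cy))
    return corners
```

Because everything except the LEDs cancels in the difference image, the centroids are insensitive to the checkerboard contrast and ambient lighting that make manual clicking tedious.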

  19. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  20. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    Directory of Open Access Journals (Sweden)

    Yi-Ge Fu

    2014-01-01

Full Text Available As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, a cost as low as possible, etc.) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results show the effectiveness of the proposed PI-BPSO algorithm.
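
    The general idea of binary PSO for camera placement can be illustrated with a toy version. This is not the paper's PI-BPSO: the coverage sets, cost weight, and update rule below are a generic textbook binary PSO over invented data, where each bit selects one candidate camera and the fitness rewards coverage while penalizing camera count.

    ```python
    # Illustrative binary PSO for camera placement (toy data, not PI-BPSO).
    import math, random

    random.seed(0)

    coverage = [  # ground cells covered by each of 4 candidate cameras
        {0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5},
    ]
    LAMBDA = 0.5  # cost weight per selected camera

    def fitness(bits):
        covered = set()
        for b, cells in zip(bits, coverage):
            if b:
                covered |= cells
        return len(covered) - LAMBDA * sum(bits)

    def bpso(n_particles=10, iters=50):
        n = len(coverage)
        swarm = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
        vel = [[0.0] * n for _ in range(n_particles)]
        pbest = [s[:] for s in swarm]
        gbest = max(swarm, key=fitness)[:]
        for _ in range(iters):
            for i, s in enumerate(swarm):
                for d in range(n):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] += r1 * (pbest[i][d] - s[d]) + r2 * (gbest[d] - s[d])
                    # sigmoid of velocity gives the probability of the bit being 1
                    s[d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
                if fitness(s) > fitness(pbest[i]):
                    pbest[i] = s[:]
                if fitness(s) > fitness(gbest):
                    gbest = s[:]
        return gbest

    best = bpso()
    print(best, fitness(best))
    ```

    On this toy instance the optimum selects cameras 0 and 2, covering all six cells with the smallest penalty; the search space has only 16 states, so the swarm finds a high-fitness selection quickly.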

  1. UNUSUALLY WIDE BINARIES: ARE THEY WIDE OR UNUSUAL?

    International Nuclear Information System (INIS)

    Kraus, Adam L.; Hillenbrand, Lynne A.

    2009-01-01

We describe an astrometric and spectroscopic campaign to confirm the youth and association of a complete sample of candidate wide companions in Taurus and Upper Sco. Our survey found 15 new binary systems (three in Taurus and 12 in Upper Sco) with separations of 3''-30'' (500-5000 AU) among all of the known members with masses of 2.5-0.012 M_sun. The total sample of 49 wide systems in these two regions conforms to only some expectations from field multiplicity surveys. Higher-mass stars have a higher frequency of wide binary companions, and there is a marked paucity of wide binary systems near the substellar regime. However, the separation distribution appears to be log-flat, rather than declining as in the field, and the mass ratio distribution is more biased toward similar-mass companions than the initial mass function or the field G-dwarf distribution. The maximum separation also shows no evidence of a limit at ∼ M_sun. We attribute this result to the post-natal dynamical sculpting that occurs for most field systems; our binary systems will escape to the field intact, but most field stars are formed in denser clusters and undergo significant dynamical evolution. In summary, only wide binary systems with total masses ∼ M_sun appear to be 'unusually wide'.

  2. Wide Field Infra-Red Survey Telescope (WFIRST) 2.4-Meter Mission Study

    Science.gov (United States)

    Content, D.; Aaron, K.; Alplanalp, L.; Anderson, K.; Capps, R.; Chang, Z.; Dooley, J.; Egerman, R.; Goullioud, R.; Klein, D.; et al.

    2013-01-01

    The most recent study of the Wide Field Infrared Survey Telescope (WFIRST) mission is based on reuse of an existing 2.4m telescope. This study was commissioned by NASA to examine the potential science return and cost effectiveness of WFIRST by using this significantly larger aperture telescope. We review the science program envisioned by the WFIRST 2012-2013 Science Definition Team (SDT), an overview of the mission concept, and the telescope design and status. Comparisons against the previous 1.3m and reduced cost 1.1m WFIRST design concepts are discussed. A significant departure from past point designs is the option for serviceability and the geostationary orbit location which enables servicing and replacement instrument insertion later during mission life. Other papers at this conference provide more in depth discussion of the wide field instrument and the optional exoplanet imaging coronagraph instrument.

  3. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  4. THERMAL EFFECTS ON CAMERA FOCAL LENGTH IN MESSENGER STAR CALIBRATION AND ORBITAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Burmeister

    2018-04-01

Full Text Available We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundred images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1·T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, and the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in
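
    The linear model f(T) = A0 + A1·T can be fitted by ordinary least squares. The sketch below uses synthetic, noise-free temperatures and focal lengths (the A1 value is chosen near the reported ~0.0107 mm/deg slope; the A0 value is invented, not MDIS's actual focal length), so the fit recovers the coefficients exactly.

    ```python
    # Ordinary least-squares fit of the linear focal-length model
    # f(T) = A0 + A1*T, on made-up data.

    def fit_line(ts, fs):
        """Return (A0, A1) minimizing sum((A0 + A1*t - f)^2)."""
        n = len(ts)
        mt = sum(ts) / n
        mf = sum(fs) / n
        a1 = sum((t - mt) * (f - mf) for t, f in zip(ts, fs)) / \
             sum((t - mt) ** 2 for t in ts)
        a0 = mf - a1 * mt
        return a0, a1

    temps = [0.0, 10.0, 20.0, 30.0]                 # focal-plane temperatures, deg C
    focals = [549.11 + 0.0107 * t for t in temps]   # synthetic focal lengths, mm

    a0, a1 = fit_line(temps, focals)
    print(round(a0, 4), round(a1, 4))  # -> 549.11 0.0107
    ```

    With real star-field measurements the residuals would not vanish, and the recovered A1 would carry an uncertainty that the paper's bundle adjustment also estimates.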

  5. The Alfred Nobel rocket camera. An early aerial photography attempt

    Science.gov (United States)

    Ingemar Skoog, A.

    2010-02-01

Alfred Nobel (1833-1896), mainly known for his invention of dynamite and the creation of the Nobel Prizes, was an engineer and inventor active in many fields of science and engineering, e.g. chemistry, medicine, mechanics, metallurgy, optics, armoury and rocketry. Amongst his inventions in rocketry was the smokeless solid propellant ballistite (i.e. cordite), patented for the first time in 1887. As a very wealthy person he actively supported many Swedish inventors in their work. One of them was W.T. Unge, who was devoted to the development of rockets and their applications. Nobel and Unge held several rocket patents together and also jointly worked on various rocket applications. In mid-1896 Nobel applied for patents in England and France for "An Improved Mode of Obtaining Photographic Maps and Earth or Ground Measurements" using a photographic camera carried by a "…balloon, rocket or missile…". During the remainder of 1896 the mechanical design of the camera mechanism was pursued and cameras were manufactured. In April 1897 (after the death of Alfred Nobel) the first aerial photos were taken by these cameras. These photos might be the first documented aerial photos taken by a rocket-borne camera. Cameras and photos from 1897 have been preserved. Nobel did not only develop the rocket-borne camera but also proposed methods on how to use the photographs for ground measurements and map preparation.

  6. Analysis of conditions for magnetron discharge initiation at vacuum camera testing

    International Nuclear Information System (INIS)

    Tzeneva, Raina; Dineff, Peter; Darjanova, Denitza

    2002-01-01

Models of electric field distribution for two typical cases of vacuum camera internal pressure control are investigated. New relations between the maximum magnetron discharge current value I_max and the maximum value of the radial component of the electric field strength, E_τ,max, are established. (Author)

  7. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high-altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2×10^-5 W/cm^2/sr/µm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340×1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 µm pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degrees and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
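
    The quoted field of view can be cross-checked from the sensor geometry. The sketch below assumes the focal length is aperture diameter times f-number (64.3 mm × 2.8 ≈ 180 mm), which is an inference, not stated in the record; the pixel counts and pitch are from the abstract.

    ```python
    # Rough field-of-view check for the star camera described above.
    import math

    focal_mm = 64.3 * 2.8        # ~180 mm focal length (assumption)
    w_mm = 1340 * 6.8e-3         # sensor width in mm (1340 px of 6.8 um)
    h_mm = 1037 * 6.8e-3         # sensor height in mm

    fov_w = 2 * math.degrees(math.atan(w_mm / (2 * focal_mm)))
    fov_h = 2 * math.degrees(math.atan(h_mm / (2 * focal_mm)))
    print(round(fov_w * fov_h, 2))  # area in square degrees, ~6.5
    ```

    The result lands close to the stated 6.33 sq degrees, which suggests the assumed focal length is approximately right.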

  8. A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA

    Directory of Open Access Journals (Sweden)

    W. Liu

    2016-06-01

Full Text Available At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually processed using data from a ground calibration field after capturing the images. The entire process is very complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that, owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical auto-collimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks on the matrix-array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On the one hand, the camera's variations can be tracked accurately and the camera's attitude can be adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.

  9. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource.

    Science.gov (United States)

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data.

  10. A generalized measurement equation and van Cittert-Zernike theorem for wide-field radio astronomical interferometry

    Science.gov (United States)

    Carozzi, T. D.; Woan, G.

    2009-05-01

We derive a generalized van Cittert-Zernike (vC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field of view (FoV). The classical vC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalized vC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional (2D) electric field (Jones vector) formalism of the standard `Measurement Equation' (ME) of radio astronomical interferometry to the full three-dimensional (3D) formalism developed in optical coherence theory. The resulting vC-Z theorem enables full-sky imaging in a single telescope pointing, and imaging based not only on standard dual-polarized interferometers (that measure 2D electric fields) but also on electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2D ME is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We also exploit an extended 2D ME to determine that dual-polarized interferometers can have polarimetric aberrations at the edges of a wide FoV. Our vC-Z theorem is particularly relevant to proposed, and recently developed, wide-FoV interferometers such as the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA), for which direction-dependent effects will be important.

  11. Comparison of parameters of modern cooled and uncooled thermal cameras

    Science.gov (United States)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the quality of images generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  12. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  14. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

A Kerr cell activated by infrared pulses from a mode-locked Nd glass laser acts as an ultra-fast periodic shutter, with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing very fast effects to be studied [fr

  15. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the IMU beside the camera, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For that, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact we have to deal with an additional data stream: the values of the movement of the stabilizer, used to correct the floating lever arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied
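
    The floating lever-arm correction mentioned above amounts to rotating a fixed body-frame offset by the platform's current attitude before applying it. The sketch below is a hedged illustration of that single step: the rotation convention (yaw-pitch-roll, Z-Y-X), the angles, and the offset vector are all invented for the example, not taken from the paper.

    ```python
    # Rotating an IMU -> GPS-antenna lever arm from the stabilizer frame
    # into the navigation frame using the platform attitude.
    import math

    def rot_matrix(roll, pitch, yaw):
        """Z-Y-X (yaw-pitch-roll) rotation matrix, angles in radians."""
        cr, sr = math.cos(roll), math.sin(roll)
        cp, sp = math.cos(pitch), math.sin(pitch)
        cy, sy = math.cos(yaw), math.sin(yaw)
        return [
            [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
            [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
            [-sp,     cp * sr,                cp * cr],
        ]

    def apply(R, v):
        return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

    lever_body = [0.10, 0.02, 1.50]   # IMU -> GPS antenna offset, metres (invented)
    R = rot_matrix(math.radians(2.0), math.radians(-1.0), math.radians(45.0))
    lever_nav = apply(R, lever_body)
    print([round(x, 3) for x in lever_nav])
    ```

    Because the platform attitude changes continuously, this rotation has to be re-evaluated per epoch from the stabilizer's data stream, which is exactly why the lever arm is "floating".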

  16. Initial clinical experience with dedicated ultra fast solid state cardiac gamma camera

    International Nuclear Information System (INIS)

    Aland, Nusrat; Lele, V.

    2010-01-01

detector, reducing camera-related motion artifacts. Diagnostic performance was comparable to that of standard dual-detector gamma camera images. The mid septum invariably showed perfusion defects in the QPS protocol; this was probably due to the lack of a normal database for the solid-state detector. Lung activity could not be visualized due to the small field of view. Extra cardiac activity could be assessed. CONCLUSION: We preferred the solid-state cardiac gamma camera over the conventional dual-detector gamma camera for myocardial perfusion imaging. Advantages of the solid-state gamma camera over a standard dual-head gamma camera: (1) faster acquisition time; (2) increased patient comfort; (3) lower radiation dose to the patient; (4) brighter images; (5) no motion artifacts; (6) better right ventricular imaging. Disadvantages: (1) extra cardiac activity cannot be assessed; (2) lung activity not seen due to the small field of view; (3) septal perfusion defects invariably noted.

  17. Performance and quality control of scintillation cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Iachetti, D.

    1983-01-01

Acceptance testing and quality control of gamma cameras are part of diagnostic quality assurance in clinical practice. Several parameters are required to achieve good diagnostic reliability: intrinsic spatial resolution, spatial linearity, uniformity, energy resolution, count-rate characteristics, and multiple-window spatial analysis. Each parameter was measured and also estimated by a test easy to implement in routine practice. The material required was a 4028 multichannel analyzer linked to a microcomputer, mini-computers and a set of phantoms (parallel slits, diffusing phantom, orthogonal hole transmission pattern). The gamma cameras under study were: CGR 3400, CGR 3420, G.E. 4000, Siemens ZLC 75 and a large-field Philips. Several tests proposed by N.E.M.A. and W.H.O. have to be improved, as they rely on overly point-wise spatial determinations during distortion measurements with multiple windows. Contrast control of the image needs to be monitored at high counting rates. This study shows the need to avoid point-wise determinations and the value of giving sets of values of the same parameter over the whole field, reporting mean values with their standard deviation [fr

  18. Wide area 2D/3D imaging development, analysis and applications

    CERN Document Server

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  19. Determining fast orientation changes of multi-spectral line cameras from the primary images

    Science.gov (United States)

    Wohlfeil, Jürgen

    2012-01-01

    Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities but the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
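
    The geometry exploited in this record can be reduced to a toy calculation. This is not the paper's algorithm, only an illustration of the principle: two sensor lines of different spectral bands see the same ground point a short time apart, and an unexpected across-track residual of the homologous point, in pixels, converts to an angular rate. All numbers below (IFOV, line rate, band gap) are invented.

    ```python
    # Converting a homologous-point residual between two spectral bands
    # into an implied attitude rate of a line camera.

    IFOV_RAD = 20e-6     # angular size of one pixel, rad (assumed)
    LINE_RATE_HZ = 1000  # lines captured per second (assumed)
    BAND_GAP_LINES = 50  # line separation between the two bands (assumed)

    def roll_rate(offset_px):
        """Angular rate (rad/s) implied by an across-track residual in pixels."""
        dt = BAND_GAP_LINES / LINE_RATE_HZ   # time between the two observations
        return offset_px * IFOV_RAD / dt

    print(roll_rate(0.5))  # 0.5 px residual -> 2e-4 rad/s
    ```

    The paper's approach generalizes this idea: a dense network of such correspondences across several bands constrains the orientation change between every captured line.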

  20. Light field imaging and application analysis in THz

    Science.gov (United States)

    Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin

    2018-01-01

The light field includes both direction and location information. Light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model represented by the two-plane parameterization, proposed by Levoy, is adopted here. Acquisition of the light field is based on microlens arrays, camera arrays or masks. We process the light field data to synthesize light field images. The processing techniques for light field data include refocused rendering, synthetic aperture imaging and microscopic imaging. Introducing light field imaging into the THz domain, the efficiency of 3D imaging is higher than that of conventional THz 3D imaging technology. The advantages over visible light field imaging include a large depth of field, wide dynamic range and true three-dimensional imaging. It has broad application prospects.
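
    The refocused-rendering technique mentioned above can be sketched as shift-and-add under the two-plane (u,v,s,t) parameterization: each sub-aperture view (u,v) is shifted in proportion to a refocus parameter alpha and the views are averaged. The 4D light field below is a tiny synthetic dictionary, not real THz data, and the wrap-around shift is a simplification for the example.

    ```python
    # Minimal shift-and-add refocusing over a synthetic 4D light field.

    def refocus(lf, views, size, alpha):
        """lf[(u, v)] is a size x size sub-aperture image (list of lists)."""
        out = [[0.0] * size for _ in range(size)]
        for (u, v) in views:
            img = lf[(u, v)]
            du, dv = int(round(alpha * u)), int(round(alpha * v))
            for s in range(size):
                for t in range(size):
                    out[s][t] += img[(s + du) % size][(t + dv) % size]
        n = len(views)
        return [[p / n for p in row] for row in out]

    views = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
    size = 4
    # A scene point at the depth matching alpha=1: its image shifts by
    # (u, v) between views, so refocusing with alpha=1 re-aligns it.
    lf = {}
    for (u, v) in views:
        img = [[0.0] * size for _ in range(size)]
        img[(1 + u) % size][(1 + v) % size] = 9.0
        lf[(u, v)] = img

    sharp = refocus(lf, views, size, alpha=1.0)
    print(sharp[1][1])  # all 9 views pile onto the same pixel -> 9.0
    ```

    Refocusing at a different alpha would smear this point over neighbouring pixels, which is how depth selectivity arises from a single light field exposure.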

  1. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION (extended)

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust...

  2. Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

    KAUST Repository

    Xiao, Lei

    2015-06-07

    Continuous-wave time-of-flight (ToF) cameras show great promise as low-cost depth image sensors in mobile applications. However, they also suffer from several challenges, including limited illumination intensity, which mandates the use of large numerical aperture lenses, and thus results in a shallow depth of field, making it difficult to capture scenes with large variations in depth. Another shortcoming is the limited spatial resolution of currently available ToF sensors. In this paper we analyze the image formation model for blurred ToF images. By directly working with raw sensor measurements but regularizing the recovered depth and amplitude images, we are able to simultaneously deblur and super-resolve the output of ToF cameras. Our method outperforms existing methods on both synthetic and real datasets. In the future our algorithm should extend easily to cameras that do not follow the cosine model of continuous-wave sensors, as well as to multi-frequency or multi-phase imaging employed in more recent ToF cameras.

  3. Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

    KAUST Repository

    Xiao, Lei; Heide, Felix; O'Toole, Matthew; Kolb, Andreas; Hullin, Matthias B.; Kutulakos, Kyros; Heidrich, Wolfgang

    2015-01-01

    Continuous-wave time-of-flight (ToF) cameras show great promise as low-cost depth image sensors in mobile applications. However, they also suffer from several challenges, including limited illumination intensity, which mandates the use of large numerical aperture lenses, and thus results in a shallow depth of field, making it difficult to capture scenes with large variations in depth. Another shortcoming is the limited spatial resolution of currently available ToF sensors. In this paper we analyze the image formation model for blurred ToF images. By directly working with raw sensor measurements but regularizing the recovered depth and amplitude images, we are able to simultaneously deblur and super-resolve the output of ToF cameras. Our method outperforms existing methods on both synthetic and real datasets. In the future our algorithm should extend easily to cameras that do not follow the cosine model of continuous-wave sensors, as well as to multi-frequency or multi-phase imaging employed in more recent ToF cameras.

  4. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

In this project, a radiation-tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, pan/tilt) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  5. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

In this project, a radiation-tolerant camera capable of withstanding a total dose of 10^6 - 10^8 rad was developed. To develop the camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on these evaluations, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for a CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, one for underwater environments and one for normal environments. (author)

  6. Simultaneous density-field visualization and PIV of a shock-accelerated gas curtain

    Energy Technology Data Exchange (ETDEWEB)

    Prestridge, K.; Rightley, P.M.; Vorobieff, P. [Los Alamos Nat. Lab., NM (United States). Dynamic Exp. Div.; Benjamin, R.F.; Kurnit, N.A.

    2000-10-01

    We describe a highly-detailed experimental characterization of the Richtmyer-Meshkov instability (the impulsively driven Rayleigh-Taylor instability) (Meshkov 1969; Richtmyer 1960). In our experiment, a vertical curtain of heavy gas (SF{sub 6}) flows into the test section of an air-filled, horizontal shock tube. The instability evolves after a Mach 1.2 shock passes through the curtain. For visualization, we pre-mix the SF{sub 6} with a small ({proportional_to}10{sup -5}) volume fraction of sub-micron-sized glycol/water droplets. A horizontal section of the flow is illuminated by a light sheet produced by a combination of a customized, burst-mode Nd:YAG laser and a commercial pulsed laser. Three CCD cameras are employed in visualization. The ''dynamic imaging camera'' images the entire test section, but does not detect the individual droplets. It produces a sequence of instantaneous images of local droplet concentration, which in the post-shock flow is proportional to density. The gas curtain is convected out of the test section about 1 ms after the shock passes through the curtain. A second camera images the initial conditions with high resolution, since the initial conditions vary from test to test. The third camera, ''PIV camera,'' has a spatial resolution sufficient to detect the individual droplets in the light sheet. Images from this camera are interrogated using particle image velocimetry (PIV) to recover instantaneous snapshots of the velocity field in a small (19 x 14 mm) field of view. The fidelity of the flow-seeding technique for density-field acquisition and the reliability of the PIV technique are both quantified in this paper. In combination with wide-field density data, PIV measurements give us additional physical insight into the evolution of the Richtmyer-Meshkov instability in a problem which serves as an excellent test case for general transition-to-turbulence studies. (orig.)
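The PIV interrogation step described above rests on locating the peak of the cross-correlation between successive interrogation windows. A deliberately tiny, pure-Python sketch of that idea (window size, search range, and the single synthetic "droplet" are invented for illustration; production PIV uses FFT-based correlation and sub-pixel peak fitting):

```python
# Integer-pixel PIV displacement estimate via direct cross-correlation.

def cross_correlate_shift(a, b, search=3):
    """Return (dy, dx) maximizing the cross-correlation of window b against a."""
    n = len(a)
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n:
                        s += a[y][x] * b[yy][xx]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Synthetic particle images: one bright "droplet" displaced by (dy=1, dx=2).
n = 8
frame1 = [[0.0] * n for _ in range(n)]
frame1[3][3] = 1.0
frame2 = [[0.0] * n for _ in range(n)]
frame2[4][5] = 1.0

shift = cross_correlate_shift(frame1, frame2)
```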

  7. Electrolocation-based underwater obstacle avoidance using wide-field integration methods

    International Nuclear Information System (INIS)

    Dimble, Kedar D; Faddy, James M; Humbert, J Sean

    2014-01-01

    Weakly electric fish are capable of efficiently performing obstacle avoidance in dark and navigationally challenging aquatic environments using electrosensory information. This sensory modality enables extraction of relevant proximity information about surrounding obstacles by interpretation of perturbations induced to the fish’s self-generated electric field. In this paper, reflexive obstacle avoidance is demonstrated by extracting relative proximity information using spatial decompositions of the perturbation signal, also called an electric image. Electrostatics equations were formulated for mathematically expressing electric images due to a straight tunnel to the electric field generated with a planar electro-sensor model. These equations were further used to design a wide-field integration based static output feedback controller. The controller was implemented in quasi-static simulations for environments with complicated geometries modelled using finite element methods to demonstrate sense and avoid behaviours. The simulation results were confirmed by performing experiments using a computer operated gantry system in environments lined with either conductive or non-conductive objects acting as global stimuli to the field of the electro-sensor. The proposed approach is computationally inexpensive and readily implementable, making underwater autonomous navigation in real-time feasible. (paper)
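Wide-field integration, as used above, amounts to projecting the sensed electric image onto spatial weighting functions and feeding the projections back as control commands. The following is a toy sketch under invented assumptions (a Gaussian perturbation profile standing in for the electric image, a linear antisymmetric weighting, and an arbitrary gain), not the authors' controller:

```python
import math

# Wide-field integration sketch: an antisymmetric spatial weighting extracts
# the left/right proximity asymmetry from a sensor-array "electric image".
N = 16  # sensors along the body
positions = [-1 + 2 * i / (N - 1) for i in range(N)]  # normalized [-1, 1]

def electric_image(lateral_offset):
    """Toy perturbation: an obstacle near `lateral_offset` boosts nearby sensors."""
    return [math.exp(-(p - lateral_offset) ** 2) for p in positions]

def wfi_output(signal):
    """Inner product with an antisymmetric weight (~p) measures asymmetry."""
    return sum(w * s for w, s in zip(positions, signal)) / N

K = 2.0  # hypothetical static output-feedback gain
u = -K * wfi_output(electric_image(0.4))       # obstacle on the +side: steer away
u_centered = -K * wfi_output(electric_image(0.0))  # symmetric scene: no command
```

The appeal noted in the abstract is visible here: the controller is a single inner product per weighting pattern, so it is cheap enough for real-time use.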

  8. Polarization leakage in epoch of reionization windows - III. Wide-field effects of narrow-field arrays

    Science.gov (United States)

    Asad, K. M. B.; Koopmans, L. V. E.; Jelić, V.; de Bruyn, A. G.; Pandey, V. N.; Gehlot, B. K.

    2018-05-01

    Leakage of polarized Galactic diffuse emission into total intensity can potentially mimic the 21-cm signal coming from the epoch of reionization (EoR), as both of them might have fluctuating spectral structure. Although we are sensitive to the EoR signal only in small fields of view, chromatic side-lobes from further away can contaminate the inner region. Here, we explore the effects of leakage into the `EoR window' of the cylindrically averaged power spectra (PS) within wide fields of view using both observation and simulation of the 3C196 and North Celestial Pole (NCP) fields, two observing fields of the LOFAR-EoR project. We present the polarization PS of two one-night observations of the two fields and find that the NCP field has higher fluctuations along frequency, and consequently exhibits more power at high-k∥ that could potentially leak to Stokes I. Subsequently, we simulate LOFAR observations of Galactic diffuse polarized emission based on a model to assess what fraction of polarized power leaks into Stokes I because of the primary beam. We find that the rms fractional leakage over the instrumental k-space is 0.35 {per cent} in the 3C196 field and 0.27 {per cent} in the NCP field, and it does not change significantly within the diameters of 15°, 9°, and 4°. Based on the observed PS and simulated fractional leakage, we show that a similar level of leakage into Stokes I is expected in the 3C196 and NCP fields, and the leakage can be considered to be a bias in the PS.

  9. Wide field monitoring of the X-ray sky using Rotation Modulation Collimators

    DEFF Research Database (Denmark)

    Lund, Niels; Brandt, Søren

    1995-01-01

Wide field monitoring is of particular interest in X-ray astronomy due to the strong time-variability of most X-ray sources. Not only do the time profiles of the persistent sources contain characteristic signatures of the underlying physical systems, but, additionally, some of the most intrigui...

  10. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera-type distinctions have become somewhat blurred, with a large number of 'digital cameras' aimed at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several such applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  11. Development and characterization of a CCD camera system for use on six-inch manipulator systems

    International Nuclear Information System (INIS)

    Logory, L.M.; Bell, P.M.; Conder, A.D.; Lee, F.D.

    1996-01-01

    The Lawrence Livermore National Laboratory has designed, constructed, and fielded a compact CCD camera system for use on the Six Inch Manipulator (SIM) at the Nova laser facility. The camera system has been designed to directly replace the 35 mm film packages on all active SIM-based diagnostics. The unit's electronic package is constructed for small size and high thermal conductivity using proprietary printed circuit board technology, thus reducing the size of the overall camera and improving its performance when operated within the vacuum environment of the Nova laser target chamber. The camera has been calibrated and found to yield a linear response, with superior dynamic range and signal-to-noise levels as compared to T-Max 3200 optic film, while providing real-time access to the data. Limiting factors related to fielding such devices on Nova will be discussed, in addition to planned improvements of the current design

  12. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data are taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to implement a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
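A bandwidth-reduction algorithm of the kind described might, for example, forward only defect-candidate pixels together with their coordinates. The sketch below is a purely illustrative software model of such a filter (the threshold and background level are assumptions; the real system realizes this in FPGA logic with a FIFO for defect bursts):

```python
# Software model of a stream bandwidth-reduction filter for web inspection:
# only pixels deviating from the expected web background are emitted.

THRESHOLD = 40    # hypothetical deviation threshold (grey levels)
BACKGROUND = 128  # hypothetical nominal web intensity

def reduce_stream(scanlines):
    """Yield (row, col, value) events only for defect-candidate pixels."""
    for row, line in enumerate(scanlines):
        for col, value in enumerate(line):
            if abs(value - BACKGROUND) > THRESHOLD:
                yield (row, col, value)

lines = [[128] * 8 for _ in range(4)]
lines[2][5] = 220  # one bright defect in an otherwise uniform web
events = list(reduce_stream(lines))  # 32 pixels reduced to 1 event
```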

  13. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay of the illuminating light intensity with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also included
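The distance-from-decay principle can be sketched with an inverse-square toy model. The arrangement below (two point-like sources offset along the optical axis) is a hypothetical stand-in for the actual Divcam optics: the ratio of the two measured intensities depends on distance alone, because surface reflectance cancels.

```python
import math

# Toy divergence-ratio model: two sources separated by DELTA along the axis.
DELTA = 0.1  # axial separation between the two sources, in meters (assumed)

def intensity(source_to_target):
    """Inverse-square decay; any reflectance factor cancels in the ratio."""
    return 1.0 / source_to_target ** 2

def distance_from_ratio(i_near, i_far):
    r = math.sqrt(i_near / i_far)  # r = (d + DELTA) / d
    return DELTA / (r - 1.0)

d_true = 1.5
reflectance = 0.7  # arbitrary; identical for both measurements
i_near = reflectance * intensity(d_true)
i_far = reflectance * intensity(d_true + DELTA)
d_est = distance_from_ratio(i_near, i_far)
```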

  14. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    Science.gov (United States)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to help law enforcement by police, eventually changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented in C++ with the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If any motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture images of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed in a MySQL database for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists across various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.

  15. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  16. Performance of Laser Megajoule’s x-ray streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Burillo, M.; Gontier, D.; Moreau, I.; Oudot, G.; Rubbelynck, C.; Soullié, G.; Stemmler, P.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J. P.; Goulmy, C. [Photonis France SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-11-15

    A prototype of a picosecond x-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to provide plasma-diagnostic support for the Laser Megajoule. We report on the measured performance of this streak camera, which almost fulfills the requirements: 50-μm spatial resolution over a 15-mm field in the photocathode plane, 17-ps temporal resolution in a 2-ns timebase, a detection threshold lower than 625 nJ/cm{sup 2} in the 0.05–15 keV spectral range, and a dynamic range greater than 100.

  17. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    Science.gov (United States)

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy. Agreement between the two cameras was very good regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41). The Visucam(PRO NM) can therefore serve as an alternative camera for applications and clinical trials requiring 7-field stereo photography.
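The agreement figures quoted above are Cohen's kappa values, which correct the raw agreement rate for agreement expected by chance. A minimal sketch of the computation on an invented 2x2 grading table (the counts are illustrative, not the study's data):

```python
# Cohen's kappa for inter-camera grading agreement.

def cohens_kappa(table):
    """table[i][j] = number of cases graded i by camera A and j by camera B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_chance = sum(row[i] * col[i] for i in range(k))
    return (p_observed - p_chance) / (1 - p_chance)

# 50 eyes graded "no/mild" vs "referable" retinopathy by both cameras
# (made-up counts for illustration):
kappa = cohens_kappa([[20, 2],
                      [3, 25]])
```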

  18. FNTD radiation dosimetry system enhanced with dual-color wide-field imaging

    International Nuclear Information System (INIS)

    Akselrod, M.S.; Fomenko, V.V.; Bartz, J.A.; Ding, F.

    2014-01-01

    At high neutron and photon doses Fluorescent Nuclear Track Detectors (FNTDs) require operation in analog mode and the measurement results depend on individual crystal color center concentration (coloration). We describe a new method for radiation dosimetry using FNTDs, which includes non-destructive, automatic sensitivity calibration for each individual FNTD. In the method presented, confocal laser scanning fluorescent imaging of FNTDs is combined with dual-color wide field imaging of the FNTD. The calibration is achieved by measuring the color center concentration in the detector through fluorescence imaging and reducing the effect of diffuse reflection on the lapped surface of the FNTD by imaging with infra-red (IR) light. The dual-color imaging of FNTDs is shown to provide a good estimation of the detector sensitivity at high doses of photons and neutrons, where conventional track counting is impeded by track overlap. - Highlights: • New method and optical imaging head was developed for FNTD used at high doses. • Dual-color wide-field imaging used for color center concentration measurement. • Green fluorescence corrected by diffuse reflection used for sensitivity correction. • FNTD dose measurements performed in analog processing mode

  19. Gamma camera computer system quality control for conventional and tomographic use

    International Nuclear Information System (INIS)

    Laird, E.E.; Allan, W.; Williams, E.D.

    1983-01-01

    The proposition that some of the proposed measurements of gamma camera performance parameters for routine quality control are redundant and that only the uniformity requires daily monitoring was examined. To test this proposition, measurements of gamma camera performance were carried out under normal operating conditions and also with the introduction of faults (offset window, offset PM tube). Results for the uniform flood field are presented for non-uniformity, intrinsic spatial resolution, linearity and relative system sensitivity. The response to introduced faults revealed that while the non-uniformity response pattern of the gamma camera was clearly affected, both measurements and qualitative indications of the other performance parameters did not necessarily show any deterioration. (U.K.)
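The daily uniformity check argued for above is commonly summarized by an integral-uniformity figure over the flood-field image. A minimal sketch in the NEMA style (the flood counts are invented; a real QC program also smooths the image and restricts the calculation to the useful field of view):

```python
# Integral uniformity of a gamma camera flood-field acquisition:
# 100 * (max - min) / (max + min) over the analyzed pixels.

def integral_uniformity(counts):
    flat = [c for row in counts for c in row]
    hi, lo = max(flat), min(flat)
    return 100.0 * (hi - lo) / (hi + lo)

flood = [
    [980, 1000, 1010],
    [1005, 995, 990],
    [1000, 985, 1015],
]
iu = integral_uniformity(flood)  # percent; larger after an offset PM tube fault
```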

  20. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    Science.gov (United States)

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ~96% accuracy, which is more than
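The cited autocorrelation technique exploits the fact that images of coarser sediment stay correlated over larger pixel offsets. Below is a deliberately simplified 1D illustration of that principle (synthetic "patchy" signals stand in for calibrated sediment images; Rubin's published method uses 2D image autocorrelation matched against a calibration catalog of sieved samples):

```python
import random

def autocorr(signal, lag):
    """Normalized autocorrelation of a 1D signal at a given pixel offset."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal)
    cov = sum((signal[i] - mean) * (signal[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

def blocky(grain, n=240, seed=0):
    """Synthetic 'sediment' scanline: constant patches `grain` pixels wide."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        out.extend([rng.random()] * grain)
    return out[:n]

fine = autocorr(blocky(3), lag=4)     # offset exceeds grain size: decorrelated
coarse = autocorr(blocky(12), lag=4)  # offset still inside a grain: correlated
```

Comparing the decay of such correlation curves against a library of images of known grain size is what turns this statistic into a grain-size estimate.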

  1. A novel fully integrated handheld gamma camera

    International Nuclear Information System (INIS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-01-01

In this paper, we present an innovative, fully integrated handheld gamma camera, designed to gather in the same device the gamma-ray detector, the display, and the embedded computing system. Its low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast radiopharmaceutical imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and a spatial resolution adequate for scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small organ imaging, but it could easily be combined with surgical navigation systems.

  2. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

In this paper, we present an innovative, fully integrated handheld gamma camera, designed to gather in the same device the gamma-ray detector, the display, and the embedded computing system. Its low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast radiopharmaceutical imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and a spatial resolution adequate for scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small organ imaging, but it could easily be combined with surgical navigation systems.

  3. Wide-field kinematic structure of early-type galaxy halos

    Science.gov (United States)

    Arnold, Jacob Antony

    2013-12-01

The stellar halos of nearby galaxies bear the signatures of the mass-assembly processes that have driven galaxy evolution over the last ˜10 Gyr. Finding and interpreting these relict clues in galaxies within and beyond the Local Group offers one of the most promising avenues for understanding how galaxies accumulate their stars over time. To tackle this problem we have performed a systematic study of the wide-field kinematic structure of nearby early-type galaxies, obtaining spectroscopy out to several effective radii (˜3 Re). The 22 galaxies presented here span a range of environments (field, group, and cluster) and intrinsic luminosities, with spectra centered on the near-infrared Calcium II triplet. For each spectrum, we parameterize the line-of-sight velocity distribution (LOSVD) as a truncated Gauss-Hermite series convolved with an optimally weighted combination of stellar templates. These kinematic measurements (V, sigma, h3, and h4) are combined with literature values to construct spatially resolved maps of large-scale kinematic structure. A variety of kinematic behaviors are observed beyond ~1 Re, potentially reflecting the stochastic and chaotic assembly of stellar bulges and halos in early-type galaxies. Next, we describe a global analysis (out to 5 Re) of kinematics and metallicity in the nearest S0 galaxy, NGC 3115, along with implications for its assembly history. The data include high-quality wide-field imaging and multi-slit spectra of the field stars and globular clusters (GCs). Within two effective radii, the bulge (as traced by the stars and metal-rich GCs) is flattened and rotates rapidly. At larger radii, the rotation declines dramatically, while the characteristic GC metallicities also decrease with radius. We argue that this pattern is not naturally explained by a binary major merger, but instead by a two-phase assembly process where the inner regions have formed in an early violent, dissipative phase, followed by the protracted growth of the outer parts via minor mergers. To test this hypothesis

  4. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  5. Design studies of a depth encoding large aperture PET camera

    International Nuclear Information System (INIS)

    Moisan, C.; Rogers, J.G.; Buckley, K.R.; Ruth, T.J.; Stazyk, M.W.; Tsang, G.

    1994-10-01

The feasibility of a whole-body PET tomograph with the capacity to correct for the parallax error induced by the Depth-Of-Interaction of γ-rays is assessed through simulation. The experimental energy, depth, and transverse position resolutions of BGO block detector candidates are the main inputs to a simulation that predicts the point-source resolution of the Depth Encoding Large Aperture Camera (DELAC). The results indicate that a measured depth resolution of 7 mm (FWHM) is sufficient to correct a substantial part of the parallax error for a point source at the edge of the Field-Of-View. A search for the block specifications and camera ring radius that would optimize the spatial resolution and its uniformity across the Field-Of-View is also presented. (author). 10 refs., 1 tab., 5 figs
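The parallax error being corrected can be estimated with a one-line geometric model. This back-of-envelope sketch uses assumed numbers (ring radius, off-axis distance, interaction depths), not the paper's detector parameters:

```python
# Idealized depth-of-interaction (DOI) parallax error in a PET ring:
# a gamma from an off-axis source strikes the crystals obliquely, and
# assigning the event to the crystal face displaces the line of response.

RING_RADIUS = 0.40  # detector ring radius in meters (assumed)

def parallax_blur(source_radius, interaction_depth):
    """Radial mispositioning = depth * sin(incidence angle).

    For a source at `source_radius`, the incidence angle on the ring
    satisfies sin(theta) = source_radius / RING_RADIUS in this idealization.
    """
    sin_theta = source_radius / RING_RADIUS
    return interaction_depth * sin_theta

center = parallax_blur(0.0, 0.015)      # on-axis: no parallax error
edge = parallax_blur(0.25, 0.015)       # 25 cm off-axis, 15 mm mean depth
improved = parallax_blur(0.25, 0.0035)  # residual depth uncertainty ~3.5 mm
```

The comparison between `edge` and `improved` mirrors the paper's point: measuring depth with finite resolution removes most, though not all, of the edge-of-field blur.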

  6. Evaluation of the optical cross talk level in the SiPMs adopted in ASTRI SST-2M Cherenkov Camera using EASIROC front-end electronics

    International Nuclear Information System (INIS)

    Impiombato, D; Giarrusso, S; Mineo, T; Agnetta, G; Biondo, B; Catalano, O; Gargano, C; Rosa, G La; Russo, F; Sottile, G; Belluso, M; Billotta, S; Bonanno, G; Garozzo, S; Marano, D; Romeo, G

    2014-01-01

ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a flagship project of the Italian Ministry of Education, University and Research whose main goal is the design and construction of an end-to-end prototype of the Small Size Telescopes of the Cherenkov Telescope Array. The prototype, named ASTRI SST-2M, will adopt a wide-field dual-mirror optical system in a Schwarzschild-Couder configuration to explore the VHE range of the electromagnetic spectrum. The camera at the focal plane is based on Silicon Photo-Multiplier detectors, an innovative solution for the detection of astronomical Cherenkov light. This contribution reports some preliminary results on the evaluation of the optical cross talk level among the SiPM pixels foreseen for the ASTRI SST-2M camera

  7. An electrically tunable plenoptic camera using a liquid crystal microlens array

    International Nuclear Information System (INIS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-01-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF
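The link between a tunable focal length and a shiftable focused region follows directly from the thin-lens equation: for a fixed lens-to-sensor distance, changing f changes which object distance is in focus. A small sketch with assumed numbers (not taken from the paper):

```python
# Thin-lens refocusing: 1/f = 1/u + 1/v, solved for the object distance u
# that is in focus given focal length f and image distance v.

def in_focus_object_distance(f, v):
    """Object distance brought into focus for focal length f, image distance v."""
    return 1.0 / (1.0 / f - 1.0 / v)

v = 0.002  # lens-to-sensor distance: 2 mm (assumed)
u_short = in_focus_object_distance(0.0018, v)  # f = 1.8 mm
u_long = in_focus_object_distance(0.0019, v)   # f = 1.9 mm (after tuning)
```

Even this crude model shows the effect exploited by the LCMLA: a small electrical change in f sweeps the in-focus plane over a much larger range of object distances, effectively extending the usable DOF.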

  8. A triple GEM gamma camera for medical application

    Energy Technology Data Exchange (ETDEWEB)

    Anulli, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Balla, A. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Bencivenni, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Corradi, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); D' Ambrosio, C. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Domenici, D. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Felici, G. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Gatta, M. [Laboratori Nazionali di Frascati INFN, Frascati (Italy); Morone, M.C. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy); INFN - Sezione di Roma Tor Vergata (Italy); Murtas, F. [Laboratori Nazionali di Frascati INFN, Frascati (Italy)]. E-mail: fabrizio.murtas@lnf.infn.it; Schillaci, O. [Dipartimento di Biopatologia e Diagnostica per immagini, Universita di Roma Tor Vergata (Italy)

    2007-03-01

    A Gamma Camera for medical applications, 10x10 cm{sup 2}, has been built using a triple GEM chamber prototype. The photon converters, placed in front of the three GEM foils, have been realized with different technologies. The chamber, supplied with high voltage through a new active divider made in Frascati, is read out through 64 pads, 1 mm{sup 2} wide, organized in a row 8 cm long, with the LHCb ASDQ chip. This Gamma Camera can be used both for X-ray movies and for PET-SPECT imaging; the chamber prototype is placed in a scanner system, creating images of 8x8 cm{sup 2}. Several measurements have been performed using phantoms and radioactive sources of Tc-99m (140 keV) and Na-22 (511 keV). Results on spatial resolution and image reconstruction are presented.

  9. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which extends the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step that ensures an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.

  10. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen- er...

  11. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    Science.gov (United States)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being developed these days, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object, and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor complements the image processing so that obstacles are fully detected. Visual-feedback-based PID controllers are used to control the drone's movement. The obstacle avoidance system was evaluated by observing the program's decisions for several obstacle conditions read by the camera and ultrasonic sensors.
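The Lucas-Kanade idea underlying the tracker can be sketched in one dimension: recover the shift d that best explains the temporal difference between two frames from the spatial gradient. This is a pure-Python illustration, not the paper's OpenCV pipeline:

```python
import math

# 1-D Lucas-Kanade step (illustrative sketch): brightness constancy gives
# I2(x) - I1(x) ~ -d * dI1/dx for a small shift d, so d is recovered by
# least squares over a window.

def lk_shift(i1, i2):
    num = den = 0.0
    for x in range(1, len(i1) - 1):
        ix = (i1[x + 1] - i1[x - 1]) / 2.0  # central spatial gradient
        it = i2[x] - i1[x]                  # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den if den else 0.0

# A smooth bump shifted right by half a sample:
i1 = [math.exp(-((x - 10.0) / 3.0) ** 2) for x in range(21)]
i2 = [math.exp(-((x - 10.5) / 3.0) ** 2) for x in range(21)]
print(round(lk_shift(i1, i2), 2))
```

The Tomasi extension used in practice additionally selects windows whose gradient structure makes this least-squares problem well conditioned.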

  12. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the drawback is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and
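The floating lever-arm correction described above amounts to rotating the body-frame lever arm with the stabilizer's current attitude before adding it to the IMU position. A minimal sketch, with all vectors and angles as illustrative assumptions:

```python
import math

# Floating lever-arm correction (sketch): rotate the body-frame IMU->antenna
# lever arm into the navigation frame with the stabilizer attitude, then add
# it to the IMU position. All values are illustrative assumptions.

def rot_z(yaw_rad):
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

lever_body = [0.10, 0.02, 0.50]       # m, IMU -> GPS antenna in the body frame
imu_pos    = [1000.0, 2000.0, 150.0]  # m, IMU position in the navigation frame
R = rot_z(math.radians(90.0))         # stabilizer yaw at this epoch
antenna = [p + d for p, d in zip(imu_pos, apply(R, lever_body))]
print([round(a, 3) for a in antenna])
```

In a real pipeline the rotation would combine roll, pitch and yaw from the stabilizer data stream at each epoch; only yaw is shown here for brevity.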

  13. Automated recognition and tracking of aerosol threat plumes with an IR camera pod

    Science.gov (United States)

    Fauth, Ryan; Powell, Christopher; Gruber, Thomas; Clapp, Dan

    2012-06-01

    Protection of fixed sites from chemical, biological, or radiological aerosol plume attacks depends on early warning so that there is time to take mitigating actions. Early warning requires continuous, autonomous, and rapid coverage of large surrounding areas; however, this must be done at an affordable cost. Once a potential threat plume is detected, though, a different type of sensor (e.g., a more expensive, slower sensor) may be cued for identification purposes, but the problem is to quickly identify all of the potential threats around the fixed site of interest. To address this problem of low-cost, persistent, wide-area surveillance, an IR camera pod and multi-image stitching and processing algorithms have been developed for automatic recognition and tracking of aerosol plumes. A rugged, modular, static pod design, which accommodates as many as four micro-bolometer IR cameras for 45° to 180° of azimuth coverage, is presented. Various OpenCV-based image-processing algorithms, including stitching of multiple adjacent FOVs, recognition of aerosol plume objects, and the tracking of aerosol plumes, are presented using process block diagrams and sample field test results, including chemical and biological simulant plumes. Methods for dealing with background removal, brightness equalization between images, and focus quality for optimal plume tracking are also discussed.

  14. Quantitative studies with the gamma-camera: correction for spatial and energy distortion

    International Nuclear Information System (INIS)

    Soussaline, F.; Todd-Pokropek, A.E.; Raynaud, C.

    1977-01-01

    The gamma camera sensitivity distribution is an important source of error in quantitative studies. In addition, spatial distortion produces apparent variations in count density which degrade quantitative studies. The flood field image takes both effects into account and is influenced by the pile-up of the tail distribution. It is essential to measure each of these parameters separately. They were investigated using a point source displaced by a special scanning table with two X, Y stepping motors of 10 micron precision. The spatial distributions of the sensitivity, spatial distortion and photopeak in the field of view were measured and compared for different settings of the camera and PM gains. For well-tuned cameras, the sensitivity is fairly constant, while the variations appearing in the flood field image are primarily due to spatial distortion, the former more dependent than the latter on the energy window setting. This indicates why conventional flood field uniformity correction must not be applied. A correction technique to improve the results in quantitative studies has been tested using a continuously matched energy window at every point within the field. A method for correcting spatial distortion is also proposed, where, after an adequately sampled measurement of this error, a transformation can be applied to calculate the true position of events. Knowledge of the magnitude of these parameters is essential in the routine use and design of detector systems.
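The proposed spatial-distortion correction, i.e. mapping each event back to its true position from a sampled measurement of the error, can be sketched in one dimension (grid and displacement values are invented for illustration):

```python
# Sketch of event repositioning from a sampled distortion measurement
# (1-D, invented numbers): a point-source scan yields the displacement
# (measured - true) at grid positions; each event is corrected by linear
# interpolation of that displacement field.

def correct(x_meas, grid, disp):
    for i in range(len(grid) - 1):
        if grid[i] <= x_meas <= grid[i + 1]:
            t = (x_meas - grid[i]) / (grid[i + 1] - grid[i])
            d = disp[i] + t * (disp[i + 1] - disp[i])  # interpolated error
            return x_meas - d                          # estimated true position
    return x_meas  # outside the calibrated range: leave unchanged

grid = [0.0, 10.0, 20.0, 30.0]  # mm, scan positions of the point source
disp = [0.0, 0.4, -0.2, 0.1]    # mm, measured - true at each position
print(correct(15.0, grid, disp))
```

The real correction is two-dimensional, but the principle is the same: a transformation built from the scanned error map is applied event by event.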

  15. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  16. Wide Field-of-View Soft X-Ray Imaging for Solar Wind-Magnetosphere Interactions

    Science.gov (United States)

    Walsh, B. M.; Collier, M. R.; Kuntz, K. D.; Porter, F. S.; Sibeck, D. G.; Snowden, S. L.; Carter, J. A.; Collado-Vega, Y.; Connor, H. K.; Cravens, T. E.

    2016-01-01

    Soft X-ray imagers can be used to study the mesoscale and macroscale density structures that occur whenever and wherever the solar wind encounters neutral atoms at comets, the Moon, and both magnetized and unmagnetized planets. Charge exchange between high charge state solar wind ions and exospheric neutrals results in the isotropic emission of soft X-ray photons with energies from 0.1 to 2.0 keV. At Earth, this process occurs primarily within the magnetosheath and cusps. Through providing a global view, wide field-of-view imaging can determine the significance of the various proposed solar wind-magnetosphere interaction mechanisms by evaluating their global extent and occurrence patterns. A summary of wide field-of-view (several to tens of degrees) soft X-ray imaging is provided including slumped micropore microchannel reflectors, simulated images, and recent flight results.

  17. KEhD-1 Debye-Scherrer camera with a coordinate proportional counter

    International Nuclear Information System (INIS)

    Ageev, O.I.; Glazova, L.P.; Goganov, D.A.; Rejzis, B.M.; Syrkin, M.G.

    1985-01-01

    An arrangement of the KEhD-1 Debye-Scherrer camera, in which the advantages of a proportional counter are combined with the wide range of simultaneous image recording, is described. The camera consists of an X-ray tube unit with the URS-0.1 source, a linear coordinate detector with resistive-capacitive coding, a signal transducer and the MK-1 multichannel system for data acquisition and processing based on the 'Iskra-1256' computer. The counting rate of X-ray pulses is > 5×10⁴ s⁻¹, the energy resolution for the CuKα line is 20%, the spatial resolution is 150 μm, and the detection efficiency is not less than 64%. The range of detector displacement is from -30° to +130°. The information obtained by means of the camera may be output to a display, a plotter, a numeric printer or a magnetic tape.

  18. Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.

    Science.gov (United States)

    Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam

    2016-03-18

    Today's film and advertisement production heavily uses computer graphics combined with living actors by chromakeying. The matchmoving process typically takes a considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual check-up and correction. In this article, we propose an instant matchmoving solution for green screen. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique based on marker fields of shades of green is very computationally efficient: we developed and present in the article a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone at low cost and with easy setup, opening space for new levels of filmmakers' creative expression.

  19. Pothole Detection System Using a Black-box Camera

    Directory of Open Access Journals (Sweden)

    Youngtae Jo

    2015-11-01

    Full Text Available Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency. Moreover, they are often a contributing factor to car accidents. To address the problems associated with potholes, the locations and sizes of potholes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibrations or laser scanning, are insufficient to detect potholes correctly and inexpensively owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work with the embedded computing environments of black-box cameras. Experimental results are presented for our proposed system, showing that potholes can be detected accurately in real-time.

  20. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  1. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  2. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  3. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  4. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  5. Optical design of a Michelson wide-field multiple-aperture telescope

    Science.gov (United States)

    Cassaing, Frederic; Sorrente, Beatrice; Fleury, Bruno; Laubier, David

    2004-02-01

    Multiple-Aperture Optical Telescopes (MAOTs) are a promising solution for very high resolution imaging. In the Michelson configuration, the instrument is made of sub-telescopes distributed in the pupil and combined by a common telescope via folding periscopes. The phasing conditions of the sub-pupils lead to specific optical constraints in these subsystems. The amplitude of main contributors to the wavefront error (WFE) is given as a function of high level requirements (such as field or resolution) and free parameters, mainly the sub-telescope type, magnification and diameter. It is shown that for the periscopes, the field-to-resolution ratio is the main design driver and can lead to severe specifications. The effect of sub-telescopes aberrations on the global WFE can be minimized by reducing their diameter. An analytical tool for the MAOT design has been derived from this analysis, illustrated and validated in three different cases: LEO or GEO Earth observation and astronomy with extremely large telescopes. The last two cases show that a field larger than 10 000 resolution elements can be covered with a very simple MAOT based on Mersenne paraboloid-paraboloid sub-telescopes. Michelson MAOTs are thus a solution to be considered for high resolution wide-field imaging, from space or ground.
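The "field-to-resolution ratio" named as the main design driver can be estimated with a back-of-envelope calculation; the wavelength, aperture envelope and field below are illustrative assumptions, not the paper's design values:

```python
import math

# Back-of-envelope field-to-resolution ratio (illustrative numbers): the
# number of resolution elements across the field is the field angle divided
# by the diffraction limit lambda/B of the full aperture envelope.
lam = 0.5e-6               # m, visible wavelength (assumed)
B = 10.0                   # m, overall aperture envelope (assumed)
field = math.radians(0.5)  # rad, field of view (assumed)
res = lam / B              # rad, angular resolution ~ lambda / B
print(f"{field / res:.0f} resolution elements across the field")
```

Even a modest half-degree field at this resolution spans well over 10,000 resolution elements, which is why the ratio dominates the periscope specifications.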

  6. Camera aperture to optimize data collection in nuclear medicine

    International Nuclear Information System (INIS)

    Dupras, G.; Villeneuve, C.

    1979-01-01

    Collection of data with a large field of view camera can cause problems when a small organ like the heart is to be imaged, especially when high activity is used. A simple, inexpensive mask is described that solves most of these problems. (orig.) [de

  7. Status of the NectarCAM camera project

    International Nuclear Information System (INIS)

    Glicenstein, J.F.; Delagnes, E.; Fesquet, M.; Louis, F.; Moudden, Y.; Moulin, E.; Nunio, F.; Sizun, P.

    2014-01-01

    NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), covering the central energy range of 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various sub-components, and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of the electronics, read-out, clock distribution, slow control, data acquisition, trigger, monitoring and services. A 133-pixel prototype with full-scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014. (authors)

  8. Upgrade of the JET gamma-ray cameras

    International Nuclear Information System (INIS)

    Soare, S.; Curuia, M.; Anghel, M.; Constantin, M.; David, E.; Craciunescu, T.; Falie, D.; Pantea, A.; Tiseanu, I.; Kiptily, V.; Prior, P.; Edlington, T.; Griph, S.; Krivchenkov, Y.; Loughlin, M.; Popovichev, S.; Riccardo, V; Syme, B.; Thompson, V.; Lengar, I.; Murari, A.; Bonheure, G.; Le Guern, F.

    2007-01-01

    Full text: The JET gamma-ray camera diagnostics have already provided valuable information on the gamma-ray imaging of fast ions in JET plasmas. The applicability of gamma-ray imaging to high-performance deuterium and deuterium-tritium JET discharges is strongly dependent on the fulfilment of rather strict requirements for the characterisation of the neutron and gamma-ray radiation fields. These requirements have to be satisfied within very stringent boundary conditions for the design, such as the requirement of minimum impact on the co-existing neutron camera diagnostics. The JET Gamma-Ray Cameras (GRC) upgrade project deals with these issues, with particular emphasis on the design of appropriate neutron/gamma-ray filters ('neutron attenuators'). Several design versions have been developed and evaluated for the JET GRC neutron attenuators at the conceptual design level. The main design parameter was the neutron attenuation factor. The two design solutions finally chosen and developed at the scheme-design level consist of: a) one quasi-crescent-shaped neutron attenuator (for the horizontal camera) and b) two quasi-trapezoid-shaped neutron attenuators (for the vertical one). The second design solution has different attenuation lengths: a short version, to be used together with the horizontal attenuator for deuterium discharges, and a long version to be used for high-performance deuterium and DT discharges. Various neutron-attenuating materials have been considered (lithium hydride with natural isotopic composition and ⁶Li-enriched, light and heavy water, polyethylene). Pure light water was finally chosen as the attenuating material for the JET gamma-ray cameras. The neutron attenuators will be steered in and out of the detector line-of-sight by means of an electro-pneumatic steering and control system. 
The MCNP code was used for neutron and gamma ray transport in order to evaluate the effect of the neutron attenuators on the neutron field of the

  9. Application of X-ray CCD camera in X-ray spot diagnosis of rod-pinch diode

    International Nuclear Information System (INIS)

    Song Yan; Zhou Ming; Song Guzhou; Ma Jiming; Duan Baojun; Han Changcai; Yao Zhiming

    2015-01-01

    The pinhole imaging technique is widely used in the measurement of the X-ray spot of a rod-pinch diode. An X-ray CCD camera, composed of film, a fiber-optic taper and a CCD camera, was employed to replace the imaging system based on a scintillator, a lens and a CCD camera in the diagnosis of the X-ray spot. The resolution of the X-ray CCD camera was studied. The resolution is restricted by the film and is 5 lp/mm in the test with a Pb resolution chart. The frequency is 1.5 lp/mm at an MTF of 0.5 in the test with an edge image. The resolution tests indicate that the X-ray CCD camera can meet the requirements of the diagnosis of an X-ray spot of about 1.5 mm scale when the pinhole imaging magnification is 0.5. Finally, the image of the X-ray spot was obtained and image restoration was performed in the diagnosis of the X-ray spot of a rod-pinch diode. (authors)

  10. The status of MUSIC: the multiwavelength sub-millimeter inductance camera

    Science.gov (United States)

    Sayers, Jack; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Golwala, Sunil R.; Hollister, Matthew I.; Lam, Albert; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David A.; Mroczkowski, Anthony K.; Noroozian, Omid; Nguyen, Hien Trong; Schlaerth, James A.; Siegel, Seth R.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2014-08-01

    The Multiwavelength Sub/millimeter Inductance Camera (MUSIC) is a four-band photometric imaging camera operating from the Caltech Submillimeter Observatory (CSO). MUSIC is designed to utilize 2304 microwave kinetic inductance detectors (MKIDs), with 576 MKIDs for each observing band centered on 150, 230, 290, and 350 GHz. MUSIC's field of view (FOV) is 14' square, and the point-spread functions (PSFs) in the four observing bands have 45'', 31'', 25'', and 22'' full-widths at half maximum (FWHM). The camera was installed in April 2012 with 25% of its nominal detector count in each band, and has subsequently completed three short sets of engineering observations and one longer duration set of early science observations. Recent results from on-sky characterization of the instrument during these observing runs are presented, including achieved map- based sensitivities from deep integrations, along with results from lab-based measurements made during the same period. In addition, recent upgrades to MUSIC, which are expected to significantly improve the sensitivity of the camera, are described.

  11. A study on the performance evaluation of small gamma camera collimators using detective quantum efficiency

    International Nuclear Information System (INIS)

    Jeon, Ho Sang

    2008-02-01

    The Anger-type gamma camera and a novel marker compound using Tc-99m were first introduced in 1963. Gamma camera systems have been improved and applied to various fields, for example, medical, industrial, and environmental fields. A gamma camera is mainly composed of a collimator, detector, and signal processor, with the radioactive source itself as the imaging object. The collimator is an essential component of the gamma camera system because the imaging performance of the system depends mainly on the collimator. The performance evaluation of collimators can be done using evaluating factors. In this study, novel factors for gamma camera evaluation are suggested. The established evaluating factors defined by NEMA are FWHM, sensitivity, and uniformity. They have some limitations in spite of their usefulness. Firstly, performance evaluation with those factors gives only insensitive and indirect results. Secondly, the evaluation of noise properties is ambiguous. Thirdly, there is no synthetic evaluation of system performance. Simulation with a Monte Carlo code and experiments with a small gamma camera were performed simultaneously to verify the novel evaluating factors. For the evaluation of spatial resolution, MTF was applied instead of FWHM. The MTF values present an excellent linear relationship with FWHM values. The NNPS was applied instead of uniformity and sensitivity for the evaluation of noise fluctuation. The NNPS values also present a linear relationship with sensitivity and uniformity. Moreover, these novel factors give quantities as functions of spatial frequency. Finally, the DQE values were given by calculations with MTF, NNPS, and input SNR. DQE effectively provides a synthetic evaluation of gamma camera performance. It is concluded that MTF, NNPS, and DQE can be novel evaluating factors for gamma camera systems, and the new factor for synthetic evaluation is derived.
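
The DQE calculation the thesis describes combines MTF, NNPS and the input SNR. A common textbook form of that relation (our sketch; the symbols and example values are assumptions, not taken from the thesis) is DQE(f) = MTF(f)^2 / (q · NNPS(f)), with q the incident photon fluence (SNR_in^2 for Poisson-distributed input):

```python
import numpy as np

# Textbook DQE relation: DQE(f) = MTF(f)^2 / (q * NNPS(f)),
# where q is the incident photon fluence (SNR_in^2 for Poisson input).

def dqe(mtf: np.ndarray, nnps: np.ndarray, fluence: float) -> np.ndarray:
    return mtf**2 / (fluence * nnps)

# Illustrative numbers only: an MTF falling with frequency, a flat NNPS.
f = np.linspace(0.0, 2.0, 5)            # cycles/mm
mtf = np.exp(-f)                        # hypothetical MTF curve
nnps = np.full_like(f, 1e-5)            # hypothetical NNPS, mm^2
q = 1e5                                 # photons/mm^2
print(dqe(mtf, nnps, q))                # dimensionless; equals 1.0 at f=0 here
```

With these example values the zero-frequency DQE is exactly 1.0, i.e. an ideal detector; a real system falls below that at all frequencies.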

  12. Spatial capture–recapture with partial identity: An application to camera traps

    Science.gov (United States)

    Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.

    2018-01-01

    Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision, and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities—one of which is resolved to near certainty. The SPIM opens the door to investigating trapping designs that deviate from the standard two-camera design and to combining other data types between which identities cannot be deterministically linked, and it can be extended to the problem of partial genotypes.

  13. Evaluation of a high-resolution, breast-specific, small-field-of-view gamma camera for the detection of breast cancer

    International Nuclear Information System (INIS)

    Brem, R.F.; Kieper, D.A.; Rapelyea, J.A.; Majewski, S.

    2003-01-01

    Purpose: The purpose of our study is to review the state of the art in nuclear medicine imaging of the breast (scintimammography) and to evaluate a novel, high-resolution, breast-specific gamma camera (HRBGC) for the detection of suspicious breast lesions. Materials and Methods: Fifty patients with 58 breast lesions in whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a HRBGC prototype. Nuclear studies were prospectively classified as negative (normal/benign) or positive (suspicious/malignant) by two radiologists, blinded to mammographic and histologic results, using both the conventional and high-resolution systems. All lesions were confirmed by pathology. Results: Included in this study were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. Specificity of both systems was 93.3% (28/30). In the 18 nonpalpable cancers, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and HRBGC, respectively. In cancers ≤ 1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four of the cancers (median size, 8.5 mm) detected with the HRBGC were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with a high-resolution, breast-specific gamma camera results in improved sensitivity for the detection of cancer, with greater improvement demonstrated in nonpalpable and ≤1 cm cancers.
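
The headline statistics quoted above follow directly from the raw counts; a quick sanity check in Python (no new data, just the abstract's own numbers):

```python
# Reproducing the abstract's reported percentages from its raw counts.

def pct(numer: int, denom: int) -> float:
    """Percentage rounded to one decimal place, as reported in the abstract."""
    return round(100.0 * numer / denom, 1)

assert pct(18, 28) == 64.3   # conventional camera, sensitivity
assert pct(22, 28) == 78.6   # HRBGC, sensitivity
assert pct(28, 30) == 93.3   # specificity, both systems
assert pct(13, 18) == 72.2   # HRBGC, nonpalpable cancers
print("reported percentages check out")
```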

  14. Recording of radiation-induced optical density changes in doped agarose gels with a CCD camera

    International Nuclear Information System (INIS)

    Tarte, B.J.; Jardine, P.A.; Van Doorn, T.

    1996-01-01

    Full text: Spatially resolved dose measurement with iron-doped agarose gels continues to be investigated for applications in radiotherapy dosimetry. It has previously been proposed to use optical methods, rather than MRI, for dose measurement with such gels, and this has been investigated using a spectrophotometer (Appleby A and Leghrouz A, Med Phys, 18:309-312, 1991). We have previously studied the use of a pencil-beam laser for such optical density measurement of gels and are currently investigating charge-coupled device (CCD) camera imaging for the same purpose, with the advantages of higher data acquisition rates and potentially greater spatial resolution. The gels used in these studies were poured, irradiated and optically analysed in Perspex casts providing gel sections 1 cm thick and up to 20 cm x 30 cm in dimension. The gels were also infused with a metal indicator dye (xylenol orange) to render the radiation-induced oxidation of the iron in the gel sensitive to optical radiation, specifically in the green spectral region. Data acquisition with the CCD camera involved illumination of the irradiated gel section with a diffuse white light source, with the light from the plane of the gel section focussed onto the CCD array with a manual zoom lens. The light was also filtered with a green colour glass filter to maximise the contrast between unirradiated and irradiated gels. The CCD camera (EG&G Reticon MC4013) featured a 1024 x 1024 pixel array and was interfaced to a PC via a frame grabber acquisition board with 8 bit resolution. The performance of the gel dosimeter was appraised in mapping of physical and dynamic wedged 6 MV X-ray fields. The results from the CCD camera detection system were compared with both ionisation chamber data and laser-based optical density measurements of the gels. Cross-beam profiles were extracted from each measurement system at a particular depth (e.g. 2.3 cm for the physical wedge field) for direct comparison. A

  15. Creating personalized memories from social events: community-based support for multi-camera recordings of school concerts

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick); V. Zsombori; I. Kegel

    2011-01-01

    The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments

  16. Performance characteristics of the novel PETRRA positron camera

    CERN Document Server

    Ott, R J; Erlandsson, K; Reader, A; Duxbury, D; Bateman, J; Stephenson, R; Spill, E

    2002-01-01

    The PETRRA positron camera consists of two 60 cm × 40 cm annihilation photon detectors mounted on a rotating gantry. Each detector contains large BaF2 scintillators interfaced to large-area multiwire proportional chambers filled with a photo-sensitive vapour (tetrakis-(dimethylamino)-ethylene). The spatial resolution of the camera has been measured as 6.5±1.0 mm FWHM throughout the sensitive field-of-view (FoV), the timing resolution is between 7 and 10 ns FWHM, and the detection efficiency for annihilation photons is approximately 30% per detector. The count-rates obtained, from a 20 cm diameter by 11 cm long water-filled phantom containing 90 MBq of 18F, were approximately 1.25×10^6 cps singles and approximately 1.1×10^5 cps raw coincidences, limited only by the read-out system dead-time of approximately 4 μs. The count-rate performance, sensitivity and large FoV make the camera ideal for whole-body imaging in oncology.
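
The abstract attributes the count-rate limit to the roughly 4 μs read-out dead-time. A minimal non-paralyzable dead-time model (our illustration, not the paper's analysis) shows how a loss-corrected rate could be recovered from the quoted raw coincidence rate:

```python
# Non-paralyzable dead-time model: observed rate m relates to true rate n by
# m = n / (1 + n*tau). This is a standard textbook model, assumed here to
# apply to the camera's read-out; it is not taken from the paper.

def observed_rate(true: float, tau: float) -> float:
    return true / (1.0 + true * tau)

def true_rate(observed: float, tau: float) -> float:
    # Inverse of the model above; valid while observed * tau < 1.
    return observed / (1.0 - observed * tau)

tau = 4e-6                    # s, read-out dead-time from the abstract
m = 1.1e5                     # cps, raw coincidence rate from the abstract
n = true_rate(m, tau)
print(f"loss-corrected coincidence rate: {n:.3g} cps")
```

With the quoted numbers the correction is substantial (m·τ ≈ 0.44), i.e. nearly half of the potential coincidences are lost to read-out dead-time.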

  17. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors in the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and also in their standard deviation. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  18. Solutions on a high-speed wide-angle zoom lens with aspheric surfaces

    Science.gov (United States)

    Yamanashi, Takanori

    2012-10-01

    Recent developments in CMOS and digital camera technology have accelerated the growth and market share of digital cinematography. In terms of optical design, this technology has increased the need to carefully consider pixel pitch and the characteristics of the imager. When the field angle at the wide end, zoom ratio, and F-number are specified, choosing an appropriate zoom lens type is crucial. In addition, appropriate power distributions and lens configurations are required. Near the wide end of a zoom lens, an aspheric surface is known to be an effective means of correcting off-axis aberrations. On the other hand, optical designers have to focus on the manufacturability of aspheric surfaces and perform the required analysis of the surface shape. Centration errors aside, it is also important to know the sensitivity to aspheric shape errors and their effect on image quality. In this paper, wide-angle cine zoom lens design examples are introduced and their main characteristics are described. Moreover, technical challenges are pointed out and solutions are proposed.

  19. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  20. Surface impedance of superconductors in wide frequency ranges for wake field calculations

    International Nuclear Information System (INIS)

    Davidovskii, V.G.

    2006-01-01

    The problem of the surface impedance of superconductors in wide frequency ranges for calculations of wake fields, generated by bunches of charged particles moving axially inside a metallic vacuum chamber, is solved. The case of specular electron reflection at the superconductor surface is considered. The expression for the surface impedance of superconductors suitable for numerical computation is derived.

  1. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
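
The restoration step can be pictured as inverting a known linear mixing: if the keystone geometry (and the light-mixing chambers) determine how scene values are blended onto sensor pixels, the keystone-free data is the least-squares solution of that linear system. A toy numpy sketch under that assumption (not the authors' actual algorithm; the mixing matrix here is a made-up stand-in):

```python
import numpy as np

# Toy model: each sensor pixel records a known weighted sum of ideal scene
# pixels (stand-in for keystone + light mixing); inverting that linear system
# recovers the keystone-free data.

rng = np.random.default_rng(0)
n_scene, n_sensor = 8, 10

# Hypothetical mixing matrix: each sensor pixel linearly interpolates a
# fractional scene coordinate.
A = np.zeros((n_sensor, n_scene))
for i in range(n_sensor):
    c = i * (n_scene - 1) / (n_sensor - 1)      # fractional scene coordinate
    lo = int(np.floor(c))
    hi = min(lo + 1, n_scene - 1)
    w = c - lo
    A[i, lo] += 1.0 - w
    A[i, hi] += w

scene = rng.uniform(1.0, 2.0, n_scene)          # ideal keystone-free signal
recorded = A @ scene                            # what the sensor sees
restored, *_ = np.linalg.lstsq(A, recorded, rcond=None)
print(np.max(np.abs(restored - scene)))         # tiny residual (noise-free case)
```

In the noise-free case the recovery is exact to machine precision; the paper's point is that with the mixing chambers the real problem stays well conditioned even with large keystone.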

  2. ACCURACY ASSESSMENT OF GO PRO HERO 3 (BLACK) CAMERA IN UNDERWATER ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    P. Helmholz

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of Underwater Photogrammetry, especially since the change of medium to water in turn alters the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.

  3. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    Science.gov (United States)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than $500. A possible application of such action cameras is in the field of Underwater Photogrammetry, especially since the change of medium to water in turn alters the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates for possible photogrammetric applications. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, the largest rms value of only 0.450 mm and the largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.

  4. A Distributed Wireless Camera System for the Management of Parking Spaces.

    Science.gov (United States)

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

    The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of a parking space based on the information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.
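
A rough sketch of the HOG-plus-linear-SVM decision described above, reimplemented from scratch with numpy (single-cell HOG with no block normalization; the weight vector `w` and bias `b` stand in for a trained SVM, and the paper's vehicle-orientation feature is omitted):

```python
import numpy as np

# Minimal HOG-like feature + linear decision. The real system trains an SVM
# on labeled parking-space patches; here w and b are hypothetical.

def hog_feature(patch: np.ndarray, n_bins: int = 9) -> np.ndarray:
    """Orientation histogram of gradients over one patch (simplified HOG:
    one cell, L2-normalized)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def occupied(patch: np.ndarray, w: np.ndarray, b: float) -> bool:
    """Linear SVM decision: sign of w.x + b."""
    return float(w @ hog_feature(patch) + b) > 0.0

# Hypothetical weights and a synthetic patch with a pure vertical gradient.
w, b = np.ones(9) / 3.0, -0.5
patch = np.outer(np.arange(16), np.ones(16))
print(occupied(patch, w, b))
```

The synthetic patch puts all gradient energy into one orientation bin, so the feature is a unit vector and the decision reduces to a single weight against the bias.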

  5. A Distributed Wireless Camera System for the Management of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek

    2017-12-01

    The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of a parking space based on the information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient algorithm for occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces.

  6. Relevance of wide-field autofluorescence imaging in Birdshot retinochoroidopathy: descriptive analysis of 76 eyes.

    Science.gov (United States)

    Piffer, Anne-Laure Le; Boissonnot, Michèle; Gobert, Frédéric; Zenger, Anita; Wolf, Sebastian; Wolf, Ute; Korobelnik, Jean-François; Rougier, Marie-Bénédicte

    2014-09-01

    To study and classify retinal lesions in patients with birdshot disease using wide-field autofluorescence imaging and to correlate them with patients' visual status. A multicentre study was carried out on 76 eyes of 39 patients with birdshot disease, analysing colour and autofluorescence images obtained with the wide-field Optomap® imaging system. This was combined with a complete clinical exam and analysis of the macula with OCT. In over 80% of the eyes, a chorioretinal lesion was observed under autofluorescence, with a direct correlation between the extent of the lesion and visual status. The presence of macular hypo-autofluorescence was correlated with decreased visual acuity, due to the presence of macular oedema, active clinical inflammation or an epiretinal membrane. The hypo-autofluorescence observed correlated with the duration of the disease and the degree of inflammation in the affected eye, indicating a secondary lesion in the pigment epithelium in relation to the choroid. The pigment epithelium was affected in a diffuse manner, as in almost 50% of the eyes the wider peripheral retina was affected. Wide-field autofluorescence imaging appears to be a useful examination when monitoring patients, to look for areas of macular hypo-autofluorescence responsible for an irreversible loss of vision. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20C, the CLASP cameras exceeded the low-noise performance requirements for UV, EUV and X-ray science cameras at MSFC.

  8. Analysis of dark current images of a CMOS camera during gamma irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Czifrus, Szabolcs, E-mail: czifrus@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Kocsis, Gábor, E-mail: kocsis.gabor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [INT, BME, EURATOM Association, H-1111 Budapest (Hungary); Szepesi, Tamás, E-mail: szepesi.tamas@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2013-12-15

    Highlights: • Radiation tolerance of the fast framing CMOS camera EDICAM is examined. • We estimate the expected gamma dose and spectrum for EDICAM with MCNP. • We irradiate EDICAM with 23.5 Gy in 70 min in a fission reactor. • Dose-rate-normalised average brightness of frames grows linearly with the dose. • Dose-normalised average brightness of frames follows the dose rate time evolution. -- Abstract: We report on the behaviour of the dark current images of the Event Detection Intelligent Camera (EDICAM) when placed into an irradiation field of gamma rays. EDICAM is an intelligent fast framing CMOS camera operating in the visible spectral range, which is designed for the video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. Monte Carlo calculations were carried out in order to estimate the expected gamma spectrum and dose for an entire year of operation in W7-X. EDICAM was irradiated in a pure gamma field in the Training Reactor of BME with a dose of approximately 23.5 Gy in 1.16 h. During the irradiation, numerous frame series were taken with the camera with exposure times of 20 μs, 50 μs, 100 μs, 1 ms, 10 ms and 100 ms. EDICAM withstood the irradiation, but suffered some dynamic range degradation. The behaviour of the dark current images during irradiation is described in detail. We found that the average brightness of dark current images depends on the total ionising dose that the camera is exposed to and the dose rate, as well as on the applied exposure times.
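
The reported linear growth of dose-rate-normalised dark-frame brightness with accumulated dose can be extracted with an ordinary least-squares line fit. A minimal sketch with synthetic numbers (the slope, offset and noise level are our assumptions, not the paper's data):

```python
import numpy as np

# Fit brightness/dose_rate = slope * dose + offset on synthetic data spanning
# the ~23.5 Gy total dose reported in the abstract.

rng = np.random.default_rng(1)
dose = np.linspace(0.0, 23.5, 50)                 # Gy, accumulated dose
true_slope, true_offset = 0.8, 5.0                # hypothetical values
norm_brightness = true_offset + true_slope * dose + rng.normal(0, 0.1, dose.size)

slope, offset = np.polyfit(dose, norm_brightness, 1)
print(f"fitted slope {slope:.2f} (a.u./Gy), offset {offset:.2f} (a.u.)")
```

A linear fit like this is the natural way to quantify the dose dependence the highlights describe; the residuals would then expose any dose-rate-dependent deviation.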

  9. The design of the wide field monitor for LOFT

    DEFF Research Database (Denmark)

    Brandt, Søren; Hernanz, M.; Alvarez, L.

    2014-01-01

    is designed to carry on-board two instruments with sensitivity in the 2-50 keV range: a 10 m2-class Large Area Detector (LAD) and a Wide Field Monitor (WFM) making use of coded masks and providing an instantaneous coverage of more than 1/3 of the sky. The prime goal of the WFM will be to detect transient sources to be observed by the LAD. However, thanks to its unique combination of a wide field of view (FoV) and energy resolution (better than 500 eV), the WFM will also be an excellent monitoring instrument to study the long-term variability of many classes of X-ray sources. The WFM

  10. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  11. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  12. Peripheral plasma measurement during SMBI in Heliotron J using fast cameras

    International Nuclear Information System (INIS)

    Nishino, N.; Mizuuchi, T.; Takeuchi, M.; Mukai, K.; Takabatake, Y.; Nagasaki, K.; Kobayashi, S.; Okada, H.; Ohshima, S.; Yamamoto, S.; Minami, T.; Hanatani, K.; Konoshima, S.; Nakamura, Y.; Sano, F.

    2011-01-01

    Since fueling techniques are very important for maintaining a fusion plasma, supersonic molecular beam injection (SMBI) was studied in Heliotron J, mainly using fast cameras, Hα measurement, Langmuir/magnetic probes, and electron density/diamagnetic measurements. Using a fast camera with a tangential view, a very bright stripe along the magnetic field line was observed during SMBI. Time-dependent FFT analysis of data from each pixel showed that low-frequency waves rotated around the magnetic field line in a left-handed sense at the initial stage of SMBI. After a few milliseconds they propagated towards the SMBI region along the magnetic field line, and their phase velocities were almost the same. The experimental evidence is consistent with an interpretation as an ion acoustic wave, and the peak frequency of these waves was the same as that of the power spectra of the magnetic probe signals. This suggests that the slow magnetoacoustic wave may convert into an ion acoustic wave due to collisions with neutrals.

  13. A multicenter prospective cohort study on camera navigation training for key user groups in minimally invasive surgery

    NARCIS (Netherlands)

    Graafland, Maurits; Bok, Kiki; Schreuder, Henk W. R.; Schijven, Marlies P.

    2014-01-01

    Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk for errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents,

  14. Power estimation of martial arts movement using 3D motion capture camera

    Science.gov (United States)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

    Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation, and the arts. This project approaches the problem through physical mechanics and extends the application of MOCAP to sports. Most researchers use a force plate, but a force plate can only measure the force of impact, whereas we are keen to observe the kinematics of the movement. Martial arts is one of the sports that uses more than one part of the human body. For this project, the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one with experience in `Silat' practice and one with no experience at all, so that we can compare the energy and force generated by the performers. Each performer generated punches with the same posture; two types of punching moves were selected for this project. Before the measurements started, a calibration was performed using a T-stick fitted with markers, so that the software knows the area covered by the cameras, reducing error during analysis. A punching bag of mass 60 kg was hung on an iron bar as a target; it is used to determine the impact force when a performer punches. The punching bag was also fitted with optical markers so that its movement after impact can be observed. Eight cameras were used, placed two on each side of the wall at different angles in a rectangular room of 270 ft2, of which the cameras covered approximately 50 ft2. We covered only a small area so that less noise is detected, making the measurement more accurate. Markers were attached along the entire hand that we wanted to observe and measure. The passive markers used in this project reflect the infrared light generated by the cameras; the reflected light reaches the camera sensors so that the marker positions can be detected and shown in the software. The use of many cameras is to increase the
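
The power estimation itself reduces to simple kinematics on the marker trajectories: differentiate position to get speed, form the kinetic energy E = ½mv², and divide the energy change by the punch duration. A minimal sketch with hypothetical numbers (frame rate, effective fist mass and trajectory are all assumptions, not the study's data):

```python
import numpy as np

# Kinematics chain: marker positions -> finite-difference velocity ->
# kinetic energy -> mean power over the punch.

fps = 120.0                                   # Hz, hypothetical capture rate
m_fist = 0.6                                  # kg, assumed effective hand mass
t = np.arange(0, 0.25, 1 / fps)               # s, one punch
x = 4.0 * t**2                                # m, hypothetical accelerating fist

v = np.gradient(x, 1 / fps)                   # m/s, finite-difference velocity
e_kin = 0.5 * m_fist * v**2                   # J at each sample
mean_power = (e_kin[-1] - e_kin[0]) / (t[-1] - t[0])   # W, averaged over the punch
print(f"peak speed {v.max():.2f} m/s, mean power {mean_power:.2f} W")
```

With real MOCAP data the same chain is applied per marker in 3D (speed as the norm of the velocity vector), and the impact force can be estimated separately from the punching bag's momentum change.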

  15. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth observation instruments on the HY-1 satellite, which will be launched in 2001, is the multi-spectral CCD camera system developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). From a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, covering pollution of the offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflats, soil and water vapor. The multi-spectral camera system is composed of four monochrome CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts field-of-view registration; that is, each camera scans the same region at the same moment. Each camera contains optics, a focal plane assembly, electrical circuitry, mounting structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength better than 5 nm; (2) degree of polarization less than 0.5%; (3) signal-to-noise ratio of about 1000; (4) dynamic range better than 2000:1; (5) registration precision better than 0.3 pixel; (6) 12-bit quantization.
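As a quick arithmetic illustration (not from the paper), the stated swath and ground resolution imply the line length of each push-broom array, and 12-bit quantization comfortably spans the stated dynamic range:

```python
# Push-broom geometry sanity check using the numbers quoted in the abstract:
# a 500 km swath imaged at 250 m ground resolution needs 2000 pixels per line.
swath_km = 500.0
ground_resolution_m = 250.0
pixels_per_line = (swath_km * 1000.0) / ground_resolution_m

# A 12-bit quantizer spans 4096 levels, more than the stated 2000:1
# dynamic range.
quantization_levels = 2 ** 12
```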

  16. Spectroscopic gamma camera for use in high dose environments

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Yuichiro, E-mail: yuichiro.ueno.bv@hitachi.com [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Fujishima, Yasutake; Kometani, Yutaka [Hitachi Works, Hitachi-GE Nuclear Energy, Ltd., Hitachi-shi, Ibaraki-ken (Japan); Suzuki, Yasuhiko [Measuring Systems Engineering Dept., Hitachi Aloka Medical, Ltd., Ome-shi, Tokyo (Japan); Umegaki, Kikuo [Faculty of Engineering, Hokkaido University, Sapporo-shi, Hokkaido (Japan)

    2016-06-21

    We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance of high dose rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability for being carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine-resolution pinhole collimator. The gamma camera weighs less than 100 kg; its field of view is an 8 m square at a distance of 10 m, and its image is divided into 256 (16×16) pixels. From the laboratory test, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is sufficient to identify the radionuclides. We found that the count rate per unit background dose rate was 220 cps/(mSv/h) and the maximum count rate was 300 kcps, so the maximum dose rate of the environment in which the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and could identify the unknown contaminating source in a dose rate environment as high as 659 mSv/h.
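The operating-limit figure quoted above follows directly from the measured count-rate sensitivity and the maximum count rate:

```python
# Reproducing the abstract's operating-limit estimate: with 220 cps per
# (mSv/h) of background dose rate and a 300 kcps maximum count rate, the
# camera saturates at roughly 1400 mSv/h.
count_rate_per_dose = 220.0      # cps per (mSv/h)
max_count_rate = 300_000.0       # cps
max_dose_rate = max_count_rate / count_rate_per_dose   # mSv/h, ~1364
```

Rounded to two significant figures, this matches the 1400 mSv/h quoted in the abstract.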

  17. On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras

    Science.gov (United States)

    Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.

    2017-11-01

    PLAnetary Transits and Oscillations of stars (PLATO) is the third European Space Agency (ESA) medium-class mission in ESA's Cosmic Vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. Each camera is made up of a telescope optical unit and four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are 2 fast cameras with a high read-out cadence (2.5 s) for magnitude ~ 4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in-orbit. This paper discusses the initial development of the experimental arrangement, test procedures and current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed, which will enable the performance of the CCDs and their associated N-FEEs to be monitored during the mission; this will include measurements of CTI, giving an indication of the impact of radiation damage in the CCDs.
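As an illustration of one of the listed parameters, pixel response non-uniformity (PRNU) can be estimated from a stack of flat-field frames (a generic sketch, not the PLATO test procedure; the illumination level and noise figures are assumed):

```python
import numpy as np

def estimate_prnu(flat_frames):
    """Estimate PRNU from a stack of flat-field frames: average the stack
    to suppress temporal (shot/read) noise, then report the spatial RMS
    deviation of the mean frame relative to its mean signal level."""
    stack = np.asarray(flat_frames, dtype=float)
    mean_frame = stack.mean(axis=0)      # per-pixel mean over frames
    level = mean_frame.mean()
    return mean_frame.std() / level      # fractional PRNU

# Synthetic detector with a 2% fixed-pattern gain variation, illuminated
# at 1000 DN with 5 DN of temporal noise per frame.
rng = np.random.default_rng(0)
gains = 1.0 + 0.02 * rng.standard_normal((64, 64))
frames = [1000.0 * gains + rng.normal(0.0, 5.0, (64, 64)) for _ in range(100)]
prnu = estimate_prnu(frames)   # recovers approximately the injected 2%
```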

  18. Radiation-resistant optical sensors and cameras; Strahlungsresistente optische Sensoren und Kameras

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G. [Imaging and Sensing Technology, Bonn (Germany)

    2008-02-15

    Introducing video technology, i.e. 'TV', specifically in the nuclear field was considered at an early stage. Possibilities to view spaces in nuclear facilities by means of radiation-resistant optical sensors or cameras are presented. These systems are to enable operators to monitor and control visually the processes occurring within such spaces. Camera systems are used, e.g., for remote surveillance of critical components in nuclear power plants and nuclear facilities, and thus contribute also to plant safety. A different application of optical systems resistant to radiation is in the visual inspection of, e.g., reactor pressure vessels and in tracing small parts inside a reactor. Camera systems are also employed in remote disassembly of radioactively contaminated old plants. Unfortunately, the niche market of radiation-resistant camera systems hardly gives rise to the expectation of research funds becoming available for the development of new radiation-resistant optical systems for picture taking and viewing. Current efforts are devoted mainly to improvements of image evaluation and image quality. Other items on the agendas of manufacturers are the reduction in camera size, which is limited by the size of picture tubes, and the increased use of commercial CCD cameras together with adequate shieldings or improved lenses. Consideration is also being given to the use of periphery equipment and to data transmission by LAN, WAN, or Internet links to remote locations. (orig.)

  19. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  20. Wide-Field Gamma-Spectrometer BDRG: GRB Monitor On-Board the Lomonosov Mission

    Science.gov (United States)

    Svertilov, S. I.; Panasyuk, M. I.; Bogomolov, V. V.; Amelushkin, A. M.; Barinova, V. O.; Galkin, V. I.; Iyudin, A. F.; Kuznetsova, E. A.; Prokhorov, A. V.; Petrov, V. L.; Rozhkov, G. V.; Yashin, I. V.; Gorbovskoy, E. S.; Lipunov, V. M.; Park, I. H.; Jeong, S.; Kim, M. B.

    2018-02-01

    The study of GRB prompt emissions (PE) is one of the main goals of the Lomonosov space mission. The payloads of the GRB monitor (BDRG) with the wide-field optical cameras (SHOK) and the ultra-fast flash observatory (UFFO) onboard the Lomonosov satellite are intended for the observation of GRBs, and in particular, their prompt emissions. The BDRG gamma-ray spectrometer is designed to obtain the temporal and spectral information of GRBs in the energy range of 10-3000 keV, as well as to provide GRB triggers on several time scales (10 ms, 1 s and 20 s) for ground and space telescopes, including the UFFO and SHOK. The BDRG instrument consists of three identical detector boxes with axes shifted by 90° from each other. This configuration allows us to localize a GRB source in the sky with an accuracy of ˜ 2°. Each BDRG box contains a phoswich NaI(Tl)/CsI(Tl) scintillator detector. A thick CsI(Tl) crystal, ⌀130 mm × 17 mm, is placed underneath the NaI(Tl) as an active shield in the soft energy range and as the main detector in the hard energy range. The ratio of the CsI(Tl) to NaI(Tl) event rates at varying energies can be employed as an independent metric to distinguish legitimate GRB signals from false positives originating from electrons in near-Earth vicinities. The data from the three detectors are collected in the BA BDRG information unit, which generates a GRB trigger and a set of data frames in the output format. The scientific data output is ˜ 500 Mb per day, including ˜ 180 Mb of continuous data for events with durations in excess of 100 ms for 16 channels in each detector, detailed energy spectra, and sets of frames with ˜ 5 Mb of detailed information for each burst-like event. A number of pre-flight tests, including those for the trigger algorithm and calibration, were carried out to confirm the reliability of the BDRG for operation in space.
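A multi-timescale rate trigger of the kind described can be sketched as follows (a simplified illustration of my own; the actual BDRG algorithm, thresholds and background estimator are not specified in the abstract):

```python
import numpy as np

def rate_trigger(counts, bin_s, window_s, bg_window_s, n_sigma=5.0):
    """Flag bin indices where the summed counts in a sliding foreground
    window exceed the trailing-background expectation by n_sigma,
    assuming Poisson statistics. counts: per-bin counts at bin_s seconds.
    Running this with window_s = 0.01, 1 and 20 gives the three timescales."""
    counts = np.asarray(counts, dtype=float)
    w = max(1, int(round(window_s / bin_s)))       # foreground bins
    b = max(1, int(round(bg_window_s / bin_s)))    # background bins
    triggers = []
    for i in range(b, len(counts) - w + 1):
        bg_rate = counts[i - b:i].sum() / b        # counts per bin
        expected = bg_rate * w
        observed = counts[i:i + w].sum()
        if observed > expected + n_sigma * np.sqrt(max(expected, 1.0)):
            triggers.append(i)
    return triggers

# Synthetic light curve: 50 counts per 10 ms bin of background, plus a
# bright 100 ms burst starting at bin 1000.
rng = np.random.default_rng(1)
lc = rng.poisson(50.0, 2000).astype(float)
lc[1000:1010] += 200.0
hits = rate_trigger(lc, bin_s=0.01, window_s=0.1, bg_window_s=5.0)
```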

  1. A flexible geometry Compton camera for industrial gamma ray imaging

    International Nuclear Information System (INIS)

    Royle, G.J.; Speller, R.D.

    1996-01-01

    A design for a Compton scatter camera is proposed which is applicable to gamma ray imaging within limited access industrial sites. The camera consists of a number of single element detectors arranged in a small cluster. Coincidence circuitry enables the detectors to act as a scatter camera. Positioning the detector cluster at various locations within the site, and subsequent reconstruction of the recorded data, allows an image to be obtained. The camera design allows flexibility to cater for limited space or access simply by positioning the detectors in the optimum geometric arrangement within the space allowed. The quality of the image will be limited but imaging could still be achieved in regions which are otherwise inaccessible. Computer simulation algorithms have been written to optimize the various parameters involved, such as geometrical arrangement of the detector cluster and the positioning of the cluster within the site, and to estimate the performance of such a device. Both scintillator and semiconductor detectors have been studied. A prototype camera has been constructed which operates three small single element detectors in coincidence. It has been tested in a laboratory simulation of an industrial site. This consisted of a small room (2 m wide x 1 m deep x 2 m high) into which the only access points were two 6 cm diameter holes in a side wall. Simple images of Cs-137 sources have been produced. The work described has been done on behalf of BNFL for applications at their Sellafield reprocessing plant in the UK
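The cone-angle computation at the heart of Compton-scatter imaging can be illustrated as follows (a generic sketch, not the authors' reconstruction code; it assumes both deposited energies are measured and the scattered photon is fully absorbed in the second detector):

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def compton_scatter_angle(e_scatter_kev, e_absorb_kev):
    """Scattering angle (radians) from the energies deposited in the
    scatter and absorption detectors of a two-detector Compton camera:
    cos(theta) = 1 - m_e*c^2 * (1/E' - 1/E0), where E0 = E_scatter +
    E_absorb is the incident energy and E' = E_absorb is the
    scattered-photon energy. The source lies on a cone of half-angle
    theta about the scatter-to-absorber axis."""
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    return math.acos(max(-1.0, min(1.0, cos_theta)))

# Cs-137 (662 keV): a 150 keV deposit in the scatterer followed by full
# absorption of the remaining 512 keV constrains the source direction.
theta_deg = math.degrees(compton_scatter_angle(150.0, 512.0))
```

Intersecting many such cones, recorded from different cluster positions, yields the image.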

  2. Hydra phantom applicability for carrying out tests of field uniformity in gamma cameras; Aplicabilidade do fantoma hydra para realizacao dos testes de uniformidade de campo em gama camaras

    Energy Technology Data Exchange (ETDEWEB)

    Aragao Filho, Geraldo L., E-mail: geraldo_lemos10@hotmail.com [Centro de Medicina Nuclear de Pernambuco (CEMUPE), Recife, PE (Brazil); Oliveira, Alex C.H., E-mail: oliveira_ach@yahoo.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear; Lopes Filho, Ferdinand J.; Vieira, Jose W., E-mail: ferdinand.lopes@oi.com.br, E-mail: jose-wilson59@live.com [Instituto Federal de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Nuclear medicine is a medical modality that uses radioactive material 'in vivo' in humans, making the patient a temporary radioactive source. The radiation emitted by the patient's body is detected by specific equipment, the gamma camera, which creates an image showing the spatial and temporal biodistribution of the radioactive material administered to the patient. A number of specific measures, collectively called quality control, are therefore of fundamental importance to ensure that the procedure is satisfactory. In nuclear medicine, quality control of the gamma camera has the purpose of ensuring accurate, truthful and reliable scintigraphic images for diagnosis, guaranteeing the visibility and clarity of structural details, and also determining the frequency of, and need for, preventive maintenance of the equipment. Quality control of the gamma camera requires simulators, called phantoms, which are used in nuclear medicine to evaluate system performance, calibrate the system and simulate lesions. The goal of this study was to validate a new simulator for nuclear medicine, the Hydra phantom. The phantom was initially built for the construction of calibration curves used in radiotherapy planning and for quality control in CT. It has characteristics similar to phantoms specific to nuclear medicine, containing inserts and a water region. The inserts are made of regionally sourced materials, many of them already used in the literature, selected on the basis of density and radiation-matter interaction data. To verify its efficiency for quality control in nuclear medicine, a field uniformity test was performed, one of the main tests performed daily; it verifies the ability of the gamma camera to reproduce a uniform distribution of the activity administered in the phantom, analysed qualitatively, through the image, and quantitatively, through values established for the Central Field Of View (CFOV) and Useful Field Of View (UFOV).
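The quantitative uniformity figure over the UFOV and CFOV is commonly expressed as integral uniformity. A minimal sketch (the NEMA protocol additionally smooths and resamples the flood image, which is omitted here):

```python
import numpy as np

def integral_uniformity(flood_image):
    """Integral uniformity over a flood-field image, in percent:
    IU = 100 * (max - min) / (max + min), computed after restricting the
    image to the field of view of interest (UFOV or CFOV)."""
    img = np.asarray(flood_image, dtype=float)
    return 100.0 * (img.max() - img.min()) / (img.max() + img.min())

# Synthetic flood image with one hot pixel near the edge of the field.
flood = np.full((64, 64), 1000.0)
flood[2, 2] = 1100.0
iu_ufov = integral_uniformity(flood)          # hot spot degrades the UFOV
cfov = flood[8:56, 8:56]                      # central 75% of the field
iu_cfov = integral_uniformity(cfov)           # CFOV is unaffected here
```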

  3. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to perform several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
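A sketch of how QE follows from a photodiode-referenced measurement (illustrative only; the 1 pW power and signal level are made-up numbers, and a real measurement must account for beam geometry and the diode's calibrated responsivity):

```python
PLANCK_H = 6.62607015e-34   # J s
LIGHT_C = 2.99792458e8      # m/s

def photon_flux(power_w, wavelength_nm):
    """Photons per second in a monochromatic beam of the given power."""
    photon_energy_j = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)
    return power_w / photon_energy_j

def quantum_efficiency(electrons_per_s, power_w, wavelength_nm):
    """QE = detected electrons per incident photon, with the incident
    photon rate derived from the power reported by a calibrated diode."""
    return electrons_per_s / photon_flux(power_w, wavelength_nm)

# Example at Lyman-alpha (121.6 nm): assumed 1 pW incident on the CCD,
# assumed 3.0e5 e-/s detected after bias/dark subtraction.
flux = photon_flux(1e-12, 121.6)               # roughly 6e5 photons/s
qe = quantum_efficiency(3.0e5, 1e-12, 121.6)   # roughly 0.49
```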

  4. Toward standardising gamma camera quality control procedures

    International Nuclear Information System (INIS)

    Alkhorayef, M.A.; Alnaaimi, M.A.; Alduaij, M.A.; Mohamed, M.O.; Ibahim, S.Y.; Alkandari, F.A.; Bradley, D.A.

    2015-01-01

    Attaining high standards of efficiency and reliability in the practice of nuclear medicine requires appropriate quality control (QC) programs. For instance, the regular evaluation and comparison of extrinsic and intrinsic flood-field uniformity enables the quick correction of many gamma camera problems. Whereas QC tests for uniformity are usually performed by exposing the gamma camera crystal to a uniform flux of gamma radiation from a source of known activity, such protocols can vary significantly. Thus, there is a need for optimization and standardization, in part to allow direct comparison between gamma cameras from different vendors. In the present study, intrinsic uniformity was examined as a function of source distance, source activity, source volume and number of counts. The extrinsic uniformity and spatial resolution were also examined. Proper standard QC procedures need to be implemented because of the continual development of nuclear medicine imaging technology and the rapid expansion and increasing complexity of hybrid imaging system data. The present work seeks to promote a set of standard testing procedures to contribute to the delivery of safe and effective nuclear medicine services. - Highlights: • Optimal parameters for quality control of the gamma camera are proposed. • For extrinsic and intrinsic uniformity a minimum of 15,000 counts is recommended. • For intrinsic flood uniformity the activity should not exceed 100 µCi (3.7 MBq). • For intrinsic uniformity the source to detector distance should be at least 60 cm. • The bar phantom measurement must be performed with at least 15 million counts.

  5. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    Science.gov (United States)

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  6. Performance benefits and limitations of a camera network

    Science.gov (United States)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10⁷-10⁸ 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10³-10⁵ images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10⁵ imagers, each with about 10⁴-10⁵ pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10⁹ cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  7. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  8. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  9. Three-reflections telescope proposal as flat-field anastigmat for wide field observations at Dome C

    Science.gov (United States)

    Ferrari, M.; Lemaître, G.; Viotti, R.; La Padula, C.; Comte, G.; Blanc, M.; Boer, M.

    It is now evident that the exceptional seeing at Dome C will allow, in the coming years, the pursuit of astronomical programs under conditions better than at any other observatory in the world, and very close to those of space experiments. With a new type of wide-field telescope, particular astronomical programs could be well optimized for observations at Dome C, such as surveys for the discovery and follow-up of near-Earth asteroids, searches for extra-solar planets using transit or micro-lensing events, and stellar luminosity variations. We propose to build a 1.5-2 m class three-reflection telescope with a 1-1.5 degree FOV, four times shorter than an equivalent Schmidt telescope, and providing a flat field without requiring a triplet- or quadruplet-lens corrector, since its design is anastigmatic. We present the preliminary optical tests of such designs: MINITRUST 1 and 2 are two identical 45 cm prototypes, based in France and Italy, manufactured using active optics techniques.

  10. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is subject to research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is – however – quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's computational (mobile) power, we think that the optimal camera network should be designed in the field, thereby making the preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with a reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network, assuming a high-resolution state-of-the-art non-metric camera, is designed. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental test, we found that, after optimization, the maximum coverage is attained, alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include a reliable and detailed modeling of the object applying sophisticated dense matching techniques.

  11. Developing Wide-Field Spatio-Spectral Interferometry for Far-Infrared Space Applications

    Science.gov (United States)

    Leisawitz, David; Bolcar, Matthew R.; Lyon, Richard G.; Maher, Stephen F.; Memarsadeghi, Nargess; Rinehart, Stephen A.; Sinukoff, Evan J.

    2012-01-01

    Interferometry is an affordable way to bring the benefits of high resolution to space far-IR astrophysics. We summarize an ongoing effort to develop and learn the practical limitations of an interferometric technique that will enable the acquisition of high-resolution far-IR integral field spectroscopic data with a single instrument in a future space-based interferometer. This technique was central to the Space Infrared Interferometric Telescope (SPIRIT) and Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) space mission design concepts, and it will first be used on the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). Our experimental approach combines data from a laboratory optical interferometer (the Wide-field Imaging Interferometry Testbed, WIIT), computational optical system modeling, and spatio-spectral synthesis algorithm development. We summarize recent experimental results and future plans.

  12. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has improved drastically under the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera; thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  13. Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system

    Science.gov (United States)

    Vítek, S.

    2017-07-01

    The purpose of this experiment was to measure light pollution in Prague, the capital of the Czech Republic. The measuring instrument is a calibrated consumer-level digital single-lens reflex camera with an IR-cut filter; the paper therefore reports results of measuring and monitoring light pollution in the 390-700 nm wavelength range, which most affects visual-range astronomy. Combining frames of different exposure times taken with the digital camera coupled to a fish-eye lens allows the creation of high-dynamic-range images containing meaningful values, so such a system can provide absolute values of the sky brightness.
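The multi-exposure merge can be sketched as follows (a minimal illustration of my own, assuming the camera response has already been linearized and normalized to [0, 1], with noiseless synthetic data):

```python
import numpy as np

def merge_exposures(frames, exposure_times, saturation=0.95, floor=0.01):
    """Merge linearized frames (values in [0, 1]) taken at different
    exposure times into a relative-radiance map. Per pixel, this is the
    exposure-weighted average of value/time over frames where the pixel
    is neither saturated nor lost in the noise floor:
    L = sum(value) / sum(time) over the valid frames."""
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    signal = np.zeros(frames.shape[1:])
    weight = np.zeros(frames.shape[1:])
    for frame, t in zip(frames, times):
        valid = (frame < saturation) & (frame > floor)
        signal[valid] += frame[valid]        # accumulate counts
        weight[valid] += t                   # and total exposure time
    return np.divide(signal, weight, out=np.zeros_like(signal),
                     where=weight > 0)

# Synthetic sky pixels spanning two decades of radiance, three exposures.
true_radiance = np.array([0.05, 0.5, 5.0])
times = [0.01, 0.1, 1.0]
frames = [np.clip(true_radiance * t, 0.0, 1.0) for t in times]
recovered = merge_exposures(frames, times)   # matches true_radiance
```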

  14. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  15. A CCD camera probe for a superconducting cyclotron

    International Nuclear Information System (INIS)

    Marti, F.; Blue, R.; Kuchar, J.; Nolen, J.A.; Sherrill, B.; Yurkon, J.

    1991-01-01

    The traditional internal beam probes in cyclotrons have consisted of a differential element, a wire or thin strip, and a main probe with several fingers to determine the vertical distribution of the beam. The resolution of these probes is limited, especially in the vertical direction. The authors have developed a probe for their K1200 superconducting cyclotron based on a CCD TV camera that works in a 6 T magnetic field. The camera looks at the beam spot on a scintillating screen. The TV image is processed by a frame grabber that digitizes and displays the image in pseudocolor in real time. This probe has much better resolution than traditional probes. They can see beams with total currents as low as 0.1 pA, with position resolution of about 0.05 mm
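Sub-pixel beam positions of the kind quoted (about 0.05 mm) are typically obtained from an intensity-weighted centroid of the digitized spot; a minimal sketch with an assumed 0.05 mm/pixel scale and background already subtracted:

```python
import numpy as np

def beam_centroid(image, pixel_mm):
    """Intensity-weighted centroid of a background-subtracted beam-spot
    image, returned as (x_mm, y_mm)."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)   # row (y) and column (x) indices
    cx = (xs * img).sum() / total * pixel_mm
    cy = (ys * img).sum() / total * pixel_mm
    return cx, cy

# A Gaussian spot centred at (12.3, 20.7) pixels on a 40x40 frame.
ys, xs = np.indices((40, 40))
spot = np.exp(-((xs - 12.3) ** 2 + (ys - 20.7) ** 2) / (2.0 * 3.0 ** 2))
cx_mm, cy_mm = beam_centroid(spot, pixel_mm=0.05)
```

The centroid resolves the spot position to a small fraction of a pixel, which is how resolutions finer than the screen-to-camera pixel scale are obtained.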

  16. Vulcamera: a program for measuring volcanic SO2 using UV cameras

    Directory of Open Access Journals (Sweden)

    Alessandro Aiuppa

    2011-06-01

    We report here on Vulcamera, a stand-alone program for the determination of volcanic SO2 fluxes using ultraviolet cameras. The code enables field image acquisition and all the required post-processing operations.

  17. About possibility of temperature trace observing on the human skin using commercially available IR camera

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.

    2016-09-01

    One of the most urgent security problems is the detection of objects hidden inside the human body. Obviously, for safety reasons one cannot use X-rays for such object detection widely and often. Three years ago, we demonstrated the principal possibility of seeing a temperature trace, induced by eating food or drinking water, on the skin of the human body by using a passive THz camera. However, this camera is very expensive. It would therefore be very convenient in practice if an IR camera could be used for this purpose. In contrast to a passive THz camera, the IR camera does not allow one to see an object under clothing if the image produced by the camera is used directly. Of course, this is a big disadvantage for a security solution based on the IR camera. To overcome this disadvantage we have developed a novel approach to the computer processing of IR camera images. It allows us to increase the temperature resolution of the IR camera as well as its effective perceptibility to the human eye. As a consequence, it becomes possible to see changes in human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and follow the temperature trace on the skin caused by temperature changes inside the body. Experiments were also made measuring body temperature through a T-shirt. The results shown are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.

  18. Wide-field two-dimensional multifocal optical-resolution photoacoustic computed microscopy

    Science.gov (United States)

    Xia, Jun; Li, Guo; Wang, Lidai; Nasiriavanaki, Mohammadreza; Maslov, Konstantin; Engelbach, John A.; Garbow, Joel R.; Wang, Lihong V.

    2014-01-01

    Optical-resolution photoacoustic microscopy (OR-PAM) is an emerging technique that directly images optical absorption in tissue at high spatial resolution. To date, the majority of OR-PAM systems are based on single focused optical excitation and ultrasonic detection, limiting the wide-field imaging speed. While one-dimensional multifocal OR-PAM (1D-MFOR-PAM) has been developed, the potential of microlens and transducer arrays has not been fully realized. Here, we present the development of two-dimensional multifocal optical-resolution photoacoustic computed microscopy (2D-MFOR-PACM), using a 2D microlens array and a full-ring ultrasonic transducer array. The 10 × 10 mm2 microlens array generates 1800 optical foci within the focal plane of the 512-element transducer array, and raster scanning the microlens array yields optical-resolution photoacoustic images. The system has improved the in-plane resolution of a full-ring transducer array from ≥100 µm to 29 µm and achieved an imaging time of 36 seconds over a 10 × 10 mm2 field of view. In comparison, the 1D-MFOR-PAM would take more than 4 minutes to image over the same field of view. The imaging capability of the system was demonstrated on phantoms and animals both ex vivo and in vivo. PMID:24322226

  19. Assessing camera trap survey feasibility for estimating Blastocerus dichotomus (Cetartiodactyla, Cervidae) demographic parameters

    Directory of Open Access Journals (Sweden)

    Pedro Henrique F. Peres

    2017-11-01

    Demographic information is the basis for evaluating and planning conservation strategies for an endangered species. However, in many situations there are methodological or financial limitations to obtaining such information. The marsh deer, an endangered Neotropical cervid, is a particularly challenging species from which to obtain biological information. To help achieve this aim, the study evaluated the applicability of camera traps for obtaining demographic information on the marsh deer compared with the traditional aerial census method. Fourteen camera traps were installed for three months on the Capão da Cruz floodplain, in the state of São Paulo, and ten helicopter flyovers were made along a 13-kilometer trajectory to detect resident marsh deer. In addition to counting deer, the study aimed to identify the sex and age group of each animal and to individually identify the antlered males recorded. Population estimates were obtained using the capture-mark-recapture method with the camera trap data and the distance sampling method with the aerial observation data. The costs and field efforts expended for both methodologies were calculated and compared. Twenty independent photographic records and 42 sightings were obtained, which generated estimates of 0.98 and 1.06 ind/km², respectively. In contrast to the aerial census, camera traps allowed us to individually identify branch-antlered males, determine the sex ratio and detect fawns in the population. The cost of camera traps was 78% lower but required 20 times more field effort. Our analysis indicates that camera traps present a superior cost-benefit ratio compared with aerial surveys, since they are more informative, cheaper and offer simpler logistics. Their application extends the possibilities for studying a greater number of populations in long-term monitoring programs.
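
    The capture-mark-recapture estimate applied to the camera trap records can be illustrated by a minimal Lincoln-Petersen sketch; the counts below are invented for illustration and are not the study's data.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator, the simplest
# capture-mark-recapture abundance estimate. Counts are hypothetical.
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Estimate population size from two sampling occasions."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# e.g. 8 individually identified antlered males in the first period,
# 10 records in the second period, 6 of them re-identifications
n_hat = lincoln_petersen(8, 10, 6)
```

    Density (ind/km²) would then follow by dividing such an abundance estimate by the effectively sampled area.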

  20. Wide-Field Astronomical Surveys in the Next Decade

    Energy Technology Data Exchange (ETDEWEB)

    Strauss, Michael A.; /Princeton U.; Tyson, J.Anthony; /UC, Davis; Anderson, Scott F.; /Washington U., Seattle, Astron. Dept.; Axelrod, T.S.; /LSST Corp.; Becker, Andrew C.; /Washington U., Seattle, Astron. Dept.; Bickerton, Steven J.; /Princeton U.; Blanton, Michael R.; /New York U.; Burke, David L.; /SLAC; Condon, J.J.; /NRAO, Socorro; Connolly, A.J.; /Washington U., Seattle, Astron. Dept.; Cooray, Asantha R.; /UC, Irvine; Covey, Kevin R.; /Harvard U.; Csabai, Istvan; /Eotvos U.; Ferguson, Henry C.; /Baltimore, Space Telescope Sci.; Ivezic, Zeljko; /Washington U., Seattle, Astron. Dept.; Kantor, Jeffrey; /LSST Corp.; Kent, Stephen M.; /Fermilab; Knapp, G.R.; /Princeton U.; Myers, Steven T.; /NRAO, Socorro; Neilsen, Eric H., Jr.; /Fermilab; Nichol, Robert C.; /Portsmouth U., ICG /Harish-Chandra Res. Inst. /Caltech, IPAC /Potsdam, Max Planck Inst. /Harvard U. /Hawaii U. /UC, Berkeley, Astron. Dept. /Baltimore, Space Telescope Sci. /NOAO, Tucson /Carnegie Mellon U. /Chicago U., Astron. Astrophys. Ctr.

    2011-11-14

    Wide-angle surveys have been an engine for new discoveries throughout the modern history of astronomy, and have been among the most highly cited and scientifically productive observing facilities in recent years. This trend is likely to continue over the next decade, as many of the most important questions in astrophysics are best tackled with massive surveys, often in synergy with each other and in tandem with the more traditional observatories. We argue that these surveys are most productive and have the greatest impact when the data from the surveys are made public in a timely manner. The rise of the 'survey astronomer' is a substantial change in the demographics of our field; one of the most important challenges of the next decade is to find ways to recognize the intellectual contributions of those who work on the infrastructure of surveys (hardware, software, survey planning and operations, and databases/data distribution), and to make career paths to allow them to thrive.

  1. Identification and Removal of High Frequency Temporal Noise in a Nd:YAG Macro-Pulse Laser Assisted with a Diagnostic Streak Camera

    International Nuclear Information System (INIS)

    Kent Marlett; Ke-Xun Sun

    2004-01-01

    This paper discusses the use of a reference streak camera (SC) to diagnose laser performance and guide modifications to remove high frequency noise from Bechtel Nevada's long-pulse laser. The upgraded laser exhibits less than 0.1% high frequency noise in cumulative spectra, exceeding National Ignition Facility (NIF) calibration specifications. Inertial Confinement Fusion (ICF) experiments require full characterization of streak cameras over a wide range of sweep speeds (10 ns to 480 ns). This paradigm of metrology poses stringent spectral requirements on the laser source for streak camera calibration. Recently, Bechtel Nevada worked with a laser vendor to develop a high performance, multi-wavelength Nd:YAG laser to meet NIF calibration requirements. For a typical NIF streak camera with a 4096 x 4096 pixel CCD, the flat field calibration at 30 ns requires a smooth laser spectrum over 33 MHz to 68 GHz. Streak cameras are the appropriate instrumentation for measuring laser amplitude noise at these very high frequencies since the upper end spectral content is beyond the frequency response of typical optoelectronic detectors for a single shot pulse. The SC was used to measure a similar laser at its second harmonic wavelength (532 nm), to establish baseline spectra for testing signal analysis algorithms. The SC was then used to measure the new custom calibration laser. In both spatial-temporal measurements and cumulative spectra, 6-8 GHz oscillations were identified. The oscillations were found to be caused by inter-surface reflections between amplifiers. Additional variations in the SC spectral data were found to result from temperature instabilities in the seeding laser. Based on these findings, laser upgrades were made to remove the high frequency noise from the laser output
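
    The quoted 33 MHz to 68 GHz band follows from the sweep window and pixel count alone: the record length sets the lowest resolvable frequency and the per-pixel dwell time sets the Nyquist limit. A minimal sketch of that arithmetic:

```python
# Spectral band that a flat-field calibration source must cover for a
# streak camera with a 4096-pixel time axis swept over a 30 ns window.
sweep_window_s = 30e-9   # 30 ns sweep (from the abstract)
pixels = 4096            # CCD pixels along the time axis

f_low = 1.0 / sweep_window_s   # lowest frequency resolved by one sweep
dt = sweep_window_s / pixels   # dwell time per pixel
f_high = 1.0 / (2.0 * dt)      # Nyquist frequency of the pixel sampling

print(f"{f_low / 1e6:.0f} MHz to {f_high / 1e9:.0f} GHz")
```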

  2. VizieR Online Data Catalog: Spectral types of stars in CoRoT fields (Sebastian+, 2012)

    Science.gov (United States)

    Sebastian, D.; Guenther, E. W.; Schaffenroth, V.; Gandolfi, D.; Geier, S.; Heber, U.; Deleuil, M.; Moutou, C.

    2012-03-01

    Spectroscopic classification for 2950 O-, B-, and A-type stars in the CoRoT fields IRa01, LRa01, and LRa02. Stars are named by their CoRoT identifier, and coordinates are given. The visual magnitudes were obtained with the Wide Field Camera filter system of the Isaac Newton Telescope at Roque de los Muchachos Observatory on La Palma and can be converted into Landolt standards, as shown in Deleuil et al. (2009AJ....138..649D). (1 data file).

  3. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems,…). Air quality models generally rely on a limited number of monitoring stations, which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aimed at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emissions monitoring), as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of NO2 formation in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
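
    Per pixel, the two-wavelength principle described above reduces to a Beer-Lambert calculation; the intensities and cross sections below are invented for illustration and are not the instrument's values.

```python
import math

# Hypothetical pixel intensities inside the plume (I) and in clear sky
# (I0) for the AOTF-selected absorbing ("on") and reference ("off") bands.
I_on, I0_on = 880.0, 1000.0
I_off, I0_off = 990.0, 1000.0

# Assumed NO2 absorption cross sections at the two wavelengths (cm^2)
sigma_on, sigma_off = 5.0e-19, 1.0e-19

# Differential apparent absorbance, then slant column density (molec/cm^2)
tau = -math.log(I_on / I0_on) + math.log(I_off / I0_off)
scd = tau / (sigma_on - sigma_off)
```

    Using the difference of the two bands cancels broadband effects (scattering, variations in illumination) that affect both wavelengths alike.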

  4. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead-time corrections in existing scanners, although this is possible. Instead, the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components: one describing the detector or block-detector performance, and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model: the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. Both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system.
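
    A minimal sketch of such a factorized live-time model, with illustrative time constants (not values from the paper): the block-detector term is taken here as paralyzable and the data-handling term as non-paralyzable.

```python
import math

def live_time(rate_cps, tau_block_s=2e-6, tau_dh_s=0.5e-6):
    """Factorized live-time fraction: detector term x data-handling term.
    Time constants are hypothetical, for illustration only."""
    l_block = math.exp(-rate_cps * tau_block_s)  # paralyzable block detector
    l_dh = 1.0 / (1.0 + rate_cps * tau_dh_s)     # non-paralyzable pipeline
    return l_block * l_dh

# Live time falls monotonically as the event rate grows
for rate in (1e4, 1e5, 1e6):
    print(f"{rate:.0e} cps -> live fraction {live_time(rate):.3f}")
```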

  5. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  6. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  7. Miniature CCD X-Ray Imaging Camera Technology Final Report CRADA No. TC-773-94

    Energy Technology Data Exchange (ETDEWEB)

    Conder, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mummolo, F. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-19

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  8. Efficient Multiclass Object Detection: Detecting Pedestrians and Bicyclists in a Truck’s Blind Spot Camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon

    2015-01-01

    In this paper we propose an efficient detection and tracking framework targeting vulnerable road users in the blind spot camera images of a truck. Existing non-vision based safety solutions are not able to handle this problem completely. Therefore we aim to develop an active safety system, based solely on the vision input of the blind spot camera. This is far from trivial: vulnerable road users are a diverse class and consist of a wide variety of poses and appearances. Evidently we need to ac...

  9. Image quality testing of assembled IR camera modules

    Science.gov (United States)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements for the imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, the suitability for fully automated measurements in mass production.

  10. The Camera of the MASCOT Asteroid Lander on Board Hayabusa 2

    Science.gov (United States)

    Jaumann, R.; Schmitz, N.; Koncz, A.; Michaelis, H.; Schroeder, S. E.; Mottola, S.; Trauthan, F.; Hoffmann, H.; Roatsch, T.; Jobs, D.; Kachlicki, J.; Pforte, B.; Terzer, R.; Tschentscher, M.; Weisse, S.; Mueller, U.; Perez-Prieto, L.; Broll, B.; Kruselburger, A.; Ho, T.-M.; Biele, J.; Ulamec, S.; Krause, C.; Grott, M.; Bibring, J.-P.; Watanabe, S.; Sugita, S.; Okada, T.; Yoshikawa, M.; Yabuta, H.

    2017-07-01

    The MASCOT Camera (MasCam) is part of the Mobile Asteroid Surface Scout (MASCOT) lander's science payload. MASCOT has been launched to asteroid (162173) Ryugu onboard JAXA's Hayabusa 2 asteroid sample return mission on Dec 3rd, 2014. It is scheduled to arrive at Ryugu in 2018, and return samples to Earth by 2020. MasCam was designed and built by DLR's Institute of Planetary Research, together with Airbus-DS Germany. The scientific goals of the MasCam investigation are to provide ground truth for the orbiter's remote sensing observations, provide context for measurements by the other lander instruments (radiometer, spectrometer and magnetometer), the orbiter sampling experiment, and characterize the geological context, compositional variations and physical properties of the surface (e.g. rock and regolith particle size distributions). During daytime, clear filter images will be acquired. During night, illumination of the dark surface is performed by an LED array, equipped with 4×36 monochromatic light-emitting diodes (LEDs) working in four spectral bands. Color imaging will allow the identification of spectrally distinct surface units. Continued imaging during the surface mission phase and the acquisition of image series at different sun angles over the course of an asteroid day will contribute to the physical characterization of the surface and also allow the investigation of time-dependent processes and to determine the photometric properties of the regolith. The MasCam observations, combined with the MASCOT hyperspectral microscope (MMEGA) and radiometer (MARA) thermal observations, will cover a wide range of observational scales and serve as a strong tie point between Hayabusa 2's remote-sensing scales (10^3-10^{-3} m) and sample scales (10^{-3}-10^{-6} m).
The descent sequence and the close-up images will reveal the surface features over a broad range of scales, allowing an assessment of the surface's diversity and close the gap between the orbital observations

  11. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras which have been carried out at Kinki University for more than ten years, and which currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras by other developers in the history of high-speed video cameras are also briefly reviewed.

  12. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  13. Light field driven streak-camera for single-shot measurements of the temporal profile of XUV-pulses from a free-electron laser; Lichtfeld getriebene Streak-Kamera zur Einzelschuss Zeitstrukturmessung der XUV-Pulse eines Freie-Elektronen Lasers

    Energy Technology Data Exchange (ETDEWEB)

    Fruehling, Ulrike

    2009-10-15

    The Free Electron Laser in Hamburg (FLASH) is a source for highly intense ultra-short extreme ultraviolet (XUV) light pulses with pulse durations of a few femtoseconds. Due to the stochastic nature of the light generation scheme based on self-amplified spontaneous emission (SASE), the duration and temporal profile of the XUV pulses fluctuate from shot to shot. In this thesis, a THz-field driven streak camera capable of single-pulse measurements of the XUV pulse profile has been realized. In a first XUV-THz pump-probe experiment at FLASH, the XUV pulses are overlapped in a gas target with synchronized THz pulses generated by a new THz undulator. The electromagnetic field of the THz light accelerates photoelectrons produced by the XUV pulses, with the resulting change of the photoelectron momenta depending on the phase of the THz field at the time of ionisation. This technique is intensively used in attosecond metrology, where near-infrared streaking fields are employed for the temporal characterisation of attosecond XUV pulses. Here, it is adapted for the analysis of pulse durations in the few-femtosecond range by choosing a hundred times longer far-infrared streaking wavelength. Thus, the gap between conventional streak cameras with typical resolutions of hundreds of femtoseconds and techniques with attosecond resolution is filled. Using the THz streak camera, the time-dependent electric field of the THz pulses was sampled in great detail, while on the other hand the duration and even details of the time structure of the XUV pulses were characterized. (orig.)
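
    The momentum transfer underlying the streaking principle can be summarized by the standard strong-field result (a textbook relation, not specific to this thesis): a photoelectron born at time t_i with initial momentum p_0 in a streaking field with vector potential A acquires

```latex
\[
  p_f = p_0 - e\,A(t_i), \qquad
  \Delta W \approx -\,\frac{e\,\vec{p}_0 \cdot \vec{A}(t_i)}{m_e},
\]
```

    so the kinetic-energy shift ΔW maps the ionisation time onto the photoelectron spectrum, which is how the temporal profile of the XUV pulse is read off.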

  14. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    Science.gov (United States)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras. Therefore, an automated system is desirable. In the literature several methods have been proposed, but their robustness against varying viewpoints and illumination is limited; hence performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.

  15. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  16. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for, e.g., surveillance applications in large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. The misses are reduced by 37%, which is a significant improvement.

  17. POLE PHOTOGRAMMETRY WITH AN ACTION CAMERA FOR FAST AND ACCURATE SURFACE MAPPING

    Directory of Open Access Journals (Sweden)

    J. A. Gonçalves

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed by a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in a very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.

  18. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    Science.gov (United States)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed by a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example for soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in a very fast field work. If an improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.
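
    A rough overlap budget for this setup can be sketched from the numbers in the abstract; the frame format (4000×3000 px), the 80% forward-overlap target and the uniform-GSD simplification are assumptions for illustration, not values from the paper.

```python
# Walking-speed limit for adequate forward overlap in pole photogrammetry.
gsd_m = 2.3e-3          # centre ground sampling distance (abstract)
px_along_track = 3000   # assumed pixels along the walking direction
overlap = 0.80          # assumed forward overlap target
interval_s = 1.0        # time-lapse interval (abstract: 0.5 or 1 s)

footprint_m = px_along_track * gsd_m        # ground length of one frame
baseline_m = (1.0 - overlap) * footprint_m  # allowed step between frames
max_speed = baseline_m / interval_s         # walking-speed limit, m/s

print(f"footprint {footprint_m:.1f} m, max speed {max_speed:.2f} m/s")
```

    In practice the wide field of view of an action camera makes the footprint larger away from the centre, so this is a conservative along-track estimate.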

  19. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to record operations easily and at low cost. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro cameras; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon, and video was recorded throughout the operation. The comparison covered human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and GoPro uniquely offers 2.7K or 4K resolution; video resolution was best with GoPro. Regarding field of view (FOV), GoPro can adjust the point of interest and FOV according to the surgery, and its narrow FOV option was best for recording video clips to share. Google Glass has potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass offers two-way communication on the device. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in spinal surgery, and to broadcast surgery, as the devices and their application programs develop. N/A.

  20. The New Approach to Camera Calibration – GCPs or TLS Data?

    Directory of Open Access Journals (Sweden)

    J. Markiewicz

    2016-06-01

Full Text Available Camera calibration is one of the basic photogrammetric tasks and is responsible for the quality of processed products. Most calibration is performed with a specially designed test field or during a self-calibration process. The research presented in this paper aims to answer the question of whether control points designed in the standard way are necessary for determining the camera's interior orientation parameters; data from close-range laser scanning can be used as an alternative. The experiments shown in this work demonstrate the potential of laser measurements, since the number of points that can be involved in the calculation is much larger than that of commonly used ground control points. A problem which remains is the correct and automatic identification of object details both in the image taken with the tested camera and in the data set registered with the laser scanner.
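The interior orientation parameters estimated during self-calibration typically include radial distortion coefficients of the standard Brown polynomial model. A minimal sketch of applying and numerically inverting that model on normalized image coordinates; the coefficient values are illustrative assumptions, not calibration results from the paper:

```python
def distort(x, y, k1, k2):
    """Apply the two-term Brown radial distortion model to normalized coords."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration (converges for mild distortion)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

In a bundle adjustment with self-calibration, k1 and k2 (together with the principal point and focal length) are the unknowns solved for; the density of tie points, whether from ground control points or laser-scan data, directly affects how well they are determined.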