WorldWideScience

Sample records for single color camera

  1. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots at up to 1,000 frames/sec. In an experiment, video footage was captured at an athletics meet. Thanks to the high-speed shooting, even detailed movements of athletes' muscles were recorded. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be produced for various TV broadcasting programs.
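A single-chip camera with a Bayer-type filter must interpolate the missing colors at each pixel site. The sketch below is a generic bilinear demosaic for a standard 2x2 Bayer layout (shown here as GRBG); it is illustrative only and not the paper's actual GRGB processing, and the function name and pattern table are assumptions.

```python
import numpy as np

def demosaic_bilinear(mosaic, pattern="GRBG"):
    """Bilinear demosaic of a single-chip Bayer mosaic (illustrative sketch).

    mosaic: 2-D array of raw sensor values; pattern gives the 2x2 CFA layout.
    Returns an HxWx3 RGB image.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Boolean masks marking which pixel sites carry which color.
    masks = {c: np.zeros((h, w), dtype=bool) for c in "RGB"}
    layout = {"GRBG": [("G", 0, 0), ("R", 0, 1), ("B", 1, 0), ("G", 1, 1)]}[pattern]
    for c, dy, dx in layout:
        masks[c][dy::2, dx::2] = True
    for i, c in enumerate("RGB"):
        chan = np.where(masks[c], mosaic, 0.0)
        weight = masks[c].astype(float)
        # Average over a 3x3 neighborhood, normalizing by how many samples
        # of this color actually fall inside the window (edges wrap).
        kernel_sum = lambda a: sum(
            np.roll(np.roll(a, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[:, :, i] = kernel_sum(chan) / np.maximum(kernel_sum(weight), 1e-9)
    return rgb
```

A flat gray mosaic should demosaic to a flat gray RGB image, which makes a quick sanity check.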

  2. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
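The crosstalk correction step can be pictured as linear unmixing: each pixel's observed red/blue pair is a mixture of the two optical paths, undone by inverting a 2x2 mixing matrix. The matrix values below are made up for illustration; in practice they would be measured by imaging each path alone, and the function name is an assumption.

```python
import numpy as np

# Hypothetical 2x2 crosstalk matrix for a pseudo-stereo setup in which the
# red and blue channels of one color camera record two optical paths.
M = np.array([[0.95, 0.08],   # red channel: mostly red path + blue leakage
              [0.06, 0.93]])  # blue channel: mostly blue path + red leakage
M_inv = np.linalg.inv(M)

def separate_views(img_rgb):
    """Split an HxWx3 color image into the two single-path sub-images."""
    red_obs, blue_obs = img_rgb[..., 0], img_rgb[..., 2]
    stacked = np.stack([red_obs, blue_obs], axis=-1)
    corrected = stacked @ M_inv.T  # undo the channel crosstalk per pixel
    return corrected[..., 0], corrected[..., 1]
```

Mixing two known path images through `M` and separating them again should recover the originals exactly.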

  3. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow, in principle, perfect colorimetric reproduction with imperceptible color noise, and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
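The linear starting point for this kind of matrixing is a least-squares fit of a 3x3 matrix mapping camera RGB responses to CIE XYZ over a set of test colors; the MASCOT work went further and minimized perceptual CIELUV error with a nonlinear optimizer, a refinement omitted here. All data below are synthetic.

```python
import numpy as np

# Fit a 3x3 camera matrix M such that camera_rgb @ M.T ~= xyz_ref,
# over ~200 test colors, by ordinary least squares.
rng = np.random.default_rng(0)
xyz_ref = rng.uniform(0.05, 0.95, size=(200, 3))      # "measured" test colors
true_matrix = np.array([[0.9, 0.1, 0.0],
                        [0.2, 0.7, 0.1],
                        [0.0, 0.1, 0.9]])             # hypothetical ground truth
camera_rgb = xyz_ref @ np.linalg.inv(true_matrix).T   # synthetic camera data

M, *_ = np.linalg.lstsq(camera_rgb, xyz_ref, rcond=None)
M = M.T
residual = np.abs(camera_rgb @ M.T - xyz_ref).max()
```

Because the synthetic data are exactly linear, the fit recovers the generating matrix; real camera data would leave a residual that the nonlinear perceptual optimization then reduces further.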

  4. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family that is available with 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. The line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
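The 12-to-8-bit conversion via a user-defined gamma look-up table can be sketched as a precomputed 4096-entry table indexed by the raw video stream. This is a generic sketch of the LUT idea, not the camera's actual firmware; the default gamma value is an assumption.

```python
import numpy as np

def build_lut(gamma=1.0 / 2.2):
    """Build a 4096-entry table mapping 12-bit codes to gamma-encoded 8-bit."""
    codes = np.arange(4096) / 4095.0          # normalize 12-bit input
    return np.round(255.0 * codes ** gamma).astype(np.uint8)

lut = build_lut()
raw12 = np.array([0, 1024, 2048, 4095], dtype=np.uint16)  # sample pixels
out8 = lut[raw12]                              # per-pixel table lookup
```

The table is built once; converting a pixel is then a single array index, which is why LUTs suit real-time video rates.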

  5. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  6. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space such as \(XYZ\), is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a \(3 \times 3\) matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the \(3 \times 3\) matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE \(\Delta E\) error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the \(\Delta E\) error, 7% for the S-CIELAB error and 13% for the CID error measures.
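The simplest of the perceptual scores used to rank candidate matrices is the CIE 1976 color difference, the Euclidean distance between two colors in CIELAB. A minimal implementation:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE Delta E*ab (1976) between arrays of Lab triplets."""
    return np.sqrt(np.sum((np.asarray(lab1) - np.asarray(lab2)) ** 2, axis=-1))

reference = np.array([[50.0, 10.0, -10.0]])
candidate = np.array([[52.0, 10.0, -10.0]])  # lightness off by 2 units
```

A difference only in L* of 2 units yields a Delta E of exactly 2; the S-CIELAB and CID measures used in the paper add spatial filtering on top of pointwise differences like this one.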

  7. Color reproduction software for a digital still camera

    Science.gov (United States)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color-matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by the look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each set of test color samples. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
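The gamma-estimation step can be sketched as fitting a power law to gray-sample measurements: assuming a model of the form L = k * d**gamma for normalized digital value d and measured luminance L, a straight-line regression in log-log coordinates yields gamma as the slope. The values below are synthetic, not the paper's measurements.

```python
import numpy as np

digital = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # level-processed values
true_gamma, k = 2.2, 100.0
luminance = k * digital ** true_gamma                 # "measured" gray ramp

# log L = gamma * log d + log k: fit a line in log-log space.
slope, intercept = np.polyfit(np.log(digital), np.log(luminance), 1)
gamma_est = slope            # exponent of the power law
k_est = np.exp(intercept)    # scale factor
```

With noise-free data the fit recovers the generating exponent exactly; real gray-ramp measurements would scatter around the line.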

  8. Spectral colors capture and reproduction based on digital camera

    Science.gov (United States)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors in any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. The study also provides a basis for further work on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location, one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method was used for camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. Using the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of spectral colors in digital devices such as displays and transmission.
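Polynomial camera characterization of the kind the abstract describes expands camera RGB into polynomial terms and solves a least-squares map to XYZ. The sketch below uses a second-order expansion and synthetic stand-ins for spectrophotometer measurements; the term set and data are assumptions.

```python
import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of (n,3) RGB values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2, np.ones_like(r)], axis=1)

rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, size=(50, 3))             # synthetic camera samples
true_coef = rng.uniform(-0.5, 0.5, size=(10, 3))  # hypothetical ground truth
xyz = poly_terms(rgb) @ true_coef                 # synthetic XYZ measurements

# Solve poly_terms(rgb) @ coef ~= xyz by least squares.
coef, *_ = np.linalg.lstsq(poly_terms(rgb), xyz, rcond=None)
max_err = np.abs(poly_terms(rgb) @ coef - xyz).max()
```

Higher-order expansions add flexibility at the cost of more coefficients to estimate, which is why enough training patches are needed to keep the system well-conditioned.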

  9. A holographic color camera for recording artifacts

    International Nuclear Information System (INIS)

    Jith, Abhay

    2013-01-01

    The advent of 3D televisions has created a new wave of public interest in images with depth. Though these technologies create moving pictures with apparent depth, they lack the visual appeal and a set of other positive aspects of color holographic images. This new wave of interest in 3D will certainly help fuel the popularity of holograms. In view of this, a low-cost and handy color holography camera was designed for recording color holograms of artifacts. It is believed that such cameras will help record medium-format color holograms outside conventional holography laboratories and popularize color holography. The paper discusses the design and the results obtained.

  10. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants a long time ago. Now, in compliance with the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  11. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited-exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applicable tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  12. Dichromatic Gray Pixel for Camera-agnostic Color Constancy

    OpenAIRE

    Qian, Yanlin; Chen, Ke; Nikkanen, Jarno; Kämäräinen, Joni-Kristian; Matas, Jiri

    2018-01-01

    We propose a novel statistical color constancy method, especially suitable for the Camera-agnostic Color Constancy, i.e. the scenario where nothing is known a priori about the capturing devices. The method, called Dichromatic Gray Pixel, or DGP, relies on a novel gray pixel detection algorithm derived using the Dichromatic Reflection Model. DGP is suitable for camera-agnostic color constancy since varying devices are set to make achromatic pixels look gray under standard neutral illumination....

  13. Color correction pipeline optimization for digital cameras

    Science.gov (United States)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.

  14. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    Science.gov (United States)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission respectively. They both use a Bayer color filter array covering CMOS sensor to capture color images of the Moon's surface. RGB values of the original images are related to these two kinds of cameras. There is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficient. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6% respectively.
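The reported figures can be cross-checked with simple arithmetic: a corrected average color difference of 4.30 after a 62.1% reduction implies an uncorrected value of about 4.30 / (1 - 0.621).

```python
# Back out the uncorrected TCAM color difference from the reported numbers.
corrected = 4.30
reduction = 0.621
uncorrected = corrected / (1 - reduction)          # implied pre-correction value
check = (uncorrected - corrected) / uncorrected    # should reproduce 62.1%
```

The implied uncorrected average color difference is roughly 11.3, consistent with the abstract's claim of a 62.1% reduction.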

  15. Use of a color CMOS camera as a colorimeter

    Science.gov (United States)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  16. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    International Nuclear Information System (INIS)

    Ren Xin; Li Chun-Lai; Liu Jian-Jun; Wang Fen-Fei; Yang Jian-Feng; Xue Bin; Liu En-Hai; Zhao Ru-Jin

    2014-01-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission respectively. They both use a Bayer color filter array covering CMOS sensor to capture color images of the Moon's surface. RGB values of the original images are related to these two kinds of cameras. There is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficient. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6% respectively

  17. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the overlapping image areas between cameras. Accurate detection of common objects is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed by the nonlinear regression method. This method can indicate the contribution of an object's color to the calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. This method can withstand practical use in multi-camera color calibration if enough samples are obtained.

  18. Temperature measurement with industrial color camera devices

    Science.gov (United States)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color camera based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions on the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies in infrared camera devices. With AVL-List, our industrial partner, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.

  19. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in color properties of the captured images from different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images produced by the correction method exhibit much smaller color intensity errors than the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
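One simple form such a cross-device correction can take is a per-channel linear map: given the same color chart captured by a reference phone and a second phone, fit a gain and offset per channel mapping the second phone's values to the reference. The abstract does not specify the exact model, so this sketch, with synthetic patch values, is only a plausible stand-in.

```python
import numpy as np

ref = np.array([[40, 80, 120], [90, 130, 60], [200, 40, 180],
                [15, 220, 90]], dtype=float)             # reference patches
gain, offset = np.array([1.1, 0.9, 1.05]), np.array([5.0, -3.0, 2.0])
other = ref * gain + offset                              # second camera, synthetic

# Fit ref = a * other + b per channel by least squares.
A = np.stack([other, np.ones_like(other)], axis=-1)      # (patch, chan, 2)
params = [np.linalg.lstsq(A[:, c, :], ref[:, c], rcond=None)[0]
          for c in range(3)]
corrected = np.stack([A[:, c, :] @ params[c] for c in range(3)], axis=1)
```

Because the synthetic distortion is exactly linear per channel, the fit recovers the reference values; real cameras would need more patches and possibly a nonlinear term.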

  20. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    Science.gov (United States)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system of color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. The tomato images through the post-ripening process were taken by the digital camera in both the standard image acquisition system and in the field conditions from the morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard ones acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated based on the obtained and calibrated color images along with the ambient atmospheric record. This study is a very important step in developing the surface color analysis for both the simple and rapid evaluation of the crop vigor in the field and to construct an ambient and networked remote monitoring system for food security, precision agriculture, and agricultural research.

  1. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  2. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
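The weighting-factor conversion the abstract describes collapses raw camera RGB signals into a single gray-scale value that tracks display luminance. The sketch below uses the Rec. 709 luma coefficients purely as a plausible stand-in for the calibrated per-camera factors the authors derived.

```python
import numpy as np

# Illustrative weighting factors (Rec. 709 luma); the paper's WFs were
# calibrated per camera against the LCD's measured luminance.
WF = np.array([0.2126, 0.7152, 0.0722])

def to_gray(rgb_image):
    """Convert an HxWx3 RGB capture to a luminance-like gray image."""
    return rgb_image @ WF

white = np.ones((2, 2, 3))   # a small all-white capture
gray = to_gray(white)
```

For the monochrome LCD the abstract instead uses only the green channel, which amounts to the weight vector (0, 1, 0).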

  3. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  4. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight, and production costs, and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  5. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  6. Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras

    OpenAIRE

    Mukaigawa, Yasuhiro; Genda, Daisuke; Yamane, Ryo; Shakunaga, Takeshi

    2003-01-01

    A color blending method for generating a high quality image of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. As each voxel is observed as different colors from different cameras, voxel color needs to be assigned appropriately from several colors. We present a color blending method, which calculates voxel color from a linear combination of the colors observed by multiple cameras. The weightings in the...
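The linear-combination idea can be sketched as weighting each camera's observed color by how well its viewing direction agrees with the surface normal; the cosine weighting below is illustrative, since the truncated abstract does not state the authors' exact weighting scheme.

```python
import numpy as np

def blend_voxel_color(colors, view_dirs, normal):
    """Blend a voxel's color from several cameras.

    colors: (n,3) RGB observed by each camera;
    view_dirs: (n,3) unit vectors from voxel toward each camera;
    normal: (3,) unit surface normal at the voxel.
    """
    w = np.clip(view_dirs @ normal, 0.0, None)   # cos(angle); back-facing -> 0
    w = w / w.sum()                               # normalize to a convex combo
    return w @ colors

normal = np.array([0.0, 0.0, 1.0])
dirs = np.array([[0.0, 0.0, 1.0],                 # head-on camera
                 [1.0, 0.0, 0.0]])                # grazing camera
colors = np.array([[0.8, 0.2, 0.1],
                   [0.0, 1.0, 0.0]])
blended = blend_voxel_color(colors, dirs, normal)
```

A grazing camera gets zero weight here, so the head-on camera's color dominates; smoother falloffs are equally valid design choices.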

  7. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all color video imaging system electronics within the 101-SY tank vapor space upon loss of nitrogen purge pressure

  8. Hyperspectral imaging using a color camera and its application for pathogen detection

    Science.gov (United States)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study on the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e., normalized) to relative reflectance, subsampled, and spatially registered to match counterpart pixels in the hyperspectral images, which were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially at higher polynomial orders. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image

  9. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    Science.gov (United States)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is open at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint captures a spectral image different from the other viewpoint's, hence yielding a different color image. This color mismatch between the two viewpoints could lead to color rivalry, in which the human visual system fails to resolve two different colors. The mismatch decreases as the number of passbands in a CMBF increases (however, the number of passbands is constrained by cost and fabrication technique). In this paper, a simulation predicting the color mismatch is reported.

  10. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  11. Improving color constancy by discounting the variation of camera spectral sensitivity

    Science.gov (United States)

    Gao, Shao-Bing; Zhang, Ming; Li, Chao-Yi; Li, Yong-Jie

    2017-08-01

    It is an ill-posed problem to recover the true scene colors from a color-biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. We then propose a simple way to overcome this degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so that the CC model can be trained and applied on the color-biased images under CSS-2 without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve inter-CC performance for traditional CC algorithms. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color-constant images invariant to changes of both illuminant and camera sensors.
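
The transform-matrix learning step can be sketched as an ordinary least-squares fit between responses of the two sensors; the synthetic sensitivities and the exact 3×3 linear form below are our assumptions, not the paper's published formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectral sensitivities of two cameras (3 channels x 31 bands).
bands = 31
css1 = rng.random((3, bands))
css2 = rng.random((3, bands))

# Responses of both cameras to the same set of training spectra.
spectra = rng.random((100, bands))
resp1 = spectra @ css1.T   # (100, 3) colors under CSS-1
resp2 = spectra @ css2.T   # (100, 3) colors under CSS-2

# Learn a 3x3 transform M so that resp1 @ M approximates resp2.
M, *_ = np.linalg.lstsq(resp1, resp2, rcond=None)

converted = resp1 @ M
err = np.abs(converted - resp2).mean()
```

The learned `M` would then convert CSS-1 training data (images and illuminant ground truth) into the CSS-2 color space, as the abstract describes.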

  12. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system; according to the principles of binocular vision, we deduce the relationship between binocular vision and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision and obtain the positional relationship of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
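
Once the prism single-camera system is related to an equivalent dual-camera rig, depth follows from standard binocular triangulation; a minimal sketch with made-up focal length, baseline, and disparity values:

```python
# Depth from disparity for the equivalent dual-camera system (a minimal
# pinhole-model sketch; the parameter values below are invented).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard binocular triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with 40 px disparity seen by virtual cameras 6 cm apart, f = 1200 px:
z = depth_from_disparity(1200, 0.06, 40)
print(z)  # 1.8 (meters)
```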

  13. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    Science.gov (United States)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.
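
The color filter array interpolation the authors describe can be illustrated with generic bilinear demosaicking (the paper's own interpolation method is not reproduced here; an RGGB Bayer layout is assumed for the sketch):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaick(cfa, pattern="RGGB"):
    """Bilinear interpolation of a Bayer CFA image (a generic sketch,
    not the paper's proprietary interpolation method)."""
    h, w = cfa.shape
    masks = {c: np.zeros((h, w), bool) for c in "RGB"}
    for i in range(2):
        for j in range(2):
            masks[pattern[2 * i + j]][i::2, j::2] = True
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
    out = np.empty((h, w, 3))
    for k, c in enumerate("RGB"):
        plane = np.where(masks[c], cfa, 0.0)      # keep only this channel's samples
        num = convolve(plane, kernel, mode="mirror")
        den = convolve(masks[c].astype(float), kernel, mode="mirror")
        out[..., k] = num / den                   # weighted average of neighbors
    return out

# A flat gray scene should survive demosaicking unchanged.
flat = np.full((8, 8), 0.5)
rgb = bilinear_demosaick(flat)
print(np.allclose(rgb, 0.5))  # True
```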

  14. Development of digital shade guides for color assessment using a digital camera with ring flashes.

    Science.gov (United States)

    Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan

    2011-02-01

    Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and the camera's white balance setup would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either light-emitting diode (LED) illumination or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using the digital shade guides, as verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable, with relatively low matching ability. In conclusion, the reliability of color matching with digital images is much influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
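
Matching a measured disk against the digital shade guides reduces to a nearest-neighbor search in L*a*b* space; a minimal sketch using the CIE76 color difference, with invented shade values:

```python
import numpy as np

# Hypothetical digital shade guides: shade name -> (L*, a*, b*).
shade_guides = {
    "A1": (76.0, 1.5, 18.0),
    "A2": (73.0, 2.5, 21.0),
    "B1": (77.5, 0.5, 15.5),
}

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return float(np.linalg.norm(np.subtract(lab1, lab2)))

def match_shade(sample_lab):
    """Return the shade guide closest to the sample."""
    return min(shade_guides, key=lambda s: delta_e(shade_guides[s], sample_lab))

print(match_shade((73.4, 2.2, 20.5)))  # A2
```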

  15. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research is to develop computer software for automatic fault diagnosis based on an image model acquired from an infrared thermography camera using a color segmentation approach. This technique detects hot spots in plant equipment. The image acquired from the camera is first converted to the RGB (Red, Green, Blue) image model and then to the CMYK (Cyan, Magenta, Yellow, Key for Black) image model. Assuming that yellow in the image represents a hot spot in the equipment, the CMYK image model is then diagnosed using a color segmentation model to estimate the fault. The software is implemented in the Borland Delphi 7.0 programming language. Its performance was then tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a performance value of 80% on the 10 input images. (author)
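
The RGB-to-CMYK conversion and yellow-based segmentation can be sketched as follows; the conversion formula and the threshold value are common choices, not necessarily those of the Delphi implementation:

```python
import numpy as np

def rgb_to_cmyk(rgb):
    """Naive RGB -> CMYK conversion (one common formula; the paper does not
    specify which variant was used)."""
    rgb = np.asarray(rgb, float) / 255.0
    k = 1.0 - rgb.max(axis=-1)
    denom = np.where(k < 1.0, 1.0 - k, 1.0)   # avoid division by zero on black
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return np.stack([c, m, y, k], axis=-1)

def hot_spot_mask(rgb_image, y_thresh=0.6):
    """Flag pixels whose yellow component exceeds a threshold
    (the threshold value is our assumption)."""
    cmyk = rgb_to_cmyk(rgb_image)
    return cmyk[..., 2] > y_thresh

img = np.array([[[250, 240, 30], [40, 40, 200]]])  # one yellow, one blue pixel
print(hot_spot_mask(img).tolist())  # [[True, False]]
```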

  16. Measurement of luminance noise and chromaticity noise of LCDs with a colorimeter and a color camera

    Science.gov (United States)

    Roehrig, H.; Dallas, W. J.; Krupinski, E. A.; Redford, Gary R.

    2007-09-01

    This communication focuses on the physical evaluation of image quality of displays for applications in medical imaging. In particular, we were interested in the luminance noise as well as the chromaticity noise of LCDs. Luminance noise has been encountered in studies of monochrome LCDs for some time, but chromaticity noise is a new type of noise that was first encountered when monochrome and color LCDs were compared in an ROC study. In the present study, one color and one monochrome 3M-pixel LCD were examined. Both were DICOM calibrated with equal dynamic range. We used a Konica Minolta Chroma Meter CS-200 as well as a Foveon color camera to estimate luminance and chrominance variations of the displays. We also used a simulation experiment to estimate luminance noise. The measurements with the colorimeter were consistent. The measurements with the Foveon color camera were very preliminary, as color cameras had never before been used for image quality measurements; however, they were extremely promising. The measurements with the colorimeter and the simulation results showed that the luminance and chromaticity noise of the color LCD were larger than those of the monochrome LCD. Provided that an adequate calibration method and an image QA/QC program for color displays are available, we expect color LCDs may be ready for radiology in the very near future.
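
A simple way to quantify luminance noise over a nominally uniform display patch is the coefficient of variation; the paper's exact metric is not given, so this is only an illustrative estimate on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated luminance samples over a nominally uniform patch (cd/m^2);
# the mean level and noise amplitude below are invented.
patch = 200.0 + rng.normal(0.0, 2.0, size=(64, 64))

# Relative luminance noise: standard deviation over mean.
luminance_noise = patch.std() / patch.mean()
print(luminance_noise < 0.02)  # True for this low-noise simulation
```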

  17. The HydroColor App: Above Water Measurements of Remote Sensing Reflectance and Turbidity Using a Smartphone Camera.

    Science.gov (United States)

    Leeuw, Thomas; Boss, Emmanuel

    2018-01-16

    HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominantly composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
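
The three images (water, sky, and an 18% gray card) feed a standard above-water reflectance formula; a sketch of the gray-card computation commonly quoted for this method (treat the coefficient values as assumptions):

```python
import math

RHO = 0.028      # assumed sea-surface reflectance factor
CARD_REF = 0.18  # reflectance of the photographer's 18% gray card

def remote_sensing_reflectance(l_water, l_sky, l_card):
    """Per-band Rrs (1/sr) from relative radiances of the water, sky,
    and gray-card images: (Lt - rho*Lsky) / (pi * Lcard / R_card)."""
    return (l_water - RHO * l_sky) / (math.pi * l_card / CARD_REF)

# Hypothetical relative radiances for one band:
rrs = remote_sensing_reflectance(l_water=0.10, l_sky=0.50, l_card=0.60)
print(round(rrs, 4))  # 0.0082
```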

  18. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method and, with its help, try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the Matlab programming and simulation environment.
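
The intrinsic parameters recovered by such a calibration define the pinhole projection model; a minimal sketch with made-up parameter values:

```python
import numpy as np

# Hypothetical intrinsic matrix: focal lengths fx, fy in pixels and
# principal point (cx, cy); skew is assumed zero.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    u, v, w = K @ np.asarray(point_cam, float)
    return float(u / w), float(v / w)

print(project((0.25, -0.125, 2.0)))  # (420.0, 190.0)
```

Extrinsic parameters (rotation and translation) would map world coordinates into the camera frame before this projection is applied.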

  19. Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC).

    Directory of Open Access Journals (Sweden)

    Zachary F Phillips

    Full Text Available We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel.
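
The DPC signal at the heart of the method is the normalized difference of two asymmetrically illuminated images; a minimal sketch (in cDPC the two images come from different color channels of a single shot rather than two exposures):

```python
import numpy as np

def dpc_signal(i_left, i_right):
    """Differential phase contrast: (I_l - I_r) / (I_l + I_r),
    where I_l and I_r are images under complementary half-source illumination."""
    i_left = np.asarray(i_left, float)
    i_right = np.asarray(i_right, float)
    return (i_left - i_right) / (i_left + i_right)

# Toy 2x2 images: a phase gradient brightens one side and darkens the other.
left = np.array([[1.5, 1.0], [0.5, 1.0]])
right = np.array([[0.5, 1.0], [1.5, 1.0]])
print(dpc_signal(left, right).tolist())  # [[0.5, 0.0], [-0.5, 0.0]]
```

Quantitative phase recovery would then deconvolve this signal with the system's phase transfer function, which this sketch omits.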

  1. Using Single Colors and Color Pairs to Communicate Basic Tastes II: Foreground-Background Color Combinations.

    Science.gov (United States)

    Woods, Andy T; Marmolejo-Ramos, Fernando; Velasco, Carlos; Spence, Charles

    2016-01-01

    People associate basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., pink or red, green or yellow, black or purple, and white or blue). In the present study, we investigated whether a color bordered by another color (either the same or different) would give rise to stronger taste associations relative to a single patch of color. We replicate previous findings, highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. On occasion, color pairs were found to communicate taste expectations more consistently than were single color patches. Furthermore, and in contrast to a recent study in which the color pairs were shown side-by-side, participants took no longer to match the color pairs with tastes than the single colors (they had taken twice as long to respond to the color pairs in the previous study). Possible reasons for these results are discussed, and potential applications for the results, and for the testing methodology developed, are outlined.

  2. Using Single Colors and Color Pairs to Communicate Basic Tastes

    Directory of Open Access Journals (Sweden)

    Andy T. Woods

    2016-07-01

    Full Text Available Recently, it has been demonstrated that people associate each of the basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., red, green, black, and white). In the present study, we investigated whether pairs of colors (both associated with a particular taste or taste word) would give rise to stronger associations relative to pairs of colors that were associated with different tastes. We replicate the findings of previous studies highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. However, while there was evidence that pairs of colors could indeed communicate taste information more consistently than single colors, our participants took more than twice as long to match the color pairs with tastes than the single colors. Possible reasons for these results are discussed.

  5. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    Full Text Available One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe correction, blowing, carding, and spinning. The carding process transforms a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named the "tow". During this process, unfortunately, the correspondence between the color of the tow and the target color cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  6. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts during demosaicking that are hard to remove in the subsequent denoising. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm that works directly on the CFA data, using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
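
The core PCA shrinkage idea can be sketched on synthetic patch vectors; the paper's full method additionally adapts the analysis window spatially and operates on the mosaiced CFA data, both of which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_denoise_patches(patches, keep=4):
    """Project patch vectors onto their top principal components and
    discard the rest -- the shrinkage idea behind PCA-based denoising."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, -keep:]               # top `keep` components
    return centered @ basis @ basis.T + mean

# Synthetic test: low-rank clean patches (4x4, flattened) plus Gaussian noise.
clean = rng.random((500, 2)) @ rng.random((2, 16))   # rank-2 signal
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = pca_denoise_patches(noisy, keep=2)

print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```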

  7. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, so incident light of different wavelengths is refracted along different outgoing paths. This characteristic causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from a frontal viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.

  8. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  9. Digital camera auto white balance based on color temperature estimation clustering

    Science.gov (United States)

    Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong

    2010-11-01

    Auto white balance (AWB) is an important technique for digital cameras. The human visual system can recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard daylight illuminant. Recorded images or video clips, however, can only record the information incident on the sensor, so the recording will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source is that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption method is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, a majority vote is taken over the remaining blocks, and the color temperature with the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: the color casts are removed and the final images look natural.
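
The block-wise gray-world estimation in the second step can be sketched as follows; mapping each block's cast to a color temperature and comparing it against the empirical threshold list (steps three and four) is omitted:

```python
import numpy as np

def block_color_casts(img, n=4):
    """Split an RGB image into n x n blocks and estimate each block's color
    cast by the gray-world assumption: mean R and mean B relative to mean G.
    A cast of (1, 1) means the block already averages to gray."""
    h, w, _ = img.shape
    casts = []
    for bi in np.array_split(np.arange(h), n):
        for bj in np.array_split(np.arange(w), n):
            block = img[bi[:, None], bj, :]
            r, g, b = block.reshape(-1, 3).mean(axis=0)
            casts.append((r / g, b / g))
    return casts

# A warm-cast image: every pixel has more red than blue.
img = np.tile(np.array([1.2, 1.0, 0.8]), (16, 16, 1))
gains = block_color_casts(img)
print(all(r > 1 > b for r, b in gains))  # True
```

Inverting the winning cast (dividing R and B by these ratios) would white balance the image.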

  10. Multi-capability color night vision HD camera for defense, surveillance, and security

    Science.gov (United States)

    Pang, Francis; Powell, Gareth; Fereyre, Pierre

    2015-05-01

    e2v has developed a family of high-performance cameras based on our next-generation CMOS imagers that provide multiple features and capabilities to meet the range of challenging imaging applications in the defense, surveillance, and security markets. Two resolutions are available: 1920×1080 with 5.3 μm pixels, and an ultra-low-light-level version at 1280×1024 with 10 μm pixels. Each type is available in either monochrome or e2v's unique Bayer-pattern color version. The camera is well suited to the high demands of defense, surveillance, and security applications: compact form factor (SWaP+C), color night vision performance (down to 10⁻² lux), ruggedized housing, global shutter, low read noise (<6 e⁻ in global shutter mode and <2.5 e⁻ in rolling shutter mode), 60 Hz frame rate, and high QE, especially in the enhanced NIR range (up to 1100 nm). Other capabilities include active illumination and range gating. This paper describes all the features of the sensor and the camera, presents the latest test data from the current developments, and concludes with a description of how these features can be configured to meet many different applications. With this development, we can tune rather than create a full customization, making it more beneficial for many of our customers and their custom applications.

  11. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, now offers several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera, so high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor because it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach is instead to use sensors of different spatio-temporal resolution in a single camera cabinet, capturing high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  12. High performance gel imaging with a commercial single lens reflex camera

    Science.gov (United States)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  13. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    Science.gov (United States)

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    The fluidic lens camera system, developed for surgical applications, presents unique image processing challenges due to its novel fluid optics. The fluid lens offers advantages such as zooming with no moving parts and better miniaturization than traditional glass optics. Despite these abilities, the liquid lens reacts nonuniformly to different color wavelengths, causing severe axial color aberrations that leave some color planes sharp and others blurred. To deblur color images without estimating a point spread function, we propose a contourlet filter bank system. This multiband deblurring method uses information from sharp color planes to improve blurred color planes. A previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts compared to traditional Lucy-Richardson and Wiener deconvolution algorithms. The proposed contourlet-based system uses directional filtering to adapt to the contours of the image, and produces an image with a similar level of sharpness to the previous wavelet-based method but fewer ghosting artifacts. We analyze the conditions under which this algorithm reduces the mean squared error. While the primary focus of this paper is improving the blue color plane using information from the green color plane, the methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. The information-sharing algorithm benefits any image set with high edge correlation, and can produce improved results in deblurring, noise reduction, and resolution enhancement.

  14. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

  15. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    Science.gov (United States)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
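    As a point of reference, the TRE figures quoted above are mean distances between known target positions and the positions predicted through the calibrated, tracked camera. A minimal sketch of the metric itself, assuming both sets of points are already expressed in a common 3-D frame (the registration chain is outside the scope of this sketch):

```python
import math

def tre(p_true, p_registered):
    """Target registration error for one target: Euclidean distance
    between the known and the registered 3-D position."""
    return math.dist(p_true, p_registered)

def mean_tre(pairs):
    """pairs: list of (true_point, registered_point) 3-D tuples.
    Returns the mean TRE over all targets."""
    errors = [tre(a, b) for a, b in pairs]
    return sum(errors) / len(errors)
```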

  17. Streak camera imaging of single photons at telecom wavelength

    Science.gov (United States)

    Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine

    2018-01-01

    Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.

  18. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  19. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Human detection and tracking has been a prominent research area for scientists around the globe. State-of-the-art algorithms have been implemented, refined, and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi-RGB-D-camera indoor tracking system to examine how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results show that the single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. For ICP, relative information between cloud pairs is largely preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the 3D trajectories produced from each sensor.

  20. Color-filter-free spatial visible light communication using RGB-LED and mobile-phone camera.

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2014-12-15

    A novel color-filter-free visible-light communication (VLC) system using a red-green-blue (RGB) light-emitting diode (LED) and a mobile-phone camera is proposed and demonstrated for the first time. A feature matching method based on the scale-invariant feature transform (SIFT) algorithm is applied to the received grayscale image instead of a chromatic-information decoding method. The proposed method is simple and reduces computational complexity: the signal processing operates on grayscale images, so neither a color filter nor chromatic channel information is required. A proof-of-concept experiment is performed and high-performance channel recognition is achieved.

  1. PANANICA quick charger for portable VTR and color camera

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Y; Sato, K; Kitani, M

    1978-04-01

    Recently, the use of portable VTR and color camera systems has become popular for producing news films, documentary films, general TV programs, and VTR commercials. A cylindrical sealed nickel-cadmium rechargeable battery is used as the system power source, so a quick-charge method that keeps the battery ready for the next use has been in strong demand. The usual charge method, however, leaves something to be desired: it cannot deliver the full required capacity, and it damages the battery by overcharging. The PANANICA Quick Charger, which can charge the battery safely and effectively, was developed using the pulse-charge method, a temperature sensor to control the charge, a charge-stop function to prevent overcharge, a dedicated integrated circuit, etc. (7 figures, 3 tables)

  2. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product, and catalog photographs or high-quality advertising photos have to be taken. The eyelike is a digital camera system developed for such applications. It is capable of working online at high frame rates with images of full sensor size, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for an image of 4096 × 4096 pixels (48 MByte), and 40 seconds for an image of 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors; alternatively, the eyelike can be used as a back on most commercial 4 × 5 inch view cameras. This paper describes the eyelike camera concept with its essential system components, and finishes with a description of the software needed to bring the high quality of the camera to the user.

  3. Electrolytic coloration and spectral properties of hydroxyl-doped potassium chloride single crystals

    International Nuclear Information System (INIS)

    Gu Hongen; Wu Yanru

    2011-01-01

    Hydroxyl-doped potassium chloride single crystals are colored electrolytically at various temperatures and voltages using a pointed cathode and a flat anode. The characteristic OH⁻ spectral band is observed in the absorption spectrum of the uncolored single crystal. Characteristic O⁻, OH⁻, U, V₂, V₃, O²⁻-Vₐ⁺, F, R₂ and M spectral bands are observed simultaneously in absorption spectra of colored single crystals. The current-time curve for electrolytic coloration of a hydroxyl-doped potassium chloride single crystal and its relationship with the electrolytic coloration process are given. Production and conversion of color centers are explained. - Highlights: → Expanded the traditional electrolysis method. → Hydroxyl-doped potassium chloride crystals were colored electrolytically for the first time. → Useful V, F and F-aggregate color centers were produced in colored crystals. → V color centers were produced directly and F and F-aggregate color centers indirectly.

  4. Electrolytic coloration and spectral properties of hydroxyl-doped potassium bromide single crystals

    International Nuclear Information System (INIS)

    Qi, Lan; Song, Cuiying; Gu, Hongen

    2013-01-01

    Hydroxyl-doped potassium bromide single crystals are colored electrolytically at various temperatures and voltages by using a pointed cathode and a flat anode. The characteristic OH⁻ spectral band is observed in the absorption spectrum of the uncolored single crystal. The characteristic O⁻, OH⁻, U, V₂, O²⁻-Vₐ⁺, M_L1, F and M spectral bands are observed simultaneously in absorption spectra of colored single crystals. The current-time curve for electrolytic coloration of a hydroxyl-doped potassium bromide single crystal and its relationship with the electrolytic coloration processes are given. Production and conversion of color centers are explained. - Highlights: ► We expanded the traditional electrolysis method. ► Hydroxyl-doped potassium bromide crystals were colored electrolytically for the first time. ► Useful V, F and F-aggregate color centers were produced in colored crystals. ► V color centers were produced directly and F as well as F-aggregate color centers indirectly.

  5. Location and Classification of Moving Fruits in Real Time with a Single Color Camera

    Directory of Open Access Journals (Sweden)

    José F Reyes

    2009-06-01

    Quality control of fruits to satisfy increasingly competitive food markets requires the implementation of automatic servo-visual systems in fruit processing operations. A new, fast method for identifying and classifying moving fruits by processing single color images from a static camera in real time was developed and tested. Two algorithms were combined to classify and track moving fruits on the image plane using representative color features. The method classifies the fruit by color segmentation and estimates its position on the image plane, providing a reliable algorithm for implementation in robotic manipulation of fruits. To evaluate the methodology, an experimental real-time system simulating a conveyor belt and real fruit was used. Testing of the system indicates that with natural lighting conditions and proper calibration, a minimum error of 2% in the classification of fruits is feasible. The methodology is very simple to implement, and although operational results are promising, even higher accuracy may be possible if structured illumination is used.
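    The two steps the method combines, color segmentation followed by position estimation on the image plane, can be sketched as below. The red-dominance rule and the margin value are illustrative assumptions for the sketch, not the paper's actual classifier.

```python
# Minimal sketch: segment fruit pixels by color, then estimate the
# fruit's image-plane position as the centroid of segmented pixels.

def is_fruit_pixel(rgb, margin=40):
    """Crude color segmentation: red clearly dominates green and blue.
    The rule and margin are illustrative, not the paper's classifier."""
    r, g, b = rgb
    return r - max(g, b) >= margin

def locate_fruit(image):
    """image: 2D list of (R, G, B) tuples. Returns the (row, col)
    centroid of segmented pixels, or None when none are found."""
    rows = cols = count = 0
    for i, row in enumerate(image):
        for j, px in enumerate(row):
            if is_fruit_pixel(px):
                rows += i
                cols += j
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)
```

    In a servo-visual setup, the returned centroid would be fed to the tracking loop each frame as the fruit moves along the conveyor.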

  6. Single photon detection and localization accuracy with an ebCMOS camera

    Energy Technology Data Exchange (ETDEWEB)

    Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Dominjon, A., E-mail: agnes.dominjon@nao.ac.jp [Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France)

    2015-07-01

    CMOS sensor technologies evolve very fast and today offer very promising solutions to issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1 e-) and the possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to low-light-level detection, based on a 640-kpixel ebCMOS with its acquisition system. After reviewing the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera with other imaging systems. We compare the identification efficiency and the localization accuracy of a point source for four different photo-detection devices: the scientific CMOS (sCMOS), the charge-coupled device (CCD), the electron-multiplying CCD (emCCD), and the electron-bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single-photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false-positive identification rate of the ebCMOS camera when identifying more than a hundred single-photon sources in parallel: about 700 spots are identified with a detection efficiency higher than 90% and a false-positive percentage lower than 5%. With these measurements, we show that our target tracking algorithm can be implemented in real time at 500 frames per second under a photon flux on the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single-photon detection and target tracking algorithm, is one of the best devices for low-light, fast applications such as bioluminescence imaging, quantum dot tracking, or adaptive optics.
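    Sub-pixel localization of the kind benchmarked above is commonly done with an intensity-weighted centroid over thresholded pixels. The sketch below illustrates that generic technique; the threshold choice is an assumption, and the camera's actual identification algorithm is not described here in enough detail to reproduce.

```python
# Illustrative sub-pixel point-source localization: threshold the
# frame, then take the intensity-weighted centroid of the pixels
# that survive the threshold.

def localize_spot(frame, threshold):
    """frame: 2D list of pixel intensities. Returns the intensity-
    weighted (row, col) centroid of above-threshold pixels, or None
    when no pixel exceeds the threshold."""
    w_sum = r_sum = c_sum = 0.0
    for i, row in enumerate(frame):
        for j, v in enumerate(row):
            if v > threshold:
                w_sum += v
                r_sum += i * v
                c_sum += j * v
    if w_sum == 0:
        return None
    return (r_sum / w_sum, c_sum / w_sum)
```

    Running one such centroid per candidate spot is cheap enough that hundreds of sources per frame, as in the parallel-tracking measurement above, remain tractable in real time.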

  7. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  8. Euratom experience with video surveillance - Single camera and other non-multiplexed

    International Nuclear Information System (INIS)

    Otto, P.; Cozier, T.; Jargeac, B.; Castets, J.P.; Wagner, H.G.; Chare, P.; Roewer, V.

    1991-01-01

    The Euratom Safeguards Directorate (ESD) has been using a number of single camera video systems (Ministar, MIVS, DCS) and non-multiplexed multi-camera systems (Digiquad) for routine safeguards surveillance applications during the last four years. This paper describes aspects of system design and considerations relevant for installation. It reports on system reliability and performance and presents suggestions on future improvements

  9. Human attention filters for single colors

    Science.gov (United States)

    Sun, Peng; Chubb, Charles; Wright, Charles E.; Sperling, George

    2016-01-01

    The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid—the center of gravity—of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability.
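    The weight-recovery idea can be illustrated in a deliberately simplified form. Assume a one-dimensional display with only a target color and a distractor color, and model the judged centroid as w·(mean of target dots) + (1−w)·(mean of distractor dots); then w can be solved per trial and averaged. The real analysis fits a weight for every color in the display simultaneously, so this is only a sketch of the principle.

```python
# Simplified two-color, one-dimensional attention-filter estimate.
# Model (an assumption of this sketch):
#   judged = w * target_mean + (1 - w) * distractor_mean

def filter_weight(trials):
    """trials: list of (target_mean, distractor_mean, judged_centroid).
    Returns the average estimated weight w of the target color."""
    estimates = []
    for target_mean, distractor_mean, judged in trials:
        w = (judged - distractor_mean) / (target_mean - distractor_mean)
        estimates.append(w)
    return sum(estimates) / len(estimates)
```

    A w near 1 corresponds to a highly selective filter (distractors almost fully attenuated); the >20:1 attended/unattended ratios reported above correspond to w above roughly 0.95 in this two-color toy model.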

  10. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k × 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5× zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables that store the correction data at eight focal-length points on the blue and red channels. When focal-length data is input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables, and the system performs a geometrical conversion on both channels using these data. This paper shows that the correction function successfully reduces the lateral chromatic aberration to an amount small enough to achieve the desired image resolution over the entire range of the lens in real time.
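    The table-interpolation step described above can be sketched as follows. The focal-length points and correction values below are illustrative scalars; the real system stores geometric-warp data per channel at each of its eight focal-length points.

```python
# Sketch of interpolating correction data between stored tables.
# FOCAL_POINTS and CORRECTION are assumed, illustrative values.

FOCAL_POINTS = [5, 10, 15, 20, 25, 30, 35, 40]            # focal lengths (mm)
CORRECTION   = [0.0, 0.2, 0.5, 0.9, 1.4, 2.0, 2.7, 3.5]   # stored data

def interpolate_correction(focal_length):
    """Linearly interpolate the correction data from the two stored
    tables that bracket the current focal length; clamp at the ends."""
    if focal_length <= FOCAL_POINTS[0]:
        return CORRECTION[0]
    if focal_length >= FOCAL_POINTS[-1]:
        return CORRECTION[-1]
    for k in range(len(FOCAL_POINTS) - 1):
        f0, f1 = FOCAL_POINTS[k], FOCAL_POINTS[k + 1]
        if f0 <= focal_length <= f1:
            t = (focal_length - f0) / (f1 - f0)
            return CORRECTION[k] + t * (CORRECTION[k + 1] - CORRECTION[k])
```

    In the real pipeline, the interpolated data would parameterize the geometrical conversion applied to the blue and red channels every frame.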

  11. Water Detection Based on Color Variation

    Science.gov (United States)

    Rankin, Arturo L.

    2012-01-01

    This software has been designed to detect water bodies that are out in the open on cross-country terrain at close range (out to 30 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). This detector exploits the fact that the color variation across water bodies is generally larger and more uniform than that of other naturally occurring types of terrain, such as soil and vegetation. Non-traversable water bodies, such as large puddles, ponds, and lakes, are detected based on color variation, image intensity variance, image intensity gradient, size, and shape. At ranges beyond 20 meters, water bodies out in the open can be indirectly detected by detecting reflections of the sky below the horizon in color imagery. But at closer range, the color coming out of a water body dominates sky reflections, and the water cue from sky reflections is of marginal use. Since there may be times during UGV autonomous navigation when a water body does not come into a perception system's field of view until it is at close range, the ability to detect water bodies at close range is critical. Factors that influence the perceived color of a water body at close range are the amount and type of sediment in the water, the water's depth, and the angle of incidence to the water body. Developing a single model of the mixture ratio of light reflected off the water surface (to the camera) to light coming out of the water body (to the camera) for all water bodies would be fairly difficult. Instead, this software detects close water bodies based on local terrain features and the natural, uniform change in color that occurs across the surface from the leading edge to the trailing edge.
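    The color-variation cue can be illustrated with a toy scorer: it measures how much the per-pixel channel spread changes across a patch, a crude proxy for the gradual color change from leading to trailing edge. The measure and the threshold are assumptions of this sketch; the actual detector also combines intensity variance, intensity gradient, size, and shape cues.

```python
# Toy color-variation cue for close-range water detection.
# Both the measure and the threshold below are illustrative.

def color_spread(patch):
    """patch: list of (R, G, B) pixels, e.g. sampled front-to-back
    across a candidate region. Returns how much the per-pixel
    channel spread (max - min) varies across the patch."""
    spreads = [max(p) - min(p) for p in patch]
    return max(spreads) - min(spreads)

def looks_like_water(patch, threshold=50):
    """Flag patches whose color variation across the surface is large,
    as water tends to be relative to soil or vegetation."""
    return color_spread(patch) >= threshold
```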

  12. Multistabilities and symmetry-broken one-color and two-color states in closely coupled single-mode lasers.

    Science.gov (United States)

    Clerkin, Eoin; O'Brien, Stephen; Amann, Andreas

    2014-03-01

    We theoretically investigate the dynamics of two mutually coupled, identical single-mode semiconductor lasers. For small separation and large coupling between the lasers, symmetry-broken one-color states are shown to be stable. In this case the light outputs of the lasers have significantly different intensities while at the same time the lasers are locked to a single common frequency. For intermediate coupling we observe stable symmetry-broken two-color states, where both lasers lase simultaneously at two optical frequencies which are separated by up to 150 GHz. Using a five-dimensional model, we identify the bifurcation structure responsible for the appearance of symmetric and symmetry-broken one-color and two-color states. Several of these states give rise to multistabilities and therefore allow for the design of all-optical memory elements on the basis of two coupled single-mode lasers. The switching performance of selected designs of optical memory elements is studied numerically.

  13. Full-color OLED on silicon microdisplay

    Science.gov (United States)

    Ghosh, Amalkumar P.

    2002-02-01

    eMagin has developed numerous enhancements to organic light emitting diode (OLED) technology, including a unique, up-emitting structure for OLED-on-silicon microdisplay devices. Recently, eMagin has fabricated full color SVGA+ resolution OLED microdisplays on silicon, with over 1.5 million color elements. The display is based on white light emission from the OLED followed by LCD-type red, green and blue color filters. The color filters are patterned directly on the OLED devices following suitable thin film encapsulation, and the drive circuits are built directly on single crystal silicon. The resultant color OLED technology, with its high efficiency, high brightness, and low power consumption, is ideally suited for near-to-the-eye applications such as wearable PCs, wireless Internet applications, mobile phones, portable DVD viewers, digital cameras and other emerging applications.

  14. Single camera photogrammetry system for EEG electrode identification and localization.

    Science.gov (United States)

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera mounted about 20 cm above the subject's head is used, and images are acquired at predefined stop points separated azimuthally by equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 µm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.

  15. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    Science.gov (United States)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  16. Recognition and Matching of Clustered Mature Litchi Fruits Using Binocular Charge-Coupled Device (CCD) Color Cameras

    Directory of Open Access Journals (Sweden)

    Chenglin Wang

    2017-11-01

    Recognition and matching of litchi fruits are critical steps for litchi harvesting robots to successfully grasp litchi. However, due to the randomness of litchi growth, such as clustered growth with an uncertain number of fruits and random occlusion by leaves, branches and other fruits, recognition and matching of the fruit become a challenge. Therefore, this study first defined mature litchi fruit as three clustered categories. An approach for recognition and matching of clustered mature litchi fruit was then developed based on litchi color images acquired by binocular charge-coupled device (CCD) color cameras. The approach mainly included three steps: (1) calibration of the binocular color cameras and litchi image acquisition; (2) segmentation of litchi fruits using four kinds of supervised classifiers, and recognition of the pre-defined categories of clustered litchi fruit using a pixel threshold method; and (3) matching the recognized clustered fruit using a geometric center-based matching method. The experimental results showed that the proposed recognition method is robust against the influences of varying illumination and occlusion conditions and precisely recognizes clustered litchi fruit. Among the 432 clustered litchi fruits tested, the highest and lowest average recognition rates were 94.17% and 92.00%, obtained under sunny back-lighting with partial occlusion and sunny front-lighting with non-occlusion conditions, respectively. From 50 pairs of tested images, the highest and lowest matching success rates were 97.37% and 91.96%, obtained under sunny back-lighting with non-occlusion and sunny front-lighting with partial occlusion conditions, respectively.
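    The geometric center-based matching step named above can be illustrated with a minimal sketch. The pairing rule (match each left-image fruit centroid to the unused right-image centroid closest in row position) and the epipolar tolerance `y_tol` are assumptions for illustration, not the paper's exact procedure.

```python
def match_by_center(left_centers, right_centers, y_tol=10):
    """Greedy centroid matching across a rectified stereo pair.

    In rectified stereo images, corresponding points share (nearly) the
    same image row, so each left centroid (x, y) is paired with the
    closest-in-row unused right centroid within `y_tol` pixels.
    Returns a list of (left_index, right_index) pairs.
    """
    matches = []
    used = set()
    for i, (xl, yl) in enumerate(left_centers):
        best, best_dy = None, y_tol
        for j, (xr, yr) in enumerate(right_centers):
            if j in used:
                continue
            dy = abs(yl - yr)
            if dy <= best_dy:
                best, best_dy = j, dy
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

# Two fruit centroids per view; rows nearly agree, columns differ by disparity
left = [(100, 50), (200, 120)]
right = [(80, 52), (180, 118)]
```

    With calibrated cameras, the column difference of each matched pair is the disparity from which the fruit's 3D position follows.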

  17. Color constancy by characterization of illumination chromaticity

    Science.gov (United States)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly results in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and low memory requirements make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.

  18. Long-term tracking of multiple interacting pedestrians using a single camera

    CSIR Research Space (South Africa)

    Keaikitse, M

    2014-11-01

    Mogomotsi Keaikitse∗, Willie Brink† and Natasha Govender∗ (∗Modelling and Digital Sciences, Council for Scientific and Industrial Research, Pretoria, South Africa; †Department of Mathematical Sciences, Stellenbosch …). … re-identified and their tracks extended. Standard, publicly available data sets are used to test the system. Closed-circuit cameras are becoming widespread and prevalent in cities and towns around the world, indicating that surveillance is an important issue…

  19. Observation of X-ray shadings in synchrotron radiation-total reflection X-ray fluorescence using a color X-ray camera

    Energy Technology Data Exchange (ETDEWEB)

    Fittschen, Ursula Elisabeth Adriane, E-mail: ursula.fittschen@chemie.uni-hamburg.de [Institut für Anorganische und Angewandte Chemie, Universität Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Menzel, Magnus [Institut für Anorganische und Angewandte Chemie, Universität Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Scharf, Oliver [IfG Institute for Scientific Instruments GmbH, Berlin (Germany); Radtke, Martin; Reinholz, Uwe; Buzanich, Günther [BAM Federal Institute of Materials Research and Testing, Berlin (Germany); Lopez, Velma M.; McIntosh, Kathryn [Los Alamos National Laboratory, Los Alamos, NM (United States); Streli, Christina [Atominstitut, TU Wien, Vienna (Austria); Havrilla, George Joseph [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2014-09-01

    Absorption effects and the impact of specimen shape on TXRF analysis have been discussed intensively. Model calculations indicated that ring-shaped specimens should give better results, in terms of higher counts-per-mass signals, than filled rectangle- or circle-shaped specimens. One major reason for the difference in signal is shading. Full-field micro-XRF with a color X-ray camera (CXC) was used to investigate the shading that occurs when working with the small excitation angles used in TXRF. The device allows the illuminated and the shaded parts of the sample to be monitored at the same time. Sample material hit first by the primary beam is expected to shade the material behind it. Using the CXC, shading could be directly visualized for the high-concentration specimens. In order to compare the experimental results with calculations of the shading effect, the generation of controlled specimens is crucial. This was achieved by “drop on demand” technology, which allows uniform, microscopic deposits of elements to be generated. The experimentally measured shadings match well with those expected from calculation. - Highlights: • Use of a color X-ray camera and drop-on-demand printing to diagnose X-ray shading. • Specimens uniform and well-defined in shape and concentration were obtained by printing. • Direct visualization and determination of shading in such specimens using the camera.

  20. Observation of X-ray shadings in synchrotron radiation-total reflection X-ray fluorescence using a color X-ray camera

    International Nuclear Information System (INIS)

    Fittschen, Ursula Elisabeth Adriane; Menzel, Magnus; Scharf, Oliver; Radtke, Martin; Reinholz, Uwe; Buzanich, Günther; Lopez, Velma M.; McIntosh, Kathryn; Streli, Christina; Havrilla, George Joseph

    2014-01-01

    Absorption effects and the impact of specimen shape on TXRF analysis have been discussed intensively. Model calculations indicated that ring-shaped specimens should give better results, in terms of higher counts-per-mass signals, than filled rectangle- or circle-shaped specimens. One major reason for the difference in signal is shading. Full-field micro-XRF with a color X-ray camera (CXC) was used to investigate the shading that occurs when working with the small excitation angles used in TXRF. The device allows the illuminated and the shaded parts of the sample to be monitored at the same time. Sample material hit first by the primary beam is expected to shade the material behind it. Using the CXC, shading could be directly visualized for the high-concentration specimens. In order to compare the experimental results with calculations of the shading effect, the generation of controlled specimens is crucial. This was achieved by “drop on demand” technology, which allows uniform, microscopic deposits of elements to be generated. The experimentally measured shadings match well with those expected from calculation. - Highlights: • Use of a color X-ray camera and drop-on-demand printing to diagnose X-ray shading. • Specimens uniform and well-defined in shape and concentration were obtained by printing. • Direct visualization and determination of shading in such specimens using the camera.

  1. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of irreversible thermodynamics and Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  2. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  3. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    Science.gov (United States)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for the canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoids concentration and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as the ground validation for satellites. Using the high-temporal resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties to use canopy color as the proxy of ecosystem functioning.
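    The canopy greenness metric referred to above, the green chromatic coordinate (gcc), is a standard camera-phenology quantity: gcc = G / (R + G + B), averaged over a region of interest in each daily image. A minimal computation (the pixel values below are illustrative, not data from the study):

```python
def green_chromatic_coordinate(pixels):
    """Mean green chromatic coordinate over a region of interest.

    gcc = G / (R + G + B) per pixel, averaged; `pixels` is an iterable
    of (R, G, B) digital numbers. Pixels with zero total are skipped.
    """
    vals = [g / (r + g + b) for r, g, b in pixels if (r + g + b) > 0]
    return sum(vals) / len(vals)

# Illustrative values: a green early-summer canopy vs. senescent leaves
summer = [(60, 120, 40), (70, 140, 50)]
autumn = [(120, 80, 40), (140, 70, 50)]
```

    Tracking this single number through the season yields the greenness trajectory whose early peak, as the abstract notes, precedes the chlorophyll peak by about 20 days.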

  4. Application of colon capsule endoscopy (CCE) to evaluate the whole gastrointestinal tract: a comparative study of single-camera and dual-camera analysis

    Directory of Open Access Journals (Sweden)

    Remes-Troche JM

    2013-09-01

    José María Remes-Troche,1 Victoria Alejandra Jiménez-García,2 Josefa María García-Montes,2 Pedro Hergueta-Delgado,2 Federico Roesch-Dietlen,1 Juan Manuel Herrerías-Gutiérrez2 (1Digestive Physiology and Motility Lab, Medical Biological Research Institute, Universidad Veracruzana, Veracruz, México; 2Gastroenterology Service, Virgen Macarena University Hospital, Seville, Spain). Background and study aims: Colon capsule endoscopy (CCE) was developed for the evaluation of colorectal pathology. In this study, our aim was to assess whether a dual-camera analysis using CCE allows better evaluation of the whole gastrointestinal (GI) tract compared to a single-camera analysis. Patients and methods: We included 21 patients (12 males, mean age 56.2 years) submitted for a CCE examination. After standard colon preparation, the colon capsule endoscope (PillCam Colon™) was swallowed after reinitiation from its “sleep” mode. Four physicians performed the analysis: two reviewed both video streams at the same time (dual-camera analysis); one analyzed images from one side of the device (“camera 1”); and the other reviewed the opposite side (“camera 2”). We compared the numbers of findings from different parts of the entire GI tract and the level of agreement among reviewers. Results: A complete evaluation of the GI tract was possible in all patients. Dual-camera analysis provided 16% and 5% more findings compared to camera 1 and camera 2 analysis, respectively. Overall agreement was 62.7% (kappa = 0.44, 95% CI: 0.373–0.510). Esophageal (kappa = 0.611) and colorectal (kappa = 0.595) findings had a good level of agreement, while the small bowel (kappa = 0.405) showed moderate agreement. Conclusion: The use of dual-camera analysis with CCE for the evaluation of the GI tract is feasible and detects more abnormalities when compared with single-camera analysis. Keywords: capsule endoscopy, colon, gastrointestinal tract, small bowel
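    The agreement figures reported above relate through Cohen's kappa, kappa = (po - pe) / (1 - pe), where po is the observed agreement and pe the agreement expected by chance. The abstract's overall agreement of 62.7% with kappa = 0.44 is consistent with a chance agreement of roughly 33% (the 33% figure is inferred here, not stated in the abstract):

```python
def cohens_kappa(po, pe):
    """Cohen's kappa: chance-corrected agreement between two raters.

    po: observed proportion of agreement; pe: proportion of agreement
    expected by chance (from the raters' marginal category frequencies).
    """
    return (po - pe) / (1 - pe)

# Reproduce the abstract's overall figure under the inferred chance level
kappa_overall = cohens_kappa(0.627, 0.334)
```

    Perfect agreement (po = 1) gives kappa = 1 regardless of pe, while agreement no better than chance gives kappa = 0, which is why kappa is preferred over raw percent agreement for rater studies like this one.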

  5. Dynamic simulation of color blindness for studying color vision requirements in practice

    NARCIS (Netherlands)

    Lucassen, M.P.; Alferdinck, J.W.A.M.

    2006-01-01

    We report on a dynamic simulation of defective color vision. Using an RGB video camera connected to a PC or laptop, the captured and displayed RGB colors are translated by our software into modified RGB values that simulate the color appearance of a person with a color deficiency. Usually, the

  6. Colored Chaos

    Science.gov (United States)

    2004-01-01

    [figure removed for brevity, see original site] Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. 
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C.

  7. Correcting for color crosstalk and chromatic aberration in multicolor particle shadow velocimetry

    International Nuclear Information System (INIS)

    McPhail, M J; Fontaine, A A; Krane, M H; Goss, L; Crafton, J

    2015-01-01

    Color crosstalk and chromatic aberration can bias estimates of fluid velocity measured by color particle shadow velocimetry (CPSV) using multicolor illumination and a color camera. This article describes corrections to remove these bias errors, along with their evaluation. Color crosstalk removal is demonstrated with linear unmixing. It is also shown that chromatic aberrations may be removed using either a scale calibration or processing of an image illuminated by all colors simultaneously. CPSV measurements of a fully developed turbulent pipe flow of glycerin were conducted. Corrected velocity statistics from these measurements were compared to both single-color PSV and LDV measurements and showed excellent agreement to fourth order, well into the viscous sublayer. Recommendations for practical assessment and correction of chromatic aberration and color crosstalk are discussed. (paper)
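    Linear unmixing, named above as the crosstalk-removal step, treats the measured channel pair as m = C s, where C is a known crosstalk matrix and s the true per-color signals, and recovers s = C⁻¹ m. A dependency-free two-color sketch (the 10% leakage values are illustrative, not measured values from the paper):

```python
def unmix_2color(measured, crosstalk):
    """Recover true two-color signals from crosstalk-contaminated ones.

    measured:  (m0, m1) channel readings
    crosstalk: 2x2 matrix C as ((a, b), (c, d)), where row i gives the
               contribution of each true color to measured channel i.
    Returns s = C^-1 @ m via the analytic 2x2 inverse.
    """
    (a, b), (c, d) = crosstalk
    det = a * d - b * c
    m0, m1 = measured
    return ((d * m0 - b * m1) / det, (-c * m0 + a * m1) / det)

# Illustrative: 10% of each color leaks into the other channel
C = ((0.9, 0.1), (0.1, 0.9))
true_signals = (200.0, 50.0)
measured = (0.9 * 200 + 0.1 * 50, 0.1 * 200 + 0.9 * 50)  # forward model
```

    In practice C is estimated by imaging each illumination color alone and reading the response of both camera channels; the unmixing is then applied per pixel before the velocimetry processing.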

  8. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves accuracy comparable to models reconstructed using depth cameras, yet requires neither user interaction nor any dedicated devices, making it feasible to use this method on widely available smartphones.

  9. Towards a better understanding of the overall health impact of the game of squash: automatic and high-resolution motion analysis from a single camera view

    Directory of Open Access Journals (Sweden)

    Brumann Christopher

    2017-09-01

    In this paper, we present a method for locating and tracking players in the game of squash using Gaussian mixture model background subtraction and agglomerative contour clustering from a calibrated single-camera view. Furthermore, we describe a method for player re-identification after near-total occlusion, based on stored color and region descriptors. For camera calibration, no additional pattern is needed, as the squash court itself can serve as a 3D calibration object. In order to exclude non-rally situations from motion analysis, we further classify each video frame into game phases using a multilayer perceptron. By considering a player’s position as well as the current game phase, we are able to visualize player-individual motion patterns expressed as court coverage using pseudo-colored heat maps. In total, we analyzed two matches (six games, 1:28 h) of the high-quality commercial video used in sports broadcasting and computed high-resolution (1 cm per pixel) heat maps. On 130,184 manually labeled frames (game phases and player identities), identification correctness was 79.28±8.99% (mean±std). Game phase classification was correct in 60.87±7.62% of frames, and heat-map visualization correctness was 72.47±7.27%.

  10. Color balancing in CCD color cameras using analog signal processors made by Kodak

    Science.gov (United States)

    Kannegundla, Ram

    1995-03-01

    The green, red, and blue color filters used for CCD sensors generally have different responses, so it is often necessary to balance the three colors to display a high-quality image on the monitor. Color filter arrays on sensors have different architectures; a CCD with the standard G R G B pattern is considered in the present discussion. A simple method is presented for separating the colors using the CDS/H stage that is part of KASPs (Analog Signal Processors made by Kodak), and for color balancing using the gain control, which is also part of the KASPs. The colors are separated from the video output of the sensor by using three KASPs, one each for green, red, and blue, with alternate sample pulses for green and 1-in-4 pulses for red and blue. The separated colors' gain is adjusted either automatically or manually, and the result is sent to the monitor for direct display in analog mode, or through an A/D converter to memory digitally. This method of color balancing demands high-quality ASPs. Kodak has designed four different chips, with varying levels of power consumption and speed, for analog signal processing of the video output of CCD sensors. The analog ASICs have been characterized for noise, clock feedthrough, acquisition time, linearity, variable gain, line-rate clamp, black muxing, the effect of temperature variations on chip performance, and droop. The ASP chips have met their design specifications.
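    The sample-pulse scheme described above (alternate pulses for green, 1-in-4 pulses for red and blue on a G R G B stream) can be sketched in software. The stream layout assumed here, green on even positions with red and blue on alternating odd positions, is one illustration of the scheme, not a specification of the KASP timing:

```python
def separate_grgb(line):
    """Split one line of a G R G B sample stream into color channels.

    Mirrors the sample-pulse scheme: green taken on alternate samples
    (even indices), red on every fourth sample starting at index 1,
    blue on every fourth sample starting at index 3.
    """
    green = line[0::2]
    red = line[1::4]
    blue = line[3::4]
    return green, red, blue

# One 8-sample line in G R G B order (labels stand in for sample values)
line = ['G0', 'R0', 'G1', 'B0', 'G2', 'R1', 'G3', 'B1']
```

    Applying per-channel gains to the three returned streams is then the digital analogue of the KASP gain-control balancing step.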

  11. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of systems that make use of the depth dimension, enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the re-identification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an

  12. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  13. Optical determination and magnetic manipulation of a single nitrogen-vacancy color center in diamond nanocrystal

    International Nuclear Information System (INIS)

    Diep Lai, Ngoc; Zheng, Dingwei; Treussart, François; Roch, Jean-François

    2010-01-01

    The controlled and coherent manipulation of individual quantum systems is fundamental for the development of quantum information processing. The nitrogen-vacancy (NV) color center in diamond is a promising system, since its photoluminescence is perfectly stable at room temperature and its electron spin can be optically read out at the individual level. We review here experiments currently carried out in our laboratory on the use of a single NV color center as a single-photon source and on the coherent magnetic manipulation of the electron spin associated with a single NV color center. Furthermore, we demonstrate a nanoscopy experiment based on the saturation absorption effect, which makes it possible to optically pinpoint a single NV color center at sub-λ resolution. This offers the possibility of independently addressing two or more magnetically coupled single NV color centers, a necessary step towards the realization of a diamond-based quantum computer.

  14. Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities.

    Science.gov (United States)

    Dai, Meiling; Yang, Fujun; He, Xiaoyuan

    2012-04-20

    A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color fringe pattern encoding a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to remove the variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used to correct the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
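    The decoding step can be sketched under an assumed fringe model (an illustration, not the paper's exact formulation): if the red channel carries blue * (1 + sin φ) / 2, then dividing red by blue cancels the surface reflectivity, and the arcsine recovers φ on [-π/2, π/2].

```python
import math

def arcsine_demodulate(red, blue):
    """Recover wrapped phase from red/blue channel pairs.

    Assumed model (illustrative): red = blue * (1 + sin(phi)) / 2, with
    blue carrying only the surface reflectivity. Division removes the
    reflectivity; phi = arcsin(2 * red / blue - 1). The argument is
    clamped to [-1, 1] to guard against noise pushing it out of range.
    """
    return [math.asin(max(-1.0, min(1.0, 2.0 * r / b - 1.0)))
            for r, b in zip(red, blue)]

# Synthesize fringe samples with reflectivity 0.8 and known phases
phis = [-1.2, -0.5, 0.0, 0.5, 1.2]
refl = 0.8
blue = [refl * 255.0 for _ in phis]
red = [refl * 255.0 * (1.0 + math.sin(p)) / 2.0 for p in phis]
```

    As the abstract notes, an arcsine decode is only unambiguous within half a fringe period, which is where the binarized blue channel comes in to resolve discontinuities in the real method.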

  15. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    Science.gov (United States)

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled, unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion-blurred, provided that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal imposes no additional constraints and therefore allows a market-wide implementation of applications that require estimating the three positional degrees of freedom of an object.

  16. Dual color single particle tracking via nanobodies

    International Nuclear Information System (INIS)

    Albrecht, David; Winterflood, Christian M; Ewers, Helge

    2015-01-01

    Single particle tracking is a powerful tool to investigate the function of biological molecules by following their motion in space. However, the simultaneous tracking of two different species of molecules is still difficult to realize without compromising the length or density of trajectories, the localization accuracy or the simplicity of the assay. Here, we demonstrate a simple dual color single particle tracking assay using small, bright, high-affinity labeling via nanobodies of accessible targets with widely available instrumentation. We furthermore apply a ratiometric step-size analysis method to visualize differences in apparent membrane viscosity. (paper)

  17. The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory.

    Science.gov (United States)

    Grubert, Anna; Carlisle, Nancy B; Eimer, Martin

    2016-12-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.

  18. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    Science.gov (United States)

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.

  19. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    Science.gov (United States)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; hide

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  20. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    Science.gov (United States)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors towards the single-photon counting regime and the camera acquisition system towards real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and tries to answer the stringent demands of these new nano-biophotonics imaging techniques. The electron bombarded CMOS (ebCMOS) device has the potential to meet this challenge, thanks to the linear gain of the accelerating high voltage of the photocathode, the possible ultra-fast frame rate of CMOS sensors, and its single-photon sensitivity. We produced a camera system based on a 640-kpixel ebCMOS together with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  1. A natural-color mapping for single-band night-time image based on FPGA

    Science.gov (United States)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on an FPGA can transfer the color of a reference image to the single-band night-time image; the result is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm on an FPGA. First, the image is transformed using histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features in the luminance channel.
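    The core of such feature matching is adjusting a band's first- and second-order statistics to those of the reference. A minimal sketch of that luminance-statistics matching step (an illustrative helper, not the FPGA pipeline; the reference mean and standard deviation are assumed precomputed, as the abstract describes):

```python
import numpy as np

def match_luminance_stats(src, ref_mean, ref_std):
    """Shift and scale a single-band image so its mean and standard
    deviation match precomputed reference statistics."""
    s_mean, s_std = src.mean(), src.std()
    # Standardize to zero mean / unit std, then map onto the
    # reference statistics; the guard avoids division by zero.
    return (src - s_mean) / max(s_std, 1e-12) * ref_std + ref_mean
```

On hardware this reduces to one multiply and one add per pixel once the two statistics are known, which is what makes it attractive for an FPGA implementation.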

  2. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    Science.gov (United States)

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors matter most.

  3. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. Final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt unit inside the tank vapor space upon loss of purge pressure, and to confirm that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  4. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that achieves simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and camera calibration takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  5. Multi-color pyrometry imaging system and method of operating the same

    Science.gov (United States)

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
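    The claim describes sequentially imaging two wavelength bands; the classical reason for doing so is ratio (two-color) pyrometry, where a gray-body assumption makes the unknown emissivity cancel in the band ratio. A textbook sketch under Wien's approximation, not the patented system (band wavelengths and the inversion formula are standard, the helper names are illustrative):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    """Wien-approximation spectral radiance at wavelength lam (m),
    temperature temp (K), in arbitrary units (emissivity omitted)."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(i1, i2, lam1, lam2):
    """Recover temperature from intensities i1, i2 measured in two
    narrow bands centered at lam1, lam2. Assumes a gray body, so the
    emissivity divides out of the ratio i1/i2."""
    ln_r = math.log(i1 / i2)
    # Invert ln(i1/i2) = 5*ln(lam2/lam1) - (C2/T)*(1/lam1 - 1/lam2)
    return C2 * (1 / lam1 - 1 / lam2) / (5 * math.log(lam2 / lam1) - ln_r)
```

Imaging both bands through the same aperture, as the patent describes, keeps the two intensities pixel-registered so the ratio can be formed per pixel.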

  6. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    One of the technologies that is evolving today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose an appropriate camera based on their criteria. Users may rely on several aids to help them choose, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras using the SAW (Simple Additive Weighting) method, in order to make the decision process more effective and efficient. The system is expected to give recommendations for a camera appropriate to the user's needs and criteria based on cost, resolution, features, ISO, and sensor. The system was implemented using PHP and MySQL. Based on the results of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users choose an appropriate DSLR camera in accordance with their needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that the system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
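    The SAW method named above is simple enough to sketch: each criterion is normalized (benefit criteria as x/max, cost criteria as min/x), and alternatives are ranked by their weighted sum. A minimal illustration with hypothetical camera data, not the paper's actual criteria values:

```python
def saw_rank(alternatives, weights, is_cost):
    """Simple Additive Weighting.

    alternatives: dict mapping name -> list of criterion values.
    weights: one weight per criterion (summing to 1 by convention).
    is_cost: per-criterion flag; True = lower is better.
    Returns (name, score) pairs, best first.
    """
    scores = {}
    for j in range(len(weights)):
        col = [vals[j] for vals in alternatives.values()]
        hi, lo = max(col), min(col)
        for name, vals in alternatives.items():
            x = vals[j]
            # Cost criteria: min/x; benefit criteria: x/max.
            norm = lo / x if is_cost[j] else x / hi
            scores[name] = scores.get(name, 0.0) + weights[j] * norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, with two criteria (price as a cost, resolution as a benefit) weighted equally, a cheaper camera with slightly lower resolution can outrank a more expensive one.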

  7. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy of detecting smaller and deeply seated lesions, which otherwise may not be detected in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation as triggered by the ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended especially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software, which allows total simultaneous acquisition and processing capabilities at the same operator's terminal. Video film and color printers are also provided. Together

  8. Multi-color single particle tracking with quantum dots.

    Directory of Open Access Journals (Sweden)

    Eva C Arnspang

    Quantum dots (QDs) have long promised to revolutionize fluorescence detection to include even applications requiring simultaneous multi-species detection at single-molecule sensitivity. Despite the early promise, the unique optical properties of QDs have not yet been fully exploited in, e.g., multiplex single-molecule sensitivity applications such as single particle tracking (SPT). In order to fully optimize single-molecule multiplex applications with QDs, we have in this work performed a comprehensive quantitative investigation of the fluorescence intensities, fluorescence intensity fluctuations, and hydrodynamic radii of eight types of commercially available water-soluble QDs. In this study, we show that the fluorescence intensity of CdSe core QDs increases as the emission of the QDs shifts towards the red, but that hybrid CdSe/CdTe core QDs are less bright than the furthest red-shifted CdSe QDs. We further show that there is only a small size advantage in using blue-shifted QDs in biological applications because of the additional size of the water-stabilizing surface coat. Extending previous work, we finally also show that parallel four-color multicolor SPT (MC-SPT) with QDs is possible at an image acquisition rate of at least 25 Hz. We demonstrate the technique by measuring the lateral dynamics of a lipid, biotin-cap-DPPE, in the cellular plasma membrane of live cells using four different colors of QDs (QD565, QD605, QD655, and QD705) as labels.

  9. Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.

    Science.gov (United States)

    Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro

    2016-01-01

    At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.

  10. Single-exposure color digital holography

    Science.gov (United States)

    Feng, Shaotong; Wang, Yanhui; Zhu, Zhuqing; Nie, Shouping

    2010-11-01

    In this paper, we report a method for color image reconstruction that records only one single multi-wavelength hologram. In the recording process, three lasers of different wavelengths emitting in the red, green, and blue regions illuminate the object, and the object diffraction fields arrive at the hologram plane simultaneously. Three reference beams with different spatial angles interfere with the corresponding object diffraction fields on the hologram plane, respectively. Finally, a series of sub-holograms is incoherently overlapped on the CCD and recorded as a multi-wavelength hologram. Angular division multiplexing is applied to the reference beams so that the spatial spectra of the multiple recordings are separated in the Fourier plane. In the reconstruction process, the multi-wavelength hologram is Fourier transformed, and in the Fourier plane the spatial spectra of the different wavelengths are separated and can easily be extracted by frequency filtering. The extracted spectra are used to reconstruct the corresponding monochromatic complex amplitudes, which are then synthesized to reconstruct the color image. As a single-exposure recording technique, it is convenient for applications in real-time image processing. However, the quality of the reconstructed images is affected by speckle noise; improving image quality requires further research.
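    The frequency-filtering step described above can be sketched with a discrete Fourier transform: each wavelength's angled reference beam shifts its spectrum to a distinct carrier frequency, so masking a region around that carrier and inverse-transforming isolates that channel's complex amplitude. An illustrative NumPy helper, not the authors' code (the carrier location and mask radius are assumed known from the recording geometry):

```python
import numpy as np

def extract_order(hologram, center, radius):
    """Isolate one wavelength's spectral island from an angularly
    multiplexed hologram.

    hologram: real 2D array; center: (row, col) of the carrier in the
    fftshift-ed spectrum; radius: mask radius in frequency pixels.
    Returns the complex amplitude carried near that frequency.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.ogrid[:spectrum.shape[0], :spectrum.shape[1]]
    # Circular mask around the chosen carrier frequency.
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
```

Repeating this for each of the three carriers yields the three monochromatic complex amplitudes that are then propagated and combined into the color image.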

  11. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer perfect correction. In this paper, we propose a new post-capture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  12. Biomimetic plasmonic color generated by the single-layer coaxial honeycomb nanostructure arrays

    Science.gov (United States)

    Zhao, Jiancun; Gao, Bo; Li, Haoyong; Yu, Xiaochang; Yang, Xiaoming; Yu, Yiting

    2017-07-01

    We propose a periodic coaxial honeycomb nanostructure array patterned in a silver film to realize plasmonic structural color, inspired by natural honeybee hives. The spectral characteristics of the structure with varying geometrical parameters are investigated using a finite-difference time-domain method, and the corresponding colors are derived by calculating the XYZ tristimulus values from the transmission spectra. The study demonstrates that the suggested structure, with only a single layer, has high transmission, a narrow full-width at half-maximum, and wide color tunability through changes in the geometrical parameters. The plasmonic colors thus realized possess high brightness and saturation, as well as a wide color gamut. In addition, the strong polarization independence makes the structure more attractive for practical applications. These results indicate that the recommended color-generating plasmonic structure has various potential applications in highly integrated optoelectronic devices, such as color filters and high-definition displays.

  13. A three-step vehicle detection framework for range estimation using a single camera

    CSIR Research Space (South Africa)

    Kanjee, R

    2015-12-01

    This paper proposes and validates a real-time on-road vehicle detection system, which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle...

  14. Multi-Color Single Particle Tracking with Quantum Dots

    DEFF Research Database (Denmark)

    Christensen, Eva Arnspang; Brewer, J. R.; Lagerholm, B. C.

    2012-01-01

    Quantum dots (QDs) have long promised to revolutionize fluorescence detection to include even applications requiring simultaneous multi-species detection at single-molecule sensitivity. Despite the early promise, the unique optical properties of QDs have not yet been fully exploited in, e.g., multiplex single-molecule sensitivity applications such as single particle tracking (SPT). In order to fully optimize single-molecule multiplex applications with QDs, we have in this work performed a comprehensive quantitative investigation of the fluorescence intensities, fluorescence intensity fluctuations, and hydrodynamic radii of commercially available water-soluble QDs. We further show that there is only a small size advantage in using blue-shifted QDs in biological applications because of the additional size of the water-stabilizing surface coat. Extending previous work, we finally also show that parallel four-color multicolor (MC)-SPT with QDs is possible at an image acquisition rate of at least 25 Hz.

  15. Color film spectral properties test experiment for target simulation

    Science.gov (United States)

    Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji

    2017-04-01

    In hardware-in-the-loop testing of an aviation spectral camera, the liquid crystal light valve and digital micro-mirror device cannot simulate the spectral characteristics of a landmark. A test system framework based on color film is proposed for testing the spectral camera, and the spectral characteristics of the color film were measured in this paper. The experimental results show that a difference exists between the landmark and film spectral curves. However, the peak of the spectral curve shifts according to the color, and the curve is similar to that of the standard color traps. Therefore, if the error between the landmark and the film is calibrated and compensated, the film can be used in hardware-in-the-loop tests of the aviation spectral camera.

  16. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  17. Single-molecule three-color FRET with both negligible spectral overlap and long observation time.

    Directory of Open Access Journals (Sweden)

    Sanghwa Lee

    Full understanding of complex biological interactions frequently requires multi-color detection capability when performing single-molecule fluorescence resonance energy transfer (FRET) experiments. Existing single-molecule three-color FRET techniques, however, suffer from severe photobleaching of Alexa 488 or its alternative dyes, and have seen limited use in kinetics studies. In this work, we developed a single-molecule three-color FRET technique based on the Cy3-Cy5-Cy7 dye trio, thus providing enhanced observation time and improved data quality. Because the absorption spectra of the three fluorophores are well separated, real-time monitoring of the three FRET efficiencies was possible by incorporating the alternating laser excitation (ALEX) technique in both confocal microscopy and total-internal-reflection fluorescence (TIRF) microscopy.

  18. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    Science.gov (United States)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  19. Physical assessment of the GE/CGR Neurocam and comparison with a single rotating gamma-camera

    International Nuclear Information System (INIS)

    Kouris, K.; Jarritt, P.H.; Costa, D.C.; Ell, P.J.

    1992-01-01

    The GE/CGR Neurocam is a triple-headed single photon emission tomography (SPET) system dedicated to multi-slice brain tomography. We have assessed its physical performance in terms of sensitivity and resolution, and its clinical efficacy in comparison with a modern single rotating gamma-camera (GE 400XCT). Using a water-filled cylinder containing Tc-99m, the tomographic volume sensitivity of the Neurocam was 30.0 and 50.7 kcps/MBq.ml.cm for the high-resolution and general-purpose collimators, respectively; the corresponding values for the single rotating camera were 7.6 and 12.8 kcps/MBq.ml.cm. Tomographic resolution was measured in air and in water. In air, the Neurocam resolution at the centre of the field-of-view is 9.0 and 10.7 mm full width at half-maximum (FWHM) with the two collimators, respectively, and is isotropic in the three orthogonal planes; the resolution of the GE 400XCT with its 13-cm radius of rotation is 10.3 and 11.7 mm, respectively. For the Neurocam with the HR collimator, the transaxial FWHM values in water were 9.7 mm at the centre and 9.5 mm radial (6.6 mm tangential) at 8 cm from the centre. The physical characteristics of the Neurocam enable the routine acquisition of brain perfusion data with Tc-99m hexamethyl-propylene amine oxime in about 14 min, yielding better image quality than a single rotating camera achieves in 40 min. (orig./HP)

  20. Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera

    International Nuclear Information System (INIS)

    Uesaka, M.; Ueda, T.; Kozawa, T.; Kobayashi, T.

    1998-01-01

    Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera is presented. The subpicosecond electron single bunch of energy 35 MeV was generated by the achromatic magnetic pulse compressor at the S-band linear accelerator of the nuclear engineering research laboratory (NERL), University of Tokyo. The electric charge per bunch is 0.5 nC, and the horizontal and vertical beam sizes are 3.3 and 5.5 mm (full width at half maximum; FWHM), respectively. The pulse shape of the electron single bunch is measured via Cherenkov radiation emitted in air by the femtosecond streak camera. Optical parameters of the optical measurement system were optimized based on extensive experiments and numerical analysis in order to achieve a subpicosecond time resolution. Using the optimized optical measurement system, the subpicosecond pulse shape, its variation for different rf phases in the accelerating tube, the jitter of the total system, and the correlation between measured streak images and calculated longitudinal phase space distributions were precisely evaluated. This measurement system is going to be utilized in several subpicosecond analyses for radiation physics and chemistry. (orig.)

  1. DETECTING LASER SPOT IN SHOOTING SIMULATOR USING AN EMBEDDED CAMERA

    OpenAIRE

    Soetedjo, Aryuanto; Mahmudi, Ali; Ibrahim Ashari, M.; Ismail Nakhoda, Yusuf

    2017-01-01

    This paper presents the application of an embedded camera system for detecting laser spot in the shooting simulator. The proposed shooting simulator uses a specific target box, where the circular pattern target is mounted. The embedded camera is installed inside the box to capture the circular pattern target and laser spot image. To localize the circular pattern automatically, two colored solid circles are painted on the target. This technique allows the simple and fast color tracking to trac...

  2. Design and initial operation of a two-color soft x-ray camera system on the Compact Toroidal Hybrid experiment

    International Nuclear Information System (INIS)

    Herfindal, J. L.; Dawson, J. D.; Ennis, D. A.; Hartwell, G. J.; Loch, S. D.; Maurer, D. A.

    2014-01-01

    A multi-camera soft x-ray diagnostic has been developed to measure the equilibrium electron temperature profile and temperature fluctuations due to magnetohydrodynamic activity on the Compact Toroidal Hybrid experiment. The diagnostic consists of three separate cameras each employing two 20-channel diode arrays that view the same plasma region through different beryllium filter thicknesses of 1.8 μm and 3.0 μm allowing electron temperature measurements between 50 eV and 200 eV. The Compact Toroidal Hybrid is a five-field period current-carrying stellarator, in which the presence of plasma current strongly modifies the rotational transform and degree of asymmetry of the equilibrium. Details of the soft x-ray emission, effects of plasma asymmetry, and impurity line radiation on the design and measurement of the two-color diagnostic are discussed. Preliminary estimates of the temperature perturbation due to sawtooth oscillations observed in these hybrid discharges are given.

  3. Color correction of projected image on color-screen for mobile beam-projector

    Science.gov (United States)

    Son, Chang-Hwan; Sung, Soo-Jin; Ha, Yeong-Ho

    2008-01-01

    With the current trend of digital convergence in mobile phones, mobile manufacturers are researching how to develop a mobile beam-projector to cope with the limitations of a small screen size and to offer a better viewing experience while watching movies or satellite broadcasting. However, mobile beam-projectors may project an image onto arbitrary surfaces, such as a colored wall or paper, rather than the white screen commonly used in an office environment. Thus, a color correction method for the projected image is proposed to achieve good image quality irrespective of the surface colors. Initially, the luminance values of the original image, transformed into the YCbCr space, are changed to compensate for the spatially nonuniform luminance distribution of the arbitrary surface, depending on the pixel values of the surface image captured by a mobile camera. Next, the chromaticity values for the surface and white-screen images are calculated using the ratio of each of the three RGB values to their sum. Their chromaticity ratios are then multiplied by the converted original image through an inverse YCbCr matrix to reduce the influence of spatially varying surface reflectance on the appearance of the projected image. By projecting the corrected image onto a textured or single-color surface, the quality of the projected image approaches that obtained on a white screen.
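
The correction can be sketched per pixel. The following is a simplification of the described method, assuming the idea reduces to per-channel chromaticity ratios (it folds the luminance and chromaticity steps into one scaling, and omits the YCbCr round trip); all RGB values are hypothetical:

```python
def chromaticity(rgb):
    """Per-channel chromaticity: each value divided by the channel sum."""
    s = sum(rgb)
    return tuple(c / s for c in rgb)

def correct_pixel(original, surface_rgb, white_rgb):
    """Scale each channel by the white-screen/surface chromaticity ratio so the
    result projected on the colored surface approximates its appearance on a
    white screen (an illustrative simplification, not the authors' full pipeline)."""
    cs = chromaticity(surface_rgb)
    cw = chromaticity(white_rgb)
    return tuple(min(255, round(o * cw[i] / cs[i])) for i, o in enumerate(original))

# A reddish wall (strong red reflectance) forces the red channel down
print(correct_pixel((120, 120, 120), surface_rgb=(200, 150, 150), white_rgb=(200, 200, 200)))
```

The red channel is attenuated and the others boosted, pre-compensating the surface tint.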

  4. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
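
The abstract does not give the derivation of the color-invariant values, but the invariance such methods rely on can be illustrated: under a narrow-band (diagonal, von Kries) camera model, an illuminant change scales each channel independently, so within-channel ratios between neighboring pixels cancel the illuminant. All values below are hypothetical:

```python
def diagonal_illuminant(rgb, gains):
    """Narrow-band sensor model: an illuminant change scales each channel
    independently (diagonal / von Kries model)."""
    return tuple(c * g for c, g in zip(rgb, gains))

def channel_ratios(p, q):
    """Within-channel ratios between two neighboring pixels; the per-channel
    illuminant gains cancel, leaving values that depend only on the surface
    reflectances -- a simple color-invariant."""
    return tuple(a / b for a, b in zip(p, q))

# Hypothetical pixel pair under a reference light and a changed illuminant
p, q = (40.0, 80.0, 20.0), (20.0, 40.0, 10.0)
gains = (2.0, 0.5, 4.0)
print(channel_ratios(p, q))
print(channel_ratios(diagonal_illuminant(p, gains), diagonal_illuminant(q, gains)))
```

Both lines print the same ratios: the invariant survives the illuminant change, which is what makes such quantities usable for matching feature points across illuminations.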

  5. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5?m) or long-wave infrared (LWIR) radiation (8-12?m). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  6. Enhancing the brightness of electrically driven single-photon sources using color centers in silicon carbide

    Science.gov (United States)

    Khramtsov, Igor A.; Vyshnevyy, Andrey A.; Fedyanin, Dmitry Yu.

    2018-03-01

    Practical applications of quantum information technologies exploiting the quantum nature of light require efficient and bright true single-photon sources which operate under ambient conditions. Currently, point defects in the crystal lattice of diamond known as color centers have taken the lead in the race for the most promising quantum system for practical non-classical light sources. This work is focused on a different quantum optoelectronic material, namely a color center in silicon carbide, and reveals the physics behind the process of single-photon emission from color centers in SiC under electrical pumping. We show that color centers in silicon carbide can be far superior to any other quantum light emitter under electrical control at room temperature. Using a comprehensive theoretical approach and rigorous numerical simulations, we demonstrate that at room temperature, the photon emission rate from a p-i-n silicon carbide single-photon emitting diode can exceed 5 Gcounts/s, which is higher than what can be achieved with electrically driven color centers in diamond or epitaxial quantum dots. These findings lay the foundation for the development of practical photonic quantum devices which can be produced in a well-developed CMOS compatible process flow.

  7. A color display device recording X ray spectra, especially intended for medical radiography

    International Nuclear Information System (INIS)

    Boulch, J.-M.

    1975-01-01

    Said invention relates to a color display recording device for X ray spectra intended for medical radiography. The video signal of the X ray camera receiving the radiation having passed through the patient is amplified and transformed into a color coding according to the energy spectrum received by the camera. In a first version, the energy spectrum from the camera gives directly an image on the color tube. In a second version the energy spectrum, after having been transformed into digital signals, is first sent into a memory, then into a computer used as a spectrum analyzer, and finally into the color display device [fr

  8. Realtime Color Stereovision Processing

    National Research Council Canada - National Science Library

    Formwalt, Bryon

    2000-01-01

    .... This research takes a step forward in real time machine vision processing. It investigates techniques for implementing a real time stereovision processing system using two miniature color cameras...

  9. Estimation of color modification in digital images by CFA pattern change.

    Science.gov (United States)

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
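
The paper's "advanced intermediate value counting method" is not detailed in the abstract; a toy one-dimensional version of the underlying cue — interpolated CFA positions tend to lie between their neighbors, so counting such pixels per sampling phase reveals the pattern — might look like this (all data hypothetical):

```python
def intermediate_count(row, phase):
    """Count pixels at the given phase (0 or 1) whose value lies between their
    two horizontal neighbors -- interpolated CFA positions tend to do so."""
    hits = 0
    for i in range(1, len(row) - 1):
        if i % 2 == phase:
            lo, hi = sorted((row[i - 1], row[i + 1]))
            if lo <= row[i] <= hi:
                hits += 1
    return hits

# Simulate a row where odd pixels were bilinearly interpolated from even ones
sampled = [10, 0, 30, 0, 22, 0, 18, 0, 40]
row = [v if i % 2 == 0 else (sampled[i - 1] + sampled[i + 1]) // 2
       for i, v in enumerate(sampled)]
# The interpolated phase scores far higher, exposing the CFA sampling phase
print(intermediate_count(row, 1), intermediate_count(row, 0))
```

When color modification shifts values between channels, the phase at which these counts peak changes, which is the kind of trace the estimation method measures.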

  10. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
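
A minimal sketch of the back end of such a pipeline (white balance, color-correction matrix, gamma encoding) for a single pixel; the identity correction matrix, the gains, and the pixel values below are hypothetical stand-ins for calibrated quantities:

```python
def white_balance(rgb, gains):
    """Per-channel gains, e.g. estimated from a gray patch."""
    return tuple(min(1.0, c * g) for c, g in zip(rgb, gains))

def color_correct(rgb, m):
    """3x3 color-correction matrix (sensor RGB -> display RGB)."""
    return tuple(sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3))

def gamma_encode(rgb, gamma=2.2):
    """Simple power-law gamma, standing in for the full transfer curve."""
    return tuple(max(0.0, c) ** (1 / gamma) for c in rgb)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # placeholder correction matrix
pixel = (0.2, 0.4, 0.1)                        # linear sensor values in [0, 1]
out = gamma_encode(color_correct(white_balance(pixel, (1.5, 1.0, 2.0)), identity))
print([round(c, 3) for c in out])
```

A full simulation would precede this with the optical convolution, Bayer sampling and interpolation stages the abstract lists.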

  11. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows for determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single image camera calibration.

  12. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

    Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence, which is vital in stereo matching. The SIFT descriptor has been proven to be better in distinctiveness and robustness than other local descriptors. However, the SIFT descriptor does not involve the color information of a feature point, which provides a powerfully distinguishable feature in matching tasks. Furthermore, in a real scene, image colors are affected by various geometric and radiometric factors, such as gamma correction and exposure. These situations are very common in stereo images. For this reason, the color recorded by a camera is not a reliable cue, and the color consistency assumption is no longer valid between stereo images in real scenes. Hence the performance of other SIFT-based stereo matching algorithms can be severely degraded under radiometric variations. In this paper, we present a new improved SIFT stereo matching algorithm that is invariant to various radiometric variations between left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ the color formation model with the parameters of lighting geometry, illuminant color and camera gamma in the SIFT descriptor. Firstly, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established. Then, we use a log-polar histogram to build three color invariance components for the SIFT descriptor, so that our improved SIFT descriptor is invariant to lighting geometry, illuminant color and camera gamma changes between left and right images. We can then match feature points between two images and use the SIFT descriptor Euclidean distance as a geometric measure in our data sets to make matching further accurate and robust. Experimental results show that our method is superior to other SIFT-based algorithms including conventional stereo matching algorithms under various
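
The linearity the method exploits can be sketched as follows, assuming the radiometric model implied above (per-channel illuminant gains followed by a camera gamma curve): in log-chromaticity coordinates, gamma becomes a common scale and the illuminant an additive offset, so the two images are related affinely. Pixel values and camera parameters below are hypothetical:

```python
from math import log

def log_chromaticity(rgb):
    """Map RGB to (log R/G, log B/G)."""
    r, g, b = rgb
    return (log(r / g), log(b / g))

def radiometric_change(rgb, gains, gamma):
    """Diagonal illuminant gains followed by a power-law camera gamma."""
    return tuple((c * k) ** gamma for c, k in zip(rgb, gains))

p = (60.0, 90.0, 30.0)               # hypothetical left-image pixel
gains, gamma = (1.4, 1.0, 0.8), 0.9  # hypothetical right-camera radiometry
u = log_chromaticity(p)
v = log_chromaticity(radiometric_change(p, gains, gamma))
# Affine relation: v_i = gamma * u_i + gamma * log(k_i / k_g)
print(abs(v[0] - (gamma * u[0] + gamma * log(gains[0] / gains[1]))) < 1e-9)
```

Because the relation is affine regardless of the (unknown) gains and gamma, descriptors built in this space remain comparable across radiometrically different views.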

  13. Perception of color emotions for single colors in red-green defective observers.

    Science.gov (United States)

    Sato, Keiko; Inoue, Takaaki

    2016-01-01

    It is estimated that inherited red-green color deficiency, which involves both the protan and deutan deficiency types, is common in men. For red-green defective observers, some reddish colors appear desaturated and brownish, unlike those seen by normal observers. Despite its prevalence, few studies have investigated the effects that red-green color deficiency has on the psychological properties of colors (color emotions). The current study investigated the influence of red-green color deficiency on the following six color emotions: cleanliness, freshness, hardness, preference, warmth, and weight. Specifically, this study aimed to: (1) reveal differences between normal and red-green defective observers in rating patterns of six color emotions; (2) examine differences in color emotions related to the three cardinal channels in human color vision; and (3) explore relationships between color emotions and color naming behavior. Thirteen men and 10 women with normal vision and 13 men who were red-green defective performed both a color naming task and an emotion rating task with 32 colors from the Berkeley Color Project (BCP). Results revealed noticeable differences in the cleanliness and hardness ratings between the normal vision observers, particularly in women, and red-green defective observers, which appeared mainly for colors in the orange to cyan range, and in the preference and warmth ratings for colors with cyan and purple hues. Similarly, naming errors also mainly occurred in the cyan colors. A regression analysis that included the three cone-contrasts (i.e., red-green, blue-yellow, and luminance) as predictors significantly accounted for variability in color emotion ratings for the red-green defective observers as much as the normal individuals. Expressly, for warmth ratings, the weight of the red-green opponent channel was significantly lower in color defective observers than in normal participants. In addition, the analyses for individual warmth ratings in

  14. Perception of color emotions for single colors in red-green defective observers

    Directory of Open Access Journals (Sweden)

    Keiko Sato

    2016-12-01

    Full Text Available It is estimated that inherited red-green color deficiency, which involves both the protan and deutan deficiency types, is common in men. For red-green defective observers, some reddish colors appear desaturated and brownish, unlike those seen by normal observers. Despite its prevalence, few studies have investigated the effects that red-green color deficiency has on the psychological properties of colors (color emotions). The current study investigated the influence of red-green color deficiency on the following six color emotions: cleanliness, freshness, hardness, preference, warmth, and weight. Specifically, this study aimed to: (1) reveal differences between normal and red-green defective observers in rating patterns of six color emotions; (2) examine differences in color emotions related to the three cardinal channels in human color vision; and (3) explore relationships between color emotions and color naming behavior. Thirteen men and 10 women with normal vision and 13 men who were red-green defective performed both a color naming task and an emotion rating task with 32 colors from the Berkeley Color Project (BCP). Results revealed noticeable differences in the cleanliness and hardness ratings between the normal vision observers, particularly in women, and red-green defective observers, which appeared mainly for colors in the orange to cyan range, and in the preference and warmth ratings for colors with cyan and purple hues. Similarly, naming errors also mainly occurred in the cyan colors. A regression analysis that included the three cone-contrasts (i.e., red-green, blue-yellow, and luminance) as predictors significantly accounted for variability in color emotion ratings for the red-green defective observers as much as the normal individuals. Expressly, for warmth ratings, the weight of the red-green opponent channel was significantly lower in color defective observers than in normal participants. In addition, the analyses for individual warmth

  15. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  16. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  17. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of human face for registration with CT images. Reliability and accuracy of the application is enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. Relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  18. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth observing instruments on the HY-1 Satellite, which will be launched in 2001, the multi-spectral CCD camera system, is developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). In a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coast zone dynamic mapping and ocean water color monitoring, covering offshore and coast zone pollution, plant cover, water color, ice, underwater terrain, suspended sediment, mudflat, soil and water vapor. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts field-of-view registration; that is, each camera scans the same region at the same moment. Each camera contains optics, a focal plane assembly, electrical circuits, installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) Offset of the central wavelength is better than 5 nm; (2) Degree of polarization is less than 0.5%; (3) Signal-to-noise ratio is about 1000; (4) Dynamic range is better than 2000:1; (5) Registration precision is better than 0.3 pixel; (6) Quantization value is 12 bit.

  19. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Science.gov (United States)

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care such as abnormal gait recognition and fall risk assessment.
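
The symmetry ratio itself and the quoted MAPE metric are straightforward to compute; a sketch with hypothetical step lengths (the projective-geometry formula for extracting the lengths from unconstrained video is the paper's contribution and is not reproduced here):

```python
def step_length_ratio(steps):
    """Symmetry ratio: mean length of one side's steps over the other's.
    Alternating steps are assumed; which side comes first is a labeling choice."""
    left, right = steps[0::2], steps[1::2]
    return (sum(left) / len(left)) / (sum(right) / len(right))

def mape(estimates, truths):
    """Mean absolute percentage error, the accuracy metric quoted above."""
    return 100 * sum(abs(e - t) / t for e, t in zip(estimates, truths)) / len(truths)

# Hypothetical step lengths in metres, alternating sides
print(round(step_length_ratio([0.62, 0.60, 0.64, 0.58, 0.63, 0.61]), 3))
# MAPE of hypothetical ratio estimates against ground truth
print(round(mape([1.02, 0.98, 1.05], [1.00, 1.00, 1.00]), 4))
```

A ratio near 1.0 indicates symmetric gait; persistent deviation is the kind of signal useful for abnormal gait recognition.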

  20. Frequency division multiplexed multi-color fluorescence microscope system

    Science.gov (United States)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only obtain a gray-scale image of an object, while multicolor imaging technology can obtain color information to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the sampling rate of the CCD, etc. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), modulating the excitation lights and demodulating the fluorescence signal in the frequency domain. The method uses periodic functions with different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then passed through an inverse discrete Fourier transform. After applying this process to the signals from all pixels, monochrome images of each color on the image plane can be obtained, and the multicolor image is also acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, after data processing we obtained a two-color fluorescence dynamic video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method can obtain the image signals of each color at the same time, and the color video's frame
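
The modulation/demodulation scheme can be sketched for a single pixel: each excitation light is amplitude-modulated at its own frequency, and a DFT of the recorded grayscale intensity separates the two dyes' contributions. Frequencies and amplitudes below are hypothetical:

```python
from math import cos, sin, pi, sqrt

N = 200                    # samples recorded per pixel
f1, f2 = 10, 35            # modulation frequencies, in cycles per record
a1, a2 = 0.7, 0.3          # hypothetical fluorescence strengths of the two dyes

# Per-pixel intensity: each excitation is amplitude-modulated at its own frequency
signal = [a1 * (1 + cos(2 * pi * f1 * n / N)) +
          a2 * (1 + cos(2 * pi * f2 * n / N)) for n in range(N)]

def dft_amplitude(x, k):
    """Amplitude of DFT bin k of a real signal (direct evaluation of one bin)."""
    re = sum(v * cos(2 * pi * k * n / len(x)) for n, v in enumerate(x))
    im = sum(v * sin(2 * pi * k * n / len(x)) for n, v in enumerate(x))
    return 2 * sqrt(re * re + im * im) / len(x)

# Demodulation recovers each dye's contribution from the single grayscale record
print(round(dft_amplitude(signal, f1), 3), round(dft_amplitude(signal, f2), 3))
```

Because the bins are orthogonal over a whole record, both colors are recovered simultaneously from one detector, which is the advantage the abstract claims over sequential acquisition.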

  1. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Thomas

    2016-06-01

    Full Text Available Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was until recently no system available for a large span of spectrum, i.e., visible and near-infrared acquisition. This article presents a prototype camera that captures seven visible bands and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.

  2. Real-time stop sign detection and distance estimation using a single camera

    Science.gov (United States)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the rapid development of driver assistance systems has made driving much easier than before. To increase safety onboard, a method is proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier is applied to identify the sign in the image, while distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.00% at 20 m, with a maximum error of 5% in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
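    The pinhole-imaging distance estimate follows from similar triangles: distance = focal length x real sign width / apparent width in pixels. A minimal sketch, assuming a known real-world STOP sign width and a calibrated focal length in pixels (the numbers below are illustrative, not from the paper):

```python
def estimate_distance(focal_px, real_width_m, width_px):
    """Pinhole model: distance = f * W / w, from similar triangles."""
    return focal_px * real_width_m / width_px

# Example: 800 px focal length, 0.6 m wide sign spanning 48 px in the image.
print(estimate_distance(800.0, 0.6, 48.0))  # 10.0 metres
```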

  3. Characteristics of a single photon emission tomography system with a wide field gamma camera

    International Nuclear Information System (INIS)

    Mathonnat, F.; Soussaline, F.; Todd-Pokropek, A.E.; Kellershohn, C.

    1979-01-01

    This text summarizes a study describing the imaging possibilities of a single photon emission tomography system composed of a conventional wide-field gamma camera connected to a computer. The encouraging results achieved on the various phantoms studied suggest a significant development of this technique in clinical work in Nuclear Medicine Departments [fr

  4. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay) without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  5. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.; Elshenawy, M. M.; Takahashi, Masateru; Whitman, B. H.; Walter, N. G.; Hamdan, S. M.

    2011-01-01

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation

  6. COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Dominique Lafon

    2011-05-01

    Full Text Available The goal of this article is to present the specific capabilities and limitations of the use of color digital images in a characterization process. The whole process is investigated, from the acquisition of digital color images to the analysis of the information relevant to various applications in the field of material characterization. A digital color image can be considered as a matrix of pixels with values expressed in a vector space (commonly a 3-dimensional space) whose specificity, compared to grey-scale images, is to ensure a coding and a representation of the output image (visualisation, printing) that fits the human visual reality. In a characterization process, it is interesting to regard color image attributes as a set of visual aspect measurements on a material surface. Color measurement systems (spectrocolorimeters, colorimeters and radiometers) and cameras use the same type of light detectors: most of them use Charge Coupled Device sensors. The difference between the two types of color data acquisition systems is that color measurement systems provide global information about the observed surface (the average aspect of the surface): the color texture is not taken into account. Thus, it seems interesting to use imaging systems as measuring instruments for the quantitative characterization of the color texture.

  7. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. Among the new, enhanced and sophisticated technologies for fuel services are two shielded color camera systems for use under water and for close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a mm) or cracks and for analyzing surface appearances on irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with this kind of movie camera. (orig.)

  8. Coloration of chromium-doped yttrium aluminum garnet single-crystal fibers using a divalent codopant

    International Nuclear Information System (INIS)

    Tissue, B.M.; Jia, W.; Lu, L.; Yen, W.M.

    1991-01-01

    We have grown single-crystal fibers of Cr:YAG and Cr,Ca:YAG under oxidizing and reducing conditions by the laser-heated-pedestal-growth method. The Cr:YAG crystals were light green due to Cr3+ in octahedral sites, while the Cr,Ca:YAG crystals were brown. The presence of the divalent codopant was the dominant factor determining the coloration in these single-crystal fibers, while the oxidizing power of the growth atmosphere had little effect on the coloration. The Cr,Ca:YAG had a broad absorption band centered at 1.03 μm and fluoresced from 1.1 to 1.7 μm, with a room-temperature lifetime of 3.5 μs. The presence of both chromium and a divalent codopant was necessary to create the optically active center which produces the near-infrared emission. Doping with only Ca2+ created a different coloration, with absorption in the blue and ultraviolet. The coloration in the Cr,Ca:YAG is attributed to Cr4+ and is produced in as-grown crystals without irradiation or annealing, as has been necessary in previous work

  9. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
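    A linear least-squares fit over color tile measurements illustrates the idea of mapping one camera's response onto a reference; this affine model is a simple stand-in for the neural-network mapping the abstract describes, and all names and data here are illustrative.

```python
import numpy as np

def fit_color_mapping(src_rgb, ref_rgb):
    """Least-squares affine mapping from one camera's tile colors to a
    reference camera's. src_rgb, ref_rgb: (N, 3) arrays of matched tiles."""
    A = np.column_stack([src_rgb, np.ones(len(src_rgb))])  # (N, 4) design matrix
    M, *_ = np.linalg.lstsq(A, ref_rgb, rcond=None)        # (4, 3) mapping
    return M

def apply_color_mapping(M, rgb):
    """Map an (N, 3) array of colors through the fitted transform."""
    return np.column_stack([rgb, np.ones(len(rgb))]) @ M
```

    After fitting on the tile chart, every frame from the swapped-in camera is passed through `apply_color_mapping` before the unchanged processing pipeline.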

  10. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences are captured with the camera array; the deviation between images is estimated using derivative optical flow based on the color gradient, and the images are aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the deviation between images, and applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
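    The fusion step can be sketched as a weighted merge of an aligned exposure stack into a radiance map. This is a simplified stand-in for the paper's weighting function: it assumes a linear camera response (the true method uses the inverse camera response function and deviation maps) and uses a hat weight that suppresses under- and over-exposed samples.

```python
import numpy as np

def fuse_hdr(images, exposures):
    """Merge an aligned stack of 8-bit exposures into a radiance map.

    Assumes a linear camera response for simplicity; the hat-shaped weight
    peaks at mid-gray, so saturated and dark pixels contribute little.
    """
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        z = img.astype(float) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight, peak at mid-gray
        acc += w * z / t                  # linear response: radiance = z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```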

  11. Payload topography camera of Chang'e-3

    International Nuclear Information System (INIS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-01-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and finished taking pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application. (paper)

  12. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    Science.gov (United States)

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial cameras and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  13. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  14. Color image guided depth image super resolution using fusion filter

    Science.gov (United States)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm which uses an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images, both numerically and visually.
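    The joint bilateral filter at the core of such guided upsampling weights each depth sample by spatial closeness and by similarity in the guide image, so depth edges snap to color edges. The sketch below is a plain (unoptimized) joint bilateral filter on a grayscale guide, not the paper's exact fusion filter; parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=1.5, sigma_r=10.0):
    """Refine a depth map using the edges of a grayscale guide image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    dpad = np.pad(depth.astype(float), radius, mode='edge')
    gpad = np.pad(guide.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            dwin = dpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = gpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: similarity in the guide, not in the depth itself.
            range_w = np.exp(-((gwin - gpad[i + radius, j + radius]) ** 2)
                             / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[i, j] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```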

  15. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is presented.

  16. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    Science.gov (United States)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g. to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are an interesting focal species for the analyses as they are common throughout Finland; in our camera images they often appear in small quantities among the dominant species. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network of the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and with field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for
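    The green color fraction tracked by such camera networks is the green chromatic coordinate, G/(R+G+B), averaged over a region of interest; the red fraction is computed analogously. A minimal sketch (the function name and region-of-interest handling are illustrative):

```python
import numpy as np

def green_fraction(rgb_roi):
    """Mean green chromatic coordinate G/(R+G+B) over an RGB region of
    interest, the standard index used to track spring green-up."""
    r, g, b = (rgb_roi[..., k].astype(float) for k in range(3))
    total = r + g + b
    gcc = np.where(total > 0, g / np.maximum(total, 1e-9), 0.0)
    return float(gcc.mean())

# A pure-green patch gives 1.0; a neutral gray patch gives 1/3.
green_patch = np.zeros((4, 4, 3), dtype=np.uint8)
green_patch[..., 1] = 200
print(round(green_fraction(green_patch), 3))  # 1.0
```

    A season start date is then estimated from the day the index's time series crosses a threshold of its springtime rise.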

  17. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized in applications from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% and up to 99.36% accuracy under different types of spoofing attacks.

  18. A simple and inexpensive high resolution color ratiometric planar optode imaging approach: application to oxygen and pH sensing

    DEFF Research Database (Denmark)

    Larsen, M.; Borisov, S. M.; Grunwald, B.

    2011-01-01

    A simple, high resolution color ratiometric planar optode imaging approach is presented. The approach is simple and inexpensive yet versatile, and can be used to study the two-dimensional distribution and dynamics of a range of analytes. The imaging approach utilizes the inbuilt color filter of standard commercial digital single lens reflex cameras to simultaneously record different colors (red, green, and blue) of luminophore emission light using only one excitation light source. Using the ratio between the intensities of the different colors recorded in a single image, analyte concentrations can be calculated. The robustness of the approach is documented by obtaining high resolution data of O2 and pH distributions in marine sediments using easily synthesizable sensors. The sensors rely on platinum(II) octaethylporphyrin (PtOEP) and lipophilic 8-hydroxy-1,3,6-pyrenetrisulfonic acid trisodium (HPTS...
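    For an oxygen optode, the per-pixel color ratio is typically converted to concentration through a Stern-Volmer type relation, R0/R = 1 + Ksv·[O2]. The sketch below illustrates that conversion only; the quenching constant and ratio values are invented, and real sensors usually need a modified (two-site) Stern-Volmer fit.

```python
def oxygen_from_ratio(ratio, ratio_zero, ksv=0.3):
    """Invert the simple Stern-Volmer relation R0/R = 1 + Ksv*[O2].

    ratio      -- measured red/green intensity ratio at a pixel
    ratio_zero -- ratio at zero oxygen (calibration point)
    ksv        -- hypothetical quenching constant (per concentration unit)
    """
    return (ratio_zero / ratio - 1.0) / ksv

print(round(oxygen_from_ratio(1.0, 1.6, ksv=0.3), 6))  # 2.0
```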

  19. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process between images acquired from different cameras. This work applies to an environment monitored by cameras. The application is important to modern security systems, in which identifying a target's presence in the environment expands the capacity for action by security agents in real time and provides important parameters, such as the localization of each target. We used the targets' interest points and colors as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.

  20. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
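    The second-order transformation used to align the color and NIR band images can be sketched as a least-squares fit of a quadratic 2D polynomial over matched control points. The function names and point counts below are illustrative, not from the paper:

```python
import numpy as np

def fit_second_order(src, dst):
    """Fit a second-order 2D polynomial mapping src -> dst control points
    by least squares. src, dst: (N, 2) arrays, N >= 6."""
    x, y = src[:, 0], src[:, 1]
    # Quadratic basis: 1, x, y, xy, x^2, y^2 (6 coefficients per axis).
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

def apply_transform(coef_x, coef_y, pts):
    """Warp (N, 2) points through the fitted quadratic transform."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return np.column_stack([A @ coef_x, A @ coef_y])
```

    Resampling the NIR image through this mapping brings the two bands into subpixel registration, after which the four-band image can be stacked.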

  1. Fast natural color mapping for night-time imagery

    NARCIS (Netherlands)

    Hogervorst, M.A.; Toet, A.

    2010-01-01

    We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the

  2. Spatial characterization of nanotextured surfaces by visual color imaging

    DEFF Research Database (Denmark)

    Feidenhans'l, Nikolaj Agentoft; Murthy, Swathi; Madsen, Morten H.

    2016-01-01

    We present a method using an ordinary color camera to characterize nanostructures from the visual color of the structures. The method provides a macroscale overview image from which micrometer-sized regions can be analyzed independently, hereby revealing long-range spatial variations...

  3. Structural colors of the SiO2/polyethyleneimine thin films on poly(ethylene terephthalate) substrates

    International Nuclear Information System (INIS)

    Jia, Yanrong; Zhang, Yun; Zhou, Qiubao; Fan, Qinguo; Shao, Jianzhong

    2014-01-01

    SiO2/polyethyleneimine (PEI) films with structural colors were fabricated on poly(ethylene terephthalate) (PET) substrates by an electrostatic self-assembly method. The morphology of the films was characterized by scanning electron microscopy. The results showed that no distinguishable multilayered structure was found in the SiO2/PEI films. The optical behavior of the films was investigated through color photos captured by a digital camera and color measurements with a multi-angle spectrophotometer. Different hue and brightness were observed at various viewing angles. The structural colors depended on the SiO2 particle size and the number of assembly cycles. The mechanism of the structural colors generated by the assembled films is elucidated. The morphological structures and the optical properties proved that the SiO2/PEI film fabricated on the PET substrate formed a homogeneous inorganic/organic SiO2/PEI composite layer, and that the structural colors originated from single thin-film interference. - Highlights: • SiO2/PEI thin films were electrostatically self-assembled on PET substrates. • The surface morphology and optical behavior of the films were investigated. • The structural colors varied with SiO2 particle size and the number of assembly cycles. • Different hue and lightness of the SiO2/PEI film were observed at various viewing angles. • The structural color of the SiO2/PEI film originated from single thin-film interference
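    Single thin-film interference as invoked above follows the condition 2·n·d = m·λ for constructive reflection. The sketch below lists which visible wavelengths satisfy it for a given film, assuming normal incidence and ignoring phase shifts at the interfaces; the refractive index and thickness are illustrative, not measured values from the paper.

```python
def interference_maxima(n_film, thickness_nm, lambda_min=380.0, lambda_max=780.0):
    """Visible wavelengths of constructive single thin-film interference,
    from 2*n*d = m*lambda (normal incidence, interface phase shifts ignored)."""
    maxima = []
    m = 1
    while True:
        lam = 2.0 * n_film * thickness_nm / m  # order-m maximum
        if lam < lambda_min:
            break  # higher orders fall below the visible range
        if lam <= lambda_max:
            maxima.append(round(lam, 1))
        m += 1
    return maxima

# A 300 nm film of index 1.45: 2*1.45*300 = 870 nm, so the m=2 order
# at 435 nm is the only visible maximum, giving a blue reflection.
print(interference_maxima(1.45, 300.0))  # [435.0]
```

    Changing the particle size changes the effective film thickness, shifting these maxima and hence the observed hue, consistent with the abstract's observations.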

  4. A TV camera system for digitizing single shot oscillograms at sweep rate of 0.1 ns/cm

    International Nuclear Information System (INIS)

    Kienlen, M.; Knispel, G.; Miehe, J.A.; Sipp, B.

    1976-01-01

    A TV camera digitizing system associated with a 5 GHz photocell-oscilloscope apparatus allows the digitizing of single shot oscillograms; with an oscilloscope sweep rate of 0.1 ns/cm an accuracy on time measurements of 4 ps is obtained [fr

  5. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.

    2011-11-07

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation and detection is a key experimental feature that is under continuous development. In this paper, we describe in detail the design and the construction of a sophisticated and versatile multi-color excitation and emission fluorescence instrument for studying biomolecular dynamics at the single-molecule level. The setup is novel, economical and compact, where two inverted microscopes share a laser combiner module with six individual laser sources that extend from 400 to 640 nm. Nonetheless, each microscope can independently and in a flexible manner select the combinations, sequences, and intensities of the excitation wavelengths. This high flexibility is achieved by the replacement of conventional mechanical shutters with acousto-optic tunable filter (AOTF). The use of AOTF provides major advancement by controlling the intensities, duration, and selection of up to eight different wavelengths with microsecond alternation time in a transparent and easy manner for the end user. To our knowledge this is the first time AOTF is applied to wide-field total internal reflection fluorescence (TIRF) microscopy even though it has been commonly used in multi-wavelength confocal microscopy. The laser outputs from the combiner module are coupled to the microscopes by two sets of four single-mode optic fibers in order to allow for the optimization of the TIRF angle for each wavelength independently. The emission is split into two or four spectral channels to allow for the simultaneous detection of up to four different fluorophores of wide selection and using many possible excitation and photoactivation schemes. 
We demonstrate the performance of this new setup by conducting two-color alternating excitation single-molecule fluorescence resonance energy

  6. A design of a high speed dual spectrometer by single line scan camera

    Science.gov (United States)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is in demand for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high speed spectrometer for the simultaneous capture of two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blazed grating. The two beam paths are aligned to be symmetrically incident on the blazed side and the reverse-blazed side of the reflection grating, respectively. The two diffracted beams pass through the same focusing lens and are focused on the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarizations are imaged at 1000 pixels per spectrum. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for the optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be captured simultaneously at a speed of 70,000 spectra per second. The high speed dual spectrometer can simultaneously detect two orthogonal polarizations, which is an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.

  7. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the desire to reduce control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper shows the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the individual camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are robustly mounted inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  8. Development and evaluation of a portable CZT coded aperture gamma-camera

    Energy Technology Data Exchange (ETDEWEB)

    Montemont, G.; Monnet, O.; Stanchina, S.; Maingault, L.; Verger, L. [CEA, LETI, Minatec Campus, Univ. Grenoble Alpes, 38054 Grenoble, (France); Carrel, F.; Lemaire, H.; Schoepff, V. [CEA, LIST, 91191 Gif-sur-Yvette, (France); Ferrand, G.; Lalleman, A.-S. [CEA, DAM, DIF, 91297 Arpajon, (France)

    2015-07-01

    We present the design and the evaluation of a CdZnTe (CZT) based gamma camera using a coded aperture mask. This camera, based on an 8 cm{sup 3} detection module, is small enough to be portable and battery-powered (4 kg weight and 4 W power dissipation). As the detector has spectral capabilities, the gamma camera allows isotope identification and colored imaging, by assigning one color channel to each identified isotope. As all data processing is done in real time, the user can directly observe the outcome of an acquisition and can immediately react to what he sees. We first present the architecture of the system, how the detector works, and its performance. We then focus on the imaging technique used and its strengths and limitations. Finally, results concerning sensitivity, spatial resolution, field of view and multi-isotope imaging are shown and discussed. (authors)

  9. Development and evaluation of a portable CZT coded aperture gamma-camera

    International Nuclear Information System (INIS)

    Montemont, G.; Monnet, O.; Stanchina, S.; Maingault, L.; Verger, L.; Carrel, F.; Lemaire, H.; Schoepff, V.; Ferrand, G.; Lalleman, A.-S.

    2015-01-01

    We present the design and the evaluation of a CdZnTe (CZT) based gamma camera using a coded aperture mask. This camera, based on an 8 cm³ detection module, is small enough to be portable and battery-powered (4 kg weight and 4 W power dissipation). As the detector has spectral capabilities, the gamma camera allows isotope identification and colored imaging, by assigning one color channel to each identified isotope. As all data processing is done in real time, the user can directly observe the outcome of an acquisition and can immediately react to what he sees. We first present the architecture of the system, how the detector works, and its performance. We then focus on the imaging technique used and its strengths and limitations. Finally, results concerning sensitivity, spatial resolution, field of view and multi-isotope imaging are shown and discussed. (authors)

  10. A Color-Opponency Based Biological Model for Color Constancy

    Directory of Open Access Journals (Sweden)

    Yongjie Li

    2011-05-01

    Full Text Available Color constancy is the ability of the human visual system to adaptively correct color-biased scenes under different illuminants. Most existing color constancy models are not physiologically plausible. Among the limited biological models, the great majority are Retinex and its variations, and only two or three models directly simulate the feature of color-opponency, and then only for the very earliest stages of the visual pathway, i.e., the single-opponent mechanisms at the level of retinal ganglion cells and lateral geniculate nucleus (LGN) neurons. Considering the extensive physiological evidence that both the single-opponent cells in the retina and LGN and the double-opponent neurons in primary visual cortex (V1) are building blocks for color constancy, in this study we construct a color-opponency based color constancy model by simulating the opponent fashions of both the single-opponent and double-opponent cells in a feedforward manner. As for the spatial structure of the receptive fields (RF), both the classical RF (CRF) center and the nonclassical RF (nCRF) surround are taken into account for all the cells. The proposed model was tested on several typical image databases commonly used for performance evaluation of color constancy methods, and exciting results were achieved.

  11. Color tuning in alert macaque V1 assessed with fMRI and single-unit recording shows a bias toward daylight colors.

    Science.gov (United States)

    Lafer-Sousa, Rosa; Liu, Yang O; Lafer-Sousa, Luis; Wiest, Michael C; Conway, Bevil R

    2012-05-01

    Colors defined by the two intermediate directions in color space, "orange-cyan" and "lime-magenta," elicit the same spatiotemporal average response from the two cardinal chromatic channels in the lateral geniculate nucleus (LGN). While we found LGN functional magnetic resonance imaging (fMRI) responses to these pairs of colors were statistically indistinguishable, primary visual cortex (V1) fMRI responses were stronger to orange-cyan. Moreover, linear combinations of single-cell responses to cone-isolating stimuli of V1 cone-opponent cells also yielded stronger predicted responses to orange-cyan over lime-magenta, suggesting these neurons underlie the fMRI result. These observations are consistent with the hypothesis that V1 recombines LGN signals into "higher-order" mechanisms tuned to noncardinal color directions. In light of work showing that natural images and daylight samples are biased toward orange-cyan, our findings further suggest that V1 is adapted to daylight. V1, especially double-opponent cells, may function to extract spatial information from color boundaries correlated with scene-structure cues, such as shadows lit by ambient blue sky juxtaposed with surfaces reflecting sunshine. © 2012 Optical Society of America

  12. Color design model of high color rendering index white-light LED module.

    Science.gov (United States)

    Ying, Shang-Ping; Fu, Han-Kuei; Hsieh, Hsin-Hsin; Hsieh, Kun-Yang

    2017-05-10

    The traditional white-light light-emitting diode (LED) is packaged with a single chip and a single phosphor but has a poor color rendering index (CRI). The next-generation package comprises two chips and a single phosphor, has a high CRI, and retains high luminous efficacy. This study employs two chips and two phosphors to improve the diode's color tunability with various proportions of two phosphors and various densities of phosphor in the silicone used. A color design model is established for color fine-tuning of the white-light LED module. The maximum difference between the measured and color-design-model simulated CIE 1931 color coordinates is approximately 0.0063 around a correlated color temperature (CCT) of 2500 K. This study provides a rapid method to obtain the color fine-tuning of a white-light LED module with a high CRI and luminous efficacy.
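The color fine-tuning described above rests on standard CIE 1931 additive mixing: the chromaticity of a mixture of sources is a weighted average of the component chromaticities, with weights Y/y. A minimal sketch follows; the blue-chip and phosphor coordinates are illustrative placeholders, not values from the paper:

```python
def mix_chromaticities(sources):
    """
    CIE 1931 chromaticity (x, y) of an additive mixture of light sources.
    Each source is (x, y, Y): chromaticity coordinates and luminance.
    The weight of each source is Y / y, because X + Y + Z = Y / y
    for a source with chromaticity (x, y) and luminance Y.
    """
    wsum = xacc = yacc = 0.0
    for x, y, Y in sources:
        w = Y / y
        wsum += w
        xacc += x * w
        yacc += y * w
    return xacc / wsum, yacc / wsum

# Illustrative blue chip + yellow phosphor emission (hypothetical values):
blue = (0.15, 0.05, 20.0)     # (x, y, luminance)
yellow = (0.45, 0.52, 80.0)
print(mix_chromaticities([blue, yellow]))
```

Sweeping the phosphor luminance (e.g. by varying phosphor density) moves the mixture point along the line between the two chromaticities, which is the basis of such color design models.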

  13. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  14. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  15. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  16. Two-Color Single-Photon Photoinitiation and Photoinhibition for Subdiffraction Photolithography

    Science.gov (United States)

    Scott, Timothy F.; Kowalski, Benjamin A.; Sullivan, Amy C.; Bowman, Christopher N.; McLeod, Robert R.

    2009-05-01

    Controlling and reducing the developed region initiated by photoexposure is one of the fundamental goals of optical lithography. Here, we demonstrate a two-color irradiation scheme whereby initiating species are generated by single-photon absorption at one wavelength while inhibiting species are generated by single-photon absorption at a second, independent wavelength. Co-irradiation at the second wavelength thus reduces the polymerization rate, delaying gelation of the material and facilitating enhanced spatial control over the polymerization. Appropriate overlapping of the two beams produces structures with both feature sizes and monomer conversions otherwise unobtainable with use of single- or two-photon absorption photopolymerization. Additionally, the generated inhibiting species rapidly recombine when irradiation with the second wavelength ceases, allowing for fast sequential exposures not limited by memory effects in the material and thus enabling fabrication of complex two- or three-dimensional structures.

  17. A Linear Criterion to sort Color Components in Images

    Directory of Open Access Journals (Sweden)

    Leonardo Barriga Rodriguez

    2017-01-01

    Full Text Available Color and its representation play a basic role in image analysis. Many methods benefit from a correct representation of the wavelength variations used to represent scenes captured with a camera. A wide variety of color spaces and representations is found in the specialized literature. Each one is useful in specific circumstances, and some may carry redundant color information (for instance, the RGB components are highly correlated). This work deals with the task of identifying and ranking which component from several color representations offers the most information about the scene. The approach is based on analyzing linear dependences among the color components, through the implementation of a new sorting algorithm based on entropy. The proposal is tested in several outdoor/indoor scenes under different lighting conditions. Repeatability and stability are tested in order to guarantee its use in several image analysis applications. Finally, the results of this work have been used to enhance an external algorithm that compensates for random camera vibrations.
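The core idea of ranking color components by information content can be illustrated with a simplified entropy-based sketch (histogram entropies only; the paper's full criterion also analyzes linear dependences between components):

```python
import numpy as np

def channel_entropy(channel, bins=256):
    """Shannon entropy (bits) of one color component's histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # discard empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def sort_components_by_entropy(image_rgb):
    """Rank the R, G, B components by information content, highest first."""
    names = ["R", "G", "B"]
    scores = [channel_entropy(image_rgb[..., i]) for i in range(3)]
    order = np.argsort(scores)[::-1]
    return [(names[i], float(scores[i])) for i in order]

# Synthetic example: a noisy channel carries more information than a flat one.
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[..., 0] = rng.integers(0, 256, (64, 64))    # high-entropy R
img[..., 1] = 128                               # constant G -> zero entropy
img[..., 2] = rng.integers(100, 110, (64, 64))  # low-entropy B
print(sort_components_by_entropy(img))
```

The same ranking extends to components of other color spaces (HSV, Lab, ...) by converting the image first and scoring each resulting channel.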

  18. Single camera analyses in studying pattern forming dynamics of player interactions in team sports.

    OpenAIRE

    Duarte, Ricardo; Fernandes, Orlando; Folgado, Hugo; Araújo, Duarte

    2013-01-01

    A network of patterned interactions between players characterises team ball sports. Thus, interpersonal coordination patterns are an important topic in the study of performance in such sports. A very useful method has been the study of inter-individual interactions captured by a single camera filming an extended performance area. The appropriate collection of positional data allows investigating the pattern forming dynamics emerging in different performance sub-phases of team ball sports. Thi...

  19. Regression analysis for LED color detection of visual-MIMO system

    Science.gov (United States)

    Banik, Partha Pratim; Saha, Rappy; Kim, Ki-Doo

    2018-04-01

    Color detection from a light emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color using a smartphone camera by applying regression analysis. We employ a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region and detect the LEDs using an image processing algorithm. We then apply the k-means clustering algorithm to determine the number of potential colors for feature extraction from each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. We show results for three environmental light conditions: room light, low light (560 lux), and strong light (2450 lux). We evaluate our proposed algorithm through training and test R-squared (%) values and the percentage closeness of transmitted and predicted colors, and we also report the number of distorted test data points from a distortion bar graph in the CIE 1931 color space.
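The final step, mapping camera-observed colors back to transmitted LED colors with a multivariate regression model, can be sketched as below. The channel-distortion matrix and training pairs are synthetic stand-ins, not the paper's measured data, and the k-means feature-extraction stage is omitted:

```python
import numpy as np

# Hypothetical training data: camera-observed RGB (distorted by the optical
# channel) vs. the RGB actually transmitted by each LED.
rng = np.random.default_rng(1)
true_rgb = rng.uniform(0, 255, size=(40, 3))          # transmitted colors
distortion = np.array([[0.9, 0.05, 0.0],              # assumed channel mixing
                       [0.1, 0.8,  0.1],
                       [0.0, 0.1,  0.85]])
observed = true_rgb @ distortion.T + rng.normal(0, 2.0, size=(40, 3))

# Multivariate linear regression: fit observed -> transmitted, with intercept.
X = np.hstack([observed, np.ones((len(observed), 1))])  # add bias column
W, *_ = np.linalg.lstsq(X, true_rgb, rcond=None)        # W is 4x3

def predict_color(observed_rgb):
    """Predict the transmitted LED color from a camera-observed RGB triple."""
    x = np.append(np.asarray(observed_rgb, dtype=float), 1.0)
    return x @ W

# A distorted observation should map back close to its true color.
print(np.round(predict_color(observed[0])), np.round(true_rgb[0]))
```

In the paper's pipeline the regression inputs would be the k-means cluster centers extracted per LED rather than raw pixels, but the fitting and prediction steps are analogous.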

  20. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Color constancy in Japanese animation

    Science.gov (United States)

    Ichihara, Yasuyo G.

    2006-01-01

    In this study, we measure the colors used in Japanese animation. The results, plotted in the CIE xy color space, clearly show that the color system is not a natural appearance system but an imagined, artistic appearance system. Color constancy of human vision can tell the difference in skin and hair colors between moonlight and daylight. The human brain generates a match from the memorized color of an object under daylight viewing conditions to the color of the object in different viewing conditions. For example, Japanese people always perceive the color of the Rising Sun in the Japanese flag as red, even in a different viewing condition such as under moonlight. Color images captured by a camera cannot reproduce those human perceptions. However, Japanese animation colorists have succeeded in painting the effects of color constancy not only under moonlight but also with the memory-matched colors added. They aim to create a greater impact on viewers' perceptions by using the effect of the memory-matched colors. In this paper, we propose the Imagined Japanese Animation Color System. This system in art is currently a subject of research in Japan. Its importance is that it could also provide an explanation of how the human brain perceives the same color under different viewing conditions.

  2. Single-flavor color superconductivity with color-sextet pairing

    Czech Academy of Sciences Publication Activity Database

    Brauner, Tomáš

    2005-01-01

    Roč. 55, č. 1 (2005), s. 9-16 ISSN 0011-4626 R&D Projects: GA ČR(CZ) GA202/02/0847 Keywords : color superconductivity * spontaneous symmetry breaking Subject RIV: BE - Theoretical Physics Impact factor: 0.360, year: 2005

  3. Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A; Schloss, Karen B

    2016-01-01

    Prior research has shown that non-synesthetes' color associations to classical orchestral music are strongly mediated by emotion. The present study examines similar cross-modal music-to-color associations for much better controlled musical stimuli: 64 single-line piano melodies that were generated from four basic melodies by Mozart, whose global musical parameters were manipulated in tempo (slow/fast), note-density (sparse/dense), mode (major/minor) and pitch-height (low/high). Participants first chose the three colors (from 37) that they judged to be most consistent with (and, later, the three that were most inconsistent with) the music they were hearing. They later rated each melody and each color for the strength of its association along four emotional dimensions: happy/sad, agitated/calm, angry/not-angry and strong/weak. The cross-modal choices showed that faster music in the major mode was associated with lighter, more saturated, yellower (warmer) colors than slower music in the minor mode. These results replicate and extend those of Palmer et al. (2013, Proc. Natl Acad. Sci. 110, 8836-8841) with more precisely controlled musical stimuli. Further results provided strong evidence for emotional mediation of these cross-modal associations, in that the emotional ratings of the melodies were very highly correlated with the emotional associations of the colors chosen as going best/worst with the melodies (r = 0.92, 0.85, 0.82 and 0.70 for happy/sad, strong/weak, angry/not-angry and agitated/calm, respectively). The results are discussed in terms of common emotional associations forming a cross-modal bridge between highly disparate sensory inputs.

  4. Selecting the right digital camera for telemedicine-choice for 2009.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  5. Simple single-emitting layer hybrid white organic light emitting with high color stability

    Science.gov (United States)

    Nguyen, C.; Lu, Z. H.

    2017-10-01

    Simultaneously achieving high efficiency and color quality at luminance levels required for solid-state lighting has been difficult for white organic light emitting diodes (OLEDs). Single-emitting-layer (SEL) white OLEDs, in particular, exhibit a significant tradeoff between efficiency and color stability. Furthermore, despite the simplicity of SEL white OLEDs being their main advantage, the reported device structures are often complicated by the use of multiple blocking layers. In this paper, we report a highly simplified three-layered white OLED that achieves a low turn-on voltage of 2.7 V, an external quantum efficiency of 18.9% and a power efficiency of 30 lm/W at 1000 cd/m². This simple white OLED also shows good color quality, with a color rendering index of 75, CIE coordinates (0.42, 0.46), and little color shifting at high luminance. The device consists of a SEL sandwiched between a hole transport layer and an electron transport layer. The SEL comprises a thermally activated delayed fluorescence molecule with dual functions, as a blue emitter and as a host for other lower-energy emitters. The improved color stability and efficiency in such a simple device structure are explained by the elimination of significant energy barriers at the various organic-organic interfaces present in traditional devices with multiple blocking layers.

  6. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, so color information must be obtained from another sensor for visual interpretation. Cameras can provide texture, color, and other information about the corresponding objects. Points colored from the corresponding pixels in digital images can be used to generate a color point cloud, which aids the visualization, classification and modeling of the point cloud. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), so the principles and processes for generating a color point cloud differ between systems. The most prominent feature of panoramic images is the 360-degree field of view in the horizontal direction, capturing as much image information around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equations relating points in panoramic images to points in laser point clouds. The fusion of panoramic image and laser point cloud follows the collinearity of three points: the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. The experimental results show that the proposed algorithm and formulae in this paper are correct.
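The collinearity-based correspondence between a laser point and a panoramic pixel can be sketched for an ideal equirectangular panorama centered on the camera (a simplified model; the paper's omnidirectional multi-camera geometry adds per-camera calibration and offsets):

```python
import numpy as np

def point_to_panorama_pixel(point_xyz, img_w, img_h):
    """
    Project a 3D point (in the panoramic camera frame) onto an
    equirectangular panorama: the camera center, the image point on the
    unit sphere, and the object point are collinear by construction.
    """
    x, y, z = point_xyz
    theta = np.arctan2(y, x)             # azimuth in (-pi, pi]
    phi = np.arctan2(z, np.hypot(x, y))  # elevation in (-pi/2, pi/2)
    u = (theta / (2 * np.pi) + 0.5) * img_w  # column
    v = (0.5 - phi / np.pi) * img_h          # row (top of image = up)
    return int(u) % img_w, min(int(v), img_h - 1)

def colorize(points, panorama):
    """Attach the panorama color at each point's projected pixel."""
    h, w = panorama.shape[:2]
    colored = []
    for p in points:
        u, v = point_to_panorama_pixel(p, w, h)
        colored.append((*p, *(int(c) for c in panorama[v, u])))
    return colored

pano = np.zeros((180, 360, 3), dtype=np.uint8)
pano[:, :, 0] = 255  # all-red panorama for demonstration
print(colorize([(5.0, 0.0, 0.0)], pano))  # point straight ahead gets red
```

A real pipeline first transforms laser points from the world frame into the panoramic camera frame using the MMS trajectory before applying the projection.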

  7. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. 
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  8. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    International Nuclear Information System (INIS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-01-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data needs to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to the personal computer.

  9. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    Science.gov (United States)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data needs to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to the personal computer.

  10. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  11. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup, where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views, we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (the target) in new frames. A color pattern update scheme is used to further improve the efficiency of the tracking when the object's appearance changes due to its motion across the cameras' fields of view. An evaluation of our approach is presented with results on the PETS2007 dataset.
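The block search at the core of such trackers can be sketched as a plain sum-of-squared-differences (SSD) template search over a window around the last known position (grayscale and exhaustive here for brevity; the paper additionally restricts candidates via the epipolar constraint and updates the color pattern over time):

```python
import numpy as np

def block_search(frame, template, center, radius):
    """Exhaustive block search: best SSD match for `template` inside a
    square window of +/- `radius` pixels around `center` (row, col)."""
    th, tw = template.shape
    best_ssd, best_pos = None, None
    cy, cx = center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue  # candidate block falls outside the frame
            patch = frame[y:y + th, x:x + tw].astype(float)
            ssd = np.sum((patch - template) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

# Synthetic frame with a known 8x8 target embedded at row 12, col 17.
rng = np.random.default_rng(2)
frame = rng.integers(0, 50, (64, 64)).astype(float)
template = rng.integers(200, 255, (8, 8)).astype(float)
frame[12:20, 17:25] = template
print(block_search(frame, template, center=(10, 15), radius=5))  # (12, 17)
```

In the multi-camera setting, the epipolar line of the target's position in one camera bounds the rows/columns searched in the other, shrinking this window considerably.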

  12. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  13. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this end, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact, a good fit for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
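The iTOF principle behind such a sensor can be illustrated with the standard four-bucket phase-stepping scheme, in which four samples of the modulated return signal yield the round-trip phase and hence the depth. This is a minimal numpy sketch; the sample ordering, phase convention, and signal model are generic assumptions, not details taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(c0, c1, c2, c3, f_mod):
    """Depth from four phase-stepped samples (0/90/180/270 degrees)
    of a sinusoidally modulated signal at frequency f_mod."""
    phase = np.arctan2(c1 - c3, c0 - c2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

def synth_samples(depth, f_mod, amp=1.0, offset=2.0):
    """Synthesize the four samples for a target at a given depth."""
    phi = 4 * np.pi * f_mod * depth / C  # round-trip phase delay
    steps = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    return offset + amp * np.cos(steps - phi)
```

At 25 MHz modulation (the maximum quoted above) the unambiguous range is C/(2·f_mod), roughly 6 m; beyond that the recovered phase wraps.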

  14. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    Science.gov (United States)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  15. Spectrally resolved measurements of the terahertz beam profile generated from a two-color air plasma

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Zalkovskij, Maksim; Strikwerda, Andrew

    2014-01-01

Using a THz camera and THz bandpass filters, we measure the frequency-resolved beam profile emitted from a two-color air plasma. We observe a frequency-independent emission angle from the plasma.

  16. HDR imaging and color constancy: two sides of the same coin?

    Science.gov (United States)

    McCann, John J.

    2011-01-01

At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similar principal roles in both HDR imaging and color constancy?

  17. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.

  18. A Simple Setup to Perform 3D Locomotion Tracking in Zebrafish by Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Gilbert Audira

    2018-02-01

Full Text Available Generally, the measurement of three-dimensional (3D) swimming behavior in zebrafish relies on commercial software or requires sophisticated scripts, and depends on more than two cameras to capture the video. Here, we establish a simple and economical apparatus to detect 3D locomotion in zebrafish, which involves a single-camera capture system that records zebrafish movement in a specially designed water tank with a mirror tilted at 45 degrees. The recorded videos are analyzed using idTracker, while spatial positions are calibrated by ImageJ software and 3D trajectories are plotted by Origin 9.1 software. This simple setup allows scientists to track the 3D swimming behavior of multiple zebrafish at low cost and with precise spatial positioning, showing great potential for fish behavioral research in the future.
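The geometric idea behind the 45-degree mirror can be sketched very simply: the direct view supplies the x and y coordinates, while the mirror image of the same fish acts as a side view supplying the depth coordinate. The helper below is a hypothetical illustration of that combination, not the paper's idTracker/ImageJ/Origin pipeline; the pixel-to-cm scale factors are assumed to come from a prior calibration.

```python
def locate_3d(top_xy, mirror_xz, px_per_cm_top, px_per_cm_mirror):
    """Combine the direct (top) view and the 45-degree mirror (side) view
    of the same fish into one 3D position, in cm.

    top_xy:    (x, y) pixel position in the direct view
    mirror_xz: (x, z) pixel position in the mirror view; the second
               coordinate encodes depth because the mirror is at 45 degrees
    """
    x = top_xy[0] / px_per_cm_top
    y = top_xy[1] / px_per_cm_top
    z = mirror_xz[1] / px_per_cm_mirror
    return (x, y, z)
```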

  19. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  20. Control of the Rendition Wavelength Shifts of Color Lippmann Holograms Recorded in Single-layer panchromatic Silver-halide Emulsion

    Institute of Scientific and Technical Information of China (English)

    ZHU Jianhua; GUO Lurong; LI Zuoyou; LIU Zhenqing

    2000-01-01

Russian PFG-03C panchromatic ultra-high resolution silver-halide emulsion is regarded as the most successful material for the fabrication of color reflection holograms. But the lack of established and reliable processing sequences prevents its practical application in business and everyday life. Although much attention has been paid to the processing of PFG-03C color reflection holograms, color desaturation is still a problem. The article describes a new processing sequence for color holograms recorded in PFG-03C plates, which is demonstrated experimentally to control the rendition wavelength shifts and improve the color desaturation effectively. The rendition spectra of red-green-blue (RGB) single-line reflection holographic gratings, and of the color reflection hologram as well, are given in this paper.

  1. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its color or intensity statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
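The paper's exact learning step is not reproduced here, but the core idea of remapping an input image so its statistics match a training set can be sketched with simple per-channel mean/variance matching, a common baseline for this kind of color transfer. This is an illustrative stand-in for the authors' method, not their algorithm.

```python
import numpy as np

def match_color_statistics(src, ref):
    """Remap each channel of `src` so its mean and standard deviation
    match those of `ref`. Both are H x W x C float arrays."""
    src = np.asarray(src, dtype=np.float64)
    ref = np.asarray(ref, dtype=np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Standardize the source channel, then rescale to the reference stats
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-12) * r_sd + r_mu
    return out
```

In practice the `ref` statistics would be aggregated over the whole training dataset rather than a single reference image.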

  2. Recent progress in color image intensifier

    International Nuclear Information System (INIS)

    Nittoh, K.

    2010-01-01

A multi-color scintillator based high-sensitivity, wide dynamic range, and long-life X-ray image intensifier (Ultimage™) has been developed. A europium-activated Y2O2S scintillator, emitting red, green, and blue wavelength photons of different intensities, is utilized as the output fluorescent screen of the intensifier. By combining this image intensifier with a suitably tuned, highly sensitive color CCD camera, the sensitivity of the red color component reaches six times that of the conventional image intensifier. Simultaneous emission of a moderate green color and a weak blue color covers different sensitivity regions. This widens the dynamic range by nearly two orders of magnitude. With this image intensifier, it is possible to image complex objects containing widely varying X-ray transmissions, from paper, water, or plastic to heavy metals, at one time. This color scintillator based image intensifier is widely used in X-ray inspections in various fields. (author)

  3. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection target and degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis of the urine. Color is a three-dimensional psychophysical variable, while reflectance is a one-dimensional variable; therefore, a color difference estimation method for urine tests can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, allowing an accurate diagnosis. A digital camera can easily take an image of the urine test paper and thus be used to carry out urine biochemical analysis conveniently. In the experiment, color images of urine test paper were taken by a popular color digital camera and saved on a computer running simple color space conversion (RGB → XYZ → L*a*b*) and calculation software. Test samples are graded according to intelligent detection of quantitative color. The images taken each time were saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses that involve color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and at home, so its application prospects are extensive.
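The RGB → XYZ → L*a*b* conversion and the resulting color difference (the Euclidean distance ΔE*ab in L*a*b* space) can be sketched for sRGB inputs under a D65 white point. The paper does not specify its exact transform constants, so the standard sRGB/D65 values are assumed here.

```python
import numpy as np

# Standard sRGB (linear) to XYZ matrix, D65 white point
M_SRGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                          [0.2126729, 0.7151522, 0.0721750],
                          [0.0193339, 0.1191920, 0.9503041]])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    """Convert one sRGB triplet (components in [0, 1]) to CIE L*a*b*."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma, then go to XYZ
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M_SRGB_TO_XYZ @ lin
    # XYZ to L*a*b* with the standard piecewise cube-root function
    t = xyz / WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(rgb1, rgb2):
    """CIE76 color difference between two sRGB triplets."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))
```

Grading a test strip then amounts to computing `delta_e` between the measured patch color and each reference color, and taking the closest match.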

  4. Forward looking anomaly detection via fusion of infrared and color imagery

    Science.gov (United States)

    Stone, K.; Keller, J. M.; Popescu, M.; Havens, T. C.; Ho, K. C.

    2010-04-01

    This paper develops algorithms for the detection of interesting and abnormal objects in color and infrared imagery taken from cameras mounted on a moving vehicle, observing a fixed scene. The primary purpose of detection is to cue a human-in-the-loop detection system. Algorithms for direct detection and change detection are investigated, as well as fusion of the two. Both methods use temporal information to reduce the number of false alarms. The direct detection algorithm uses image self-similarity computed between local neighborhoods to determine interesting, or unique, parts of an image. Neighborhood similarity is computed using Euclidean distance in CIELAB color space for the color imagery, and Euclidean distance between grey levels in the infrared imagery. The change detection algorithm uses the affine scale-invariant feature transform (ASIFT) to transform multiple background frames into the current image space. Each transformed image is then compared to the current image, and the multiple outputs are fused to produce a single difference image. Changes in lighting and contrast between the background run and the current run are adjusted for in both color and infrared imagery. Frame-to-frame motion is modeled using a perspective transformation, the parameters of which are computed using scale-invariant feature transform (SIFT) keypoint correspondences. This information is used to perform temporal accumulation of single frame detections for both the direct detection and change detection algorithms. Performance of the proposed algorithms is evaluated on multiple lanes from a data collection at a US Army test site.
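The perspective model fitted from SIFT keypoint correspondences, as described above, can be sketched with a direct linear transform (DLT) over four or more point pairs. This is a minimal numpy version without the outlier rejection (e.g. RANSAC) that a real system working from raw SIFT matches would need.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: 3x3 perspective matrix mapping src -> dst.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)

def apply_homography(h, pt):
    """Map a 2D point through homography h (homogeneous normalization)."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```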

  5. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
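The "simple analytic geometry" behind single-camera 3D estimation can be sketched as a ray-plane intersection: with a calibrated intrinsic matrix K, a clicked pixel defines a ray from the camera center, and intersecting that ray with a known plane (e.g. the floor the target rests on) fixes the 3D point. The paper does not give its exact formulation; the plane parameterization below is an assumption for illustration.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the plane
    n . X = d, all expressed in the camera frame (camera at origin)."""
    # Back-project the pixel into a ray direction
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Solve n . (t * ray) = d for the ray parameter t
    t = plane_d / (plane_n @ ray)
    return t * ray
```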

  6. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  7. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  8. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
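One simple way to combine heterogeneous quality and speed metrics into a single benchmarking score is a weighted sum of target-normalized values. The scheme below is a hypothetical illustration of the idea; the paper's actual metrics, weights, and normalization are not reproduced here, and "higher is better" is assumed for every metric.

```python
def benchmark_score(metrics, weights, targets):
    """Combine normalized metrics into one score in [0, 100].

    metrics: measured values, e.g. {"resolution_mp": 12.0, "shots_per_s": 5.0}
    weights: relative importance of each metric
    targets: value at which a metric earns full credit (capped at 1.0)
    """
    total_weight = sum(weights.values())
    score = 0.0
    for name, value in metrics.items():
        score += weights[name] * min(value / targets[name], 1.0)
    return 100.0 * score / total_weight
```

Capping each normalized term at 1.0 keeps one outstanding metric (say, a very fast burst rate) from masking weaknesses elsewhere.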

  9. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
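The linear blend skinning step can be sketched directly from its definition, v'_i = sum_j w_ij (R_j v_i + t_j): each vertex is transformed by every bone and the results are blended by the per-vertex skinning weights. This is a minimal dense implementation for illustration, not the authors' optimized solver.

```python
import numpy as np

def lbs(vertices, weights, rotations, translations):
    """Linear blend skinning.

    vertices:     (N, 3) rest-pose vertex positions
    weights:      (N, J) skinning weights, rows summing to 1
    rotations:    list of J (3, 3) bone rotation matrices
    translations: list of J (3,) bone translation vectors
    Returns the (N, 3) deformed vertex positions.
    """
    out = np.zeros_like(vertices, dtype=np.float64)
    for j, (R, t) in enumerate(zip(rotations, translations)):
        transformed = vertices @ R.T + t       # apply bone j to every vertex
        out += weights[:, j:j + 1] * transformed  # blend by vertex weights
    return out
```

With the weights fixed offline, as in the abstract, each frame only has to solve for the per-bone transforms, which is what makes the online stage cheap.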

  10. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  11. Color optimization of single emissive white OLEDs via energy transfer between RGB fluorescent dopants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Nam Ho; Kim, You-Hyun; Yoon, Ju-An; Lee, Sang Youn [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Ryu, Dae Hyun [Department of Information Technology, Hansei University, Gunpo (Korea, Republic of); Wood, Richard [Department of Engineering Physics, McMaster University, Hamilton, Ontario, Canada L8S 4L7 (Canada); Moon, C.-B. [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Kim, Woo Young, E-mail: wykim@hoseo.edu [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Department of Engineering Physics, McMaster University, Hamilton, Ontario, Canada L8S 4L7 (Canada)

    2013-11-15

The electroluminescent characteristics of white organic light-emitting diodes (WOLEDs) were investigated, including a single emitting layer (SEL) with an ADN host and the dopants BCzVBi, C545T, and DCJTB for blue, green, and red emission, respectively. The structure of the high-efficiency WOLED device was ITO/NPB(700 Å)/ADN:BCzVBi-7%:C545T-0.05%:DCJTB-0.1%(300 Å)/Bphen(300 Å)/Liq(20 Å)/Al(1200 Å) for mixing the three primary colors. Luminous efficiency was 9.08 cd/A at 3.5 V, and the Commission Internationale de l'Eclairage (CIE x,y) coordinates of the white emission were measured as (0.320, 0.338) at 8 V, while the simulated CIE x,y coordinates estimated from each dopant's PL spectrum were (0.336, 0.324). -- Highlights: • This paper observes a single-emissive-layer white OLED using fluorescent dopants. • Electrical and optical properties are analyzed. • Color stability of the white OLED is confirmed for a new planar light source.

  12. Color optimization of single emissive white OLEDs via energy transfer between RGB fluorescent dopants

    International Nuclear Information System (INIS)

    Kim, Nam Ho; Kim, You-Hyun; Yoon, Ju-An; Lee, Sang Youn; Ryu, Dae Hyun; Wood, Richard; Moon, C.-B.; Kim, Woo Young

    2013-01-01

The electroluminescent characteristics of white organic light-emitting diodes (WOLEDs) were investigated, including a single emitting layer (SEL) with an ADN host and the dopants BCzVBi, C545T, and DCJTB for blue, green, and red emission, respectively. The structure of the high-efficiency WOLED device was ITO/NPB(700 Å)/ADN:BCzVBi-7%:C545T-0.05%:DCJTB-0.1%(300 Å)/Bphen(300 Å)/Liq(20 Å)/Al(1200 Å) for mixing the three primary colors. Luminous efficiency was 9.08 cd/A at 3.5 V, and the Commission Internationale de l'Eclairage (CIE x,y) coordinates of the white emission were measured as (0.320, 0.338) at 8 V, while the simulated CIE x,y coordinates estimated from each dopant's PL spectrum were (0.336, 0.324). -- Highlights: • This paper observes a single-emissive-layer white OLED using fluorescent dopants. • Electrical and optical properties are analyzed. • Color stability of the white OLED is confirmed for a new planar light source.

  13. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between the imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)
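Scalar diffraction propagation of the kind used in such wavefront simulations can be sketched with the angular spectrum method: transform the field, multiply by the free-space transfer function, and transform back. This is a generic FFT-based propagator, not the paper's specific plenoptic model; evanescent components are simply discarded, and square sampling is assumed.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 (N x N, sample spacing dx)
    over a distance z using the angular spectrum method."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u0) * h)
```

For band-limited inputs the transfer function is a pure phase factor, so the propagated field conserves energy, a handy sanity check for such simulations.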

  14. Reading color barcodes using visual snakes.

    Energy Technology Data Exchange (ETDEWEB)

    Schaub, Hanspeter (ORION International Technologies, Albuquerque, NM)

    2004-05-01

Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3-of-9 method. Using this method, the numeric bar codes reveal whether the target is right-side up or upside down.

  15. Design of smartphone-based spectrometer to assess fresh meat color

    Science.gov (United States)

    Jung, Youngkee; Kim, Hyun-Wook; Kim, Yuan H. Brad; Bae, Euiwon

    2017-02-01

Based on its integrated camera, a new optical attachment, and its inherent computing power, we propose an instrument design and validation that can potentially provide an objective and accurate method to determine surface meat color change and myoglobin redox forms using a smartphone-based spectrometer. The system is designed as a reflection spectrometer which mimics the conventional spectrometry commonly used for meat color assessment. We use a 3D printing technique to make an optical cradle which holds all of the optical components for light collection, collimation, and dispersion, and a suitable chamber. Light reflected from a sample enters a pinhole and is subsequently collimated by a convex lens. A diffraction grating spreads the wavelengths over the camera's pixels to produce a high-resolution spectrum. Pixel positions in the smartphone image are calibrated to wavelength values using three laser pointers with different wavelengths: 405, 532, and 650 nm. Using an in-house app, the camera images are converted into a spectrum in the visible wavelength range based on the external light source. A controlled experiment simulating the refrigeration and shelving of meat was conducted, and the results showed the capability to accurately measure the color change in a quantitative, spectroscopic manner. We expect that this technology can be adapted to any smartphone and used to conduct a field-deployable color spectrum assay as a more practical application tool for various food sectors.
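The pixel-to-wavelength calibration from the three laser lines can be sketched as a low-order polynomial fit mapping pixel column to wavelength. The pixel positions below are hypothetical; in the real instrument they would be the measured columns where the 405, 532, and 650 nm lines fall on the sensor.

```python
import numpy as np

# Hypothetical pixel columns at which the three laser lines appear
pixel_pos = np.array([212.0, 648.0, 1105.0])
wavelengths = np.array([405.0, 532.0, 650.0])  # nm

# A quadratic passes exactly through the three calibration points and
# absorbs mild nonlinearity in the grating dispersion
coeffs = np.polyfit(pixel_pos, wavelengths, deg=2)

def pixel_to_nm(px):
    """Map a pixel column (scalar or array) to wavelength in nm."""
    return np.polyval(coeffs, px)
```

Every column of a captured spectrum image can then be labeled with `pixel_to_nm` to produce the reflectance-vs-wavelength curve used for the color assay.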

  16. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can be validated as well. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also contains considerations of how well current measurement methods can be used with presence capture cameras.

  17. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    International Nuclear Information System (INIS)

    Winkler, A W; Zagar, B G

    2013-01-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)

  18. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    Science.gov (United States)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  19. Color constancy in dermatoscopy with smartphone

    Science.gov (United States)

    Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan

    2017-12-01

    The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired and a model between the unknown device-dependent RGB and a device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These results are in the range of human eye detection capability (color error ≈ 4) and of video and printing industry standards (where the color error is expected to be between 5 and 6). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to patients.
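
    The color errors quoted are CIELAB color differences; the classic CIE76 metric is just the Euclidean distance between two Lab triplets. A minimal sketch (the patch values below are invented for illustration):

```python
def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

# Hypothetical measured vs. reference color patch (L*, a*, b*)
measured = (52.0, 41.0, 28.0)
reference = (50.0, 44.0, 30.0)
err = delta_e_cie76(measured, reference)  # ~4.12, near the visibility threshold
```

    A ΔE near 4 sits at the edge of what a human observer can detect, which is why the study's best error of 3.94 is considered sufficient.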

  20. Image quality evaluation of medical color and monochrome displays using an imaging colorimeter

    Science.gov (United States)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    The purpose of this presentation is to demonstrate means of examining the accuracy of image quality, with respect to the MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum), of color and monochrome displays. Past indications were that color displays could negatively affect clinical performance compared to monochrome displays. Colorimeters such as the PM-1423 are now available that have higher sensitivity and color accuracy than traditional CCD cameras; Reference (1) was not based on measurements made with a colorimeter. This paper focuses on measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays, made with a colorimeter; we will subsequently submit the data to an ROC study for presentation at a future SPIE conference. Specifically, the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared between the two medical displays at different digital driving levels (DDL). Measurement of color image quality was done with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging.

  1. Robust Color Choice for Small-size League RoboCup Competition

    Directory of Open Access Journals (Sweden)

    Qiang Zhou

    2004-10-01

    In this paper, the problem of choosing a set of maximally separable colors in a given environment is discussed. The proposed method models the generation of theoretically best colors, the printing of these colors on a color printer, and the imaging of the printed colors through a camera as an integrated framework. Thus, it provides a feasible way to generate the best practically separable colors for a given environment with a given set of equipment. A real-world application (robust color choice for the small-size league RoboCup competition) is used as an example to illustrate the proposed method. Experimental results on this example show the competitiveness of the colors learned by our algorithm compared to the colors adopted by other teams, which were chosen via an extensive trial-and-error process using standard color papers.

  2. Superimpose of images by appending two simple video amplifier circuits to color television

    International Nuclear Information System (INIS)

    Kojima, Kazuhiko; Hiraki, Tatsunosuke; Koshida, Kichiro; Maekawa, Ryuichi; Hisada, Kinichi.

    1979-01-01

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information that cannot be found in any single image, for example the degree of overlap and anatomical landmarks, can often be found. In this paper, the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images each in a different color, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits have been added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. The system is a very simple and economical color display, and it enhances the degree of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy. (author)

  3. Superimpose of images by appending two simple video amplifier circuits to color television

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R [Kanazawa Univ. (Japan). School of Paramedicine; Hisada, K

    1979-09-01

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information that cannot be found in any single image, for example the degree of overlap and anatomical landmarks, can often be found. In this paper, the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images each in a different color, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits have been added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. The system is a very simple and economical color display, and it enhances the degree of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy.

  4. Natural Colorants: Food Colorants from Natural Sources.

    Science.gov (United States)

    Sigurdson, Gregory T; Tang, Peipei; Giusti, M Mónica

    2017-02-28

    The color of food is often associated with the flavor, safety, and nutritional value of the product. Synthetic food colorants have been used because of their high stability and low cost. However, consumer perception and demand have driven the replacement of synthetic colorants with naturally derived alternatives. Natural pigment applications can be limited by lower stability, weaker tinctorial strength, interactions with food ingredients, and inability to match desired hues. Therefore, no single naturally derived colorant can serve as a universal alternative for a specified synthetic colorant in all applications. This review summarizes major environmental and biological sources for natural colorants as well as nature-identical counterparts. Chemical characteristics of prevalent pigments, including anthocyanins, carotenoids, betalains, and chlorophylls, are described. The possible applications and hues (warm, cool, and achromatic) of currently used natural pigments, such as anthocyanins as red and blue colorants, and possible future alternatives, such as purple violacein and red pyranoanthocyanins, are also discussed.

  5. Ranking TEM cameras by their response to electron shot noise

    International Nuclear Information System (INIS)

    Grob, Patricia; Bean, Derek; Typke, Dieter; Li, Xueming; Nogales, Eva; Glaeser, Robert M.

    2013-01-01

    We demonstrate two ways in which the Fourier transforms of images that consist solely of randomly distributed electrons (shot noise) can be used to compare the relative performance of different electronic cameras. The principle is to determine how closely the Fourier transform of a given image does, or does not, approach that of an image produced by an ideal camera, i.e. one for which single-electron events are modeled as Kronecker delta functions located at the same pixels where the electrons were incident on the camera. Experimentally, the average width of the single-electron response is characterized by fitting a single Lorentzian function to the azimuthally averaged amplitude of the Fourier transform. The reciprocal of the spatial frequency at which the Lorentzian function falls to a value of 0.5 provides an estimate of the number of pixels at which the corresponding line-spread function falls to a value of 1/e. In addition, the excess noise due to stochastic variations in the magnitude of the response of the camera (for single-electron events) is characterized by the amount to which the appropriately normalized power spectrum does, or does not, exceed the total number of electrons in the image. These simple measurements provide an easy way to evaluate the relative performance of different cameras. To illustrate this point we present data for three different types of scintillator-coupled camera plus a silicon-pixel (direct detection) camera. Highlights: Fourier amplitude spectra of noise are well fitted by a single Lorentzian. This measures how closely, or not, the response approaches the single-pixel ideal. Noise in the Fourier amplitudes is (1−π/4) times the shot noise power spectrum. Finite variance in the single-electron responses adds to the output noise. This excess noise may be equal to or greater than shot noise itself.
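
    The width measurement described above can be sketched as a least-squares fit of a unit-height Lorentzian to the azimuthally averaged Fourier amplitude, followed by reading off the frequency at which it falls to 0.5. A brute-force illustration on synthetic data (a real analysis would fit measured spectra):

```python
def lorentzian(f, f_half):
    """Unit-height Lorentzian that falls to 0.5 at f = f_half."""
    return 1.0 / (1.0 + (f / f_half) ** 2)

def fit_half_fall_frequency(freqs, amps, grid):
    """Grid-search least-squares fit of the half-fall frequency."""
    return min(grid, key=lambda w: sum((lorentzian(f, w) - a) ** 2
                                       for f, a in zip(freqs, amps)))

# Synthetic azimuthally averaged amplitudes with a half-fall at 0.15 cycles/pixel
freqs = [0.01 * k for k in range(1, 50)]
amps = [lorentzian(f, 0.15) for f in freqs]
f_half = fit_half_fall_frequency(freqs, amps, [0.005 * k for k in range(1, 100)])
width_px = 1.0 / f_half  # pixels at which the line-spread function falls to 1/e
```
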

  6. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  7. Common aperture multispectral spotter camera: Spectro XR

    Science.gov (United States)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XRTM is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer/illuminator, and a laser spot tracker. A rigid structure, vibration damping, and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's feature list includes a multi-target video tracker, precise boresight, a strap-on IMU, an embedded moving map, a geodetic calculation suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR, and MWIR channels. Both the EO and SWIR bands have dual fields of view and 3 spectral filters each. Several variants of focal plane array format are supported. The common aperture design facilitates superior DRI performance in EO and SWIR in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as a see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full-HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  8. Kaleido: Visualizing Big Brain Data with Automatic Color Assignment for Single-Neuron Images.

    Science.gov (United States)

    Wang, Ting-Yuan; Chen, Nan-Yow; He, Guan-Wei; Wang, Guo-Tzau; Shih, Chi-Tin; Chiang, Ann-Shyn

    2018-03-03

    Effective 3D visualization is essential for connectomics analysis, where the number of neural images easily reaches over tens of thousands. A formidable challenge is to simultaneously visualize a large number of distinguishable single-neuron images, with reasonable processing time and memory for file management and 3D rendering. In the present study, we proposed an algorithm named "Kaleido" that can visualize at least ten thousand single neurons from the Drosophila brain using only a fraction of the memory traditionally required, without increasing computing time. Adding more brain neurons increases memory use only nominally. Importantly, Kaleido maximizes the color contrast between neighboring neurons so that individual neurons can be easily distinguished. Colors can also be assigned to neurons based on biological relevance, such as gene expression, neurotransmitters, and/or developmental history. For cross-lab examination, the identity of every neuron is retrievable from the displayed image. To demonstrate the effectiveness and tractability of the method, we applied Kaleido to visualize the 10,000 Drosophila brain neurons obtained from the FlyCircuit database ( http://www.flycircuit.tw/modules.php?name=kaleido ). Thus, Kaleido visualization requires only a reasonable amount of computer memory for the manual examination of big connectomics data.

  9. Contactless physiological signals extraction based on skin color magnification

    Science.gov (United States)

    Suh, Kun Ha; Lee, Eui Chul

    2017-11-01

    Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow driven by cardiac activity causes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes with digital cameras. However, it is difficult to obtain clear physiological signals from such changes because they are subtle and corrupted by noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals of improved quality from skin-color videos recorded with a remote RGB camera. The results showed that our skin color magnification method reveals hidden physiological components in the time-series signal remarkably well. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal is synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before applying additional post-filtering techniques to improve accuracy in image-based physiological signal extraction.

  10. Single vs. dual color fire detection systems: operational tradeoffs

    Science.gov (United States)

    Danino, Meir; Danan, Yossef; Sinvani, Moshe

    2017-10-01

    In an attempt to provide reasonable fire plume detection, multinational cooperation and significant capital have been invested in the development of two major infrared (IR) based fire detection alternatives: single-color IR (SCIR) and dual-color IR (DCIR). The false alarm rate was expected to be high, not only as a result of real heat sources but mainly due to natural IR clutter, especially solar reflection clutter. SCIR uses state-of-the-art technology and sophisticated algorithms to filter out threats from clutter. DCIR, on the other hand, aims to use an additional spectral band measurement (acting as a guard) to allow the implementation of a simpler and more robust approach to the same task. In this paper we present the basics of the SCIR and DCIR architectures and the main differences between them. In addition, we present the results of a thorough study conducted to learn about the added value of the additional data available from the second spectral band. Here we consider the two CO2 bands, 4-5 micron and 2.5-3 micron, as well as an off-peak band (guard). The findings of this study also apply to missile warning system (MWS) efficacy in terms of operational value. We also present a new tunable-filter approach for such sensors.

  11. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smartphone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera that makes it easy to control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employs an Arduino Yun as the core processor and is remote-controlled by a tablet running the Android operating system. In addition, the robot is fitted with a three-axis robotic arm for grasping. Both the real-time control signals and the video are transmitted via Wi-Fi. We show that, with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
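
    As an illustration of the kind of analytic geometry involved (not the authors' exact procedure), a clicked pixel can be back-projected through a pinhole camera model and intersected with a known ground plane; all intrinsics and the camera height below are assumed values:

```python
def click_to_ground_point(u, v, fx, fy, cx, cy, cam_height):
    """Intersect the viewing ray of pixel (u, v) with the ground plane.

    Camera frame: x right, y down, z forward; ground plane at y = cam_height.
    """
    # Ray direction through the pixel under the pinhole model
    dx, dy, dz = (u - cx) / fx, (v - cy) / fy, 1.0
    if dy <= 0:
        raise ValueError("pixel ray does not hit the ground")
    t = cam_height / dy          # scale so the ray reaches y = cam_height
    return (t * dx, cam_height, t * dz)

# Illustrative intrinsics: 640x480 image, f = 500 px, camera 0.3 m above the floor
x, y, z = click_to_ground_point(400, 350, 500.0, 500.0, 320.0, 240.0, 0.3)
```

    A click below the horizon line thus yields a unique 3D point on the floor, which is enough to position the robot or aim its arm.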

  12. Characterization of a digital camera as an absolute tristimulus colorimeter

    Science.gov (United States)

    Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual

    2003-01-01

    An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs) which allows them to be used as tele-colorimeters with CIE-XYZ color output in cd/m2. The spectral characterization consists of calculating the color-matching functions from previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute CIE-XYZ tristimulus values (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, in the first stage, a gray balance is applied to the RGB digital data to convert them into relative RGB colorimetric values. In the second stage, a luminance-adaptation algorithm as a function of lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the color analysis accuracy of the DSC, both in a raw state and with corrections from a linear color correction model, has been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
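
    The linear color correction model mentioned above boils down to a 3x3 matrix mapping gray-balanced RGB to XYZ. As a toy illustration (not the authors' profile), the matrix can be solved exactly from three training patches; in practice one would least-squares fit over all ColorChecker patches:

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, b):
    """Solve the 3x3 system A x = b by Cramer's rule."""
    d = det3(A)
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        x.append(det3(Aj) / d)
    return x

def fit_rgb_to_xyz(rgb_patches, xyz_patches):
    """One matrix row per XYZ channel, solved from three training patches."""
    return [solve3([list(p) for p in rgb_patches],
                   [xyz[c] for xyz in xyz_patches]) for c in range(3)]

# Illustrative (made-up) patch data: gray-balanced RGB and measured XYZ triplets
rgb = [(0.8, 0.1, 0.1), (0.1, 0.7, 0.2), (0.2, 0.1, 0.9)]
xyz = [(0.35, 0.20, 0.03), (0.30, 0.55, 0.10), (0.20, 0.10, 0.85)]
M = fit_rgb_to_xyz(rgb, xyz)  # 3x3 matrix mapping RGB -> XYZ
```
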

  13. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  14. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke in video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as its color, motion, and growth properties, are employed. For efficient smoke detection on the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required by algorithms that operate in the spatial domain. In the proposed algorithm, the DCT inter-transformation technique is used to increase detection accuracy without an inverse DCT operation. In the proposed scheme, candidate smoke regions are first estimated using the motion and color properties of smoke; next, noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are further analyzed over time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method is obtained, with false negative and false positive error rates of approximately 4% and 2%, respectively.
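
    The color property used for candidate smoke regions is typically a "grayish pixel" test: a small spread between channels and mid-range brightness. A toy spatial-domain sketch of such a rule (the thresholds are invented, and the paper itself operates on DCT coefficients rather than decoded pixels):

```python
def is_smoke_colored(r, g, b, max_spread=20, lo=80, hi=220):
    """Grayish-pixel test: channels close together, mid-range brightness."""
    spread = max(r, g, b) - min(r, g, b)
    intensity = (r + g + b) / 3.0
    return spread <= max_spread and lo <= intensity <= hi

# Keep only the pixels whose color is consistent with smoke
pixels = {(0, 0): (150, 152, 149),   # grayish: plausible smoke
          (0, 1): (200, 40, 30)}     # saturated red: not smoke
candidates = [pos for pos, px in pixels.items() if is_smoke_colored(*px)]
```

    The surviving pixels would then be grouped into regions and checked for motion and growth over time, as the abstract describes.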

  15. Event-Based Color Segmentation With a High Dynamic Range Sensor

    Directory of Open Access Journals (Sweden)

    Alexandre Marcireau

    2018-04-01

    This paper introduces a color asynchronous neuromorphic event-based camera and a methodology for processing the color output of the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm leveraging this information. It is designed to be computationally cheap, thus showing how low-level processing benefits from asynchronous acquisition and high-temporal-resolution data. The resulting color segmentation and tracking performance is assessed on an indoor controlled scene and two outdoor uncontrolled scenes. The tracking's mean error relative to ground truth for objects in the outdoor scenes ranges from two to twenty pixels.

  16. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  17. Environmental Effects on Measurement Uncertainties of Time-of-Flight Cameras

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Aanæs, Henrik; Larsen, Rasmus

    2007-01-01

    In this paper the effect the environment has on the SwissRanger SR3000 Time-Of-Flight camera is investigated. The accuracy of this camera is highly affected by the scene it is pointed at: its reflective properties, color, and gloss. The complexity of the scene also has considerable effects on the accuracy, to mention a few: the angle of the objects to the emitted light and the scattering effects of near objects. In this paper a general overview of known inaccuracy factors of this kind is given, followed by experiments illustrating additional uncertainty factors. Specifically we give a better...

  18. Single-color, in situ photolithography marking of individual CdTe/ZnTe quantum dots containing a single Mn{sup 2+} ion

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, K.; Malinowski, F. K.; Gałkowski, K.; Jakubczyk, T.; Kossacki, P.; Pacuski, W.; Suffczyński, J., E-mail: Jan.Suffczynski@fuw.edu.pl [Institute of Experimental Physics, Faculty of Physics, University of Warsaw, Pasteura 5 St., PL-02-093 Warsaw (Poland)

    2015-01-05

    A simple, single-color method for permanently marking the position of individual self-assembled semiconductor Quantum Dots (QDs) at cryogenic temperatures is reported. The method combines in situ photolithography with standard micro-photoluminescence spectroscopy. Its utility is proven by a systematic magneto-optical study of a single CdTe/ZnTe QD containing a Mn{sup 2+} ion, where a magnetic field of up to 10 T in two orthogonal configurations, Faraday and Voigt, is applied to the same QD. The presented approach can be applied to a wide range of solid-state nanoemitters.

  19. Color enhancement in multispectral image of human skin

    Science.gov (United States)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by multispectral image capture. Since color enhancement in medical color images is effective for distinguishing a lesion from normal tissue, we apply a new color enhancement technique that uses multispectral images to enhance the features contained in a certain spectral band without changing the average color distribution of the original image. In this method, to keep the average color distribution, the KL transform is applied to the spectral data and only the high-order KL coefficients are amplified in the enhancement. Multispectral images of the human skin of a bruised arm were captured by a 16-band multispectral camera, and the proposed color enhancement was applied. The resultant images were compared with color images reproduced assuming the CIE D65 illuminant (obtained by a natural color reproduction technique). As a result, the proposed technique successfully visualizes faint bruised lesions that are almost invisible in natural color images. The proposed technique will provide a support tool for diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsores, and so on.
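
    The enhancement step can be sketched with NumPy: project zero-mean spectra onto KL (principal) components, amplify only the coefficients beyond the first few, and reconstruct. Because the amplified coefficients are zero-mean, the average color distribution is preserved by construction. The band count and gain below are illustrative:

```python
import numpy as np

def kl_enhance(spectra, keep=1, gain=3.0):
    """Amplify KL components beyond the first `keep`, preserving the mean."""
    mean = spectra.mean(axis=0)
    x = spectra - mean
    _, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    vecs = vecs[:, ::-1]               # descending eigenvalue order
    coeffs = x @ vecs                  # KL coefficients per sample
    coeffs[:, keep:] *= gain           # boost only the high-order components
    return coeffs @ vecs.T + mean      # reconstruct; the mean is unchanged

rng = np.random.default_rng(0)
spectra = rng.normal(0.5, 0.1, size=(200, 16))   # 200 pixels, 16 spectral bands
enhanced = kl_enhance(spectra)
```
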

  20. Tomographic Particle Image Velocimetry using Smartphones and Colored Shadows

    KAUST Repository

    Aguirre-Pablo, Andres A.

    2017-06-12

    We demonstrate the viability of using four low-cost smartphone cameras to perform tomographic PIV. We use colored shadows to imprint two or three different time steps on the same image. The back-lighting is accomplished with three sets of differently colored pulsed LEDs. Each set of red, green, and blue LEDs is shone onto a diffuser screen facing one of the cameras. We thereby record the RGB-colored shadows of opaque suspended particles, rather than the conventionally used scattered light. We subsequently separate the RGB color channels to represent the separate times, with preprocessing to minimize noise and cross-talk. We use commercially available Tomo-PIV software for the calibration, 3-D particle reconstruction, and particle-field correlations, to obtain all three velocity components in a volume. Acceleration can also be estimated thanks to the triple-pulse illumination. Our test flow is a vortex ring produced by forcing flow through a circular orifice using a flexible membrane driven by a pressurized air pulse. Our system is compared to a commercial stereoscopic PIV system for error estimation. We believe this proof-of-concept experiment will make the technique available to educators, industry, and scientists for a fraction of the hardware cost of traditional Tomo-PIV.

  1. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images are a chicken-and-egg problem, since it is not trivial to achieve both goals simultaneously. Hence, we have developed an iterative framework in which the two processes can boost each other. First, we transform the input color images to log-chromaticity color space, in which a linear relationship can be established when constructing a joint pdf of the transformed left and right color images. From this joint pdf, we estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost that combines Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondences for stereo image pairs that undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which in turn boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
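
    A rough illustration of why log-chromaticity helps: a channel-wise radiometric change I' = a·I^γ becomes affine after the logarithm, so corresponding pixels in the two views end up linearly related. A minimal version, using log-ratios against the green channel (the paper's exact normalization may differ):

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Map RGB to a 2-D log-chromaticity space.

    rgb : (H, W, 3) float array with values in (0, 1].
    Returns (H, W, 2): log(R/G) and log(B/G). Per-channel gain
    changes between cameras shift these values by a constant,
    i.e. become linear offsets.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lr = np.log(r + eps) - np.log(g + eps)
    lb = np.log(b + eps) - np.log(g + eps)
    return np.stack([lr, lb], axis=-1)
```

    Doubling the red gain of a camera, for example, shifts log(R/G) by log 2 everywhere while leaving log(B/G) untouched, which is exactly the kind of relation a joint pdf can expose as a line.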

  2. Two-color single-photon emission from InAs quantum dots: toward logic information management using quantum light.

    Science.gov (United States)

    Rivas, David; Muñoz-Matutano, Guillermo; Canet-Ferrer, Josep; García-Calzada, Raúl; Trevisi, Giovanna; Seravalli, Luca; Frigeri, Paola; Martínez-Pastor, Juan P

    2014-02-12

    In this work, we propose the use of the Hanbury-Brown and Twiss interferometric technique and a switchable two-color excitation method for evaluating the exciton and noncorrelated electron-hole dynamics associated with single photon emission from indium arsenide (InAs) self-assembled quantum dots (QDs). Using a microstate master equation model we demonstrate that our single QDs are described by nonlinear exciton dynamics. The simultaneous detection of two-color, single photon emission from InAs QDs using these nonlinear dynamics was used to design a NOT AND logic transference function. This computational functionality combines the advantages of working with light/photons as input/output device parameters (all-optical system) and that of a nanodevice (QD size of ∼ 20 nm) while also providing high optical sensitivity (ultralow optical power operational requirements). These system features represent an important and interesting step toward the development of new prototypes for the incoming quantum information technologies.

  3. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  4. Investigation of cross talk in single grain luminescence measurements using an EMCCD camera

    International Nuclear Information System (INIS)

    Gribenski, Natacha; Preusser, Frank; Greilich, Steffen; Huot, Sebastien; Mittelstraß, Dirk

    2015-01-01

    Highly sensitive electron-multiplying charge-coupled devices (EMCCDs) enable the spatially resolved detection of luminescence emissions from samples and have high potential in single grain luminescence dating. However, the main challenge of this approach is the potential effect of cross talk, i.e. the influence of signal emitted by neighbouring grains, which biases the information recorded from individual grains. Here, we present the first investigations into this phenomenon when performing single grain luminescence measurements of quartz grains spread over the flat surface of a sample carrier. Dose recovery tests using mixed populations show an important effect of cross talk, even when some distance is kept between grains. This issue is further investigated by focusing on just two grains, complemented by simulated experiments. Introducing an additional rejection criterion based on the brightness properties of the grains is inefficient in selecting grains unaffected by their surroundings. Therefore, the use of physical approaches or image processing algorithms to directly counteract cross talk is essential to allow routine single grain luminescence dating using EMCCD cameras. - Highlights: • We have performed single grain OSL measurements using an EMCCD detector. • Individual equivalent doses cannot be accurately recovered from a mixed dose population. • Grains are influenced by signal emitted by their neighbours during the measurements. • Simulated data confirm the strong effect of this phenomenon. • Increasing the distance between grains or applying brightness criteria is inefficient.

  5. Feasibility of LED-Assisted CMOS Camera: Contrast Estimation for Laser Tattoo Treatment

    Directory of Open Access Journals (Sweden)

    Ngot Thi Pham

    2018-04-01

    Full Text Available Understanding the residual tattoo ink in skin after laser treatment is often critical for achieving good clinical outcomes. The current study aims to investigate the feasibility of a light-emitting diode (LED)-assisted CMOS camera to estimate the relative variations in tattoo contrast after laser treatment. Asian mice were tattooed using two color inks (black and red). The LED illumination was a separate process from the laser tattoo treatment. Images of the ink tattoos in skin were acquired under the irradiation of three different LED colors (red, green, and blue) for pre- and post-treatment. The degree of contrast variation due to the treatment was calculated and compared with the residual tattoo distribution in the skin. The black tattoo demonstrated that the contrast consistently decreased after the laser treatment for all LED colors. However, for the red tattoo, the red LED yielded an insignificant contrast change, whereas the green and blue LEDs induced 30% (p < 0.001) and 26% (p < 0.01) contrast reductions between the treatment conditions, respectively. The proposed LED-assisted CMOS camera can estimate the relative variations in image contrast before and after laser tattoo treatment.
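
    The record does not spell out the contrast metric used. As one plausible sketch, a Weber-type contrast between tattoo and surrounding skin, and the relative reduction between pre- and post-treatment images, could be computed as:

```python
import numpy as np

def weber_contrast(image, tattoo_mask, skin_mask):
    """Weber contrast of a tattoo region against surrounding skin.

    image : 2-D grayscale array from one LED color channel.
    tattoo_mask, skin_mask : boolean arrays selecting tattoo pixels
    and reference skin pixels, respectively.
    """
    i_tattoo = image[tattoo_mask].mean()
    i_skin = image[skin_mask].mean()
    return abs(i_tattoo - i_skin) / i_skin

def contrast_reduction(pre, post):
    """Relative contrast drop after treatment (e.g. 0.30 -> 30%)."""
    return (pre - post) / pre
```

    The function names, masks, and choice of Weber contrast are assumptions for illustration only.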

  6. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing over-exposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and Adaptive SensitivityTM patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  7. Colors and Photometry of Bright Materials on Vesta as Seen by the Dawn Framing Camera

    Science.gov (United States)

    Schroeder, S. E.; Li, J.-Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn spacecraft has been in orbit around the asteroid Vesta since July 2011. The on-board Framing Camera has acquired thousands of high-resolution images of the regolith-covered surface through one clear and seven narrow-band filters in the visible and near-IR wavelength range. It has observed bright and dark materials that have a range of reflectance that is unusually wide for an asteroid. Material brighter than average is predominantly found on crater walls, and in ejecta surrounding craters in the southern hemisphere. Most likely, the brightest material identified on the Vesta surface so far is located on the inside of a crater at 64.27deg S, 1.54deg . The apparent brightness of a regolith is influenced by factors such as particle size, mineralogical composition, and viewing geometry. As such, the presence of bright material can indicate differences in lithology and/or degree of space weathering. We retrieve the spectral and photometric properties of various bright terrains from false-color images acquired in the High Altitude Mapping Orbit (HAMO). We find that most bright material has a deeper 1-μm pyroxene band than average. However, the aforementioned brightest material appears to have a 1-μm band that is actually less deep, a result that awaits confirmation by the on-board VIR spectrometer. This site may harbor a class of material unique for Vesta. We discuss the implications of our spectral findings for the origin of bright materials.

  8. Animal coloration research: why it matters.

    Science.gov (United States)

    Caro, Tim; Stoddard, Mary Caswell; Stuart-Fox, Devi

    2017-07-05

    While basic research on animal coloration is the theme of this special edition, here we highlight its applied significance for industry, innovation and society. Both the nanophotonic structures producing stunning optical effects and the colour perception mechanisms in animals are extremely diverse, having been honed over millions of years of evolution for many different purposes. Consequently, there is a wealth of opportunity for biomimetic and bioinspired applications of animal coloration research, spanning colour production, perception and function. Fundamental research on the production and perception of animal coloration is contributing to breakthroughs in the design of new materials (cosmetics, textiles, paints, optical coatings, security labels) and new technologies (cameras, sensors, optical devices, robots, biomedical implants). In addition, discoveries about the function of animal colour are influencing sport, fashion, the military and conservation. Understanding and applying knowledge of animal coloration is now a multidisciplinary exercise. Our goal here is to provide a catalyst for new ideas and collaborations between biologists studying animal coloration and researchers in other disciplines.This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).

  9. Development of a Method for Measuring the Range of Colors Indicated by Terms Used on Color Samples and Digital Cameras

    OpenAIRE

    畑田, 明信; Hatada, Akinobu

    2009-01-01

    The range in color space denoted by a color term depends strongly on the cultural background and personal preferences of the person using it, and no standard method exists for measuring the range of colors such a term indicates. In this study, participants marked the swatches corresponding to a color term in a color sample book, and we developed a method for converting images of the marked swatches, photographed with a digital still camera, into physical color ranges. Some practical test results from a pilot system are also reported.

  10. [Constructing 3-dimensional colorized digital dental model assisted by digital photography].

    Science.gov (United States)

    Ye, Hong-qiang; Liu, Yu-shu; Liu, Yun-song; Ning, Jing; Zhao, Yi-jiao; Zhou, Yong-sheng

    2016-02-18

    To explore a method of constructing a universal 3-dimensional (3D) colorized digital dental model that can be displayed and edited in common 3D software (such as the Geomagic series), in order to improve the visual effect of digital dental models in 3D software. The morphological data of teeth and gingivae were obtained by an intra-oral scanning system (3Shape TRIOS), constructing 3D digital dental models that were exported as STL files. Meanwhile, referring to the accredited photography guide of the American Academy of Cosmetic Dentistry (AACD), five selected digital photographs of the patients' teeth and gingivae were taken by a digital single lens reflex (DSLR) camera with the same exposure parameters (except occlusal views) to capture the color data. In Geomagic Studio 2013, after the STL file of the 3D digital dental model was imported, the digital photographs were projected onto the 3D digital dental model at the corresponding position and angle. The junctions of different photos were carefully trimmed to obtain continuous and natural color transitions. The 3D colorized digital dental model thus constructed was exported as an OBJ file or a WRP file, the latter being a format specific to the Geomagic series. To evaluate the visual effect of the 3D colorized digital model, a rating scale on the color simulation effect, as assessed by patients, was used. Sixteen patients were recruited and their scores on the colored and non-colored digital dental models were recorded. The data were analyzed using the McNemar-Bowker test in SPSS 20. A universal 3D colorized digital dental model with better color simulation was constructed based on intra-oral scanning and digital photography. For clinical application, the 3D colorized digital dental models, combined with 3D face images, were introduced into the 3D smile design of aesthetic rehabilitation, which could improve the patients' cognition of the esthetic digital design and virtual prosthetic effect. Universal 3D colorized

  11. How to photograph the Moon and planets with your digital camera

    CERN Document Server

    Buick, Tony

    2007-01-01

    Since the advent of astronomical CCD imaging it has been possible for amateurs to produce images of a quality that was attainable only by universities and professional observatories just a decade ago. However, astronomical CCD cameras are still very expensive, and technology has now progressed so that digital cameras - the kind you use on holiday - are more than capable of photographing the brighter astronomical objects, notably the Moon and major planets. Tony Buick has worked for two years on the techniques involved, and has written this illustrated step-by-step manual for anyone who has a telescope (of any size) and a digital camera. The color images he has produced - there are over 300 of them in the book - are of breathtaking quality. His book is more than a manual of techniques (including details of how to make a low-cost DIY camera mount) and examples; it also provides a concise photographic atlas of the whole of the nearside of the Moon - with every image made using a standard digital camera - and des...

  12. Tomographic Particle Image Velocimetry using Smartphones and Colored Shadows

    KAUST Repository

    Aguirre-Pablo, Andres A.; Alarfaj, Meshal K.; Li, Erqiang; Hernandez Sanchez, Jose Federico; Thoroddsen, Sigurdur T

    2017-01-01

    We demonstrate the viability of using four low-cost smartphone cameras to perform Tomographic PIV. We use colored shadows to imprint two or three different time-steps on the same image. The back-lighting is accomplished with three sets

  13. Two-color spatial and temporal temperature measurements using a streaked soft x-ray imager

    Energy Technology Data Exchange (ETDEWEB)

    Moore, A. S., E-mail: alastair.moore@physics.org; Ahmed, M. F.; Soufli, R.; Pardini, T.; Hibbard, R. L.; Bailey, C. G.; Bell, P. M.; Hau-Riege, S. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551-0808 (United States); Benstead, J.; Morton, J.; Guymer, T. M.; Garbett, W. J.; Rubery, M. S.; Skidmore, J. W. [Directorate Science and Technology, AWE Aldermaston, Reading RG7 4PR (United Kingdom); Bedzyk, M.; Shoup, M. J.; Regan, S. P.; Agliata, T.; Jungquist, R. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States); Schmidt, D. W. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); and others

    2016-11-15

    A dual-channel streaked soft x-ray imager has been designed and used on high energy-density physics experiments at the National Ignition Facility. This streaked imager creates two images of the same x-ray source using two slit apertures and a single shallow-angle reflection from a nickel mirror. Thin filters are used to create narrow band-pass images at 510 eV and 360 eV. When measuring a Planckian spectrum, the brightness ratio of the two images can be translated into a color temperature, provided that the spectral sensitivity of the two images is well known. To reduce uncertainty and remove spectral features of the streak camera photocathode from this photon energy range, a thin photocathode of 100 nm CsI on 50 nm Al was implemented. Provided that the spectral shape is well known, uncertainties in the spectral sensitivity limit the accuracy of the temperature measurement to approximately 4.5% at 100 eV.
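
    Treating the two band-pass images as narrow (delta-like) bands of a Planckian spectrum, the ratio-to-temperature inversion can be sketched as below. This is a simplification: the real analysis must integrate over the measured spectral sensitivities rather than evaluate single photon energies.

```python
import math

def planck_ratio(t_ev, e1=510.0, e2=360.0):
    """Brightness ratio of two narrow bands for a Planckian source.

    Spectral radiance ~ E^3 / (exp(E/T) - 1), with photon energy E
    and temperature T both in eV. The ratio rises monotonically
    with temperature, so it can be inverted.
    """
    def radiance(e):
        return e**3 / math.expm1(e / t_ev)
    return radiance(e1) / radiance(e2)

def color_temperature(ratio, lo=10.0, hi=1000.0, tol=1e-6):
    """Invert planck_ratio by bisection over [lo, hi] eV."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if planck_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The function names and the delta-band approximation are assumptions for illustration; the 510/360 eV band energies are taken from the abstract.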

  14. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from the site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  15. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    Science.gov (United States)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle image systems and computer peripherals for document capture. A one-chip image system, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the image more realistic and colorful; one could say the color filter makes life more colorful. A color filter transmits only light of the specific wavelength and transmittance matching the filter itself, blocking the rest of the image light source. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto matched pixels in the image-sensing array. From the signal caught at each pixel, the scene image can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although it presents challenges, the color filter process is well worth developing; we provide the best service with shorter cycle times, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many other key points of color filter process technology that have to be considered are also described in this paper.

  16. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    Science.gov (United States)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  17. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    Science.gov (United States)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change of water content leads to a change of the indicator's fluorescence color under ultra-violet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
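
    The regression stage can be sketched with a single scalar color feature; hue is used here purely for illustration, while the paper evaluates several color spaces and full second-order models:

```python
import numpy as np

def fit_water_model(hue, water):
    """Least-squares quadratic fit: water% = a*h**2 + b*h + c.

    hue   : 1-D array of mean hue values, one per calibration sample.
    water : 1-D array of known water contents (%) for those samples.
    Returns the polynomial coefficients [a, b, c].
    """
    return np.polyfit(hue, water, 2)

def predict_water(coeffs, hue):
    """Estimate water content (%) from a measured hue value."""
    return np.polyval(coeffs, hue)
```

    In a real pipeline the hue feature would be averaged over the fluorescing sample region of each captured image before fitting.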

  18. Use of cameras for monitoring visibility impairment

    Science.gov (United States)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
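
    A minimal sketch of a resolution-insensitive contrast index with temporal averaging, in the spirit described above (the specific index used in the study may differ):

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast of a grayscale scene image.

    Unlike gradient-based indexes, this statistic is largely
    insensitive to changes in image resolution over the years.
    """
    return image.std() / image.mean()

def averaged_contrast(images):
    """Temporal average of per-image contrast over many frames,
    suppressing variability from clouds and changing lighting."""
    return float(np.mean([rms_contrast(im) for im in images]))
```

    Restricting the averaged frames to a fixed time-of-day window, as the study suggests, further compensates for sun-angle effects.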

  19. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  20. Comparison between magnetic anchoring and guidance system camera-assisted laparoendoscopic single-site surgery nephrectomy and conventional laparoendoscopic single-site surgery nephrectomy in a porcine model: focus on ergonomics and workload profiles.

    Science.gov (United States)

    Han, Woong Kyu; Tan, Yung K; Olweny, Ephrem O; Yin, Gang; Liu, Zhuo-Wei; Faddegon, Stephen; Scott, Daniel J; Cadeddu, Jeffrey A

    2013-04-01

    To compare surgeon-assessed ergonomic and workload demands of magnetic anchoring and guidance system (MAGS) laparoendoscopic single-site surgery (LESS) nephrectomy with conventional LESS nephrectomy in a porcine model. Participants included two expert and five novice surgeons who each performed bilateral LESS nephrectomy in two nonsurvival animals using either the MAGS camera or a conventional laparoscope. Task difficulty and workload demands of the surgeon and camera driver were assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Surgeons were also asked to score 6 parameters on a Likert scale (range 1=low/easy to 5=high/hard): procedure-associated workload, ergonomics, technical challenge, visualization, accidental events, and instrument handling. Each step of the nephrectomy was also timed and instrument clashing was quantified. Scores for each parameter on the Likert scale were significantly lower for MAGS-LESS nephrectomy, and the mean numbers of internal and external clashes were significantly lower for the MAGS camera. NASA-TLX workload ratings by the surgeon and camera driver showed that MAGS resulted in a significantly lower workload than the conventional laparoscope during LESS nephrectomy (p<0.05). The use of the MAGS camera during LESS nephrectomy lowers the task workload for both the surgeon and the camera driver when compared to conventional laparoscope use. Subjectively, it also appears to improve surgeons' impressions of ergonomics and technical challenge. Pending approval for clinical use, further evaluation in the clinical setting is warranted.

  1. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Fraser, Wesley C.; Brown, Michael E.; Glass, Florian

    2015-01-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a difference in color between the two epochs broad enough to span the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  2. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Nowadays there exist many different ways to measure a person's heart rate. One of them uses a mobile phone's built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement; it requires only a mobile phone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts capturing and analyzing frames from the camera. Heart rate can be calculated by analyzing the average red component values of frames taken by the mobile phone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
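The core of the measurement described above, extracting the pulse from per-frame mean red values, can be sketched as follows (a minimal Python illustration, not the authors' algorithm; the FFT-based frequency estimate and the 0.7-3.5 Hz pulse band are assumptions):

```python
import numpy as np

def estimate_heart_rate(mean_red, fps):
    """Estimate heart rate (BPM) from per-frame mean red-channel values.

    Blood pulsation modulates the red intensity of finger skin, so the
    dominant frequency of the detrended red-channel signal approximates
    the pulse rate.
    """
    signal = np.asarray(mean_red, dtype=float)
    signal = signal - signal.mean()             # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 3.5)      # plausible pulse band, ~42-210 BPM
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic 10 s recording at 30 fps with a 1.2 Hz (72 BPM) pulse:
rng = np.random.default_rng(0)
fps = 30
t = np.arange(0, 10, 1.0 / fps)
red = 180 + 2.0 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(estimate_heart_rate(red, fps)))     # → 72
```

In a real application the `mean_red` series would come from averaging the red channel of each camera frame over the finger region.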

  3. Digital dental photography. Part 4: choosing a camera.

    Science.gov (United States)

    Ahmad, I

    2009-06-13

    With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single-lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition and study casts. However, for the sake of completeness, some other camera systems that are used in dentistry are also discussed.

  4. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  5. Single photon emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-09-01

    The objective of this lecture is to present the single photon emission computed tomography (SPECT) imaging technique. Content: 1 - Introduction: anatomic, functional and molecular imaging; Principle and role of functional or molecular imaging; 2 - Radiotracers: chemical and physical constraints, main emitters, radioisotopes production, emitters type and imaging techniques; 3 - Single photon emission computed tomography: gamma cameras and their components, gamma camera specifications, planar single photon imaging characteristics, gamma camera and tomography; 4 - Quantification in single photon emission tomography: attenuation, scattering, non-stationary spatial resolution, partial volume effect, movements, others; 5 - Synthesis and conclusion

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
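The final reconstruction step above inverts the standard atmospheric scattering model I = J·t + A·(1 - t). A minimal sketch, assuming depth is recovered as inversely proportional to disparity and attenuation follows a Beer-Lambert law (the function names and the `beta` value are illustrative, not taken from the paper):

```python
import numpy as np

def transmission_from_disparity(disparity, beta=0.8, fb=1.0):
    """Hypothetical transmission estimate: stereo depth is proportional to
    fb / disparity, and fog attenuation follows t = exp(-beta * depth)."""
    depth = fb / np.maximum(disparity, 1e-3)
    return np.exp(-beta * depth)

def defog(image, transmission, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.maximum(transmission, t_min)[..., np.newaxis]  # clamp thin fog
    return (image - A) / t + A

# Round-trip check: fog a synthetic scene with known t and A, then recover it.
rng = np.random.default_rng(0)
J = rng.uniform(0.2, 0.8, size=(4, 4, 3))         # clean scene
t = np.full((4, 4), 0.6)                          # known transmission
A = np.array([0.9, 0.9, 0.9])                     # atmospheric light
I = J * t[..., np.newaxis] + A * (1 - t[..., np.newaxis])
recovered = defog(I, t, A)
print(np.allclose(recovered, J))                  # → True
```

The paper's contribution is the iterative refinement of `t` from the stereo disparity; the inversion itself is the standard dehazing formula shown here.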

  7. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  8. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    The paper considers the task of generating the requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating which color correction method is useful for microscopic images. A comparative study of ten image color correction methods in RGB space using polynomials and combinations of color coordinates of different orders was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
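The regularized polynomial fit described above can be sketched as follows. The exact 3rd-order term set and the regularization value are assumptions (the abstract only says the parameter was chosen experimentally), so the feature list below is a hypothetical illustration of the technique:

```python
import numpy as np

def poly_features(rgb):
    # A hypothetical 3rd-order combination set of color coordinates;
    # the paper's exact term set is not specified in the abstract.
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [np.ones_like(r), r, g, b,
            r * g, r * b, g * b, r**2, g**2, b**2,
            r * g * b, r**3, g**3, b**3]
    return np.stack(cols, axis=1)               # (n_samples, 14)

def fit_color_correction(measured, reference, lam=1e-8):
    # Conditioned (ridge-regularized) least squares: lam plays the role
    # of the experimentally chosen regularization parameter.
    X = poly_features(measured)
    coeffs = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                             X.T @ reference)
    return coeffs                               # (14, 3)

def apply_color_correction(rgb, coeffs):
    return poly_features(rgb) @ coeffs

# Fit on synthetic "calibration target" patches with a known distortion.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 1, size=(217, 3))    # 217 target color fields
measured = 0.8 * reference + 0.05               # simple channel distortion
coeffs = fit_color_correction(measured, reference)
corrected = apply_color_correction(measured, coeffs)
print(np.abs(corrected - reference).max() < 1e-2)   # → True
```

The same fit/apply pair would be run per camera operating mode (default settings vs. automatic white balance), as the study does.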

  9. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by an average distance of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape, centered in the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, which is set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the campaigns of 2012 and 2013. On Feb. 18th the data set comprised 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes were registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate imprecision during the optical analysis; therefore, this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  10. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved.

  11. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area in which means is provided for second-order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved.

  12. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrian, vehicle, tree trunks, ditches, and water), and (3) perception through obscurants.

  13. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function we put a driving device, such as piezoelectric ceramics, in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, which reflects the randomness of the displacements and the real-time performance of the storage very well. The low-resolution image sequences contain different redundant information and some particular prior information, so it is possible to restore a super-resolution image accurately and effectively. A sampling method is used to derive the reconstruction principle of super resolution, which analyzes the possible degree of resolution improvement in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
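As a simple baseline for the idea of fusing randomly displaced low-resolution frames, known sub-pixel shifts can be combined by shift-and-add onto a 2x grid. This is a hedged sketch of the general principle only; the paper itself uses learning-based and variational Bayesian reconstruction, which are substantially more involved:

```python
import numpy as np

def shift_and_add(lr_frames, shifts, scale=2):
    """Place each low-resolution frame onto a `scale`x grid at its
    registered sub-pixel offset and average overlapping contributions."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map LR pixel centers to the nearest HR grid position.
        hy = np.clip(ys * scale + round(dy * scale), 0, h * scale - 1)
        hx = np.clip(xs * scale + round(dx * scale), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    cnt[cnt == 0] = 1                 # avoid division by zero in gaps
    return acc / cnt

# Four frames shifted by half a pixel fill the 2x grid completely:
frame = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
sr = shift_and_add([frame] * 4, shifts)
print(sr.shape)                       # → (8, 8)
```

In practice the shifts come from the piezoelectric displacements via sub-pixel registration, and the frames differ, so overlapping samples genuinely add information.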

  14. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
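The camera control step described above (pan and tilt so the skater region stays near the image center, zoom to keep an appropriate scale) can be sketched as a proportional controller. Gains, frame size, and the target region height below are illustrative assumptions, not values from the paper:

```python
def ptz_update(cx, cy, box_h, frame_w=1920, frame_h=1080,
               target_h=540, gain=0.5):
    """Proportional PTZ commands from the extracted skater region.

    (cx, cy) is the region centroid in pixels, box_h its height; all
    gains and targets are hypothetical. Positive pan moves right,
    positive tilt moves down, positive zoom narrows the field of view.
    """
    pan = gain * (cx - frame_w / 2) / (frame_w / 2)
    tilt = gain * (cy - frame_h / 2) / (frame_h / 2)
    zoom = gain * (target_h - box_h) / frame_h
    return pan, tilt, zoom

# Skater to the right of center, at target scale: pan right only.
print(ptz_update(1440, 540, 540))     # → (0.25, 0.0, 0.0)
```

Each control cycle would re-extract the rink and skater regions from the new frame and issue the next command, which is what makes the tracking real-time.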

  15. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
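Inter-rater ICC values like those reported above are conventionally computed from a subjects-by-raters matrix with a two-way model. A sketch of ICC(2,1), i.e. two-way random effects, absolute agreement, single measures, which is assumed here to be the form used:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random, absolute agreement, single measures.

    `data` has shape (n_subjects, k_raters), e.g. one joint angle per
    subject per rater.
    """
    n, k = data.shape
    grand = data.mean()
    ssr = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects (rows)
    ssc = n * ((data.mean(axis=0) - grand) ** 2).sum()   # raters (columns)
    sse = ((data - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement on three subjects' knee angles:
angles = np.array([[20.0, 20.0], [25.0, 25.0], [30.0, 30.0]])
print(icc_2_1(angles))        # → 1.0
```

A constant bias between raters lowers ICC(2,1) even when their rankings agree, which is why it is the stricter choice for absolute joint-angle agreement.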

  16. Using Three-color Single-molecule FRET to Study the Correlation of Protein Interactions.

    Science.gov (United States)

    Götz, Markus; Wortmann, Philipp; Schmid, Sonja; Hugel, Thorsten

    2018-01-30

    Single-molecule Förster resonance energy transfer (smFRET) has become a widely used biophysical technique to study the dynamics of biomolecules. For many molecular machines in a cell, proteins have to act together with interaction partners in a functional cycle to fulfill their task. The extension of two-color to multi-color smFRET makes it possible to simultaneously probe more than one interaction or conformational change. This not only adds a new dimension to smFRET experiments but it also offers the unique possibility to directly study the sequence of events and to detect correlated interactions when using an immobilized sample and a total internal reflection fluorescence microscope (TIRFM). Therefore, multi-color smFRET is a versatile tool for studying biomolecular complexes in a quantitative manner and in previously unachievable detail. Here, we demonstrate how to overcome the special challenges of multi-color smFRET experiments on proteins. We present detailed protocols for obtaining the data and for extracting kinetic information. This includes trace selection criteria, state separation, and the recovery of state trajectories from the noisy data using a 3D ensemble Hidden Markov Model (HMM). Compared to other methods, the kinetic information is not recovered from dwell time histograms but directly from the HMM. The maximum likelihood framework allows us to critically evaluate the kinetic model and to provide meaningful uncertainties for the rates. By applying our method to the heat shock protein 90 (Hsp90), we are able to disentangle the nucleotide binding and the global conformational changes of the protein. This allows us to directly observe the cooperativity between the two nucleotide binding pockets of the Hsp90 dimer.

  17. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  18. The making of analog module for gamma camera interface

    International Nuclear Information System (INIS)

    Yulinarsari, Leli; Rl, Tjutju; Susila, Atang; Sukandar

    2003-01-01

    The making of an analog module for a gamma camera has been conducted. For computerization of a 37-PMT planar gamma camera, interface hardware and software between the planar gamma camera and a PC have been developed. With this interface, gamma camera image information (originally an analog signal) is converted to a digital signal, so that data acquisition, image quality improvement, data analysis and database processing can be conducted with the help of computers. There are three main gamma camera signals, i.e. X, Y and Z. This analog module digitizes the analog X and Y signals from the gamma camera, which convey position information originating from the gamma camera crystal. Analog-to-digital conversion is performed by two 12-bit ADCs with a conversion time of 800 ns each; the conversion procedure for each X and Y coordinate is synchronized using the strobe signal Z as the acceptance signal.

  19. Quantitative measurement of binocular color fusion limit for non-spectral colors.

    Science.gov (United States)

    Jung, Yong Ju; Sohn, Hosik; Lee, Seong-il; Ro, Yong Man; Park, Hyun Wook

    2011-04-11

    Human perception becomes difficult in the event of binocular color fusion when the color difference presented for the left and right eyes exceeds a certain threshold value, known as the binocular color fusion limit. This paper discusses the binocular color fusion limit for non-spectral colors within the color gamut of a conventional LCD 3DTV. We performed experiments to measure the color fusion limit for eight chromaticity points sampled from the CIE 1976 chromaticity diagram. A total of 2480 trials were recorded for a single observer. By analyzing the results, the color fusion limit was quantified by ellipses in the chromaticity diagram. The semi-minor axis of the ellipses ranges from 0.0415 to 0.0923 in terms of the Euclidean distance in the u′v′ chromaticity diagram and the semi-major axis ranges from 0.0640 to 0.1560. These eight ellipses are drawn on the chromaticity diagram. © 2011 Optical Society of America
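An elliptical fusion limit like the one quantified above reduces to a rotated canonical-form test on the left/right chromaticity difference. A sketch, assuming the ellipse orientation is known (the reported ranges give only the axis lengths, so the values used below are just the smallest reported axes):

```python
import math

def within_fusion_limit(du, dv, semi_major, semi_minor, angle_rad=0.0):
    """Test whether a left/right chromaticity difference (du, dv) in the
    u'v' diagram falls inside a fitted fusion-limit ellipse."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x = c * du + s * dv              # rotate offset into the ellipse frame
    y = -s * du + c * dv
    return (x / semi_major) ** 2 + (y / semi_minor) ** 2 <= 1.0

# Using the smallest reported axes (semi-major 0.0640, semi-minor 0.0415):
print(within_fusion_limit(0.03, 0.00, 0.0640, 0.0415))   # → True
print(within_fusion_limit(0.10, 0.05, 0.0640, 0.0415))   # → False
```

A stereoscopic display pipeline could use such a test per chromaticity point to predict whether a left/right color disparity will fuse or rival.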

  20. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  1. Development and Performance of Bechtel Nevada's Nine-Frame Camera System

    International Nuclear Information System (INIS)

    S. A. Baker; M. J. Griffith; J. L. Tybo

    2002-01-01

    Bechtel Nevada, Los Alamos Operations, has developed a high-speed, nine-frame camera system that records a sequence from a changing or dynamic scene. The system incorporates an electrostatic image tube with custom gating and deflection electrodes. The framing tube is shuttered with high-speed gating electronics, yielding frame rates of up to 5 MHz. Dynamic scenes are lens-coupled to the camera, which contains a single photocathode gated on and off to control each exposure time. Deflection plates and drive electronics move the frames to different locations on the framing tube output. A single charge-coupled device (CCD) camera then records the phosphor image of all nine frames. This paper discusses setup techniques to optimize system performance. It examines two alternate philosophies for system configuration and respective performance results. We also present performance metrics for system evaluation, experimental results, and applications to four-frame cameras.

  2. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  3. Application of Different HSI Color Models to Detect Fire-Damaged Mortar

    Directory of Open Access Journals (Sweden)

    H. Luo

    2013-12-01

    To obtain a better understanding of the effect of vehicle fires on rigid pavement, a nondestructive test method utilizing an ordinary digital camera to capture images of mortar at five elevated temperatures was undertaken. These images were then analyzed by “image color-intensity analyzer” software. In image analysis, the RGB color model was the basic system used to represent the color information of images. HSI is a derived color model that is transformed from an RGB model by formulae. In order to understand more about surface color changes and temperatures after a vehicle fire, various transformation formulae used in different research areas were applied in this study. They were then evaluated to obtain the optimum HSI model for further studies of fire-damaged mortar through the use of image analysis.
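The RGB-to-HSI transformation formulae the study compares come in several variants, which is exactly its point. One common geometric (arccos-based) form is sketched below as an illustration, not as any specific formula from the paper:

```python
import math

def rgb_to_hsi(r, g, b):
    """One common RGB->HSI transform (several variants exist).

    Inputs are in [0, 1]; returns (hue in degrees, saturation, intensity).
    """
    i = (r + g + b) / 3.0                       # intensity: channel mean
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i         # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                 # achromatic: hue undefined
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

# Pure red maps to hue 0 degrees, full saturation, intensity 1/3:
print(rgb_to_hsi(1.0, 0.0, 0.0))
```

Evaluating fire-damaged mortar images would amount to applying each candidate HSI variant per pixel and comparing how well the resulting channels track exposure temperature.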

  4. Single-acquisition method for simultaneous determination of extrinsic gamma-camera sensitivity and spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Sarmento, S. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Alves, P.; Torres, M.C. [Departamento de Fisica da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Ponte, F. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)

    2008-01-15

    A new method for measuring simultaneously both the extrinsic sensitivity and spatial resolution of a gamma-camera in a single planar acquisition was implemented. A dual-purpose phantom (SR phantom; sensitivity/resolution) was developed, tested and the results compared with other conventional methods used for separate determination of these two important image quality parameters. The SR phantom yielded reproducible and accurate results, allowing an immediate visual inspection of the spatial resolution as well as the quantitative determination of the contrast for six different spatial frequencies. It also proved to be useful in the estimation of the modulation transfer function (MTF) of the image formation collimator/detector system at six different frequencies and can be used to estimate the spatial resolution as function of the direction relative to the digital matrix of the detector.

  5. Single-acquisition method for simultaneous determination of extrinsic gamma-camera sensitivity and spatial resolution

    International Nuclear Information System (INIS)

    Santos, J.A.M.; Sarmento, S.; Alves, P.; Torres, M.C.; Bastos, A.L.; Ponte, F.

    2008-01-01

    A new method for measuring simultaneously both the extrinsic sensitivity and spatial resolution of a gamma-camera in a single planar acquisition was implemented. A dual-purpose phantom (SR phantom; sensitivity/resolution) was developed, tested and the results compared with other conventional methods used for separate determination of these two important image quality parameters. The SR phantom yielded reproducible and accurate results, allowing an immediate visual inspection of the spatial resolution as well as the quantitative determination of the contrast for six different spatial frequencies. It also proved to be useful in the estimation of the modulation transfer function (MTF) of the image formation collimator/detector system at six different frequencies and can be used to estimate the spatial resolution as function of the direction relative to the digital matrix of the detector

  6. Influence of restorative materials on color of implant-supported single crowns in esthetic zone: A spectrophotometric evaluation

    DEFF Research Database (Denmark)

    M., Peng; W.-J., Zhao; M., Hosseini

    2017-01-01

    of the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone, and the crown color match score was used for subjective evaluation of the esthetic outcome of implant-supported restoration. ANOVA analysis was used to compare the differences among groups and Spearman correlation...

  7. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera...... navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera.......033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher...

  8. Subwavelength Plasmonic Color Printing Protected for Ambient Use

    DEFF Research Database (Denmark)

    Roberts, Alexander Sylvester; Pors, Anders Lambertus; Albrektsen, Ole

    2014-01-01

    We demonstrate plasmonic color printing with subwavelength resolution using circular gap-plasmon resonators (GPRs) arranged in 340 nm period arrays of square unit cells and fabricated with single-step electron-beam lithography. We develop a printing procedure resulting in correct single-pixel color...... reproduction, high color uniformity of colored areas, and high reproduction fidelity. Furthermore, we demonstrate that, due to inherent stability of GPRs with respect to surfactants, the fabricated color print can be protected with a transparent dielectric overlay for ambient use without destroying its...... coloring. Using finite-element simulations, we uncover the physical mechanisms responsible for color printing with GPR arrays and suggest the appropriate design procedure minimizing the influence of the protection layer....

  9. Encyclopedia of color science and technology

    CERN Document Server

    2016-01-01

    The Encyclopedia of Color Science and Technology provides an authoritative single source for understanding and applying the concepts of color to all fields of science and technology, including artistic and historical aspects of color. Many topics are discussed in this timely reference, including an introduction to the science of color, and entries on the physics, chemistry and perception of color. Color is described as it relates to optical phenomena of color and continues on through colorants and materials used to modulate color and also to human vision of color. The measurement of color is provided as is colorimetry, color spaces, color difference metrics, color appearance models, color order systems and cognitive color. Other topics discussed include industrial color, color imaging, capturing color, displaying color and printing color. Descriptions of color encodings, color management, processing color and applications relating to color synthesis for computer graphics are included in this work. The Encyclo...

  10. Photography by Cameras Integrated in Smartphones as a Tool for Analytical Chemistry Represented by a Butyrylcholinesterase Activity Assay.

    Science.gov (United States)

    Pohanka, Miroslav

    2015-06-11

    Smartphones are popular devices frequently equipped with sensitive sensors and great computational ability. Despite the widespread availability of smartphones, practical uses in analytical chemistry are limited, though some papers have proposed promising applications. In the present paper, a smartphone is used as a tool for the determination of cholinesterasemia, i.e., the determination of the biochemical marker butyrylcholinesterase (BChE). The work demonstrates the suitability of a smartphone-integrated camera for analytical purposes. Paper strips soaked with indoxylacetate were used for the determination of BChE activity, while the standard Ellman's assay was used as a reference measurement. In the smartphone-based assay, BChE converted indoxylacetate to indigo blue, and the coloration was photographed using the phone's integrated camera. An RGB color model was analyzed and color values for the individual color channels were determined. The assay was verified using plasma samples and samples containing pure BChE, and validated against Ellman's assay. The smartphone assay proved to be reliable and applicable for routine diagnoses where BChE serves as a marker (liver function tests, some poisonings, etc.), and is expected to be of practical applicability because of the results' relevance.
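The per-channel RGB analysis described in the abstract can be sketched as follows (a numpy-only sketch; the region of interest, the choice of channel as the colorimetric signal, and the synthetic pixel values are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def channel_means(rgb_image, roi):
    """Mean R, G, B values inside a rectangular region of interest."""
    r0, r1, c0, c1 = roi
    patch = rgb_image[r0:r1, c0:c1].reshape(-1, 3).astype(float)
    return patch.mean(axis=0)  # (R, G, B)

# Synthetic photos of a test strip: the indigo blue product raises the
# blue channel and lowers the red channel relative to a blank strip.
blank = np.full((100, 100, 3), (200, 200, 200), dtype=np.uint8)
strip = np.full((100, 100, 3), (120, 140, 210), dtype=np.uint8)

r_blank, _, _ = channel_means(blank, (10, 90, 10, 90))
r_strip, _, _ = channel_means(strip, (10, 90, 10, 90))

# A simple colorimetric signal: drop in the red channel vs. the blank.
signal = r_blank - r_strip
print(signal)  # 80.0
```

In practice the channel signal would be mapped to enzyme activity through a calibration curve built from samples of known BChE activity.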

  11. Note: A disposable x-ray camera based on mass produced complementary metal-oxide-semiconductor sensors and single-board computers

    Energy Technology Data Exchange (ETDEWEB)

    Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu [Physics Department, University of Washington, Seattle, Washington 98195 (United States)

    2015-08-15

    We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.
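Selecting single-pixel detection events, i.e. pixels above threshold whose eight neighbours all stay below it, can be sketched as follows (the frame and threshold are invented; the authors' actual event-processing pipeline is not reproduced here):

```python
import numpy as np

def single_pixel_events(frame, thresh):
    """Return (row, col, value) of pixels above `thresh` whose 8 neighbours
    are all at or below it, i.e. events with charge localized in one pixel."""
    events = []
    rows, cols = frame.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if frame[r, c] <= thresh:
                continue
            nb = frame[r-1:r+2, c-1:c+2].copy()
            nb[1, 1] = 0                  # ignore the centre pixel itself
            if nb.max() <= thresh:
                events.append((r, c, int(frame[r, c])))
    return events

frame = np.zeros((8, 8), dtype=int)
frame[3, 3] = 120          # isolated single-pixel hit
frame[5, 5] = 90           # charge shared with a neighbour ...
frame[5, 6] = 60           # ... so neither pixel qualifies as single-pixel
hits = single_pixel_events(frame, thresh=50)
print(hits)  # [(3, 3, 120)]
```

A histogram of the accepted pixel values then gives the energy spectrum from which the quoted ~190 eV resolution would be measured.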

  12. Clustering method for counting passengers getting in a bus with single camera

    Science.gov (United States)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling- and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm that projects the high-dimensional feature-point trajectories into a 2D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences simultaneously at a frame rate of 25 fps, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
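The idea of projecting each feature-point trajectory onto its (appearance time, disappearance time) pair and counting people by clustering in that 2D space can be sketched as follows (a greedy nearest-centre grouping stands in for the paper's online clustering algorithm, and the track data and distance threshold are made up):

```python
import numpy as np

def count_by_time_clustering(tracks, eps):
    """Group trajectories whose (appear, disappear) frame pairs lie within
    `eps` of an existing cluster centre; the cluster count approximates
    the passenger count."""
    centres = []   # running mean of (appear, disappear) per cluster
    members = []
    for t in tracks:
        p = np.asarray(t, dtype=float)
        placed = False
        for i, c in enumerate(centres):
            if np.linalg.norm(p - c) < eps:
                members[i].append(p)
                centres[i] = np.mean(members[i], axis=0)  # update centre
                placed = True
                break
        if not placed:
            centres.append(p)
            members.append([p])
    return len(centres)

# (appearance_frame, disappearance_frame) of tracked feature points:
# two passengers, each generating several feature-point trajectories.
tracks = [(10, 40), (11, 42), (9, 41),      # passenger 1
          (60, 95), (62, 93), (61, 96)]     # passenger 2
print(count_by_time_clustering(tracks, eps=10.0))  # 2
```

Feature points riding on the same person enter and leave the camera's view at nearly the same frames, so their trajectories collapse to one tight cluster in this space.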

  13. Capturing complex human behaviors in representative sports contexts with a single camera.

    Science.gov (United States)

    Duarte, Ricardo; Araújo, Duarte; Fernandes, Orlando; Fonseca, Cristina; Correia, Vanda; Gazimba, Vítor; Travassos, Bruno; Esteves, Pedro; Vilar, Luís; Lopes, José

    2010-01-01

    In recent years, several motion analysis methods have been developed without considering representative contexts for sports performance. The purpose of this paper is to describe a straightforward method for measuring human behavior in such contexts. Procedures combining manual video tracking (with the TACTO device) and two-dimensional reconstruction (through direct linear transformation) using a single camera were used to capture the kinematic data required to compute collective variables and control parameters. These procedures were applied to a 1-vs-1 association football task as an illustrative subphase of team sports and are presented in a tutorial fashion. Preliminary analysis of distance and velocity data identified a collective variable (the difference between the distances of the attacker and the defender to a target defensive area) and two nested control parameters (interpersonal distance and relative velocity). The findings demonstrate that the complementary use of the TACTO software and direct linear transformation permits capturing and reconstructing complex human actions in their context in a low-dimensional space (information reduction).
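The two-dimensional reconstruction step, a planar direct linear transformation from pixel coordinates to field coordinates, can be sketched as follows (the control points and the queried player position are made-up values; at least four non-collinear correspondences on the playing plane are required):

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Estimate the 3x3 planar DLT homography H with world ~ H @ [u, v, 1]."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x, -x])
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null-space vector = homography entries

def to_world(H, u, v):
    """Map a pixel (u, v) to planar world coordinates (metres)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Four image corners of a 10 m x 5 m court area (pixels -> metres).
img = [(100, 400), (540, 410), (520, 80), (120, 90)]
world = [(0, 0), (10, 0), (10, 5), (0, 5)]
H = estimate_homography(img, world)
x, y = to_world(H, 320, 245)      # a tracked player position in pixels
print(round(x, 2), round(y, 2))
```

Once H is estimated from field markings, every manually tracked pixel position can be converted to metric field coordinates frame by frame.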

  14. Multi-view collimators for scintillation cameras

    International Nuclear Information System (INIS)

    Hatton, J.; Grenier, R.P.

    1982-01-01

    This patent specification describes a collimator for obtaining multiple images of a portion of a body with a scintillation camera. The collimator comprises a body of radiation-impervious material defining two or more groups of channels, each group comprising a plurality of parallel channels whose axes intersect the portion of the body being viewed on one side of the collimator and the input surface of the camera on the other side, so as to produce a single view of said body; a number of different such views of said body are provided by said groups of channels. Each axis of each channel lies in a plane approximately perpendicular to the plane of the input surface of the camera, and all such planes containing said axes are approximately parallel to each other. (author)

  15. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard Anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have recently been introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and rest. All scans were immediately repeated on a CZT camera, using list mode, with a 5-min scan time for stress and a 3-min scan time for rest. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong, within narrow Bland-Altman limits of agreement. Using list-mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans and in 100% of the ≥4-min scans. Similarly, for CZT scans at rest, image quality was rated as good or excellent in 94% of the 1-min scans and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, equivalent to standard myocardial single-photon emission computed tomography, despite a scan time of less than half the standard. (author)

  16. An imaging colorimeter for noncontact tissue color mapping.

    Science.gov (United States)

    Balas, C

    1997-06-01

    There has been considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure itself randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of conventional colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling independent capture of the color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. The system was used for monitoring blood supply changes of psoriatic plaques undergoing psoralen plus ultraviolet-A (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and for the objective evaluation of treatment effectiveness.

  17. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    Science.gov (United States)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application and the development of subsequent satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained simultaneously by the Gaofen-2 dual cameras over the Beijing area. These four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate the dynamic range consistency within each camera, and for each latitudinal overlap to evaluate the consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the consistency within a single camera is better than that between the dual cameras.
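The four statistics used in the evaluation, and a simple overlap-consistency check, can be sketched as follows (the simulated strips stand in for the PMS1/PMS2 overlap imagery, and the tolerance thresholds are assumptions, not the article's criteria):

```python
import numpy as np

def dn_stats(img):
    """Min, max, mean and standard deviation of an image's DN values."""
    img = np.asarray(img, dtype=float)
    return img.min(), img.max(), img.mean(), img.std()

rng = np.random.default_rng(42)
# Two overlapping strips from the same (simulated) camera.
strip_a = rng.normal(500, 80, size=(100, 100))
strip_b = rng.normal(505, 82, size=(100, 100))

stats_a, stats_b = dn_stats(strip_a), dn_stats(strip_b)
# Consistency: relative difference of the mean and spread between strips.
mean_diff = abs(stats_a[2] - stats_b[2]) / stats_a[2]
std_diff = abs(stats_a[3] - stats_b[3]) / stats_a[3]
print(mean_diff < 0.05, std_diff < 0.10)  # True True
```

Applying the same comparison to latitudinal overlaps of PMS1 against PMS2 would quantify the inter-camera consistency.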

  18. Delay line clipping in a scintillation camera system

    International Nuclear Information System (INIS)

    Hatch, K.F.

    1979-01-01

    The present invention provides a novel baseline-restoring circuit and a novel delay-line clipping circuit in a scintillation camera system. Single and double delay-line-clipped signal waveforms are generated to increase the operational frequency and the fidelity of data detection of the camera system in the presence of baseline distortion such as undershoot, overshoot, and capacitive build-up. The camera system includes a set of photomultiplier tubes and associated amplifiers which generate sequences of pulses. These pulses are pulse-height analyzed to detect scintillations whose energy falls within a predetermined range. Data pulses are combined to provide the coordinates and energy of photopeak events. The amplifiers are biased out of saturation over all ranges of pulse energy level and count rate. Single delay-line clipping circuitry is provided for narrowing the pulse width of the decaying electrical data pulses, which increases operating speed without data loss. (JTA)

  19. A Printer Indexing System for Color Calibration with Applications in Dietary Assessment.

    Science.gov (United States)

    Fang, Shaobo; Liu, Chang; Zhu, Fengqing; Boushey, Carol; Delp, Edward

    2015-09-01

    In image-based dietary assessment, color is a very important feature in food identification. One issue with using color in image analysis is the calibration of the color image capture system. In this paper we propose an indexing system for color camera calibration using printed color checkerboards, also known as fiducial markers (FMs). To use an FM for color calibration, one must know which printer was used to print it, so that the correct color calibration matrix can be applied. We have designed a printer indexing scheme that allows one to determine which printer printed a given FM based on a unique arrangement of color squares and binarized marks (used for error control) printed on the FM. Using normalized cross-correlation and pattern detection, the index corresponding to the printer for a particular FM can be determined. Our experimental results show the scheme is robust under most lighting conditions.
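Matching an observed mark pattern against stored per-printer templates via normalized cross-correlation can be sketched as follows (the templates and the binary encoding are invented for illustration; the paper's actual index layout is not reproduced):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Each printer index is encoded as a distinct binary mark pattern.
templates = {
    "printer_0": np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]], float),
    "printer_1": np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]], float),
}

# Observed marks: printer_1's pattern plus capture noise.
observed = templates["printer_1"] + 0.1 * np.random.default_rng(1).normal(size=(3, 3))
best = max(templates, key=lambda k: ncc(observed, templates[k]))
print(best)  # printer_1
```

The printer identified this way selects the color calibration matrix to apply to the FM's color squares.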

  20. Color and Contour Based Identification of Stem of Coconut Bunch

    Science.gov (United States)

    Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin

    2017-08-01

    Vision is a key component of artificial intelligence and automated robotics; sensors and cameras are a robot's organs of sight. Only through them can a robot locate itself or identify the shape of a regular or irregular object. This paper presents a method for identifying an object based on color and contour recognition, using a camera and digital image processing techniques, for robotic applications. To identify the contour, a shape-matching technique is used, which takes input data from the provided database and uses it to identify the contour by checking for a shape match. The shape match is based on iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. The HSV data, together with this iteration, is used to identify a quadrilateral, which is the required contour. The algorithm can also be used in a non-deterministic plane, using HSV values exclusively.
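The HSV range check described above can be sketched with the standard library's colorsys module (the hue window and the saturation/value floors are hypothetical values, not the paper's database entries):

```python
import colorsys

def in_hsv_range(rgb, h_range, s_min=0.3, v_min=0.3):
    """Check whether an 8-bit RGB pixel falls in a target hue range (degrees)
    with sufficient saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    h_deg = h * 360.0
    lo, hi = h_range
    return lo <= h_deg <= hi and s >= s_min and v >= v_min

# Hypothetical hue window for the green stem of a coconut bunch.
stem_hue = (60, 150)
print(in_hsv_range((80, 160, 40), stem_hue))   # greenish pixel -> True
print(in_hsv_range((200, 40, 40), stem_hue))   # reddish pixel -> False
```

Pixels passing the HSV test would then be grouped into candidate regions whose contours are checked for the quadrilateral shape.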

  1. 4K x 2K pixel color video pickup system

    Science.gov (United States)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera): even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to new color-separation optics so that their pixel sampling patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images while no image sensors yet exist for such images.

  2. Visible Color and Photometry of Bright Materials on Vesta

    Science.gov (United States)

    Schroder, S. E.; Li, J. Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn Framing Camera (FC) collected images of the surface of Vesta at a pixel scale of 70 m in the High Altitude Mapping Orbit (HAMO) phase through its clear and seven color filters spanning from 430 nm to 980 nm. The surface of Vesta displays a large diversity in its brightness and colors, evidently related to the diverse geology [1] and mineralogy [2]. Here we report a detailed investigation of the visible colors and photometric properties of the apparently bright materials on Vesta in order to study their origin. The global distribution and the spectroscopy of bright materials are discussed in companion papers [3, 4], and the synthesis results about the origin of Vestan bright materials are reported in [5].

  3. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  4. Limits in feature-based attention to multiple colors.

    Science.gov (United States)

    Liu, Taosheng; Jigo, Michael

    2017-11-01

    Attention to a feature enhances the sensory representation of that feature. Although much has been learned about the properties of attentional modulation when attending to a single feature, the effectiveness of attending to multiple features is not well understood. We investigated this question in a series of experiments using a color-detection task while varying the number of attended colors in a cueing paradigm. Observers were shown either a single cue, two cues, or no cue (baseline) before detecting a coherent color target. We measured detection threshold by varying the coherence level of the target. Compared to the baseline condition, we found consistent facilitation of detection performance in the one-cue and two-cue conditions, but performance in the two-cue condition was lower than that in the one-cue condition. In the final experiment, we presented a 50% valid cue to emulate the situation in which observers were only able to attend a single color in the two-cue condition, and found equivalent detection thresholds with the standard two-cue condition. These results indicate a limit in attending to two colors and further imply that observers could effectively attend a single color at a time. Such a limit is likely due to an inability to maintain multiple active attentional templates for colors.

  5. Comparison of digital intraoral scanners by single-image capture system and full-color movie system.

    Science.gov (United States)

    Yamamoto, Meguru; Kataoka, Yu; Manabe, Atsufumi

    2017-01-01

    The use of dental computer-aided design/computer-aided manufacturing (CAD/CAM) restoration is rapidly increasing. This study was performed to evaluate the marginal and internal cement thickness and the adhesive gap of internal cavities comprising CAD/CAM materials using two digital impression acquisition methods and micro-computed tomography. Images obtained by a single-image acquisition system (Bluecam Ver. 4.0) and a full-color video acquisition system (Omnicam Ver. 4.2) were divided into the BL and OM groups, respectively. Silicone impressions were prepared from an ISO-standard metal mold, and CEREC Stone BC and New Fuji Rock IMP were used to create working models (n=20) in the BL and OM groups (n=10 per group), respectively. Individual inlays were designed in a conventional manner using designated software, and all restorations were prepared using CEREC inLab MC XL. These were assembled with the corresponding working models used for measurement, and the level of fit was examined by three-dimensional analysis based on micro-computed tomography. Significant differences in the marginal and internal cement thickness and adhesive gap spacing were found between the OM and BL groups. The full-color movie capture system appears to be a more optimal restoration system than the single-image capture system.

  6. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  7. Color correction for chromatic distortion in a multi-wavelength digital holographic system

    International Nuclear Information System (INIS)

    Lin, Li-Chien; Huang, Yi-Lun; Tu, Han-Yen; Lai, Xin-Ji; Cheng, Chau-Jern

    2011-01-01

    A multi-wavelength digital holographic (MWDH) system has been developed to record and reconstruct color images. In comparison to digital cameras, however, high-quality color reproduction is difficult to achieve because of imperfections in the light sources, optical components, optical recording devices and recording processes. Thus, we face the problem of correcting the colors altered during the digital holographic process. We therefore propose a color correction scheme to correct the chromatic distortion caused by the MWDH system. The scheme consists of two steps: (1) creating a color correction profile and (2) applying it to the correction of the distorted colors. To create the color correction profile, we develop two algorithms: the sequential algorithm and the integrated algorithm. The ColorChecker is used to generate the distorted colors and their desired corrected colors. The relationship between these two sets of color patches is fitted to a specific mathematical model, whose parameters are estimated to create the profile. The profile is then used to correct the color distortion of images, capturing and preserving the original vibrancy of the reproduced colors for different reconstructed images.
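A minimal stand-in for such a profile is a 3x3 correction matrix fitted by linear least squares from ColorChecker patch pairs (the paper's sequential and integrated algorithms are more elaborate; the simulated distortion below is an assumption):

```python
import numpy as np

# Reference ColorChecker patch values and their simulated distorted captures.
rng = np.random.default_rng(7)
M_true = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.90, 0.08],
                   [0.00, 0.10, 1.20]])
reference = rng.uniform(0, 1, size=(24, 3))       # 24 ColorChecker patches
distorted = reference @ np.linalg.inv(M_true).T   # simulated system distortion

# Fit the correction matrix M so that distorted @ M ~ reference.
M, *_ = np.linalg.lstsq(distorted, reference, rcond=None)
corrected = distorted @ M
print(np.allclose(corrected, reference, atol=1e-8))  # True
```

With real captures the fit is only approximate, and the residual over the 24 patches measures how well the linear profile models the holographic distortion.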

  8. CBF tomographic measurement with the scintillation camera

    International Nuclear Information System (INIS)

    Kayayan, R.; Philippon, B.; Pehlivanian, E.

    1989-01-01

    Single photon emission computed tomography (SPECT) allows calculation of regional cerebral blood flow (CBF) in multiple cross-sections of the human brain. The method of Kanno and Lassen is utilized, and a study of reproducibility in terms of the number of integrations and the integration period is performed by computer simulation and by an experimental study with a gamma camera. Finally, the possibility of calculating regional cerebral blood flow with a double-headed rotating gamma camera by inert gas inhalation, such as xenon-133, is discussed [fr

  9. Demosaicking algorithm for the Kodak-RGBW color filter array

    Science.gov (United States)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full-color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  10. 3D Rainbow Particle Tracking Velocimetry

    Science.gov (United States)

    Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang

    2017-11-01

    A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X, Y) positions and the colored scattered-light intensity (encoding Z) of white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels in each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Different colored intensity gradients are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projection frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The characterization of the camera-projector system is presented, considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
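The depth-from-color idea can be sketched with a simplified encoding in which two complementary linear ramps are projected and their ratio cancels the unknown per-particle scattering efficiency (the actual rainbow encoding and calibration are more involved; the ramps below are assumptions):

```python
import numpy as np

# Projector encodes depth z in [0, 1] as complementary linear ramps:
# red intensity = z, blue intensity = 1 - z (green left constant).
def decode_depth(red, blue):
    """Recover z from scattered red/blue intensities; taking the ratio
    cancels the unknown per-particle scattering gain."""
    return red / (red + blue)

z_true = np.array([0.10, 0.35, 0.62, 0.90])
scatter = np.array([0.7, 1.3, 0.5, 1.0])     # unknown particle-dependent gain
red = scatter * z_true
blue = scatter * (1 - z_true)
z_est = decode_depth(red, blue)
print(np.allclose(z_est, z_true))  # True
```

The (X, Y) pixel position of each particle then supplies the remaining two coordinates of the 3D track.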

  11. DistancePPG: Robust non-contact vital signs monitoring using a camera

    Science.gov (United States)

    Kumar, Mayank; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    Vital signs such as pulse rate and breathing rate are currently measured using contact probes, but non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, it remains challenging for people with darker skin tones, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG combines skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in each region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of the camera-based PPG estimate obtained using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released comprising synchronized video recordings of the face and pulse-oximeter-based ground truth recordings from the earlobe, for people with different skin tones, under different lighting conditions and for various motion scenarios. PMID:26137365
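The SNR gain from weighting region signals by their quality can be sketched as follows (distancePPG derives its weights from the video itself via a goodness metric; here the known waveform is used to form a signal-strength proxy purely to keep the illustration short, and the region strengths and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / 30.0)                 # 10 s of 30 fps video
pulse = np.sin(2 * np.pi * 1.2 * t)            # 72 bpm reference waveform

# Per-region PPG signals: the same pulse with region-dependent strength
# (perfusion / incident light) plus camera noise.
strengths = np.array([1.0, 0.5, 0.1, 0.05])
regions = [s * pulse + rng.normal(0.0, 0.5, t.size) for s in strengths]

# Weight each region by a signal-strength proxy, then combine.
weights = np.array([abs(np.dot(r, pulse)) for r in regions])
weights /= weights.sum()
combined = sum(w * r for w, r in zip(weights, regions))

def snr(sig):
    """Power along the pulse direction over residual power."""
    ref = pulse / np.linalg.norm(pulse)
    p_sig = np.dot(sig, ref) ** 2
    return p_sig / (np.dot(sig, sig) - p_sig)

print(snr(combined) > snr(np.mean(regions, axis=0)))  # True
```

Down-weighting weakly perfused or poorly lit regions keeps their noise from diluting the combined PPG estimate, which is the core of the weighted-average design.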

  12. Solid-phase single molecule biosensing using dual-color colocalization of fluorescent quantum dot nanoprobes

    Science.gov (United States)

    Liu, Jianbo; Yang, Xiaohai; Wang, Kemin; Wang, Qing; Liu, Wei; Wang, Dong

    2013-10-01

    The development of solid-phase surface-based single molecule imaging technology has attracted significant interest during the past decades. Here we demonstrate a sandwich hybridization method for highly sensitive detection of a single thrombin protein at a solid-phase surface based on the use of dual-color colocalization of fluorescent quantum dot (QD) nanoprobes. Green QD560-modified thrombin binding aptamer I (QD560-TBA I) was deposited on a positively charged poly(l-lysine) assembled layer, followed by bovine serum albumin blocking. This allowed the thrombin protein to mediate the binding of the easily detectable red QD650-modified thrombin binding aptamer II (QD650-TBA II) to the QD560-TBA I substrate. Thus, the presence of the target thrombin can be determined based on fluorescent colocalization measurements of the nanoassemblies, without target amplification or probe separation. The detection limit of this assay reached 0.8 pM. This fluorescent colocalization assay has enabled single molecule recognition in a separation-free detection format, and can serve as a sensitive biosensing platform that greatly suppresses false-positive signals from nonspecific adsorption. This method can be extended to other areas such as multiplexed immunoassay, single cell analysis, and real-time biomolecule interaction studies.
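
The colocalization readout can be sketched as a simple spatial match: a detection event is a green-QD spot with a red-QD spot within a small distance, signalling the sandwich nanoassembly. Spot positions and the distance cutoff are illustrative inputs; a real pipeline would first localize spots in each fluorescence channel.

```python
import math

# Hedged sketch of dual-color colocalization counting: count green spots
# that have at least one red spot within max_dist (2D image coordinates).
# All inputs are hypothetical stand-ins for localized fluorescence spots.

def count_colocalized(green, red, max_dist):
    n = 0
    for gx, gy in green:
        if any(math.hypot(gx - rx, gy - ry) <= max_dist for rx, ry in red):
            n += 1
    return n
```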

  13. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
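
The weighted matrix analysis described above reduces to a small computation: each criterion score is multiplied by its importance weight and summed per tendered system. The criterion names below are invented for illustration.

```python
# Minimal sketch of a weighted-matrix evaluation: rank candidate gamma
# camera systems by the weighted sum of their per-criterion scores.
# Criteria and weights here are hypothetical examples.

def weighted_score(scores, weights):
    """scores, weights: dicts keyed by criterion name."""
    return sum(scores[c] * weights[c] for c in weights)
```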

  14. 7 CFR 29.3012 - Color symbols.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Color symbols. 29.3012 Section 29.3012 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Color symbols. As applied to Burley, single color symbols are as follows: L—buff, F—tan, R—red, D—dark...

  15. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity of O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
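
The measurement principle lends itself to a simple model, sketched here under the assumption of an ideal camera with constant sensitivity inside the exposure window: the normalized summed pixel value is proportional to the overlap between the delayed light pulse and the exposure interval, so sweeping the delay traces out a trapezoid whose edges reveal the trigger delay and exposure time.

```python
# Idealized model of the measurement: a light pulse starting at `delay`
# after the trigger overlaps the exposure window [t_open, t_open + t_exp];
# the normalized frame sum is proportional to the overlap length.

def overlap(delay, pulse_len, t_open, t_exp):
    start = max(delay, t_open)
    end = min(delay + pulse_len, t_open + t_exp)
    return max(0.0, end - start)
```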

  16. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.

  17. Variational Histogram Equalization for Single Color Image Defogging

    Directory of Open Access Journals (Sweden)

    Li Zhou

    2016-01-01

    Full Text Available Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods focus on recovering an accurate scene transmission while overlooking their own distortion artifacts and high complexity. Different from previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of a fog-free image can be estimated in HSI color space, since the airlight is inferred through a color attenuation prior in advance. To cut down the time consumption, a general variation filter is proposed to obtain a numerical solution from the revised framework. After the intensity component is estimated, it is easy to infer the saturation component from the physical degradation model in the saturation channel. Accordingly, the fog-free image can be restored with the estimated intensity and saturation components. Finally, the proposed method is tested on several foggy images and assessed by two no-reference indexes. Experimental results reveal that our method is superior to three groups of relevant, state-of-the-art defogging methods.

  18. Laparoendoscopic single site (LESS) in vivo suturing using a magnetic anchoring and guidance system (MAGS) camera in a porcine model: impact on ergonomics and workload.

    Science.gov (United States)

    Yin, Gang; Han, Woong Kyu; Faddegon, Stephen; Tan, Yung Khan; Liu, Zhuo-Wei; Olweny, Ephrem O; Scott, Daniel J; Cadeddu, Jeffrey A

    2013-01-01

    To compare the ergonomics and workload of the surgeon during single-site suturing while using the magnetic anchoring and guidance system (MAGS) camera vs a conventional laparoscope. Seven urologic surgeons were enrolled and divided into an expert group (n=2) and a novice group (n=5) according to their laparoendoscopic single-site (LESS) experience. Each surgeon performed 2 conventional LESS and 2 MAGS camera-assisted LESS vesicostomy closures in a porcine model. A Likert scale (scoring 1-5) questionnaire assessing workload, ergonomics, technical difficulty, visualization, and needle handling, as well as a validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire, was used to evaluate the tasks and workloads. MAGS LESS suturing was universally favored by expert and novice surgeons compared with conventional LESS in workload (3.4 vs 4.2), ergonomics (3.4 vs 4.4), technical challenge (3.3 vs 4.3), visualization (2.4 vs 3.3), and needle handling (3.1 vs 3.9, respectively). NASA-TLX assessments found MAGS LESS suturing significantly decreased the workload in physical demand (P=.004), temporal demand (P=.017), and effort (P=.006). External instrument clashing was significantly reduced in MAGS LESS suturing (P<.001). The total operative time of MAGS LESS suturing was comparable to that of conventional LESS (P=.89). MAGS camera technology significantly decreased surgeon workload and improved ergonomics. Nevertheless, LESS suturing and knot tying remains a challenging task that requires training, regardless of which camera is used. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Auto white balance method using a pigmentation separation technique for human skin color

    Science.gov (United States)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2017-02-01

    The human visual system maintains the perception of the colors of an object across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the colors of a photograph as if it had been taken under a certain light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color. However, such estimation is adversely affected by differences in facial colors among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that separates a human skin color image into melanin, hemoglobin and shading components. Pigment densities have a uniform property within the same race and can be calculated from the melanin and hemoglobin components in the face. We thus propose a method that uses the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.

  20. Human preference for individual colors

    Science.gov (United States)

    Palmer, Stephen E.; Schloss, Karen B.

    2010-02-01

    Color preference is an important aspect of human behavior, but little is known about why people like some colors more than others. Recent results from the Berkeley Color Project (BCP) provide detailed measurements of preferences among 32 chromatic colors as well as other relevant aspects of color perception. We describe the fit of several color preference models, including ones based on cone outputs, color-emotion associations, and Palmer and Schloss's ecological valence theory. The ecological valence theory postulates that color serves an adaptive "steering" function, analogous to taste preferences, biasing organisms to approach advantageous objects and avoid disadvantageous ones. It predicts that people will tend to like colors to the extent that they like the objects that are characteristically that color, averaged over all such objects. The ecological valence theory predicts 80% of the variance in average color preference ratings from the Weighted Affective Valence Estimates (WAVEs) of correspondingly colored objects, much more variance than any of the other models. We also describe how hue preferences for single colors differ as a function of gender, expertise, culture, social institutions, and perceptual experience.
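
As a minimal illustration of a WAVE computation, one can average the valence ratings of characteristically colored objects, weighted by how strongly each object is associated with the color. Weighting by association strength is our reading of the theory, stated here as an assumption rather than a formula quoted from the paper.

```python
# Illustrative Weighted Affective Valence Estimate for one color: the
# association-weighted mean of object valence ratings. Inputs are
# hypothetical ratings, not BCP data.

def wave(valences, association_weights):
    num = sum(v * w for v, w in zip(valences, association_weights))
    return num / sum(association_weights)
```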

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  2. Photography by Cameras Integrated in Smartphones as a Tool for Analytical Chemistry Represented by a Butyrylcholinesterase Activity Assay

    Directory of Open Access Journals (Sweden)

    Miroslav Pohanka

    2015-06-01

    Full Text Available Smartphones are popular devices frequently equipped with sensitive sensors and great computational ability. Despite the widespread availability of smartphones, practical uses in analytical chemistry are limited, though some papers have proposed promising applications. In the present paper, a smartphone is used as a tool for the determination of cholinesterasemia, i.e., the determination of the biochemical marker butyrylcholinesterase (BChE). The work demonstrates the suitability of a smartphone-integrated camera for analytical purposes. Paper strips soaked with indoxylacetate were used for the determination of BChE activity, while the standard Ellman's assay was used as a reference measurement. In the smartphone-based assay, BChE converted indoxylacetate to indigo blue and the coloration was photographed using the phone's integrated camera. An RGB color model was analyzed and color values for the individual color channels were determined. The assay was verified using plasma samples and samples containing pure BChE, and validated using Ellman's assay. The smartphone assay proved to be reliable and applicable for routine diagnoses where BChE serves as a marker (liver function tests, some poisonings, etc.). It can be concluded that the assay is of practical applicability because of the relevance of the results.
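
The RGB readout step can be sketched simply: average each color channel over a region of interest of the photographed strip, the kind of value one would then relate to BChE activity through a calibration curve. The (H, W, 3) array layout and the ROI handling are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of the colorimetric readout: mean R, G, B values over an
# optional rectangular region of interest of a photographed test strip.

def mean_rgb(image, roi=None):
    """image: (H, W, 3) array; roi: (row0, row1, col0, col1) or None."""
    img = np.asarray(image, dtype=float)
    if roi is not None:
        r0, r1, c0, c1 = roi
        img = img[r0:r1, c0:c1]
    return img.reshape(-1, 3).mean(axis=0)   # (R_mean, G_mean, B_mean)
```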

  3. New feature of the neutron color image intensifier

    Science.gov (United States)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-06-01

    We developed prototype neutron color image intensifiers with high-sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and wider dynamic range could be simultaneously attained than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm^2/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  4. New feature of the neutron color image intensifier

    International Nuclear Information System (INIS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-01-01

    We developed prototype neutron color image intensifiers with high-sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and wider dynamic range could be simultaneously attained than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm^2/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  5. Development of multi-color scintillator based X-ray image intensifier

    International Nuclear Information System (INIS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi

    2004-01-01

    A multi-color scintillator based high-sensitivity, wide dynamic range and long-life X-ray image intensifier has been developed. A europium-activated Y2O2S scintillator, emitting red, green and blue photons of different intensities, is utilized as the output fluorescent screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, the sensitivity of the red color component becomes six times higher than that of the conventional image intensifier. Simultaneous emission of a moderate green color and a weak blue color covers different sensitivity regions, which widens the dynamic range by nearly two orders of magnitude. With this image intensifier, it is possible to simultaneously image complex objects containing materials with widely different X-ray transmission, from paper, water or plastic to heavy metals. This high-sensitivity intensifier, operated at lower X-ray exposure, causes less degradation of the scintillator materials and less colorization of the output screen glass, and thus achieves a longer lifetime. This color scintillator based image intensifier is being introduced for X-ray inspection in various fields.

  6. Gate Simulation of a Gamma Camera

    International Nuclear Information System (INIS)

    Abidi, Sana; Mlaouhi, Zohra

    2008-01-01

    Medical imaging is a very important diagnostic tool because it allows exploration of the internal human body. Nuclear imaging is an imaging technique used in nuclear medicine: the distribution of a radiotracer in the body is determined by detecting the radiation it emits using a detection device. Two methods are commonly used: Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In this work we are interested in the modelling of a gamma camera. The simulation is based on the Monte-Carlo method, in particular the Gate simulator (Geant4 Application for Tomographic Emission). We have simulated a clinical gamma camera called GAEDE (GKS-1) and validated these simulations by experiments. The purpose of this work is to monitor the performance of the gamma camera, optimize the detector performance and improve image quality. (Author)

  7. Digital data storage of core image using high resolution full color core scanner; Kokaizodo full color scanner wo mochiita core image no digital ka

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, W; Ujo, S; Osato, K; Takasugi, S [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper reports on the digitization of core images using a new type of core scanner system. The system consists of a core scanner unit (equipped with a CCD camera), a personal computer and ancillary devices. It is a modification of the old type system, with the measurable core length extended to 100 cm per 3 scans and the resolution enhanced to 5100 pixels/m (1024 pixels/m in the old type). The camera was changed to a color model, and the A/D conversion was improved to 24-bit full color. A detail-reproduction test on the digital images of this core scanner showed that objects can be identified down to about the size of the pixels constituting the image when the best contrast is obtained between the objects and the background, and an evaluation test on the visibility of concave and convex features on the core surface showed that reproducibility is not very good for large features. 2 refs., 6 figs.

  8. Derivation of Color Confusion Lines for Pseudo-Dichromat Observers from Color Discrimination Thresholds

    Directory of Open Access Journals (Sweden)

    Kahiro Matsudaira

    2011-05-01

    Full Text Available The objective is to develop a method of defining color confusion lines in the display RGB color space through color discrimination tasks. In the experiment, reference and test square patches were presented side by side on a CRT display. The subject's task was to set the test color where the color difference from the reference was just noticeable to him/her. In a single trial, the test color was only adjustable along one of 26 directions around the reference. Thus 26 colors with just noticeable difference (JND) were obtained, making up a tube-like or ellipsoidal shape around each reference. With color-anomalous subjects, the major axes of these shapes should be parallel to color confusion lines that have a common orientation vector corresponding to one of the cone excitation axes L, M, or S. In our method, the orientation vector was determined by minimizing the sum of the squares of the distances from the JND colors to each confusion line. To assess the performance of the method, the orientation vectors obtained by pseudo-dichromats (color-normal observers with a dichromat simulator) were compared to those theoretically calculated from the color vision model used in the simulator.
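
The fitting step above has a standard closed form: the line direction that minimizes the summed squared distances from the JND colors to a line through the reference is the first principal axis of the reference-centered points. This SVD formulation is a well-known equivalent, assumed here rather than taken verbatim from the paper.

```python
import numpy as np

# Sketch: best-fit orientation of a confusion line through the reference
# color, via SVD of the JND colors centered on the reference. The first
# right singular vector spans the least-squares line direction.

def confusion_line_direction(jnd_colors, reference):
    d = np.asarray(jnd_colors, float) - np.asarray(reference, float)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return vt[0]   # unit vector along the best-fit line (sign-ambiguous)
```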

  9. Determination of the impact of RGB points cloud attribute quality on color-based segmentation process

    Directory of Open Access Journals (Sweden)

    Bartłomiej Kraszewski

    2015-06-01

    Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, acquired with a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken by various digital cameras. The images were acquired by an SLR Nikon D3X, an SLR Canon D200 integrated with the laser scanner, a compact camera Panasonic TZ-30 and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the test data was performed with a developed application named “RGB Segmentation”. The application was based on the public Point Cloud Library (PCL) and allowed subsets of points fulfilling the segmentation criteria to be extracted from the source point cloud using the region growing method. Using the developed application, the segmentation of four tested point clouds containing different RGB attributes from various images was performed. Evaluation of the segmentation process was based on a comparison of segments acquired using the developed application and those extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects and the correctness of the segmentation process. The best segmentation correctness and most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results it was found that the quality of the RGB attributes of the point cloud affected only the number of identified objects. For the correctness of the segmentation, as well as its error, no apparent relationship between the quality of the color information and the result of the process was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point cloud
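
Color-based region growing of the kind the "RGB Segmentation" tool performs can be sketched on a 2D grid for brevity: starting from a seed, neighbors join the segment if their RGB distance to the seed color is below a threshold. Comparing against the seed color (rather than a running region mean) is a simplifying assumption; the real tool operates on a 3D point cloud via PCL.

```python
from collections import deque

# Simplified color-based region growing: breadth-first expansion from a
# seed cell over 4-connected neighbors whose color is close to the seed's.

def grow_region(colors, seed, thresh):
    """colors: dict[(x, y)] -> (r, g, b); returns the set of grown cells."""
    def close(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) <= thresh ** 2
    region, queue = {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in colors and nb not in region and close(colors[nb], colors[seed]):
                region.add(nb)
                queue.append(nb)
    return region
```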

  10. The z~4 Lyman Break Galaxies: Colors and Theoretical Predictions

    Science.gov (United States)

    Idzi, Rafal; Somerville, Rachel; Papovich, Casey; Ferguson, Henry C.; Giavalisco, Mauro; Kretchmer, Claudia; Lotz, Jennifer

    2004-01-01

    We investigate several fundamental properties of z~4 Lyman break galaxies by comparing observations with the predictions of a semianalytic model based on the cold dark matter theory of hierarchical structure formation. We use a sample of B435-dropouts from the Great Observatories Origins Deep Survey and complement the Advanced Camera for Surveys optical B435, V606, i775, and z850 data with the Very Large Telescope Infrared Spectrometer and Array Camera J, H, and Ks observations. We extract B435-dropouts from our semianalytic mock catalog using the same color criteria and magnitude limits that were applied to the observed sample. We find that the i775-Ks colors of the model-derived and observed B435-dropouts are in good agreement. However, we find that the i775-z850 colors differ significantly, indicating perhaps that either too little dust or an incorrect extinction curve has been used. Motivated by the reasonably good agreement between the model and observed data, we present predictions for the stellar masses, star formation rates, and ages for the z~4 Lyman break sample. We find that according to our model, the color selection criteria used to select our z~4 sample survey 67% of all galaxies at this epoch down to z850. Based on observations obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS5-26555. Based on observations collected at the European Southern Observatory, Chile (ESO programmes 168.A-0485, 64.0-0643, 66.A-0572, and 68.A-0544).

  11. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
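
The plane-induced homography step can be sketched with the textbook direct linear transform (DLT): given at least four ground-plane point correspondences between two overlapping views, solve for H, then map new foot locations from one camera into the other to assign a consistent label. The DLT shown is a standard estimator assumed as a stand-in; the paper does not prescribe a specific solver.

```python
import numpy as np

# Standard DLT sketch: estimate a 3x3 homography H from N>=4 point pairs
# (x, y) -> (u, v), then map a point through H in homogeneous coordinates.

def fit_homography(src, dst):
    """src, dst: sequences of (N >= 4) corresponding 2D points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)     # null-space vector of the DLT system
    return H / H[2, 2]

def map_point(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```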

  12. Imaging tristimulus colorimeter for the evaluation of color in printed textiles

    Science.gov (United States)

    Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.

    1999-03-01

    The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g. the ΔE_CMC color difference formula. Image based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (under perceptibility limits) per-region ΔE_CMC values can be measured on real textile samples.
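
The POCS idea can be sketched on a toy problem: alternately project an estimate onto each convex constraint set until it settles in their intersection. The box and hyperplane sets below are illustrative stand-ins for the paper's actual colorimetric constraint sets.

```python
import numpy as np

# Toy POCS sketch: find x satisfying lo <= x <= hi and a.x = b by
# alternating projections onto the box and the hyperplane.

def pocs(x0, lo, hi, a, b, iters=200):
    x = np.asarray(x0, float)
    a = np.asarray(a, float)
    for _ in range(iters):
        x = np.clip(x, lo, hi)                 # project onto the box
        x = x + (b - a @ x) / (a @ a) * a      # project onto a.x = b
    return x
```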

  13. Commissioning of the advanced light source dual-axis streak camera

    International Nuclear Information System (INIS)

    Hinkson, J.; Keller, R.; Byrd, J.

    1997-05-01

    A dual-axis streak camera, Hamamatsu model C5680, has been installed on the Advanced Light Source photon-diagnostics beam-line to investigate electron-beam parameters. During its commissioning, the camera has been used to measure single-bunch length vs. current, relative bunch charge in adjacent RF buckets, and bunch-phase stability. In this paper the authors describe the visible-light branch of the diagnostics beam-line, the streak-camera installation, and the timing electronics. They show graphical results of beam measurements taken during a variety of accelerator conditions.

  14. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Science.gov (United States)

    O'Connor, Kelly M; Nathan, Lucas R; Liberati, Marjorie R; Tingley, Morgan W; Vokoun, Jason C; Rittenhouse, Tracy A G

    2017-01-01

    Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species, often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two-camera array increased survey detection by an average of 80% (range 40-128%) over the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. 
We suggest that researchers a priori identify
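
    The reported gain from adding a second camera is broadly consistent with a simple independence model, in which each camera in the array detects with the same per-camera probability and failures are independent. This independence assumption is ours, for illustration only; it is not the occupancy model used in the study:

```python
def array_detection_prob(p_single, n_cameras):
    """Detection probability of an n-camera array, assuming each camera
    detects independently with per-camera probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_cameras

# e.g. a hypothetical per-camera probability of 0.3:
p1 = array_detection_prob(0.3, 1)   # ≈ 0.30
p2 = array_detection_prob(0.3, 2)   # ≈ 0.51, a ~70% relative increase
```

    Under this model the relative benefit of extra cameras shrinks as the single-camera probability rises, matching the finding that frequently detected species gained least from larger arrays.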

  15. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Directory of Open Access Journals (Sweden)

    Kelly M O'Connor

    Full Text Available Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species, often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two-camera array increased survey detection by an average of 80% (range 40-128%) over the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. 
We suggest that researchers a priori

  16. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method therefore enforces the rightful ownership of the watermarked image, since no version of the image exists other than the watermarked one. We also take into consideration the Human Visual System (HVS), so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only a binary watermark pattern is supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
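
    As an illustration of embedding directly in the CFA domain, the sketch below blends each raw Bayer sample toward mid-gray wherever a grey-scale watermark mask is active. The blending rule and the `strength` parameter are assumptions made here for illustration; the paper's HVS-based adaptation is not reproduced:

```python
import numpy as np

def embed_visible_watermark(cfa, mask, strength=0.35):
    """Embed a visible watermark directly in a Bayer CFA mosaic.

    cfa      : 2-D raw mosaic (float, 0..1), one colour sample per pixel
    mask     : 2-D float array in [0, 1]; grey-scale patterns are
               supported, not just binary ones
    strength : illustrative blending factor (hypothetical value)
    """
    # Blend each raw sample toward mid-gray where the mask is active,
    # so the mark stays perceptible but not obtrusive after demosaicing.
    return cfa * (1 - strength * mask) + 0.5 * strength * mask

cfa = np.random.rand(8, 8)                 # stand-in for a raw capture
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
wm = embed_visible_watermark(cfa, mask)
```

    Because the watermark is applied before compression and storage, every stored copy of the image already carries it.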

  17. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 × 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications that require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts, including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  18. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    Directory of Open Access Journals (Sweden)

    Jai-Eun Kim

    2016-07-01

    Full Text Available Communication performance in the color-independent visual multiple-input multiple-output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor in the camera must serve as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up a color-space-based region of interest (ROI) in which an LEA is likely to be located, and then apply the Harris corner detection method. Next, we use Kalman filtering for robust tracking, predicting the most probable location of the LEA as the relative position between the camera and the LEA varies. In the last step of our proposed method, a perspective projection is used to correct the distorted image, which improves the symbol decision accuracy. Finally, through numerical simulation, we demonstrate robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
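
    The predict-then-correct tracking loop can be sketched with a standard constant-velocity Kalman filter over the LEA centroid. The state layout, noise levels, and initial covariance below are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    # Constant-velocity model for the LEA centroid, state = [x, y, vx, vy]
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # only the centroid is observed
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_step(x, P, z, F, H, Q, R):
    # Predict the most probable LEA location, then correct with the
    # measured centroid z (e.g. from Harris-corner detection in the ROI).
    x, P = F @ x, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
x, P = np.zeros(4), 10.0 * np.eye(4)
for k in range(1, 20):                    # LEA drifting at constant velocity
    z = np.array([2.0 * k, 1.0 * k])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

    The prediction step is what keeps tracking robust when the relative camera-LEA position changes between frames.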

  19. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    Science.gov (United States)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written in Microsoft Visual C++ with the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  20. Development Of A Multicolor Sub/millimeter Camera Using Microwave Kinetic Inductance Detectors

    Science.gov (United States)

    Schlaerth, James A.; Czakon, N. G.; Day, P. K.; Downes, T. P.; Duan, R.; Glenn, J.; Golwala, S. R.; Hollister, M. I.; LeDuc, H. G.; Maloney, P. R.; Mazin, B. A.; Noroozian, O.; Sayers, J.; Siegel, S.; Vayonakis, A.; Zmuidzinas, J.

    2011-01-01

    Microwave Kinetic Inductance Detectors (MKIDs) are superconducting resonators useful for detecting light from the millimeter-wave to the X-ray. These detectors are easily multiplexed, as the resonances can be tuned to slightly different frequencies, allowing hundreds of detectors to be read out simultaneously using a single feedline. The Multicolor Submillimeter Inductance Camera, MUSIC, will use 2304 antenna-coupled MKIDs in multicolor operation, with bands centered at wavelengths of 0.85, 1.1, 1.3 and 2.0 mm, beginning in 2011. Here we present the results of our demonstration instrument, DemoCam, containing a single 3-color array with 72 detectors and optics similar to MUSIC. We present sensitivities achieved at the telescope, and compare to those expected based upon laboratory tests. We explore the factors that limit the sensitivity, in particular electronics noise, antenna efficiency, and excess loading. We discuss mitigation of these factors, and how we plan to improve sensitivity to the level of background-limited performance for the scientific operation of MUSIC. Finally, we note the expected mapping speed and contributions of MUSIC to astrophysics, and in particular to the study of submillimeter galaxies. This research has been funded by grants from the National Science Foundation, the Gordon and Betty Moore Foundation, and the NASA Graduate Student Researchers Program.

  1. Hewlett-Packard's Approaches to Full Color Reflective Displays

    Science.gov (United States)

    Gibson, Gary

    2012-02-01

    Reflective displays are desirable in applications requiring low power or daylight readability. However, commercial reflective displays are currently either monochrome or capable of only dim color gamuts. Low cost, high-quality color technology would be rapidly adopted in existing reflective display markets and would enable new solutions in areas such as retail pricing and outdoor digital signage. Technical breakthroughs are required to enable bright color gamuts at reasonable cost. Pixel architectures that rely on pure reflection from a single layer of side-by-side primary-color sub-pixels use only a fraction of the display area to reflect incident light of a given color and are, therefore, unacceptably dark. Reflective devices employing stacked color primaries offer the possibility of a somewhat brighter color gamut but can be more complex to manufacture. In this talk, we describe HP's successes in addressing these fundamental challenges and creating both high performance stacked-primary reflective color displays as well as inexpensive single layer prototypes that provide good color. Our stacked displays utilize a combination of careful light management techniques, proprietary high-contrast electro-optic shutters, and highly transparent active-matrix TFT arrays based on transparent metal oxides. They also offer the possibility of relatively low cost manufacturing through roll-to-roll processing on plastic webs. To create even lower cost color displays with acceptable brightness, we have developed means for utilizing photoluminescence to make more efficient use of ambient light in a single layer device. Existing reflective displays create a desired color by reflecting a portion of the incident spectrum while absorbing undesired wavelengths. We have developed methods for converting the otherwise-wasted absorbed light to desired wavelengths via tailored photoluminescent composites. 
Here we describe a single active layer prototype display that utilizes these materials

  2. Study of the fluorescence blinking behavior of single F2 color centers in LiF crystal

    International Nuclear Information System (INIS)

    Boichenko, S V; Koenig, K; Zilov, S A; Dresvianskiy, V P; Rakevich, A L; Kuznetsov, A V; Bartul, A V; Martynovich, E F; Voitovich, A P

    2014-01-01

    Using the confocal fluorescence microscopy technique, we experimentally observed the luminescence of single F₂ color centers in a LiF crystal. We find that the fluorescence shows blinking behavior, and we show that this phenomenon is caused by reorientation of the F₂ center occurring during the experiment. The ratio of the luminescence intensities of differently oriented centers is assessed theoretically for two different experimental configurations. The calculated ratios are in good agreement with the experimental results.

  3. COLORS OF ELLIPTICALS FROM GALEX TO SPITZER

    Energy Technology Data Exchange (ETDEWEB)

    Schombert, James M., E-mail: jschombe@uoregon.edu [Department of Physics, University of Oregon, Eugene, OR 97403 (United States)

    2016-12-01

    Multi-color photometry is presented for a large sample of local ellipticals selected by morphology and isolation. The sample uses data from the Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey (SDSS), Two Micron All-Sky Survey (2MASS), and Spitzer to cover the filters NUV, ugri, JHK, and 3.6 μm. Various two-color diagrams, using the half-light aperture defined in the 2MASS J filter, are very coherent from color to color, meaning that galaxies defined to be red in one color are always red in other colors. Comparison to globular cluster colors demonstrates that ellipticals are not composed of a single-age, single-metallicity (e.g., [Fe/H]) stellar population, but require a multi-metallicity model using a chemical enrichment scenario. Such a model is sufficient to explain two-color diagrams and the color–magnitude relations for all colors using only metallicity as a variable on a solely 12 Gyr stellar population, with no evidence of stars younger than 10 Gyr. The [Fe/H] values that match galaxy colors range from −0.5 to +0.4, much higher (and older) than population characteristics deduced from Lick/IDS line-strength system studies, indicating an inconsistency between galaxy colors and line index values for reasons unknown. The NUV colors have unusual behavior, signaling the rise and fall of the UV upturn with elliptical luminosity. Models with blue horizontal branch tracks can reproduce this behavior, indicating the UV upturn is strictly a metallicity effect.

  4. COLORS OF ELLIPTICALS FROM GALEX TO SPITZER

    International Nuclear Information System (INIS)

    Schombert, James M.

    2016-01-01

    Multi-color photometry is presented for a large sample of local ellipticals selected by morphology and isolation. The sample uses data from the Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey (SDSS), Two Micron All-Sky Survey (2MASS), and Spitzer to cover the filters NUV, ugri, JHK, and 3.6 μm. Various two-color diagrams, using the half-light aperture defined in the 2MASS J filter, are very coherent from color to color, meaning that galaxies defined to be red in one color are always red in other colors. Comparison to globular cluster colors demonstrates that ellipticals are not composed of a single-age, single-metallicity (e.g., [Fe/H]) stellar population, but require a multi-metallicity model using a chemical enrichment scenario. Such a model is sufficient to explain two-color diagrams and the color–magnitude relations for all colors using only metallicity as a variable on a solely 12 Gyr stellar population, with no evidence of stars younger than 10 Gyr. The [Fe/H] values that match galaxy colors range from −0.5 to +0.4, much higher (and older) than population characteristics deduced from Lick/IDS line-strength system studies, indicating an inconsistency between galaxy colors and line index values for reasons unknown. The NUV colors have unusual behavior, signaling the rise and fall of the UV upturn with elliptical luminosity. Models with blue horizontal branch tracks can reproduce this behavior, indicating the UV upturn is strictly a metallicity effect.

  5. Control system for several rotating mirror camera synchronization operation

    Science.gov (United States)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization, precise measurement, and time delay parts), the shutter control unit, the motor driving unit, and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.

  6. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  7. Design and Expected Performance of GISMO-2, a Two Color Millimeter Camera for the IRAM 30 m Telescope

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Dwek, Eli; Hilton, Gene; Fixsen, Dale J.; Irwin, Kent; Jhabvala, Christine; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; hide

    2014-01-01

    We present the main design features of the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based backshort-under-grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  8. High-speed potato grading and quality inspection based on a color vision system

    Science.gov (United States)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape, and external defects such as greening, mechanical damage, rhizoctonia, silver scab, common scab, cracks, and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier-based shape classification technique. Features such as area, eccentricity, and central moments are used to discriminate between similarly colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
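
    The pixel-classification step can be sketched as a minimal Mahalanobis distance decision rule over color classes. The LDA projection stage is omitted here for brevity, and the class statistics are synthetic stand-ins, not data from the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic training classes of RGB pixels (e.g. healthy skin vs. defect)
cls_a = rng.normal([200, 160, 120], 5.0, size=(500, 3))
cls_b = rng.normal([80, 60, 40], 5.0, size=(500, 3))

def fit_class(pixels):
    # A mean vector and inverse covariance summarize each color class
    mu = pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(pixels, rowvar=False))
    return mu, inv_cov

classes = [fit_class(cls_a), fit_class(cls_b)]

def mahalanobis_label(px, classes):
    # Assign the pixel to the class with the smallest Mahalanobis distance
    d = [np.sqrt((px - mu) @ ic @ (px - mu)) for mu, ic in classes]
    return int(np.argmin(d))
```

    Unlike plain Euclidean distance, the Mahalanobis form accounts for each class's covariance, so elongated color clusters are classified correctly.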

  9. Layers of 'Cabo Frio' in 'Victoria Crater' (False Color)

    Science.gov (United States)

    2006-01-01

    This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover; the tip of the spectacular, layered Cabo Frio promontory itself is about 200 meters (about 650 feet) away; and the exposed rock layers are about 15 meters (about 50 feet) tall. This is an enhanced false-color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day (Sept. 28, 2006), using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  10. Compact 6 dB Two-Color Continuous Variable Entangled Source Based on a Single Ring Optical Resonator

    Directory of Open Access Journals (Sweden)

    Ning Wang

    2018-02-01

    Full Text Available Continuous-variable entangled optical beams at the degenerate wavelength of 0.8 μm or 1.5 μm have been investigated extensively, but separately. The two-color entangled states of these two useful wavelengths, with sufficiently high degrees of entanglement, still lag behind. In this work, we analyze the various limiting factors that affect the entanglement degree. On the basis of this, we successfully achieve 6 dB of two-color quadrature entangled light beams by improving the escape efficiency of the nondegenerate optical amplifier, the stability of the phase-locking servo system, and the detection efficiency. Our entangled source is constructed only from a single ring optical resonator, and thus is highly compact, which is suitable for applications in long-distance quantum communication networks.

  11. ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    ISPA offers ... Ten times better resolution than Anger cameras High efficiency single gamma counting Noise reduction by sensitivity to gamma energy ...for Single Photon Emission Computed Tomography (SPECT)

  12. Sky brightness and color measurements during the 21 August 2017 total solar eclipse.

    Science.gov (United States)

    Bruns, Donald G; Bruns, Ronald D

    2018-06-01

    The sky brightness was measured during the partial phases and during totality of the 21 August 2017 total solar eclipse. A tracking CCD camera with color filters and a wide-angle lens allowed measurements across a wide field of view, recording images every 10 s. The partially and totally eclipsed Sun was kept behind an occulting disk attached to the camera, allowing direct brightness measurements from 1.5° to 38° from the Sun. During the partial phases, the sky brightness as a function of time closely followed the integrated intensity of the unobscured fraction of the solar disk. A redder sky was measured close to the Sun just before totality, caused by the redder color of the exposed solar limb. During totality, a bluer sky was measured, dimmer than the normal sky by a factor of 10,000. Suggestions for enhanced measurements at future eclipses are offered.

  13. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging. Thesis by Jake A. Jones, Lieutenant Commander, United States Navy, June 2017. The thesis develops techniques that use the color shift in underwater imaging to determine the distance from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous underwater vehicles.

  14. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed, and latency. Architecture 1 consists of hardware machine vision modules modeled at the Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from the performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. A smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power than a single FPGA chip with hardware modules and a soft-core processor.

  15. rf streak camera based ultrafast relativistic electron diffraction.

    Science.gov (United States)

    Musumeci, P; Moody, J T; Scoby, C M; Gutierrez, M S; Tran, T

    2009-01-01

    We theoretically and experimentally investigate the possibility of using an rf streak camera to time-resolve, in a single shot, structural changes at the sub-100 fs time scale via relativistic electron diffraction. We experimentally tested this novel concept at the UCLA Pegasus rf photoinjector. Time-resolved diffraction patterns from a thin Al foil are recorded. Averaging over 50 shots is required in order to obtain statistics sufficient to uncover a variation in time of the diffraction patterns. In the absence of an external pump laser, this is explained as due to the energy chirp on the beam out of the electron gun. With further improvements to the electron source, rf streak camera based ultrafast electron diffraction has the potential to yield truly single-shot measurements of ultrafast processes.

  16. Single Trial Classification of Evoked EEG Signals Due to RGB Colors

    Directory of Open Access Journals (Sweden)

    Eman Alharbi

    2016-03-01

    Full Text Available Recently, the impact of colors on brain signals has become one of the leading research topics in BCI systems. These studies examine brain behavior after a color stimulus and seek ways to classify its signals offline, without considering real time. Moving to the next step, we present a real-time (online) classification model for EEG signals evoked by RGB color stimuli, which has not been presented in previous studies. In this research, EEG signals were recorded from 7 subjects through the BCI2000 toolbox. The Empirical Mode Decomposition (EMD) technique was used at the signal analysis stage. Various feature extraction methods were investigated to find the best and most reliable set, including event-related spectral perturbations (ERSP), target mean with Fast Fourier Transform (FFT), Wavelet Packet Decomposition (WPD), the Auto Regressive (AR) model, and the EMD residual. A new feature selection method was created based on the peak time of the EEG signal when red and blue color stimuli are presented. The ERP image was used to find the peak time, which was around 300 ms for the red color and around 450 ms for the blue color. The classification was performed using the Support Vector Machine (SVM) classifier, with the LIBSVM toolbox used for that purpose. The EMD residual was found to be the most reliable method, giving the highest classification accuracy with an average of 88.5% and an execution time of only 14 seconds.
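
    A minimal sketch of the peak-time feature idea: average the epoch amplitude in windows centred on the reported ~300 ms (red) and ~450 ms (blue) ERP peaks. The ±50 ms window width and the synthetic trial are our assumptions for illustration:

```python
import numpy as np

def peak_window_features(trial, fs, centers_ms=(300, 450), half_ms=50):
    """Mean amplitude in windows around the reported ERP peak times.
    trial: 1-D EEG epoch with stimulus onset at sample 0; fs: rate in Hz."""
    feats = []
    for c in centers_ms:
        lo = int(round((c - half_ms) * fs / 1000))
        hi = int(round((c + half_ms) * fs / 1000))
        feats.append(float(trial[lo:hi].mean()))
    return np.array(feats)

# Synthetic epoch with a positive deflection near 300 ms (a "red-like" response)
fs = 256
t = np.arange(int(0.8 * fs)) / fs
trial = np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))
feats = peak_window_features(trial, fs)
```

    Features of this kind would then be fed to the SVM classifier alongside the other candidate feature sets.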

  17. Real-time implementation of a color sorting system

    Science.gov (United States)

    Srikanteswara, Srikathyanyani; Lu, Qiang O.; King, William; Drayer, Thomas H.; Conners, Richard W.; Kline, D. Earl; Araman, Philip A.

    1997-09-01

    Wood edge-glued panels are used extensively in the furniture and cabinetry industries. They are used to make doors, tops, and sides of solid wood furniture and cabinets. Since lightly stained furniture and cabinets are gaining in popularity, there is an increasing demand to color sort the parts used to make these edge-glued panels. The goal of the sorting process is to create panels that are uniform in both color and intensity across their visible surface. If performed manually, the color sorting of edge-glued panel parts is very labor intensive and prone to error. This paper describes a complete machine vision system for performing this sort. The system uses two color line-scan cameras for image input and a specially designed custom computing machine to allow real-time operation. Users define the number of color classes to be used; an 'out' class is provided to handle unusually colored parts. The system removes areas of character mark, e.g., knots, mineral streak, etc., from consideration when assigning a color class to a part. The system also includes a better-face algorithm for determining which part face would be the better one to put on the side of the panel that will show. The throughput is two linear feet per second, with only a four-inch between-part spacing required. This system has undergone extensive in-plant testing and will be commercially available in the very near future. The results of this testing will be presented.

  18. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of the Debevec technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
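The exposure-bracketing principle behind this camera can be illustrated with a Debevec-style weighted radiance merge. The sketch below is a minimal software analogue only (assuming a linear sensor response and pixel values normalized to [0, 1]), not the camera's FPGA implementation:

```python
def weight(z, z_min=0.0, z_max=1.0):
    """Triangular weight favoring mid-range pixel values (Debevec-style)."""
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def merge_hdr(pixels_and_exposures):
    """Weighted radiance estimate from (pixel_value in [0,1], exposure_s) pairs."""
    num = den = 0.0
    for z, t in pixels_and_exposures:
        w = weight(z)
        num += w * (z / t)   # each exposure gives a radiance estimate z/t
        den += w
    return num / den if den else 0.0

# A scene radiance of 10 (arbitrary units) seen at 1/60 s, 1/120 s, 1/240 s,
# with values clipped to the sensor range [0, 1]:
samples = [(min(10 * t, 1.0), t) for t in (1 / 60, 1 / 120, 1 / 240)]
radiance = merge_hdr(samples)
```

In the real pipeline this merge runs per pixel across the three alternating exposures, and the recovered radiance map is then tone-mapped for display.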

  19. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  20. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    OpenAIRE

    Thuy Tuong Nguyen; David C. Slaughter; Bradley D. Hanson; Andrew Barber; Amy Freitas; Daniel Robles; Erin Whelan

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a t...

  1. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the recovered video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to perform the motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
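The block-matching step can be sketched in a few lines. This is an illustrative 1-D sum-of-absolute-differences (SAD) search, not the authors' implementation; the search radius is an assumed parameter:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def block_match(ref, block, pos, search=2):
    """Displacement in [-search, search] that best aligns `block` (taken at
    index `pos` in the current frame) against the reference frame."""
    best_d, best_cost = 0, float("inf")
    n = len(block)
    for d in range(-search, search + 1):
        start = pos + d
        if 0 <= start and start + n <= len(ref):
            cost = sad(ref[start:start + n], block)
            if cost < best_cost:
                best_cost, best_d = cost, d
    return best_d

ref = [0, 0, 5, 9, 5, 0, 0, 0]   # previous frame (1-D for brevity)
block = [5, 9, 5]                # pattern found at index 3 of the current frame
motion = block_match(ref, block, pos=3)
```

In 2-D the same search runs over rectangular blocks, and the resulting motion vectors feed the multi-frame CS recovery.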

  2. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  3. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which maps gamma-ray distributions by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneously measured energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
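The Compton kinematics such a camera relies on can be written down directly: for a photon that deposits energy in the scatterer and is then fully absorbed, the scattering angle follows from the Compton formula. A small sketch (energies in keV; the 200/462 keV split of a 662 keV photon is an assumed example, not a measurement from this record):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_angle_deg(e_scatter, e_absorb):
    """Scattering angle from the two energy deposits in a Compton camera:
    e_scatter in the scatterer, e_absorb in the absorber (keV).
    cos(theta) = 1 - m_e*c^2 * (1/E' - 1/E), with E' = e_absorb, E = total."""
    e_total = e_scatter + e_absorb
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / e_total)
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the scatterer:
angle = compton_angle_deg(200.0, 462.0)
```

Each detected event constrains the source to a cone of this half-angle; the 3-D reconstruction described in the record intersects many such cones.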

  4. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    Science.gov (United States)

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  5. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    Science.gov (United States)

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.

  6. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    Directory of Open Access Journals (Sweden)

    David Bulczak

    2017-12-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
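The AMCW principle being simulated can be sketched numerically: the pixel correlates the received signal at four phase steps, and the phase of the result encodes distance. A minimal, idealized sketch (no noise, no multipath, unit amplitude; the 20 MHz modulation frequency and 2.0 m target are assumed example values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_distance(a0, a1, a2, a3, f_mod):
    """Distance from four phase-stepped correlation samples (0/90/180/270 deg)
    of an AMCW ToF pixel."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Simulate the four samples for a target at 2.0 m with 20 MHz modulation:
f_mod = 20e6
true_phase = 4 * math.pi * f_mod * 2.0 / C
samples = [math.cos(true_phase - k * math.pi / 2) for k in range(4)]
distance = amcw_distance(*samples, f_mod)
```

The multipath artifacts the paper simulates appear when additional, delayed copies of the signal are summed into these four samples, biasing the recovered phase.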

  7. Time-Sharing-Based Synchronization and Performance Evaluation of Color-Independent Visual-MIMO Communication.

    Science.gov (United States)

    Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo

    2018-05-14

    In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.

  8. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera.

    Science.gov (United States)

    Miller, Brian W; Frost, Sofia H L; Frayo, Shani L; Kenoyer, Aimee L; Santos, Erlinda; Jones, Jon C; Green, Damian J; Hamlin, Donald K; Wilbur, D Scott; Fisher, Darrell R; Orozco, Johnnie J; Press, Oliver W; Pagel, John M; Sandmaier, Brenda M

    2015-07-01

    Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50-80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was

  9. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm² pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  10. Sensing system with USB camera for experiments of polarization of the light

    Directory of Open Access Journals (Sweden)

    José Luís Fabris

    2017-08-01

    This work presents a sensor system for educational experiments, composed of a USB camera and software developed and provided by the authors. The sensor system is suitable for studying phenomena related to the polarization of light, and was tested in experiments performed to verify Malus's law and the spectral efficiency of polarizers. Details of the experimental setup are shown. The camera captures the light in the visible spectral range from an LED that illuminates a white screen after passing through two polarizers. The software uses the image captured by the camera to provide the relative intensity of the light. With the use of two rotating H-sheet linear polarizers, a linear fitting of Malus's law to the transmitted light intensity data resulted in correlation coefficients R larger than 0.9988. The efficiency of the polarizers in different visible spectral regions was verified with the aid of color filters added to the experimental setup. The system was also used to evaluate the intensity time stability of a white LED.
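The Malus's-law fit described in this record reduces to a one-parameter least-squares problem, I(θ) = I₀ cos²θ. A minimal sketch with ideal synthetic data (in the actual experiment, camera-derived relative intensities would replace `data`, and I₀ = 3.0 is an arbitrary assumed value):

```python
import math

def malus_fit(angles_deg, intensities):
    """Least-squares estimate of I0 in I = I0 * cos^2(theta) (fit through origin)."""
    x = [math.cos(math.radians(a)) ** 2 for a in angles_deg]
    return (sum(xi * yi for xi, yi in zip(x, intensities))
            / sum(xi * xi for xi in x))

angles = list(range(0, 91, 10))                              # polarizer angles
data = [3.0 * math.cos(math.radians(a)) ** 2 for a in angles]  # ideal Malus data
i0 = malus_fit(angles, data)
```

With real data the residuals of this fit give the correlation coefficient R that the authors report.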

  11. Single underwater image enhancement based on color cast removal and visibility restoration

    Science.gov (United States)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken under underwater conditions usually suffer from color cast and serious loss of contrast and visibility. Such degraded underwater images are inconvenient for observation and analysis. To address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented, based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship among the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to, and even better than, several state-of-the-art methods.
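For context, the simplest classical color-cast correction is the gray-world assumption: scale each channel so its mean matches the global mean. The sketch below illustrates that baseline idea only; it is not the optimization-based algorithm this record proposes:

```python
def gray_world(pixels):
    """Gray-world white balance: scale each RGB channel so its mean equals
    the overall gray mean. `pixels` is a list of (r, g, b) tuples in [0, 255]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(p[c] * gains[c], 255.0) for c in range(3)) for p in pixels]

# A bluish-green cast: red is attenuated relative to green and blue,
# as is typical underwater.
cast = [(40.0, 120.0, 140.0), (20.0, 100.0, 120.0)]
balanced = gray_world(cast)
```

Gray-world fails when the scene is not color-neutral on average, which is one motivation for the optimization-based cast removal described above.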

  12. Software for fast cameras and image handling on MAST

    International Nuclear Information System (INIS)

    Shibaev, S.

    2008-01-01

    The rapid progress in fast imaging gives new opportunities for fusion research. The data obtained by fast cameras play an important and ever-increasing role in the analysis and understanding of plasma phenomena. Fast cameras produce a huge amount of data, which creates considerable problems for acquisition, analysis, and storage. We use a number of fast cameras on the Mega-Amp Spherical Tokamak (MAST). They cover several spectral ranges: broadband visible, infra-red, and narrow-band filtered for spectroscopic studies. These cameras are controlled by programs developed in-house, which provide full camera configuration and image acquisition in the MAST shot cycle. Despite the great variety of image sources, all images should be stored in a single format; this simplifies the development of data handling tools and hence the data analysis. A universal file format has been developed for MAST images, supporting storage in both raw and compressed forms, using either lossless or lossy compression. A number of access and conversion routines have been developed for all languages used on MAST. Two movie-style display tools have been developed: one Windows-native and one Qt-based for Linux. The camera control programs run as autonomous data acquisition units, with the full camera configuration set and stored locally. This allows easy porting of the code to other data acquisition systems. The software developed for the MAST fast cameras has been adapted for several other tokamaks, where it is in regular use.

  13. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification and visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. The developed underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about 6 spent-fuel IDs can be identified at a time at a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation of less than 15 Gy/h does not affect the images. (author)

  14. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks, where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-sized one, equivalent to 25,000 logic gates, and is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
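The Laplacian-of-Gaussian edge detector mentioned in this record responds with a zero-crossing at intensity edges. A 1-D software sketch of the idea (the camera implements the 2-D convolution in FPGA hardware; the sigma, kernel radius, and step signal here are assumed illustration values):

```python
import math

def log_kernel(sigma, radius):
    """Discrete 1-D Laplacian-of-Gaussian kernel, shifted to sum to zero
    so that flat regions produce zero response."""
    k = []
    for x in range(-radius, radius + 1):
        g = math.exp(-x * x / (2.0 * sigma * sigma))
        k.append((x * x / sigma ** 4 - 1.0 / sigma ** 2) * g)
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def convolve(signal, kernel):
    """Valid-mode 1-D convolution (no padding)."""
    r = len(kernel) // 2
    return [sum(signal[i + j - r] * kernel[j] for j in range(len(kernel)))
            for i in range(r, len(signal) - r)]

step = [0.0] * 10 + [1.0] * 10  # a step edge at index 10
response = convolve(step, log_kernel(sigma=1.0, radius=3))
```

The edge is located where the response changes sign; in the camera this zero-crossing test runs at multiple scales (different sigma values) in parallel hardware.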

  15. Medium-sized aperture camera for Earth observation

    Science.gov (United States)

    Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin

    2017-11-01

    Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) as an Earth observation payload for a small satellite. Developed as a push-broom type high-resolution camera, it has one panchromatic and four multispectral channels. The panchromatic channel has a ground sampling distance of 2.5 m, and the multispectral channels have 5 m, at a nominal altitude of 685 km. The 300 mm aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. With a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT, to be launched in 2005. RazakSAT is a 180 kg satellite carrying MAC, designed to provide high-resolution imagery with a 20 km swath width from a near-equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system on a near-equatorial orbit. This paper describes the overview of the MAC and RazakSAT programmes, and presents the current development status of MAC, focusing on key optical aspects of the Qualification Model.

  16. Thermodynamic free-energy minimization for unsupervised fusion of dual-color infrared breast images

    Science.gov (United States)

    Szu, Harold; Miao, Lidan; Qi, Hairong

    2006-04-01

    function [A] may vary from the point tumor to its neighborhood, so we cannot rely on neighborhood statistics as is done in the popular unsupervised independent component analysis (ICA) statistical method; we instead impose the physics equilibrium condition of minimum Helmholtz free energy, H = E − TS. In the case of a point breast cancer, we can assume the constant ground-state energy E₀ to be normalized by the benign neighborhood tissue, and the excited state can then be computed by means of a Taylor series expansion in terms of the pixel I/O data. We can augment the X-ray mammogram technique with passive IR imaging to reduce unwanted X-ray exposure during chemotherapy recovery. When the sequence is animated into a movie and the recovery dynamics are played backward in time, the movie demonstrates the cameras' potential for early detection without suffering the PD=0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm at the middle-wavelength IR (3-5 μm) and long-wavelength IR (8-12 μm) bands, which are capable of screening malignant tumors, as shown by the time-reversed animated movie experiments. By contrast, traditional thermal breast scanning/imaging, known for decades as thermography, was IR spectrum-blind and limited to a single night-vision camera, and the necessary wait for a cool-down period before taking a second look for change detection suffers from too many environmental and personnel variabilities.

  17. Study on color identification for monitoring and controlling fermentation process of branched chain amino acid

    Science.gov (United States)

    Ma, Lei; Wang, Yizhong; Chen, Ning; Liu, Tiegen; Xu, Qingyang; Kong, Fanzhi

    2008-12-01

    In this paper, a new method for monitoring and controlling the fermentation process of branched-chain amino acid (BCAA) production is proposed based on color identification. A color image of the BCAA fermentation broth is first taken by a CCD camera and converted from the RGB color model to the HSI color model. Histograms of hue H and saturation S are then calculated and used as the input of a designed BP network, whose output is a description of the color of the BCAA fermentation broth. After training, the color of the fermentation broth is identified by the BP network from the H and S histograms of a fermentation broth image. Along with other parameters, the fermentation process of BCAA is monitored and controlled so that the stationary phase of fermentation is reached sooner. Experiments were conducted, with satisfactory results, showing the feasibility and usefulness of color identification of fermentation broth in BCAA fermentation process control.
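The RGB-to-HSI conversion at the heart of this method can be sketched directly from the standard textbook formulas (channel values in [0, 1]); this is the general conversion, not the authors' specific code:

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion. Returns (hue_deg, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue undefined, use 0 by convention
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

hue, sat, inten = rgb_to_hsi(1.0, 0.0, 0.0)  # pure red
```

In the described system, H and S are computed per pixel of the broth image, histogrammed, and the two histograms form the input vector of the BP network.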

  18. Spatial capture–recapture with partial identity: An application to camera traps

    Science.gov (United States)

    Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.

    2018-01-01

    Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision, and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities—one of which is resolved to near certainty. The SPIM opens the door for the investigation of trapping designs that deviate from the standard two camera design, the combination of other data types between which identities cannot be deterministically linked, and can be extended to the problem of partial genotypes.

  19. Laser-evoked coloration in polymers

    International Nuclear Information System (INIS)

    Zheng, H.Y.; Rosseinsky, David; Lim, G.C.

    2005-01-01

    Laser-evoked coloration in polymers has long been a major aim of polymer technology, with potential applications in product surface decoration and the marking of personalized images and logos. However, the coloration results reported so far were mostly attributed to laser-induced thermal-chemical reactions, and the laser-irradiated areas are characterized by grooves due to material removal. Furthermore, only a single color has previously been laser-induced in any given polymer matrix. Inducing multiple colors in a given polymer matrix with no apparent surface material removal is most desirable and challenging, and may be achieved through laser-induced photo-chemical reactions; however, little public information is available at present. We report that two colors, red and green, have been produced on initially transparent CPV/PVA samples through UV laser-induced photo-chemical reactions. This is believed to be the first observation of laser-induced multiple colors in a given polymer matrix. It is believed that the colorants underwent photo-effected electron transfer with suitable electron donors from the polymers, changing from the colorless bipyridilium Bipm²⁺ to the colored Bipm⁺ species. The discovery may lead to new approaches to the development of laser-evoked multiple coloration in polymers.

  20. A Multi-Addressable Dyad with Switchable CMY Colors for Full-Color Rewritable Papers.

    Science.gov (United States)

    Qin, Tianyou; Han, Jiaqi; Geng, Yue; Ju, Le; Sheng, Lan; Zhang, Sean Xiao-An

    2018-06-23

    Reversible multicolor displays on solid media using single-molecule pigments have been a long-awaited goal. Herein, a new and simple molecular dyad, which can undergo switchable CMY color changes both in solution and on a solid substrate upon exposure to light, water/acid, and nucleophiles, is designed and synthesized. The stimuli used in this work can be applied independently of each other, which is beneficial for color changes without mutual interference. As a comparison, mixtures of the two molecular switching motifs forming the basis of the dyad were also studied. The dyad greatly outperforms the corresponding mixed system with respect to reversible color-switching on the paper substrate. Its potential for full-color rewritable paper with excellent reversibility has been demonstrated: legible multicolor prints, with high color contrast and resolution, good dispersion, and excellent reversibility, were achieved using common water-jet and light-based printers. This work provides a very promising approach for the further development of full-color switchable molecules, materials, and displays. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Gold nanoshell photomodification under a single-nanosecond laser pulse accompanied by color-shifting and bubble formation phenomena

    International Nuclear Information System (INIS)

    Akchurin, Garif; Khlebtsov, Boris; Akchurin, Georgy; Tuchin, Valery; Zharov, Vladimir; Khlebtsov, Nikolai

    2008-01-01

    Laser-nanoparticle interaction is crucial for biomedical applications of lasers and nanotechnology to the treatment of cancer or pathogenic microorganisms. We report on the first observation of laser-induced coloring of a gold nanoshell solution after a single nanosecond pulse, and an unprecedentedly low bubble-formation threshold (bubble formation being the main mechanism of cancer cell killing) at a laser fluence of about 4 mJ cm⁻², which is safe for normal tissue. Specifically, silica/gold nanoshell (140/15 nm) suspensions were irradiated with a single 4 ns (1064 nm) or 8 ns (900 nm) laser pulse at fluences ranging from 0.1 mJ cm⁻² to 50 J cm⁻². Red coloring of the solution was observed by the naked eye and confirmed by a blue shift of the absorption spectrum maximum from the initial 900 nm, typical of nanoshells, to 530 nm, typical of conventional colloidal gold nanospheres. TEM images revealed significant photomodification of the nanoparticles, including complete fragmentation of the gold shells, changes in silica core structure, formation of small 20-30 nm isolated spherical gold nanoparticles, gold nanoshells with central holes, and large and small spherical gold particles attached to a silica core. Time-resolved monitoring of the bubble formation phenomena with the photothermal (PT) thermolens technique demonstrated that after application of a single 8 ns pulse at fluences of 5-10 mJ cm⁻² and higher, the next pulse did not produce any PT response, indicating a dramatic decrease in absorption because of gold shell modification. We also observed a dependence of the bubble expansion time on the laser energy, with an unusually fast PT signal rise (∼3.5 ns at 0.2 J cm⁻²). Medical applications of the observed phenomena are discussed, including a simple visual color test for laser-nanoparticle interaction.

  2. Development of Measurement Device of Working Radius of Crane Based on Single CCD Camera and Laser Range Finder

    Science.gov (United States)

    Nara, Shunsuke; Takahashi, Satoru

    In this paper, we develop an observation device to measure the working radius of a crane truck. The device comprises a single CCD camera, a laser range finder, and two AC servo motors. First, in order to measure the working radius, we need an algorithm for recognizing the crane hook. We therefore attach a cross mark to the crane hook and recognize the mark instead of the hook itself. Further, for the observation device, we construct a PI control system with an extended Kalman filter to track the moving cross mark. Through experiments, we show the usefulness of our device, including the new mark-tracking control system.
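As a rough sketch of the estimation half of such a mark-tracking loop, the following is a Kalman filter with a constant-velocity image-motion model. The paper's extended Kalman filter also handles the camera/servo geometry, which is not reproduced here; the matrices and noise levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over the mark's pixel coordinates.
dt = 1.0 / 30.0                       # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],          # state: [x, y, vx, vy] in pixels
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # we observe the mark's position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                  # process noise (tuning assumption)
R = np.eye(2) * 4.0                   # measurement noise, ~2 px std dev

def kf_step(x, P, z):
    """One predict/update cycle; z is the detected mark position."""
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in ([1.0, 0.5], [2.1, 1.1], [3.0, 1.4]):
    x, P = kf_step(x, P, np.array(z))
```

The filtered state prediction is what a PI controller would steer the servo motors toward between detections.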

  3. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    Science.gov (United States)

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points at an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm.

  4. Estimating tiger abundance from camera trap data: Field surveys and analytical issues

    Science.gov (United States)

    Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Automated photography of tigers Panthera tigris for purely illustrative purposes was pioneered by British forester Fred Champion (1927, 1933) in India in the early part of the Twentieth Century. However, it was McDougal (1977) in Nepal who first used camera traps, equipped with single-lens reflex cameras activated by pressure pads, to identify individual tigers and study their social and predatory behaviors. These attempts involved a small number of expensive, cumbersome camera traps, and were not, in any formal sense, directed at “sampling” tiger populations.

  5. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centres are satisfactorily, but not very strongly, connected. This leads to very high values of the Student (t-) test for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  6. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  7. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  8. SFR test fixture for hemispherical and hyperhemispherical camera systems

    Science.gov (United States)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  9. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

    With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches to target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into a 3D target estimate. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera, along with the target models, are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present a performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.
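Probabilistic trackers of this kind need a likelihood that scores how well a candidate region's histogram matches the target model. A common choice in histogram-based tracking (our illustration; the abstract does not specify the exact similarity measure used) is the Bhattacharyya coefficient between normalized histograms:

```python
import numpy as np

def normalized_hist(values, bins=16, range_=(0, 256)):
    """Color histogram normalized to sum to 1 (a simple target descriptor)."""
    h, _ = np.histogram(values, bins=bins, range=range_)
    h = h.astype(float)
    return h / h.sum() if h.sum() > 0 else h

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(p * q)))

# Toy pixel samples standing in for a target model and a candidate region.
model = normalized_hist(np.array([10, 12, 200, 205, 210]))
candidate = normalized_hist(np.array([11, 13, 60, 204, 211]))
print(round(bhattacharyya(model, candidate), 3))  # → 0.883
```

In a particle-filter-style tracker, this score (per view, per feature channel) would weight each hypothesized 3D target state.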

  10. ePix100 camera: Use and applications at LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.; Caragiulo, P.; Chollet, M.; Damiani, D.; Dragone, A.; Feng, Y.; Haller, G.; Hart, P.; Hasi, J.; Herbst, R.; Herrmann, S.; Kenney, C.; Lemke, H.; Manger, L.; Markovic, B.; Mehta, A.; Nelson, S.; Nishimura, K. [SLAC National Accelerator Laboratory (United States); and others

    2016-07-27

    The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, noise below 80 e⁻ rms, and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast-readout, direct-conversion scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength-dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.

  11. A Novel Mechanism for Color Vision: Pupil Shape and Chromatic Aberration Can Provide Spectral Discrimination for Color Blind Organisms.

    OpenAIRE

    Stubbs, Christopher; Stubbs, Alexander

    2015-01-01

    We present a mechanism by which organisms with only a single photoreceptor, and thus a monochromatic view of the world, can achieve color discrimination. The combination of an off-axis pupil and the principle of chromatic aberration (where light of different colors focuses at different distances behind a lens) can provide color-blind animals with a way to distinguish colors. As a specific example we constructed a computer model of the visual system of cephalopods (octopus, squid, a...

  12. A Novel Mechanism for Color Vision: Pupil Shape and Chromatic Aberration Can Provide Spectral Discrimination for Color Blind Organisms.

    OpenAIRE

    Stubbs, Alexander L; Stubbs, Christopher William

    2016-01-01

    We present a mechanism by which organisms with only a single photoreceptor, and thus a monochromatic view of the world, can achieve color discrimination. The combination of an off-axis pupil and the principle of chromatic aberration (where light of different colors focuses at different distances behind a lens) can provide color-blind animals with a way to distinguish colors. As a specific example we constructed a computer model of the visual system of cephalopods (octopus, squid, a...
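The effect the model relies on can be sketched numerically with the thin-lens maker's equation, 1/f = (n − 1)(1/R1 − 1/R2), together with a Cauchy dispersion law for the refractive index. The Cauchy coefficients (roughly BK7-like glass) and lens geometry below are illustrative assumptions, not parameters of the cephalopod eye model.

```python
# Blue light sees a higher refractive index than red, so it focuses
# closer to the lens; an off-axis pupil turns this focal shift into a
# usable spectral cue.
A, B = 1.5046, 4.2e3           # Cauchy law: n(lam) = A + B / lam^2, lam in nm
geom = 1.0 / 100.0             # lens geometry factor (1/R1 - 1/R2), in 1/mm

def focal_length_mm(wavelength_nm):
    n = A + B / wavelength_nm ** 2
    return 1.0 / ((n - 1.0) * geom)

f_blue = focal_length_mm(450.0)
f_red = focal_length_mm(650.0)
print(f_blue < f_red)  # → True: blue focuses closer to the lens
```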

  13. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
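The two readout speeds translate directly into frame times. As a back-of-envelope check (the sensor resolution below is an assumption for illustration; the abstract does not state the pixel count):

```python
# Frame time = pixel count / readout rate, for an assumed 1k x 1k sensor.
pixels = 1024 * 1024
for rate in (1e6, 5e6):                    # the camera's two speeds, px/s
    t = pixels / rate                      # seconds per full frame
    print(f"{rate:.0e} px/s: {t:.2f} s/frame ({1.0 / t:.2f} fps)")
```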

  14. A fluorometric lateral flow assay for visual detection of nucleic acids using a digital camera readout.

    Science.gov (United States)

    Magiati, Maria; Sevastou, Areti; Kalogianni, Despina P

    2018-06-04

    A fluorometric lateral flow assay has been developed for the detection of nucleic acids. The fluorophores phycoerythrin (PE) and fluorescein isothiocyanate (FITC) were used as labels, while a common digital camera and a colored vinyl sheet, acting as a cut-off optical filter, are used for fluorescence imaging. After DNA amplification by polymerase chain reaction (PCR), the biotinylated PCR product is hybridized to its complementary probe, which carries a poly(dA) tail at the 3′ end, and then applied to the lateral flow strip. The hybrids are captured at the test zone of the strip by immobilized poly(dT) sequences and detected by streptavidin-fluorescein and streptavidin-phycoerythrin conjugates through the streptavidin-biotin interaction. The assay is widely applicable, simple, and cost-effective, and offers large multiplexing potential. Its performance is comparable to assays based on streptavidin-gold nanoparticle conjugates. As little as 7.8 fmol of a ssDNA target and 12.5 fmol of an amplified dsDNA target were detectable. Graphical abstract: Schematic presentation of a fluorometric lateral flow assay based on fluorescein and phycoerythrin fluorescent labels for the detection of single-stranded (ssDNA) and double-stranded (dsDNA) sequences, using a digital camera readout. SA: streptavidin, BSA: bovine serum albumin, B: biotin, FITC: fluorescein isothiocyanate, PE: phycoerythrin, TZ: test zone, CZ: control zone.

  15. Rolling cycle amplification based single-color quantum dots–ruthenium complex assembling dyads for homogeneous and highly selective detection of DNA

    Energy Technology Data Exchange (ETDEWEB)

    Su, Chen; Liu, Yufei; Ye, Tai; Xiang, Xia; Ji, Xinghu; He, Zhike, E-mail: zhkhe@whu.edu.cn

    2015-01-01

    Graphical abstract: A universal, label-free, homogeneous, highly sensitive, and selective fluorescent biosensor for DNA detection is developed by using rolling-circle amplification (RCA) based single-color quantum dots–ruthenium complex (QDs–Ru) assembling dyads. - Highlights: • The single-color QDs–Ru assembling dyads were applied in a homogeneous DNA assay. • This biosensor exhibited high selectivity against base-mismatched sequences. • This biosensor could serve as a universal platform for the detection of ssDNA. • This sensor could be used to detect the target in human serum samples. • This DNA sensor had good selectivity under the interference of other dsDNA. - Abstract: In this work, a new, label-free, homogeneous, highly sensitive, and selective fluorescent biosensor for DNA detection is developed by using rolling-circle amplification (RCA) based single-color quantum dots–ruthenium complex (QDs–Ru) assembling dyads. This strategy includes three steps: (1) the target DNA initiates the RCA reaction and generates linear RCA products; (2) the complementary DNA hybridizes with the RCA products to form long double-stranded DNA (dsDNA); (3) [Ru(phen)₂(dppx)]²⁺ (dppx = 7,8-dimethyldipyrido[3,2-a:2′,3′-c]phenanthroline) intercalates into the long dsDNA with strong fluorescence emission. Owing to its strong binding propensity for the long dsDNA, [Ru(phen)₂(dppx)]²⁺ is removed from the surface of the QDs, restoring the fluorescence of the QDs, which had been quenched by [Ru(phen)₂(dppx)]²⁺ through a photoinduced electron transfer process and which is overlaid with the fluorescence of the dsDNA-bound Ru(II) polypyridyl complex (Ru-dsDNA). Thus, high fluorescence intensity is observed and is related to the concentration of target. This sensor exhibits not only high sensitivity for hepatitis B virus (HBV) ssDNA, with a low detection limit (0.5 pM), but also excellent selectivity in a complex matrix. Moreover

  16. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
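Step (2) of the pipeline, counting by projection histogram, can be sketched on a toy binary plant mask: sum the mask down each image column and count the contiguous runs of columns whose support exceeds a threshold. The mask and threshold below are illustrative, not the paper's data.

```python
import numpy as np

# Toy binary mask: rows are image rows, columns are positions along the
# nursery row; three "plants" occupy three column bands.
mask = np.zeros((10, 30), dtype=int)
mask[:, 3:6] = 1       # plant 1
mask[:, 12:16] = 1     # plant 2
mask[:, 22:27] = 1     # plant 3

profile = mask.sum(axis=0)           # vertical projection histogram
above = profile > 5                  # columns with enough plant pixels
# each rising edge of the thresholded profile is one plant
count = int(np.sum(above[1:] & ~above[:-1])) + int(above[0])
print(count)  # → 3
```

Steps (1) and (3) in the paper, the orthographic projection and the homography-based double-counting check, would bracket this counting step.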

  17. Development of the RGB LEDs color mixing mechanism for stability the color temperature at different projection distances.

    Science.gov (United States)

    Hung, Chih-Ching

    2015-01-01

    In lighting applications, color mixing of RGB LEDs provides a wider choice of correlated color temperature and color rendering. The purpose of this study is therefore to propose an RGB color-mixing mechanism based on mechanism design. Three sets of lamp-type RGB LEDs are individually installed on three four-bar linkages, and a crank drives the three groups of lamps to project their light onto a single plane in order to mix the lights. Simulations of the illuminance and associated color temperatures are conducted by changing the distance to the projection plane, under the requirement that the color temperature of the projected light remain stable regardless of projection height. This avoids the effect of color-temperature changes on color judgments made by the human eye. The success of the proposed method will allow medical personnel to choose suitable wavelengths and color temperatures according to the particular requirements of their medical-examination environments.

  18. The influence of disturbing effects on the performance of a wide field coded mask X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.R.; Turner, M.J.L.; Willingale, R.

    1985-01-01

    The coded aperture telescope, or Dicke camera, is seen as an instrument suitable for many applications in X-ray and gamma ray imaging. In this paper the effects of a partially obscuring window, mask support or collimator, a detector with limited spatial resolution, and motion of the camera during image integration are considered using a computer simulation of the performance of such a camera. Cross correlation and the Wiener filter are used to deconvolve the data. It is shown that while these effects cause a degradation in performance, this is in no case catastrophic. Deterioration of the image is shown to be greatest where strong sources are present in the field of view and is quite small (∼10%) when diffuse background is the major element. A comparison between the cyclic mask camera and the single mask camera is made under various conditions, and it is shown that the single mask camera has a moderate advantage, particularly when imaging a wide field of view. (orig.)
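The deconvolution step can be sketched in one dimension: the detector records the sky cyclically convolved with the mask pattern, and the Wiener filter inverts this in Fourier space with a noise-dependent regularizer. The mask, sources, and noise-to-signal ratio below are toy assumptions, not the simulation parameters of the paper.

```python
import numpy as np

# 1D toy of coded-mask imaging followed by Wiener deconvolution.
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, 64).astype(float)   # random 0/1 coded-mask pattern
sky = np.zeros(64)
sky[20] = 5.0                                  # bright point source
sky[45] = 3.0                                  # fainter point source
detector = np.real(np.fft.ifft(np.fft.fft(sky) * np.fft.fft(mask)))

M = np.fft.fft(mask)
nsr = 1e-3                                     # assumed noise-to-signal ratio
wiener = np.conj(M) / (np.abs(M) ** 2 + nsr)   # Wiener filter in Fourier space
recon = np.real(np.fft.ifft(np.fft.fft(detector) * wiener))
print(int(np.argmax(recon)))  # index of the brightest reconstructed source
```

With nsr set to zero this degenerates to the (noise-amplifying) inverse filter; the regularizer is what makes the method tolerant of the disturbing effects studied in the paper.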

  19. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even in bright ambient light. We realized a mobile demonstrator to prove the method and successfully acquired color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that no major additional technical outlay is needed to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry of the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a square field 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change in the paleness of the papilla.

  20. Color symmetrical superconductivity in a schematic nuclear quark model

    DEFF Research Database (Denmark)

    Bohr, Henrik; Providencia, C.; da Providencia, J.

    2010-01-01

    In this letter, a novel BCS-type formalism is constructed in the framework of a schematic QCD inspired quark model, having in mind the description of color symmetrical superconducting states. In the usual approach to color superconductivity, the pairing correlations affect only the quasi-particle states of two colors, the single-particle states of the third color remaining unaffected by the pairing correlations. In the theory of color symmetrical superconductivity here proposed, the pairing correlations affect symmetrically the quasi-particle states of the three colors and vanishing net color...

  1. A flexible geometry Compton camera for industrial gamma ray imaging

    International Nuclear Information System (INIS)

    Royle, G.J.; Speller, R.D.

    1996-01-01

    A design for a Compton scatter camera is proposed which is applicable to gamma ray imaging within limited access industrial sites. The camera consists of a number of single element detectors arranged in a small cluster. Coincidence circuitry enables the detectors to act as a scatter camera. Positioning the detector cluster at various locations within the site, and subsequent reconstruction of the recorded data, allows an image to be obtained. The camera design allows flexibility to cater for limited space or access simply by positioning the detectors in the optimum geometric arrangement within the space allowed. The quality of the image will be limited but imaging could still be achieved in regions which are otherwise inaccessible. Computer simulation algorithms have been written to optimize the various parameters involved, such as geometrical arrangement of the detector cluster and the positioning of the cluster within the site, and to estimate the performance of such a device. Both scintillator and semiconductor detectors have been studied. A prototype camera has been constructed which operates three small single element detectors in coincidence. It has been tested in a laboratory simulation of an industrial site. This consisted of a small room (2 m wide x 1 m deep x 2 m high) into which the only access points were two 6 cm diameter holes in a side wall. Simple images of Cs-137 sources have been produced. The work described has been done on behalf of BNFL for applications at their Sellafield reprocessing plant in the UK
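Each coincidence event recorded by such a camera constrains the source to lie on a cone whose opening angle follows from Compton kinematics: cos θ = 1 − m_e c² (1/E_scattered − 1/E_incident). A worked example for a Cs-137 photon (the deposited energy below is an illustrative value, not measured data):

```python
import math

MEC2 = 511.0          # electron rest energy, keV
e_in = 662.0          # Cs-137 photon energy, keV
e_dep = 200.0         # energy deposited in the scattering detector (assumed)
e_sc = e_in - e_dep   # scattered photon energy, keV

# Compton kinematics: cos(theta) = 1 - m_e c^2 (1/E_scattered - 1/E_incident)
cos_theta = 1.0 - MEC2 * (1.0 / e_sc - 1.0 / e_in)
theta_deg = math.degrees(math.acos(cos_theta))
print(round(theta_deg, 2))
```

Image reconstruction then amounts to intersecting many such cones, which is why the relative positions of the single-element detectors must be known for each cluster placement.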

  2. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
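In time-lapse mode, the system's Python control script essentially reduces to a capture loop with accurately timestamped filenames. The actual scripts are not given in the abstract; the sketch below assumes the stock Raspberry Pi `raspistill` capture tool, and the directory, interval, and shot count are illustrative.

```python
import datetime
import subprocess
import time

def timestamped_name(now, directory="/data/images"):
    """Filename from the (GPS-disciplined) clock, to the second."""
    return f"{directory}/{now:%Y%m%d_%H%M%S}.jpg"

def capture_loop(interval_s=60, shots=3):
    """Grab `shots` stills, one every `interval_s` seconds."""
    for _ in range(shots):
        name = timestamped_name(datetime.datetime.now(datetime.timezone.utc))
        subprocess.run(["raspistill", "-o", name], check=False)
        time.sleep(interval_s)
```

Because the full Linux/Python stack runs on the camera itself, the same loop could instead push each image over the network when the system is deployed as a webcam.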

  3. Color sorter for waste bottles; Hai garasu bin no iro senbetsu sochi

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, M. [Sumitomo Metal Industries Ltd., Osaka (Japan)

    1994-08-12

    As the recycling of glass bottles has spread widely, recycled cullet now accounts for 55% of all raw material for bottle manufacturing. For effective reuse, sorting by color into colorless, brown, green, etc. is indispensable, but at present this sorting relies solely on manual labor. In response to the demand for automation, an automatic color sorting system has been developed. First, a pre-sorter divides incoming material into large bottles (larger than 10 cm in diameter), small bottles, and cullet. Bottles from the pre-sorter are arranged horizontally on a conveyor, and light is shone in through each bottle neck. A color camera captures the light transmitted through each bottle bottom, and an image processing unit then groups the bottles by color. Cullet is spread on several conveyors and separated by means of color sensors and air nozzles. The processing capacity of one unit is 5,000 bottles/hr for each size, or one ton of cullet/hr. 4 figs., 2 tabs.
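The color-camera decision at the heart of the sorter can be caricatured as classifying the mean RGB of the light transmitted through each bottle bottom. The thresholds below are illustrative assumptions, not those of the deployed system.

```python
# Classify a bottle from the mean RGB of its transmitted light.
def classify_bottle(r, g, b):
    brightness = (r + g + b) / 3.0
    if brightness > 200 and max(r, g, b) - min(r, g, b) < 20:
        return "colorless"           # bright and nearly neutral
    if g > r and g > b:
        return "green"
    if r > g > b:
        return "brown"               # brown glass passes red/amber light
    return "other"

print(classify_bottle(220, 225, 218))  # → colorless
print(classify_bottle(60, 140, 70))    # → green
print(classify_bottle(150, 90, 40))    # → brown
```

The cullet line sketched in the abstract replaces the camera with simpler per-conveyor color sensors driving air nozzles, but the classification logic is of the same flavor.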

  4. Full-Color LCD Microdisplay System Based on OLED Backlight Unit and Field-Sequential Color Driving Method

    Directory of Open Access Journals (Sweden)

    Sungho Woo

    2011-01-01

    We developed a single-panel LCD microdisplay system using a field-sequential color (FSC) driving method and an organic light-emitting diode (OLED) as a backlight unit (BLU). The 0.76′′ OLED BLU with red, green, and blue (RGB) colors was fabricated by a conventional UV photolithography patterning process and by vacuum deposition of small-molecule organic layers. The field-sequential driving frequency was set to 255 Hz to allow each of the RGB colors to be generated without color mixing at the given display frame rate. A prototype FSC LCD microdisplay system consisting of a 0.7′′ LCD microdisplay panel and the 0.76′′ OLED BLU successfully exhibited color display and moving picture images using the FSC driving method.
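With field-sequential color, each full-color frame is built from three successive R, G, B subfields, so, assuming the stated 255 Hz is the per-subfield rate, the full-color frame rate follows directly:

```python
# One R + one G + one B subfield per displayed color frame.
subfield_hz = 255.0
frame_hz = subfield_hz / 3.0
print(frame_hz)  # → 85.0
```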

  5. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever-arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  6. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever-arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
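
The point-positioning relation underlying such GPS/INS-assisted collinearity equations can be sketched as r_ground = r_GPS + R_body (lever_arm + λ · R_boresight · ray_cam). The sketch below simplifies all rotations to yaw-only and uses hypothetical numbers; it is not the paper's full formulation:

```python
import math

def rot_z(yaw):
    """Yaw-only rotation matrix (a simplification; full attitude uses 3 angles)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def ground_point(r_gps, R_body, lever_arm, R_boresight, ray_cam, scale):
    """r_ground = r_GPS + R_body @ (lever_arm + scale * R_boresight @ ray_cam)."""
    v = matvec(R_boresight, ray_cam)                 # camera ray into IMU body frame
    v = [lever_arm[i] + scale * v[i] for i in range(3)]
    v = matvec(R_body, v)                            # body frame into mapping frame
    return [r_gps[i] + v[i] for i in range(3)]

# Hypothetical: nadir-looking ray, zero lever arm and boresight, 10 m range.
p = ground_point([100.0, 200.0, 50.0], rot_z(0.0), [0.0, 0.0, 0.0],
                 rot_z(0.0), [0.0, 0.0, -1.0], 10.0)
```

Calibration then amounts to estimating the lever arm and boresight terms (and the camera-to-camera ROP) so that this relation closes on ground control.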

  7. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single-lens reflex camera with f/1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3-frame sensor and the same lens system. Both systems are tested with 240 × 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle-structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined; it falls to a value of 0.2 at around 2 lp/mm and is limited by scattering of the emitted light from the storage phosphor rather than by the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high-frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency, and hence signal-to-noise ratio for medical doses, and the restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
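
The quoted detective quantum efficiencies follow the standard definition DQE = SNR_out² / SNR_in², where for a Poisson-limited input of N x-ray quanta SNR_in² = N. A minimal sketch with hypothetical numbers:

```python
def dqe(snr_out, n_quanta):
    """Detective quantum efficiency: DQE = SNR_out^2 / SNR_in^2.

    For a Poisson-limited x-ray input of N quanta, SNR_in^2 = N.
    """
    return snr_out ** 2 / n_quanta

# Hypothetical measurement: output SNR of 4 from 1000 incident quanta.
example = dqe(snr_out=4.0, n_quanta=1000)
```

A DQE well below 1, as reported here, means most of the information carried by the incident quanta is lost, in this case chiefly through poor light collection by the lens.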

  8. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where accuracy requirements decrease with distance.
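
The "expected deterioration with depth" is what first-order triangulation error propagation predicts: depth error grows with the square of the range. The baseline and disparity-precision values below are made up for illustration, not the paper's calibration:

```python
def depth_error(z, baseline_m, focal_px, disp_sigma_px):
    """First-order triangulation error: sigma_Z = Z^2 * sigma_d / (f * B).

    Shows the quadratic growth of depth uncertainty with range Z.
    """
    return z ** 2 * disp_sigma_px / (focal_px * baseline_m)

# Hypothetical system: 5 cm effective baseline, f = 2000 px, 0.1 px disparity noise.
errors = {z: depth_error(z, baseline_m=0.05, focal_px=2000, disp_sigma_px=0.1)
          for z in (10, 30, 100)}
```

Under these assumed parameters the error rises from 0.1 m at 10 m to 10 m at 100 m, i.e. roughly 10% of range at the far end, which is the qualitative behaviour the study reports.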

  9. Optimisation of a dual head semiconductor Compton camera using Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Harkness, L.J. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom)], E-mail: ljh@ns.ph.liv.ac.uk; Boston, A.J.; Boston, H.C.; Cooper, R.J.; Cresswell, J.R.; Grint, A.N.; Nolan, P.J.; Oxley, D.C.; Scraggs, D.P. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool L697ZE (United Kingdom); Beveridge, T.; Gillam, J. [School of Physics and Materials Engineering, Monash University, Melbourne (Australia); Lazarus, I. [STFC Daresbury Laboratory, Warrington, Cheshire (United Kingdom)

    2009-06-01

    Conventional medical gamma-ray camera systems utilise mechanical collimation to provide information on the position of an incident gamma-ray photon. Systems that use electronic collimation utilising Compton image reconstruction techniques have the potential to offer huge improvements in sensitivity. Position-sensitive high-purity germanium (HPGe) detector systems are being evaluated as part of a single photon emission computed tomography (SPECT) Compton camera system. Data have been acquired from the orthogonally segmented planar SmartPET detectors, operated in Compton camera mode. The minimum gamma-ray energy which can be imaged by the current system in Compton camera configuration is 244 keV, due to the 20 mm thickness of the first scatter detector, which causes large gamma-ray absorption. A simulation package for the optimisation of a new semiconductor Compton camera has been developed using the Geant4 toolkit. This paper shows results of preliminary analysis of the validated Geant4 simulation at the gamma-ray energy typical of SPECT, 141 keV.
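
Electronic collimation works by computing, for each event, the Compton scattering angle from the energy deposited in the scatter detector, which constrains the source to a cone. A minimal sketch of that angle calculation (energies in keV):

```python
import math

MEC2_KEV = 511.0  # electron rest energy

def compton_angle_deg(e_deposited, e_total):
    """Scattering angle from the Compton formula:

    cos(theta) = 1 - mec2 * (1/E' - 1/E0), with E' = E0 - E_deposited.
    """
    e_scattered = e_total - e_deposited
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_scattered - 1.0 / e_total)
    return math.degrees(math.acos(cos_theta))

# Hypothetical event: a 141 keV photon deposits 30 keV in the scatter detector.
angle = compton_angle_deg(30.0, 141.0)
```

Intersecting many such cones reconstructs the source distribution; note that at low energies like 141 keV only a limited range of deposited energies yields a physically valid angle, which is part of why imaging below 244 keV is difficult for the current detector geometry.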

  10. INVESTIGATING THE SUITABILITY OF MIRRORLESS CAMERAS IN TERRESTRIAL PHOTOGRAMMETRIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    A. H. Incekara

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam towards the lens differs in the way it reaches the sensor. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close range photogrammetric application on the rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using difference values between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small because the sections almost overlap. The mirrored camera was more consistent in itself with respect to the change of model coordinates for models created with photographs taken at different times, with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  11. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam towards the lens differs in the way it reaches the sensor. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close range photogrammetric application on the rock surface at Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using difference values between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources; the maximum area difference between them is quite small because the sections almost overlap. The mirrored camera was more consistent in itself with respect to the change of model coordinates for models created with photographs taken at different times, with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  12. Modeling human color categorization: Color discrimination and color memory

    OpenAIRE

    Heskes, T.; van den Broek, Egon; Lucas, P.; Hendriks, Maria A.; Vuurpijl, L.G.; Puts, M.J.H.; Wiegerinck, W.

    2003-01-01

    Color matching in Content-Based Image Retrieval is done using a color space and measuring distances between colors. Such an approach yields non-intuitive results for the user. We introduce color categories (or focal colors), determine that they are valid, and use them in two experiments. The experiments demonstrate the difference between color categorization by two cognitive processes: color discrimination and color memory. In addition, they yield a Color Look-Up Table, which can improve c...

  13. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements of various camera system delays. However, the speed or rapidity metrics of a mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work comprises several tasks. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. This work yields detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics that include both quality and speed parameters.
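
One simple way to combine the two families of metrics into a single benchmark figure is a weighted mean of normalized scores; the equal weighting below is an arbitrary illustrative choice, not the paper's proposed definition:

```python
def combined_score(quality_scores, speed_scores, w_quality=0.5):
    """Blend normalized (0-1) quality and speed metrics into one figure.

    w_quality is the relative weight given to image quality; 0.5 treats
    quality and speed as equally important (an assumption for illustration).
    """
    q = sum(quality_scores) / len(quality_scores)
    s = sum(speed_scores) / len(speed_scores)
    return w_quality * q + (1 - w_quality) * s

# Hypothetical phone: good quality metrics, middling speed metrics.
score = combined_score([0.8, 0.6], [0.5, 0.7])
```

Any real benchmark would also have to specify how each raw metric (e.g. a delay in milliseconds) is normalized to the 0-1 range before blending.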

  14. Phenolic Composition and Color of Single Cultivar Young Red Wines Made with Mencia and Alicante-Bouschet Grapes in AOC Valdeorras (Galicia, NW Spain)

    Directory of Open Access Journals (Sweden)

    Eugenio Revilla

    2016-07-01

    Single-cultivar wines made with two different red grape cultivars from AOC Valdeorras (Galicia, NW Spain), Mencia and Alicante Bouschet, were studied with the aim of determining their color and phenolic composition. Two sets of analyses were made on 30 wine samples of the 2014 vintage, after malolactic fermentation took place, to evaluate several physicochemical characteristics of these wines related to color and polyphenols. Several parameters related to color and the general phenolic composition of the wines (total phenols index, color intensity, hue, total anthocyans, total anthocyanins, colored anthocyanins, chemical age index, and total tannins) were determined by UV-VIS spectrophotometry. Those analyses revealed that Alicante Bouschet wines presented, in general, a higher content of polyphenols and a more intense color than Mencia wines. Using HPLC-DAD, five anthocyanin monoglucosides and nine acylated anthocyanins were identified in both types of wine; each type of wine showed a distinctive anthocyanin fingerprint, as Alicante Bouschet wines contained a higher proportion of cyanidin-derived anthocyanins. Multivariate statistical analyses were performed on both datasets to explore relationships among variables and among samples. These analyses revealed relationships among several of the variables considered and were able to group the samples into two different classes using principal component analysis (PCA).

  15. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows digital recording of sequences of three black-and-white images at rates of several thousand frames per second, using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission; for common photographic flash units lasting about 20 µs it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored in digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
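
The correction described amounts to inverting a per-pixel linear mixing model: the measured RGB values are the true per-flash scenes multiplied by a crosstalk matrix obtained from calibration. A sketch with a hypothetical 3×3 crosstalk matrix (the real coefficients come from the one-time calibration procedure):

```python
def invert3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, -(b * i - c * h), b * f - c * e],
           [-(d * i - f * g), a * i - c * g, -(a * f - c * d)],
           [d * h - e * g, -(a * h - b * g), a * e - b * d]]
    return [[adj[r][k] / det for k in range(3)] for r in range(3)]

def unmix(pixel_rgb, crosstalk):
    """Recover the three flash-lit scenes at one pixel.

    Model: measured = crosstalk @ true, so true = inverse(crosstalk) @ measured.
    """
    inv = invert3(crosstalk)
    return [sum(inv[r][k] * pixel_rgb[k] for k in range(3)) for r in range(3)]

# Hypothetical crosstalk: each channel leaks slightly into its neighbours.
M = [[1.0, 0.1, 0.0],
     [0.05, 1.0, 0.1],
     [0.0, 0.1, 1.0]]
recovered = unmix([105.0, 57.0, 25.0], M)
```

With the example matrix, the measured triple (105, 57, 25) unmixes back to the true per-flash intensities (100, 50, 20), removing the 'ghosting' between frames.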

  16. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov [Pacific Northwest National Laboratory, Richland, Washington 99354 and College of Optical Sciences, The University of Arizona, Tucson, Arizona 85719 (United States); Frost, Sofia H. L.; Frayo, Shani L.; Kenoyer, Aimee L.; Santos, Erlinda; Jones, Jon C.; Orozco, Johnnie J. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 (United States); Green, Damian J.; Press, Oliver W.; Pagel, John M.; Sandmaier, Brenda M. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 and Department of Medicine, University of Washington, Seattle, Washington 98195 (United States); Hamlin, Donald K.; Wilbur, D. Scott [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195 (United States); Fisher, Darrell R. [Dade Moeller Health Group, Richland, Washington 99354 (United States)

    2015-07-15

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area

  17. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    Science.gov (United States)

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
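
The ratiometric analysis reduces to comparing the mean intensity of the red (Cy3 PL) channel with that of the green (gQD PL) channel over a region of interest of the camera image. A minimal sketch with synthetic pixel values; the numbers are illustrative, not assay data:

```python
def fret_ratio(pixels):
    """Ratiometric signal for a region of interest.

    pixels: iterable of (R, G, B) tuples from the camera image.
    Returns mean red (acceptor, Cy3) over mean green (donor, gQD) intensity.
    """
    red = sum(p[0] for p in pixels) / len(pixels)
    green = sum(p[1] for p in pixels) / len(pixels)
    return red / green

# Synthetic regions: hybridization boosts red (FRET-sensitized Cy3) emission.
hybridized = [(120, 60, 0), (60, 60, 0)]
blank = [(10, 80, 0), (14, 80, 0)]
signal = fret_ratio(hybridized)
background = fret_ratio(blank)
```

Because both channels come from the same image, the ratio cancels much of the variation in excitation intensity and imaging geometry, which is what makes the approach robust on paper substrates.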

  18. Color-to-grayscale conversion through weighted multiresolution channel fusion

    NARCIS (Netherlands)

    Wu, T.; Toet, A.

    2014-01-01

    We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image.
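
A single-level, contrast-weighted channel fusion conveys the idea; the published method applies such weighting within a multiresolution pyramid, which this sketch omits:

```python
def fuse_to_gray(img_rgb):
    """Blend R, G, B channels into grayscale, weighting each channel by its
    global contrast (standard deviation).

    Single-level sketch of weighted channel fusion; the full algorithm does
    this per level of an image pyramid with locally varying weights.
    """
    h, w = len(img_rgb), len(img_rgb[0])

    def contrast(ch):
        vals = [img_rgb[y][x][ch] for y in range(h) for x in range(w)]
        mean = sum(vals) / len(vals)
        return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 + 1e-6

    weights = [contrast(c) for c in range(3)]
    total = sum(weights)
    weights = [wt / total for wt in weights]
    return [[sum(weights[c] * img_rgb[y][x][c] for c in range(3))
             for x in range(w)] for y in range(h)]

# A flat gray patch and a patch whose detail lives only in the red channel.
flat = fuse_to_gray([[(100, 100, 100), (100, 100, 100)]])
detail = fuse_to_gray([[(0, 50, 50), (200, 50, 50)]])
```

In the second example nearly all the weight goes to the red channel, so the edge that only exists in red survives the conversion, which is the discriminability property the abstract refers to.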

  19. Applications of color machine vision in the agricultural and food industries

    Science.gov (United States)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in agriculture and the food industry. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screenings or make aesthetic judgements. The task of sorting produce according to a color scale is very complex and requires special illumination and training; moreover, it cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. The paper also presents an image analysis algorithm and a prototype machine vision system developed for industry. This system automatically locates the surface of plants using a digital camera and predicts information such as the size, potential value and type of the plant. The algorithm developed is feasible for real-time identification in an industrial environment.

  20. Depth profile analysis of non-specific fluorescence and color of tooth tissues after peroxide bleaching.

    Science.gov (United States)

    Klukowska, Malgorzata; Götz, Hermann; White, Donald J; Zoladz, James; Schwarz, Björn-Olaf; Duschner, Heinz

    2013-02-01

    To examine laboratory changes of endogenous non-specific fluorescence and color throughout the subsurface of tooth structures prior to and following peroxide bleaching. Extracted human teeth were cross-sectioned and mounted on glass slides. Cross sections were examined for internal color (digital camera) and non-specific fluorescence (microRaman spectroscopy) throughout the tooth structure at specified locations. The surfaces of the sections were then saturation-bleached for 70 hours with a gel containing 6% hydrogen peroxide, and the cross sections were reexamined for color and non-specific fluorescence changes. Unbleached enamel, the dentin-enamel junction (DEJ) and dentin exhibit different CIELab color and non-specific fluorescence properties. Bleaching produced significant changes in the color of the internal cross sections and substantial reductions of non-specific fluorescence levels within enamel, dentin and the DEJ. Enamel and dentin non-specific fluorescence were reduced to common values by bleaching, with enamel and the DEJ showing larger reductions than dentin.

  1. Measuring metallicities with Hubble space telescope/wide-field camera 3 photometry

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Teresa L.; Holtzman, Jon A. [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Anthony-Twarog, Barbara J.; Twarog, Bruce [Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045-7582 (United States); Bond, Howard E. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Saha, Abhijit [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Walker, Alistair, E-mail: rosst@nmsu.edu, E-mail: holtz@nmsu.edu, E-mail: bjat@ku.edu, E-mail: btwarog@ku.edu, E-mail: heb11@psu.edu, E-mail: awalker@ctio.noao.edu [Cerro Tololo Inter-American Observatory (CTIO), National Optical Astronomy Observatory, Casilla 603, La Serena (Chile)

    2014-01-01

    We quantified and calibrated the metallicity and temperature sensitivities of colors derived from nine Wide-Field Camera 3 filters on board the Hubble Space Telescope using Dartmouth isochrones and Kurucz atmosphere models. The theoretical isochrone colors were tested and calibrated against observations of five well-studied Galactic clusters, M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791, all of which have spectroscopically determined metallicities spanning –2.30 < [Fe/H] < +0.4. We found empirical corrections to the Dartmouth isochrone grid for each of the following color-magnitude diagrams (CMDs): (F555W-F814W, F814W), (F336W-F555W, F814W), (F390M-F555W, F814W), and (F390W-F555W, F814W). Using the empirical corrections, we tested the accuracy and spread of the photometric metallicities assigned from CMDs and color-color diagrams (which are necessary to break the age-metallicity degeneracy). Testing three color-color diagrams [(F336W-F555W), (F390M-F555W), (F390W-F555W), versus (F555W-F814W)], we found the colors (F390M-F555W) and (F390W-F555W) to be the best suited to measure photometric metallicities. The color (F390W-F555W) requires much less integration time, but generally produces wider metallicity distributions and, at very low metallicity, the metallicity distribution function (MDF) from (F390W-F555W) is ∼60% wider than that from (F390M-F555W). Using the calibrated isochrones, we recovered the overall cluster metallicity to within ∼0.1 dex in [Fe/H] when using CMDs (i.e., when the distance, reddening, and ages are approximately known). The measured MDF from color-color diagrams shows that this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of ∼0.2-0.5 dex using F336W-F555W, ∼0.15-0.25 dex using F390M-F555W, and ∼0.2-0.4 dex with F390W-F555W, with the larger uncertainty pertaining to the lowest metallicity range.

  2. FY 1999 project on the development of new industry support type international standards. Standardization of color management; 1999 nendo shinki sangyo shiengata kokusai hyojun kaihatsu jigyo seika hokokusho. Iro saigen kanri (color management) no hyojunka

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    To eliminate variations in color reproduction among multimedia output devices, a draft standard was studied for color information equipment as hardware and for the software that determines its characteristics; the FY 1999 results are summarized here. Regarding the standardization of color control in input/output devices, the dependence of color printer characteristics on environmental conditions such as ambient temperature and humidity was determined, and the ICC profile was extended to incorporate it. Concerning the standardization of a multi-spectral color image description format, non-linear spectral characteristics were investigated that depend on the type of light source, light intensity, and geometric conditions. In relation to the standardization of psychological color reproduction, the scope of the experiments using human subjects was expanded, and the study was conducted so that the standard image for skin color evaluation can be established with statistical accuracy. A simpler and more effective sequence method was also established. In the discussions of the proposed international standards, studies were carried out on the measurement of electronic still camera color characteristics and on the color space ISO RGB. (NEDO)

  3. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    Science.gov (United States)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When printing images from a camera, the user needs to optimize the camera and printer combination to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Applying these curves to the original digital image produced a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors; the result was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
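
    The per-channel correction built in PhotoShop can be sketched as three smooth transfer curves applied independently to R, G, and B; a simple gamma-shaped curve stands in here for the hand-tuned curves (the exponents are illustrative, not the paper's):

```python
import numpy as np

def smooth_curve(x, gamma):
    """A smooth per-channel transfer curve (gamma form as a stand-in for a
    hand-tuned curve); x is a float array of 8-bit channel values."""
    return np.clip(255.0 * (x / 255.0) ** gamma, 0, 255).astype(np.uint8)

def apply_rgb_curves(image, gammas=(0.9, 0.95, 1.05)):
    """Apply an independent transfer curve to each of the R, G, B channels."""
    out = np.empty_like(image)
    for c, g in enumerate(gammas):
        out[..., c] = smooth_curve(image[..., c].astype(np.float64), g)
    return out

# Mid-gray test patch: gamma < 1 lifts midtones, gamma > 1 darkens them.
img = np.full((2, 2, 3), 128, dtype=np.uint8)
print(apply_rgb_curves(img)[0, 0])
```

    Real workflows would derive the curves from printed test charts rather than fixed exponents.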

  4. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    Science.gov (United States)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  5. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been developing convenient 3D measurement methods using consumer-grade digital cameras and have concluded that such cameras are likely to become useful photogrammetric devices for various close-range application fields. Meanwhile, mobile phone cameras with 10-megapixel sensors have appeared on the market in Japan. In these circumstances, the question arises whether mobile phone cameras can take the place of consumer-grade digital cameras in close-range photogrammetric applications. To evaluate the potential of mobile phone cameras in close-range photogrammetry, this paper compares mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicality of a mobile phone camera for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.
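
    The lens distortion evaluated in such calibration tests is conventionally modeled with the radial terms of the Brown distortion model; a minimal sketch, with made-up coefficients:

```python
def radial_distort(x, y, k1=-0.12, k2=0.01):
    """Map an ideal normalized image point (x, y) to its distorted position
    using the first two radial terms of the Brown model; k1, k2 here are
    invented for illustration, not fitted values from the paper."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Distortion vanishes at the principal point and grows off-axis
# (negative k1 gives the barrel distortion typical of small lenses):
print(radial_distort(0.0, 0.0))
print(radial_distort(0.5, 0.5))
```

    Calibration estimates k1 and k2 (plus tangential terms) by minimizing reprojection error over images of a test target.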

  6. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  7. Evaluation of Operator Performance Using True Color and Artificial Color in Natural Scene Perception

    National Research Council Canada - National Science Library

    Vargo, John

    1999-01-01

    .... Recent advances in technology have permitted the fusion of the output of these two devices into a single color display that potentially combines the capabilities of both sensors while overcoming their limitations...

  8. SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.

    Science.gov (United States)

    Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi

    2018-03-01

    Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  10. Skin color and makeup strategies of women from different ethnic groups.

    Science.gov (United States)

    Caisey, L; Grangeat, F; Lemasson, A; Talabot, J; Voirin, A

    2006-12-01

    The development of a worldwide makeup foundation range requires a thorough understanding of the skin color features of women around the world. To understand the cosmetic needs of women from different ethnic groups, we measured skin color in five groups (French and American Caucasian, Japanese, African-American, and Hispanic-American) and compared the data obtained with the women's self-perception of skin color, before and after they applied their usual foundation product. Skin color was measured using a spectro-radiometer and a spherical lighting device with a CCD camera, ensuring highly reliable imaging and data acquisition. The diversity of skin types involved in the study led to the definition of a large, continuous color space in which the color spectra of the various ethnic groups overlap. Three types of complexion - dark, medium, or light - were distinguished in each group. Only the Japanese women did not identify with this lightness scale, considering it more meaningful to classify their skin on a pink-ocher-beige color scale. The approach nevertheless revealed the great variety of skin colors within each ethnic group and the extent of unevenness. Women's self-perception agreed fairly well with the color measurements, except in the Hispanic-American group. Data recorded after foundation was applied were overall consistent with the makeup strategy described by the volunteers, again except for the latter group, whose approach appeared more uncertain and variable. The findings of the study demonstrate the advantage of combining qualitative and quantitative approaches for assessing the cosmetic needs and expectations of women of different ethnic origins and cultural backgrounds.
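
    The three-way complexion split can be sketched as a simple bucketing of measured CIELAB lightness; the cut-off values below are illustrative, not the study's:

```python
def complexion_class(L_star, thresholds=(55.0, 65.0)):
    """Bucket a measured CIELAB L* value into the three complexion types
    (dark / medium / light) distinguished in each group; the thresholds
    are hypothetical stand-ins for the study's data-driven boundaries."""
    lo, hi = thresholds
    if L_star < lo:
        return "dark"
    if L_star < hi:
        return "medium"
    return "light"

print(complexion_class(50.0), complexion_class(60.0), complexion_class(70.0))
```

    A pink-ocher-beige classification, as preferred by the Japanese participants, would instead bucket on hue angle rather than lightness.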

  11. The iQID camera: An ionizing-radiation quantum imaging detector

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); College of Optical Sciences, The University of Arizona, Tucson, AZ 85719 (United States); Gregory, Stephanie J.; Fuller, Erin S. [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R. [Center for Gamma-Ray Imaging, The University of Arizona, Tucson, AZ 85719 (United States); College of Optical Sciences, The University of Arizona, Tucson, AZ 85719 (United States)

    2014-12-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  12. Noncontact imaging of plethysmographic pulsation and spontaneous low-frequency oscillation in skin perfusion with a digital red-green-blue camera

    Science.gov (United States)

    Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa

    2016-03-01

    A non-contact imaging method using a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillations in skin perfusion. In vivo experiments with human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.
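
    The plethysmographic signal in such RGB-camera methods is typically recovered from the frame-averaged green channel; a minimal sketch of pulse-rate estimation from that trace (synthetic data, not the authors' pipeline):

```python
import numpy as np

def heart_rate_bpm(green_means, fps=30.0):
    """Estimate pulse rate from the spatially averaged green channel of an
    RGB video: detrend, take the spectrum, and pick the strongest peak in
    the physiological band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s:
t = np.arange(0, 10, 1 / 30)
sim = 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(heart_rate_bpm(sim))
```

    Low-frequency oscillations in perfusion would be read off the same spectrum below ~0.15 Hz, which requires a longer recording.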

  13. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    Science.gov (United States)

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    Assessing a new camera-based microswitch technology, which did not require the use of color marks on the participants' face. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small, lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e. when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.
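
    The detection software itself (written in ISO C++) is not published; a naive frame-differencing trigger sketches the general idea of a camera-based microswitch that needs no color marks on the face (the thresholds are arbitrary):

```python
import numpy as np

def movement_detected(prev_frame, frame, pixel_thresh=15, count_thresh=20):
    """Fire the 'microswitch' when enough pixels change by more than
    pixel_thresh gray levels between consecutive frames."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    return int((diff > pixel_thresh).sum()) >= count_thresh

# Synthetic frames: a small patch brightens, as a slight head movement might.
prev = np.zeros((10, 10), dtype=np.uint8)
cur = prev.copy()
cur[:5, :5] = 200
print(movement_detected(prev, cur))
```

    A practical system would restrict the difference to a region of interest around the mouth or head and debounce over several frames before delivering the preferred stimulation.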

  14. Photometric Characterization of the Dark Energy Camera

    Science.gov (United States)

    Bernstein, G. M.; Abbott, T. M. C.; Armstrong, R.; Burke, D. L.; Diehl, H. T.; Gruendl, R. A.; Johnson, M. D.; Li, T. S.; Rykoff, E. S.; Walker, A. R.; Wester, W.; Yanny, B.

    2018-05-01

    We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of more than 10⁷ stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results, with color corrections and the use of an aperture-correction proxy, brings the rms variation in aperture magnitudes of bright stars on cloudless nights down to 2-3 mmag. The DECam response pattern across the 2° field drifts over months by up to ±9 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.

  15. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Zou Xiaotao

    2009-01-01

    A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition, which provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  17. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    Science.gov (United States)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive, and flexible detector for local investigation of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified and previous experimental results confirmed. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was investigated successfully.
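
    Per camera, the beam-path measurement reduces to locating the centroid of the residual-gas glow in the image; with two cameras mounted at 90° to each other, one resolves the horizontal beam offset and the other the vertical. A minimal sketch with synthetic one-dimensional profiles:

```python
import numpy as np

def centroid(profile):
    """Intensity-weighted centroid (in pixels) of a residual-gas-glow profile."""
    idx = np.arange(len(profile))
    return float(np.sum(idx * profile) / np.sum(profile))

# Synthetic glow profiles as the two orthogonal cameras might record them:
horiz = np.array([0, 1, 4, 1, 0], dtype=float)  # horizontal-offset camera
vert = np.array([0, 0, 1, 4, 1], dtype=float)   # vertical-offset camera
print(centroid(horiz), centroid(vert))
```

    Moving the camera pair along the beam pipe and repeating this measurement traces out the full beam path through the toroidal field.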

  18. Gamma camera image acquisition, display, and processing with the personal microcomputer

    International Nuclear Information System (INIS)

    Lear, J.L.; Pratt, J.P.; Roberts, D.R.; Johnson, T.; Feyerabend, A.

    1990-01-01

    The authors evaluated the potential of a microcomputer for direct acquisition, display, and processing of gamma camera images. Boards for analog-to-digital conversion and image zooming were designed, constructed, and interfaced to the Macintosh II (Apple Computer, Cupertino, Calif). Software was written for processing of single, gated, and time series images. The system was connected to gamma cameras, and its performance was compared with that of dedicated nuclear medicine computers. Data could be acquired from gamma cameras at rates exceeding 200,000 counts per second, with spatial resolution exceeding intrinsic camera resolution. Clinical analysis could be rapidly performed. This system performed better than most dedicated nuclear medicine computers with respect to speed of data acquisition and spatial resolution of images while maintaining full compatibility with the standard image display, hard-copy, and networking formats. It could replace such dedicated systems in the near future as software is refined.

  19. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation for changing from CCD to CMOS sensor technology in the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging; it was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, the first digital aerial mapping camera to use a single ultra-large CCD sensor and thereby avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
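
    The quoted dynamic-range advantage can be made concrete: a sensor's dynamic range in decibels follows from its full-well capacity and read noise. The electron counts below are illustrative, not DMC III specifications:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Sensor dynamic range in dB from full-well capacity and read noise
    (both in electrons): DR = 20 * log10(full_well / read_noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Illustrative values: halving the read noise at the same full well
# adds about 6 dB, i.e. one more usable stop of scene contrast.
print(dynamic_range_db(60000, 10))
print(dynamic_range_db(60000, 5))
```

    "Twice the dynamic range" of a CCD thus corresponds to roughly a 6 dB gain in this metric.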

  20. Modeling human color categorization: Color discrimination and color memory

    NARCIS (Netherlands)

    Heskes, T.; van den Broek, Egon; Lucas, P.; Hendriks, Maria A.; Vuurpijl, L.G.; Puts, M.J.H.; Wiegerinck, W.

    2003-01-01

    Color matching in Content-Based Image Retrieval is done using a color space and measuring distances between colors. Such an approach yields non-intuitive results for the user. We introduce color categories (or focal colors), determine that they are valid, and use them in two experiments. The

  1. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, synchronizing multiple cameras is required. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor, based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
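
    The synchronization scheme is a feedback loop: measure each camera's line period and nudge its supply voltage in proportion to the error. A toy simulation of one such loop (the linear sensor model, gain, and units are invented for the sketch, not taken from the paper):

```python
def line_period(voltage):
    """Toy model of a self-timed sensor: a higher supply voltage runs the
    internal clock faster, shortening the line period (microseconds)."""
    return 120.0 - 20.0 * voltage

def sync_step(voltage, target_period, gain=0.002):
    """One iteration of the control core: adjust the supply voltage in
    proportion to the line-period error."""
    error = line_period(voltage) - target_period
    return voltage + gain * error

# Drive a camera from 1.6 V toward the voltage whose line period is 80 us:
v = 1.6
for _ in range(200):
    v = sync_step(v, target_period=80.0)
print(line_period(v))
```

    In the real system the Slaves run this loop against the Master's measured line and frame period, which yields phase as well as frequency lock.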

  2. Color Views of Soil Scooped on Sol 9

    Science.gov (United States)

    2008-01-01

    These three color views show soil in the Robotic Arm scoop of NASA's Phoenix Mars Lander. The images show a handful of Martian soil dug from the digging site informally called 'Knave of Hearts,' from the trench informally called 'Dodo,' on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). 'Dodo' is the same site as the earlier test trench dug on the seventh Martian day of the mission, or Sol 7 (June 1, 2008). The Robotic Arm Camera took the three color views at different focus positions. Scientists can better study soil structure and estimate how much soil was collected by taking multiple images at different foci. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  3. Food color and appearance measurement, specification and communication, can we do better?

    Science.gov (United States)

    Hutchings, John; Singleton, Mark; Plater, Keith; Dias, Benjamin

    2002-06-01

    Conventional methods of color specification demand a sample that is flat, uniformly colored, diffusely reflecting, and opaque. Very many natural, processed, and manufactured foods, on the other hand, are three-dimensional, irregularly shaped, unevenly colored, and translucent. Hence, spectrophotometers and tristimulus colorimeters can only be used for reliable and accurate color measurement in certain cases and under controlled conditions. These techniques are certainly unsuitable for specifying color patterning and other factors of total appearance in which, for example, surface texture and gloss interfere with the surface color. Conventional techniques are therefore more appropriate to food materials than to foods themselves. This paper reports investigations on the application of digital camera and screen technologies to these problems. Results indicated that accuracy sufficient for wide-scale use in the food industry is obtainable. Measurement applications include the specification and automatic measurement and classification of the total appearance properties of three-dimensional products. This will be applicable to the specification and monitoring of fruit and vegetables within the growing, storage, and marketing supply chain, and to on-line monitoring. Applications to sensory panels include monitoring of the color and appearance changes occurring during paneling and the development of physical reference scales based on pigment chemistry changes. Digital technology will be extendable to the on-screen judging of real and virtual products as well as to the improvement of appearance archiving and communication.

  4. MOSS spectroscopic camera for imaging time resolved plasma species temperature and flow speed

    International Nuclear Information System (INIS)

    Michael, Clive; Howard, John

    2000-01-01

    A MOSS (Modulated Optical Solid-State) spectroscopic camera has been devised to monitor the spatial and temporal variations of the temperatures and flow speeds of plasma ion species through Doppler broadening measurements of specified spectral lines. As opposed to a single-channel MOSS spectrometer, the camera images light from the plasma onto an array of light detectors, enabling 2D imaging of plasma ion temperatures and flow speeds. In addition, compared to a conventional grating spectrometer, the MOSS camera offers excellent light-collecting performance, which improves the signal-to-noise ratio and the time resolution. The paper first describes the basics of MOSS spectroscopy, then the MOSS camera, with an emphasis on the optical system for 2D imaging. (author)
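
    The underlying physics converts a measured Doppler line width into an ion temperature: for thermal broadening, the 1/e half-width obeys Δλ/λ₀ = √(2kT/Mc²), so T = (Mc²/2)(Δλ/λ₀)². A small sketch (the example line and width are hypothetical, not from the paper):

```python
def ion_temperature_eV(lambda0_nm, delta_lambda_nm, ion_mass_amu):
    """Ion temperature in eV from the 1/e Doppler half-width of a spectral
    line: T = (M c^2 / 2) * (delta_lambda / lambda0)^2, with the ion rest
    energy expressed via the amu rest energy of 931.494 MeV."""
    amu_eV = 931.494e6  # amu rest energy in eV
    ratio = delta_lambda_nm / lambda0_nm
    return 0.5 * ratio ** 2 * ion_mass_amu * amu_eV

# Hypothetical carbon line near 465 nm with a 0.01 nm 1/e half-width:
print(ion_temperature_eV(465.0, 0.01, 12.0))
```

    A MOSS instrument measures this width interferometrically, via the fringe contrast of the modulated signal, rather than by resolving the profile directly.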

  5. MOSS spectroscopic camera for imaging time resolved plasma species temperature and flow speed

    Energy Technology Data Exchange (ETDEWEB)

    Michael, Clive; Howard, John [Australian National Univ., Plasma Research Laboratory, Canberra (Australia)

    2000-03-01

    A MOSS (Modulated Optical Solid-State) spectroscopic camera has been devised to monitor the spatial and temporal variations of the temperatures and flow speeds of plasma ion species through Doppler broadening measurements of specified spectral lines. As opposed to a single-channel MOSS spectrometer, the camera images light from the plasma onto an array of light detectors, enabling 2D imaging of plasma ion temperatures and flow speeds. In addition, compared to a conventional grating spectrometer, the MOSS camera offers excellent light-collecting performance, which improves the signal-to-noise ratio and the time resolution. The paper first describes the basics of MOSS spectroscopy, then the MOSS camera, with an emphasis on the optical system for 2D imaging. (author)

  6. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  7. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted on a column, one above the other.

  8. New color-photographic observation of thermoluminescence from sliced rock samples

    International Nuclear Information System (INIS)

    Hashimoto, Tetsuo; Kimura, Kenichi; Koyanagi, Akira; Takahashi, Kuniaki; Sotobayashi, Takeshi

    1983-01-01

    A new observation technique has been established for thermoluminescence photography using extremely high-sensitivity color films. With future application to geological fields in mind, a granite was selected as the test material. The sliced specimens (0.5--0.7 mm thick), irradiated with a ⁶⁰Co source, were mounted on a heater fitted with a thermocouple connected to a microcomputer for temperature measurement. The samples were heated over the range 80--400 °C while the microcomputer operated the camera shutter. Four commercially available films (Kodak-1000(ASA), -400, Sakura-400, Fuji-400) gave clearly detectable color images of artificial thermoluminescence above a total absorbed dose of 880 Gy (88 krad). Specimens irradiated up to 8.4 kGy (840 krad) made it easy to distinguish the distinct appearance of the thermoluminescence images according to the white mineral constituents. Moreover, these color images changed with the heating temperature. Sakura-400 film produced the most colorful images in terms of color tone, although Kodak-1000 showed the highest sensitivity. Using Kodak-1000, a characteristic color image due to natural thermoluminescence was clearly observed on a Precambrian granite that had been exposed to natural radiation alone since its formation. This simple technique, which yields surface information reflecting impurities and local crystal defects as well as small mineral constituents, was named thermoluminescence color imaging (TLCI) by the authors, and its versatile applications are discussed. (author)

  9. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    Science.gov (United States)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. It requires no complicated computation or image processing, and it can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of few p/d-safe colors; the colors contained in these palettes are therefore insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette could be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study describes the results of the proposed color reduction in photographs that include typical p/d-confusion colors. After the reduction process is completed, color-defective observers can distinguish these confusion colors.
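The simple color-mapping idea can be sketched as a nearest-neighbor lookup into a p/d-safe palette (illustrative only: distances are taken in plain RGB rather than a perceptual space, and the tiny palette below is hypothetical, not the 20-color palette of Ito et al.):

```python
def nearest_palette_color(pixel, palette):
    """Map an (R, G, B) pixel to the closest palette entry by
    squared Euclidean distance in RGB space."""
    return min(palette, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

def reduce_image(pixels, palette):
    """Replace every pixel with its nearest palette color."""
    return [nearest_palette_color(p, palette) for p in pixels]

# Tiny hypothetical p/d-safe palette (black, white, a blue, an orange)
palette = [(0, 0, 0), (255, 255, 255), (0, 114, 178), (230, 159, 0)]
out = reduce_image([(10, 5, 0), (200, 150, 10)], palette)
```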

  10. A real-time error-free color-correction facility for digital consumers

    Science.gov (United States)

    Shaw, Rodney

    2008-01-01

    It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, owing to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has now become a relatively straightforward operation. Using a consumer preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail and related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.

  11. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
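Step (2), counting plants from projection histograms, can be sketched as follows (a minimal illustration assuming a binary plant mask has already been produced; the paper's actual pipeline also includes the perspective transform and homography steps):

```python
def count_plants(mask, threshold=1):
    """Count plants in a binary top-view mask by summing each column
    (a projection histogram) and counting contiguous runs of columns
    whose sum reaches the threshold."""
    if not mask:
        return 0
    ncols = len(mask[0])
    hist = [sum(row[c] for row in mask) for c in range(ncols)]  # column sums
    count, inside = 0, False
    for h in hist:
        if h >= threshold and not inside:   # entering a new run
            count, inside = count + 1, True
        elif h < threshold:                 # leaving a run
            inside = False
    return count

# Two plants separated by an empty column band
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
]
n = count_plants(mask)
```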

  12. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  13. NEMA NU-1 2007 based and independent quality control software for gamma cameras and SPECT

    International Nuclear Information System (INIS)

    Vickery, A; Joergensen, T; De Nijs, R

    2011-01-01

    A thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software, which is indeed a useful tool for following the day-to-day performance of a single camera. However, when it comes to objective performance comparison of different gamma cameras and a deeper understanding of the calculated numbers, camera-specific QC software without access to the source code is best avoided: calculations and definitions may differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data-handling software for processing acquired QC data, providing the user with instructive images and text files with the results.
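As an example of a manufacturer-independent, standardized QC calculation, a NEMA-style integral uniformity can be computed directly from flood-field counts (a simplified sketch: the standard also prescribes nine-point smoothing and UFOV/CFOV masking, omitted here):

```python
def integral_uniformity(counts):
    """NEMA NU 1 style integral uniformity (percent) over a
    flood-field count matrix: 100 * (max - min) / (max + min)."""
    flat = [c for row in counts for c in row]
    hi, lo = max(flat), min(flat)
    return 100.0 * (hi - lo) / (hi + lo)

# Hypothetical 2x2 flood-field excerpt
iu = integral_uniformity([[100, 104], [98, 102]])
```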

  14. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  15. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Heller, M. [DPNC-Universite de Geneve, Geneva (Switzerland); Schioppa, E. Jr; Porcelli, A.; Pujadas, I.T.; Della Volpe, D.; Montaruli, T.; Cadoux, F.; Favre, Y.; Christov, A.; Rameez, M.; Miranda, L.D.M. [DPNC-Universite de Geneve, Geneva (Switzerland); Zietara, K.; Idzkowski, B.; Jamrozy, M.; Ostrowski, M.; Stawarz, L.; Zagdanski, A. [Jagellonian University, Astronomical Observatory, Krakow (Poland); Aguilar, J.A. [DPNC-Universite de Geneve, Geneva (Switzerland); Universite Libre Bruxelles, Faculte des Sciences, Brussels (Belgium); Prandini, E.; Lyard, E.; Neronov, A.; Walter, R. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Rajda, P.; Bilnik, W.; Kasperek, J.; Lalik, K.; Wiecek, M. [AGH University of Science and Technology, Krakow (Poland); Blocki, J.; Mach, E.; Michalowski, J.; Niemiec, J.; Skowron, K.; Stodulski, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Bogacz, L. [Jagiellonian University, Department of Information Technologies, Krakow (Poland); Borkowski, J.; Frankowski, A.; Janiak, M.; Moderski, R. [Polish Academy of Science, Nicolaus Copernicus Astronomical Center, Warsaw (Poland); Bulik, T.; Grudzinska, M. [University of Warsaw, Astronomical Observatory, Warsaw (Poland); Mandat, D.; Pech, M.; Schovanek, P. [Institute of Physics of the Czech Academy of Sciences, Prague (Czech Republic); Marszalek, A.; Stodulska, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Jagellonian University, Astronomical Observatory, Krakow (Poland); Pasko, P.; Seweryn, K. [Centrum Badan Kosmicznych Polskiej Akademii Nauk, Warsaw (Poland); Sliusar, V. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Taras Shevchenko National University of Kyiv, Astronomical Observatory, Kyiv (Ukraine)

    2017-01-15

    The single-mirror small-size telescope (SST-1M) is one of the three proposed designs for the small-size telescopes (SSTs) of the Cherenkov Telescope Array (CTA) project. The SST-1M will be equipped with a 4 m-diameter segmented reflector dish and an innovative fully digital camera based on silicon photo-multipliers. Since the SST sub-array will consist of up to 70 telescopes, the challenge is not only to build telescopes with excellent performance, but also to design them so that their components can be commissioned, assembled and tested by industry. In this paper we review the basic steps that led to the design concepts for the SST-1M camera and the ongoing realization of the first prototype, with focus on the innovative solutions adopted for the photodetector plane and the readout and trigger parts of the camera. In addition, we report on results of laboratory measurements on real scale elements that validate the camera design and show that it is capable of matching the CTA requirements of operating up to high moonlight background conditions. (orig.)

  16. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single-channel images are often not directly applicable to multichannel ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  17. Influence of Surrounding Colors in the Illuminant-Color Mode on Color Constancy

    Directory of Open Access Journals (Sweden)

    Kazuho Fukuda

    2011-05-01

    Full Text Available In earlier work on color constancy, we showed that brighter surrounding colors have greater influence than dim colors (Uchikawa, Kitazawa, MacLeod, Fukuda, 2010 APCV. Increasing the luminance of a stimulus causes its appearance to change from the surface-color to the illuminant-color mode. However, it is unknown whether the visual system takes the color appearance mode of surrounding colors into account to achieve color constancy. We investigated the influence of surrounding colors that appeared illuminant on color constancy. The stimulus was composed of a central test stimulus and six surrounding colors: bright and dim red, green and blue. The observers adjusted the chromaticity of the test stimulus so that it appeared as an achromatic surface. The luminance balance of the three bright surrounding colors was equalized with that of the optimal colors in three illuminant conditions, and then the luminance of one of the three bright colors was varied beyond the critical luminance of the color-appearance-mode transition. The results showed that increasing the luminance of a bright surrounding color shifted the observers' achromatic setting toward its chromaticity, but this effect diminished for surrounding colors in the illuminant-color mode. These results suggest that the visual system takes the color appearance mode of surrounding colors into account to accomplish color constancy.

  18. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  1. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
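The quoted IFOVs follow directly from the stated fields of view and the 1600-pixel image width; a quick consistency check (IFOV ≈ FOV in radians divided by the pixel count across it):

```python
import math

def ifov_microrad(fov_deg, npixels):
    """Instantaneous field of view (microradians) for a detector with
    npixels spanning a total field of view of fov_deg degrees."""
    return math.radians(fov_deg) / npixels * 1e6

m34 = ifov_microrad(20.0, 1600)   # Mastcam-34: 20 deg across 1600 pixels
m100 = ifov_microrad(6.8, 1600)   # Mastcam-100: 6.8 deg across 1600 pixels
```

Both values round to the 218 and 74 microradians quoted in the abstract.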

  2. Depth profile measurement with lenslet images of the plenoptic camera

    Science.gov (United States)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
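The linear depth calibration step can be sketched as a least-squares line fit from raw refocusing-based depth estimates to known target distances (hypothetical calibration data; the paper's calibration is derived from the camera's optical structure):

```python
def fit_linear_calibration(raw, depth):
    """Least-squares fit depth = a * raw + b from calibration pairs,
    as in a linear depth calibration of a plenoptic camera."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(depth) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, depth))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Hypothetical calibration targets at known distances (mm)
a, b = fit_linear_calibration([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```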

  3. Electroactive subwavelength gratings (ESWGs) from conjugated polymers for color and intensity modulation

    Science.gov (United States)

    Bhuvana, Thiruvelu; Kim, Byeonggwan; Yang, Xu; Shin, Haijin; Kim, Eunkyoung

    2012-05-01

    Subwavelength gratings with electroactive polymers such as poly(3-hexylthiophene) (P3HT) and poly(3,4-propylenedioxythiophene-phenylene) (P(ProDOT-Ph)) controlled the color intensity for various visible colors of diffracted light in a single device. Under the illumination of a white light, at a fixed angle of incidence, the color intensity of the diffracted light was reversibly switched from the maximum value down to 15% (85% decrease) by applying -2 to 2 V due to electrochemical (EC) reaction. All spectral colors including red, green, and blue were generated by changing the angle of incidence, and the intensity of each color was modulated electrochemically at a single EC device. With electroactive subwavelength gratings (ESWGs) of P3HT, the maximum modulation of the color intensity was observed in the red-yellow quadrant in the CIE color plot, whereas for the ESWGs of P(ProDOT-Ph), the maximum modulation of the color intensity was observed in the yellow-green and green-blue quadrants. Both ESWGs showed a memory effect, keeping their color and intensity even after power was turned off for longer than 40 hours.

  4. Performance characteristics of the novel PETRRA positron camera

    CERN Document Server

    Ott, R J; Erlandsson, K; Reader, A; Duxbury, D; Bateman, J; Stephenson, R; Spill, E

    2002-01-01

    The PETRRA positron camera consists of two 60 cm × 40 cm annihilation photon detectors mounted on a rotating gantry. Each detector contains large BaF₂ scintillators interfaced to large-area multiwire proportional chambers filled with a photo-sensitive vapour (tetrakis-(dimethylamino)-ethylene). The spatial resolution of the camera has been measured as 6.5 ± 1.0 mm FWHM throughout the sensitive field-of-view (FoV), the timing resolution is between 7 and 10 ns FWHM, and the detection efficiency for annihilation photons is approximately 30% per detector. The count-rates obtained, from a 20 cm diameter by 11 cm long water-filled phantom containing 90 MBq of ¹⁸F, were approximately 1.25 × 10⁶ cps singles and approximately 1.1 × 10⁵ cps raw coincidences, limited only by the read-out system dead-time of approximately 4 μs. The count-rate performance, sensitivity and large FoV make the camera ideal for whole-body imaging in oncology.
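The effect of a read-out dead time of roughly 4 μs on count rate can be illustrated with the standard non-paralyzable dead-time model (an illustration of the general effect, not a model of the PETRRA read-out electronics):

```python
def measured_rate(true_cps, dead_time_s):
    """Forward non-paralyzable dead-time model: m = n / (1 + n * tau)."""
    return true_cps / (1.0 + true_cps * dead_time_s)

def true_rate(measured_cps, dead_time_s):
    """Invert the model to recover the true rate: n = m / (1 - m * tau)."""
    return measured_cps / (1.0 - measured_cps * dead_time_s)

# With a hypothetical true singles rate of 2e5 cps and tau = 4 us,
# the detector records noticeably fewer events
m = measured_rate(2.0e5, 4.0e-6)
```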

  5. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  6. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it forms a picosecond camera allowing very fast effects to be studied [fr]

  7. Color image analysis of contaminants and bacteria transport in porous media

    Science.gov (United States)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

    Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way, and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.

  8. Reliability of single kidney glomerular filtration rate measured by a 99mTc-DTPA gamma camera technique

    International Nuclear Information System (INIS)

    Rehling, M.; Moller, M.L.; Jensen, J.J.; Thamdrup, B.; Lund, J.O.; Trap-Jensen, J.

    1986-01-01

    The reliability of a previously published method for determining single kidney glomerular filtration rate (SKGFR) by means of technetium-99m-diethylenetriaminepenta-acetate (99mTc-DTPA) gamma camera renography was evaluated. The day-to-day variation in the calculated SKGFR values had earlier been found to be 8.8%. The technique was compared with the simultaneously measured renal clearance of inulin in 19 unilaterally nephrectomized patients with GFR varying from 11 to 76 ml/min. The regression line (y = 1.04x - 2.5) did not differ significantly from the line of identity. The standard error of estimate was 4.3 ml/min. In 17 patients the inter- and intraobserver variation of the calculated SKGFR values was 1.2 ml/min and 1.3 ml/min, respectively. In 21 of 25 healthy subjects studied (age range 27-29 years), total GFR calculated from the renograms was within an established age-dependent normal range of GFR.
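Comparison statistics of this kind (regression slope, intercept, and standard error of estimate) come from an ordinary least-squares fit; the sketch below uses hypothetical paired GFR values, not the study's data:

```python
def regression_with_see(x, y):
    """Ordinary least squares y = a*x + b, plus the standard error of
    estimate (root mean squared residual with n-2 degrees of freedom),
    as used when comparing camera-derived GFR against inulin clearance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    see = (ss_res / (n - 2)) ** 0.5
    return a, b, see

# Hypothetical paired GFR values (ml/min): inulin (x) vs gamma camera (y)
a, b, see = regression_with_see([20, 40, 60, 80], [19, 40, 61, 80])
```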

  9. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
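The spherical-head idea can be illustrated with a minimal sketch (a simplified model assuming a midline facial feature projects to center + R·sin(yaw) in the image; the paper's actual equations are derived from the angles of the eyes-mouth triangle):

```python
import math

def yaw_from_offset(feature_x, center_x, radius):
    """Estimate head yaw (degrees) by modeling the head as a sphere of
    the given pixel radius: a feature on the face midline projects to
    center_x + radius * sin(yaw), so yaw = asin(dx / radius)."""
    dx = feature_x - center_x
    # clamp to [-1, 1] against noisy feature localization
    return math.degrees(math.asin(max(-1.0, min(1.0, dx / radius))))

# Midline shifted 25 px on a 50 px radius head -> 30 degrees of yaw
yaw = yaw_from_offset(125.0, 100.0, 50.0)
```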

  10. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras make it possible to obtain close-range aerial photographs, but - when choppy or windy weather conditions rule out an accurate nadir-waypoint flight - two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.

  11. Assessment of color parameters of composite resin shade guides using digital imaging versus colorimeter.

    Science.gov (United States)

    Yamanel, Kivanc; Caglar, Alper; Özcan, Mutlu; Gulsah, Kamran; Bagis, Bora

    2010-12-01

    This study evaluated the color parameters of resin composite shade guides determined using a colorimeter and digital imaging method. Four composite shade guides, namely: two nanohybrid (Grandio [Voco GmbH, Cuxhaven, Germany]; Premise [KerrHawe SA, Bioggio, Switzerland]) and two hybrid (Charisma [Heraeus Kulzer, GmbH & Co. KG, Hanau, Germany]; Filtek Z250 [3M ESPE, Seefeld, Germany]) were evaluated. Ten shade tabs were selected (A1, A2, A3, A3,5, A4, B1, B2, B3, C2, C3) from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc., Kyoto, Japan). The data were analyzed using two-way analysis of variance and Bonferroni post hoc test. Overall, the mean ΔE values from different composite pairs demonstrated statistically significant differences when evaluated with the colorimeter (p 6.8). For all shade pairs evaluated, the most significant shade mismatches were obtained between Grandio-Filtek Z250 (p = 0.021) and Filtek Z250-Premise (p = 0.01) regarding ΔE mean values, whereas the best shade match was between Grandio-Charisma (p = 0.255) regardless of the measurement method. The best color match (mean ΔE values) was recorded for A1, A2, and A3 shade pairs in both methods. When proper object-camera distance, digital camera settings, and suitable illumination conditions are provided, digital imaging method could be used in the assessment of color parameters. Interchanging use of shade guides from different composite systems should be avoided during color selection. © 2010, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2010, WILEY PERIODICALS, INC.
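
    The ΔE values compared in this study are CIELAB color differences. Assuming the common CIE 1976 formula (the abstract does not state which variant was used), the computation is simply a Euclidean distance in L*a*b* space:

    ```python
    import math

    def delta_e_76(lab1, lab2):
        """CIE 1976 color difference (Euclidean distance in CIELAB)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))
    ```

    A ΔE near 1 is roughly the threshold of a just-perceivable difference, and values above about 3.3 are commonly cited as clinically unacceptable mismatches in dentistry.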

  12. A device for the color measurement and detection of spots on the skin

    Science.gov (United States)

    Pladellorens, Josep; Pintó, Agusti; Segura, Jordi; Cadevall, Cristina; Antó, Joan; Pujol, Jaume; Vilaseca, Meritxell; Coll, Joaquín

    2006-08-01

    In this work we present a new, fast and easy-to-use device which allows the measurement of color and the detection of spots on the human skin. The developed device is highly practical for relatively untrained operators and uses inexpensive consumer equipment, such as a CCD color camera, a light source composed of LEDs and a laptop. In order to perform these measurements the system takes a picture of the skin. After that, the operator selects the region of the skin to be analyzed on the image displayed and the system provides the CIELAB color coordinates, the chroma and the ITA parameter (Individual Typology Angle), allowing the comparison with other reference images by means of the CIELAB color differences. The system also detects spots, such as freckles, age spots, sun spots, pimples, blackheads, etc., in a determined region, allowing the objective measurement of their size and area. The knowledge of the color of the skin and the detection of spots can be useful in several areas such as dermatology applications, the cosmetics industry, the biometrics field, health care, etc.
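
    The ITA parameter mentioned above has a standard definition in skin colorimetry, computed from the CIELAB lightness L* and the yellow-blue coordinate b*; a minimal sketch:

    ```python
    import math

    def individual_typology_angle(L_star, b_star):
        """ITA in degrees: arctan((L* - 50) / b*) * 180 / pi.

        Higher ITA corresponds to lighter skin; atan2 is used so the
        function stays defined when b* approaches zero.
        """
        return math.degrees(math.atan2(L_star - 50.0, b_star))
    ```

    For example, a patch with L* = 70 and b* = 20 yields an ITA of 45 degrees, which falls in the lighter skin categories of the usual ITA classification.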

  13. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper goes on to show that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.

  14. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
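
    The claim that accurate intrinsics determine the 3D ray behind every pixel can be made concrete with a standard pinhole model. The intrinsic values below are illustrative placeholders, not numbers from the thesis:

    ```python
    import numpy as np

    # Hypothetical intrinsics: focal lengths fx = fy = 800 px,
    # principal point at (320, 240).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(K, xyz):
        """Pinhole projection of a camera-frame 3D point to pixel coords."""
        uvw = K @ np.asarray(xyz, dtype=float)
        return uvw[:2] / uvw[2]

    def backproject(K, uv):
        """Unit direction of the 3D ray through pixel (u, v)."""
        ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
        return ray / np.linalg.norm(ray)
    ```

    Every 3D point along the returned ray projects back to the same pixel, which is exactly why a second calibrated camera is needed to pin down the point by intersecting the two rays.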

  15. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of this evaluation, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt control) was designed on the basis of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  17. Marketing and the law: defending single color trademarks

    OpenAIRE

    Keating, Byron W.; Coltman, Tim

    2008-01-01

    Most international jurisdictions have sought to broaden their definition of a trade mark following Qualitex v Jacobson Products (the Qualitex Case). In Australia, the Trade Marks Act (Cth) 1995 was introduced to recognise that colors, scents, shapes and sounds could be registered as a trade mark provided the mark was capable of distinguishing, in the course of trade, the proprietor’s goods or services from the goods or services of others. However, to date, it has proven extremely difficult t...

  18. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest growing consumer markets today. Over the past few years total volumes have grown rapidly, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras has been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  19. EXAFS analysis of full color glasses and glass ceramics: local order and color

    International Nuclear Information System (INIS)

    Santa Cruz, Petrus A.; Sa, Gilberto F. de; Malta, Oscar L.; Silva, Jose Expedito Cavalcante

    1996-01-01

    The generation and control of the relative intensities of the primary additive colors in solid state light emitters is very important to the development of higher resolution media, used in color monitors, solid state sensors, large area and flat displays and other optoelectronic devices. We have developed a multi-doped glassy material named FCG (full color glass) to generate and to control the primary light colors, allowing the simulation of any color of light by additive synthesis. Tm(III), Tb(III) and Eu(III) ions were used (0.01 to 5.0 mol%) as blue, green and red narrow emitters. A wide color gamut was obtained under ultraviolet excitation by varying the material composition. The chromaticity diagram is covered, including the white simulation. We propose a mechanism to control the chromaticity of a fixed composition of the material, using Er(III) as a selective quencher that may be deactivated by infrared excitation. Although this new material already presents a high efficiency, it may be improved further because the energy transfer within the rare earth triad may still be reduced. Optical spectroscopy measurements confirm that it is still possible to improve the efficiency of the FCG material. EXAFS analysis will be used to probe the local environment around the triad of rare earths that generates the primary colors. For this purpose we have prepared single-doped glasses with each component of the triad at the same concentration as in FCG. The devitrification of these glasses will be analyzed in order to produce glass ceramics with ion segregation. (author)

  20. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  1. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.

  2. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large number of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  3. Single-shot dual-wavelength in-line and off-axis hybrid digital holography

    Science.gov (United States)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2018-02-01

    We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm in which the initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure can produce higher quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.

  4. Color sensitivity of the multi-exposure HDR imaging process

    Science.gov (United States)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. The export applies white balance settings and image stitching, both of which influence the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
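
    The irradiance-recovery step described above is commonly implemented as a weighted average of exposure-normalized values across the LDR stack, in the style of Debevec and Malik. The sketch below assumes a linear sensor (logarithmic inverse response) for simplicity; in practice the response curve must be estimated first:

    ```python
    import numpy as np

    def recover_irradiance(images, exposures, inv_response=np.log):
        """Fuse a stack of LDR exposures into a relative irradiance map.

        images: float arrays with values in (0, 1]; exposures: shutter
        times in seconds. inv_response is the inverse camera response
        g = f^(-1); np.log corresponds to a linear sensor, whereas a real
        camera's curve would be estimated beforehand.
        """
        log_e = np.zeros_like(images[0])
        w_sum = np.zeros_like(images[0])
        for img, t in zip(images, exposures):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones
            log_e += w * (inv_response(img) - np.log(t))
            w_sum += w
        return np.exp(log_e / np.maximum(w_sum, 1e-6))
    ```

    The hat-shaped weight discounts pixels near saturation or the noise floor, which is where the per-exposure irradiance estimates disagree most.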

  5. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also discussed.
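
    The Divcam's principle, inferring distance from how fast diverging illumination decays, can be illustrated with a pure inverse-square model and two axially separated point sources. The geometry here is a simplification for illustration, not the camera's actual optical layout:

    ```python
    import math

    def distance_from_ratio(i_near, i_far, baseline):
        """Distance d (same units as baseline) from the nearer of two
        axially separated point sources, assuming pure inverse-square
        falloff so that:
            i_near / i_far = ((d + baseline) / d) ** 2
        where i_near and i_far are the intensities a surface point
        reflects under each source.
        """
        ratio_root = math.sqrt(i_near / i_far)  # equals (d + baseline) / d
        return baseline / (ratio_root - 1.0)
    ```

    For example, a point at 2 m with a 0.5 m source separation produces an intensity ratio of (2.5 / 2)^2, about 1.56, which inverts back to d = 2 m.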

  6. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource.

    Science.gov (United States)

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data.

  7. Two-dimensional color-code quantum computation

    International Nuclear Information System (INIS)

    Fowler, Austin G.

    2011-01-01

    We describe in detail how to perform universal fault-tolerant quantum computation on a two-dimensional color code, making use of only nearest neighbor interactions. Three defects (holes) in the code are used to represent logical qubits. Triple-defect logical qubits are deformed into isolated triangular sections of color code to enable transversal implementation of all single logical qubit Clifford group gates. Controlled-NOT (CNOT) is implemented between pairs of triple-defect logical qubits via braiding.

  8. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  9. Can camera traps monitor Komodo dragons a large ectothermic predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
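
    In the single-season occupancy framework used here, occupancy ψ and per-survey detection p combine to give the probability that an occupied site is ever detected. A minimal sketch of that relationship (illustrative, not the authors' full eight-model analysis):

    ```python
    def p_detected_at_least_once(psi, p, k):
        """Probability that a site is occupied AND the species is detected
        in at least one of k surveys, for occupancy probability psi and a
        constant per-survey detection probability p."""
        return psi * (1.0 - (1.0 - p) ** k)
    ```

    For example, with psi = 0.8 and p = 0.3, ten surveys give a cumulative detection probability of roughly 0.78, which is why a very low per-survey p (as at the problematic site above) undermines the occupancy estimate no matter how many surveys are run.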

  10. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  11. Color excesses, intrinsic colors, and absolute magnitudes of Galactic and Large Magellanic Cloud Wolf-Rayet stars

    International Nuclear Information System (INIS)

    Vacca, W.D.; Torres-Dodgen, A.V.

    1990-01-01

    A new method of determining the color excesses of WR stars in the Galaxy and the LMC has been developed and is used to determine the excesses for 44 Galactic and 32 LMC WR stars. The excesses are combined with line-free, narrow-band spectrophotometry to derive intrinsic colors of the WR stars of nearly all spectral subtypes. No correlation of UV spectral index or intrinsic colors with spectral subtype is found for the samples of single WN or WC stars. There is evidence that early WN stars in the LMC have flatter UV continua and redder intrinsic colors than early WN stars in the Galaxy. No separation is found between the values derived for Galactic WC stars and those obtained for LMC WC stars. The intrinsic colors are compared with those calculated from model atmospheres of WR stars and generally good agreement is found. Absolute magnitudes are derived for WR stars in the LMC and for those Galactic WR stars located in clusters and associations for which there are reliable distance estimates. 78 refs

  12. Self-assembled structural color in nature

    Science.gov (United States)

    Parnell, Andrew

    The vibrancy and variety of structural color found in nature has long been well known; what has only recently been discovered is the sophistication of the physics that underlies these effects. In this talk I will discuss some of our recent studies of the structures responsible for color in bird feathers and beetle elytra, based on structural characterization using small angle x-ray scattering, x-ray tomography and optical modeling. These have enabled us to study a large number of materials exhibiting structural color and look for trends in the structures nature uses to produce these optical effects. In the feathers of the Eurasian jay (Garrulus glandarius), the optical structure responsible for the color is produced by a phase-separation process that is arrested at a late stage; mastery of the color is achieved by control over the duration of this phase-separation process. Our analysis shows that the nanostructure in single bird feather barbs can be varied continuously by controlling the time the keratin network is allowed to phase separate before mobility in the system is arrested. Dynamic scaling analysis of the single-barb scattering data implies that the phase-separation arrest mechanism is rapid and also distinct from the spinodal phase separation mechanism, i.e. it is not gelation or intermolecular re-association. Any growing lengthscale in this spinodal phase separation approach must first traverse the UV and blue wavelength regions, growing the structure by coarsening, resulting in a broad distribution of domain sizes. AJP acknowledges financial support via the APS/DPOLY exchange lectureship 2017.

  13. 'Clovis' in Color

    Science.gov (United States)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1 This approximate true-color image taken by the Mars Exploration Rover Spirit shows the rock outcrop dubbed 'Clovis.' The rock was discovered to be softer than other rocks studied so far at Gusev Crater after the rover easily ground a hole into it with its rock abrasion tool. This image was taken by the 750-, 530- and 480-nanometer filters of the rover's panoramic camera on sol 217 (August 13, 2004). Elemental Trio Found in 'Clovis' Figure 1 above shows that the interior of the rock dubbed 'Clovis' contains higher concentrations of sulfur, bromine and chlorine than basaltic, or volcanic, rocks studied so far at Gusev Crater. The data were taken by the Mars Exploration Rover Spirit's alpha particle X-ray spectrometer after the rover dug into Clovis with its rock abrasion tool. The findings might indicate that this rock was chemically altered, and that fluids once flowed through the rock depositing these elements.

  14. Face validation using 3D information from single calibrated camera

    DEFF Research Database (Denmark)

    Katsarakis, N.; Pnevmatikakis, A.

    2009-01-01

    Detection of faces in cluttered scenes under arbitrary imaging conditions (pose, expression, illumination and distance) is prone to miss and false-positive errors. The well-established approach of using boosted cascades of simple classifiers addresses the problem of missed faces by using fewer stages in the cascade. This constrains the misses by making detection easier, but increases the false positives. False positives can be reduced by validating the detected image regions as faces, which has previously been accomplished using color and pattern information of the detected regions. In this paper we…

  15. Processing of Color Words Activates Color Representations

    Science.gov (United States)

    Richter, Tobias; Zwaan, Rolf A.

    2009-01-01

    Two experiments were conducted to investigate whether color representations are routinely activated when color words are processed. Congruency effects of colors and color words were observed in both directions. Lexical decisions on color words were faster when the preceding colors matched the color named by the word. Color-discrimination responses…

  16. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-color pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
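    The twin-comparison method works on consecutive-frame difference values with two thresholds: a high one that declares a cut outright, and a low one that opens a candidate gradual transition whose differences are accumulated until they jointly exceed the high threshold. A minimal sketch on a toy difference sequence (the paper additionally gates detections with the dominant-color ratio difference, which is omitted here):

```python
def twin_comparison(diffs, t_high, t_low):
    """Classify shot boundaries from consecutive-frame differences.

    diffs[i] is the difference between frame i and frame i+1. Returns a
    list of (index, kind) pairs with kind 'cut' or 'gradual'.
    """
    boundaries = []
    i = 0
    while i < len(diffs):
        if diffs[i] >= t_high:
            boundaries.append((i, "cut"))            # abrupt transition
        elif diffs[i] >= t_low:
            # Candidate gradual transition: accumulate while above t_low.
            acc, j = diffs[i], i + 1
            while j < len(diffs) and diffs[j] >= t_low:
                acc += diffs[j]
                j += 1
            if acc >= t_high:                        # accumulated change large enough
                boundaries.append((i, "gradual"))
            i = j
            continue
        i += 1
    return boundaries

# Toy sequence: a cut at index 2 and a fade spanning indices 4..6.
diffs = [1, 2, 12, 1, 4, 5, 4, 1]
found = twin_comparison(diffs, t_high=10, t_low=3)
```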

  17. Color categories and color appearance

    Science.gov (United States)

    Webster, Michael A.; Kay, Paul

    2011-01-01

    We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue–green boundary, to test whether chromatic differences across the boundary were perceptually exaggerated. This task did not require overt judgments of the perceived colors, and the tendency to group showed only a weak and inconsistent categorical bias. In a second case, we analyzed results from two prior studies of hue scaling of chromatic stimuli (De Valois, De Valois, Switkes, & Mahon, 1997; Malkoc, Kay, & Webster, 2005), to test whether color appearance changed more rapidly around the blue–green boundary. In this task observers directly judge the perceived color of the stimuli and these judgments tended to show much stronger categorical effects. The differences between these tasks could arise either because different signals mediate color grouping and color appearance, or because linguistic categories might differentially intrude on the response to color and/or on the perception of color. Our results suggest that the interaction between language and color processing may be highly dependent on the specific task and cognitive demands and strategies of the observer, and also highlight pronounced individual differences in the tendency to exhibit categorical responses. PMID:22176751

  18. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    It was suggested in prior work to draw 5-tuples from the list of tentative matches, ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often there are several models that are supported by a large number of matches, so the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Later work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. Unlike that approach, however, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, camera trajectory estimates have a wide range of uses. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and of object recognition for methods that use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an…
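    The kernel-voting idea can be illustrated on a one-dimensional toy version of the problem: each RANSAC-style hypothesis casts a Gaussian-weighted vote for a direction angle, and the densest location in the accumulator wins instead of the single hypothesis with maximal support. The abstract's actual accumulator is two-dimensional (camera motion direction) and the hypotheses come from 5-point epipolar geometries; both are simplified away here:

```python
import numpy as np

def kernel_vote(hypotheses, bins=360, sigma=0.05):
    """Soft (kernel) voting over angular model hypotheses on [0, 2*pi).

    Each hypothesis adds a Gaussian bump (circular distance) to an
    accumulator; the bin center with maximal density is returned.
    """
    centers = np.linspace(0, 2 * np.pi, bins, endpoint=False)
    acc = np.zeros(bins)
    for h in hypotheses:
        d = np.abs(centers - h)
        d = np.minimum(d, 2 * np.pi - d)            # wrap-around distance
        acc += np.exp(-0.5 * (d / sigma) ** 2)
    return centers[np.argmax(acc)]

rng = np.random.default_rng(1)
# 70% of hypothesized motion directions cluster near 1.0 rad; 30% are spurious.
hyp = np.concatenate([rng.normal(1.0, 0.03, 70),
                      rng.uniform(0, 2 * np.pi, 30)])
theta = kernel_vote(hyp)   # close to 1.0 despite the contaminating hypotheses
```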

  19. Comparison of soot formation for diesel and jet-a in a constant volume combustion chamber using two-color pyrometry

    KAUST Repository

    Jing, Wei; Roberts, William L.; Fang, Tiegang

    2014-01-01

    The measurement of two-color line-of-sight soot temperature and KL factor for No. 2 diesel and jet-A fuels was conducted in an optical constant-volume combustion chamber using a high-speed camera under 1000 K ambient temperature and varied oxygen…
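    Two-color pyrometry recovers soot temperature T and the optical-thickness factor KL from radiances measured at two wavelengths, using an emissivity model such as Hottel and Broughton's ε(λ) = 1 − exp(−KL/λ^α). The sketch below inverts synthetic measurements by brute-force grid search; the wavelengths, α value, and grids are assumptions for illustration, and practical codes use per-pixel Newton iteration instead:

```python
import numpy as np

C2 = 1.4388e4   # second radiation constant, micrometer*Kelvin
ALPHA = 1.39    # Hottel-Broughton dispersion exponent (visible, assumed)

def soot_intensity(T, KL, lam):
    """Soot radiance at wavelength lam (micrometers), up to a constant factor."""
    emissivity = 1.0 - np.exp(-KL / lam**ALPHA)
    planck = lam**-5 / (np.exp(C2 / (lam * T)) - 1.0)
    return emissivity * planck

def invert_two_color(I1, I2, lam1, lam2, T_grid, KL_grid):
    """Recover (T, KL) from two-wavelength intensities by grid search."""
    Tg, KLg = np.meshgrid(T_grid, KL_grid, indexing="ij")
    err = (np.log(soot_intensity(Tg, KLg, lam1) / I1) ** 2
           + np.log(soot_intensity(Tg, KLg, lam2) / I2) ** 2)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return T_grid[i], KL_grid[j]

lam1, lam2 = 0.65, 0.55                    # red/green channels (assumed), um
I1 = soot_intensity(2200.0, 0.5, lam1)     # synthetic "measurements"
I2 = soot_intensity(2200.0, 0.5, lam2)
T_est, KL_est = invert_two_color(I1, I2, lam1, lam2,
                                 np.linspace(1500, 2800, 261),
                                 np.linspace(0.05, 2.0, 391))
```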

  20. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image-sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of the quantity of radiation whose accumulation causes the exposure to terminate. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure due to desired variations in the radiation density of the exposure, maintaining the detectability of the image by the image-sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image-sensing camera, and for locating the index so as to maintain its detectability and ensure proper centering of the radiation camera image.
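    The core control loop the patent describes (integrate radiation in a predetermined area until a target dose is reached, then terminate the exposure and record an index) can be sketched as follows. The frame representation, region-of-interest encoding, and the use of the frame count as the exposure index are hypothetical choices for illustration:

```python
import numpy as np

def expose_until_dose(frames, roi, target_counts):
    """Integrate radiation-camera frames until counts inside the region of
    interest reach a target dose, then terminate the exposure.

    Returns (n_frames_used, accumulated_counts); the frame count plays the
    role of the exposure 'index' from which image intensity could later be
    calibrated. frames are 2D count arrays; roi is a (row_slice, col_slice).
    """
    total = 0
    for n, frame in enumerate(frames, start=1):
        total += int(frame[roi].sum())
        if total >= target_counts:       # predetermined dose reached: stop
            return n, total
    return len(frames), total            # exposure ended without reaching dose

# Toy stream: ten uniform frames of 3 counts/pixel; a 4x4 ROI gives 48/frame.
frames = [np.full((16, 16), 3, dtype=int) for _ in range(10)]
roi = (slice(4, 8), slice(4, 8))
n, dose = expose_until_dose(frames, roi, target_counts=200)  # stops at frame 5
```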