WorldWideScience

Sample records for single color camera

  1. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shooting at up to 1,000 frames/second. In a field experiment, video footage was shot at an athletics meet. Because of the high-speed shooting, even detailed movements of the athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be produced for various TV broadcasting programs.

  2. Full-color stereoscopic single-pixel camera based on DMD technology

    Science.gov (United States)

    Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús

    2017-02-01

    Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images by using a few simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
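
    The reconstruction step described above can be illustrated with a toy basis-scan example: project a set of orthogonal DMD patterns, record one photodiode value per pattern, and recover the image from the measurement vector. The sketch below is a minimal simulation of that idea in Python; it assumes noiseless binary Hadamard patterns and does not implement the paper's compressive-sampling or stereoscopic color pipeline.

```python
import numpy as np
from scipy.linalg import hadamard

# Toy single-pixel imaging: N binary patterns displayed on the DMD, one
# photodiode reading per pattern, image recovered from the measurement vector.
N = 64                                   # 8x8 = 64-pixel image (flattened)
H = hadamard(N)                          # orthogonal +/-1 Hadamard basis
patterns = (H + 1) / 2                   # binary 0/1 patterns a DMD can display

scene = np.random.rand(N)                # unknown scene, for simulation only
measurements = patterns @ scene          # photodiode signal per projected pattern

# Orthogonality of the Hadamard basis gives a direct (non-compressive) inverse;
# compressive sampling would instead use a subset of patterns + a sparse solver.
recovered = H @ (2 * measurements - measurements[0]) / N
print(np.allclose(recovered, scene))     # True
```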

  3. Toy Cameras and Color Photographs.

    Science.gov (United States)

    Speight, Jerry

    1979-01-01

    The technique of using toy cameras for both black-and-white and color photography in the art class is described. The author suggests that expensive equipment can limit the growth of a beginning photographer by emphasizing technique and equipment instead of in-depth experience with composition fundamentals and ideas. (KC)

  4. Temperature characterization of a radiating gas layer using digital-single-lens-reflex-camera-based two-color ratio pyrometry.

    Science.gov (United States)

    Deep, Sneh; Krishna, Yedhu; Jagadeesh, Gopalan

    2017-10-20

    The two-color ratio pyrometry technique using a digital single-lens reflex camera has been used to measure the time-averaged and path-integrated temperature distribution in the radiating shock layer in a high-enthalpy flow. A 70 mm diameter cylindrical body with a 70 mm long spike was placed in a hypersonic shock tunnel, and the region behind the shock layer was investigated. The systematic error due to contributions from line emissions was corrected by monitoring the emission spectrum from this region using a spectrometer. The relative contributions due to line emissions on R, G, and B channels of the camera were 7.4%, 2.2%, and 0.4%, respectively. The temperature contours obtained clearly distinguished regions of highest temperature. The maximum absolute temperature obtained in the experiment was ∼2920 K ± 55 K, which was 20% lower than the stagnation temperature. This lower value is expected due to line-of-sight integration, time averaging, and losses in the flow. Strategies to overcome these limitations are also suggested in the paper.
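
    The underlying two-color (ratio) pyrometry calculation can be written compactly under a graybody assumption and the Wien approximation: the ratio of two channel intensities then depends only on temperature, and T follows in closed form. The sketch below is a minimal illustration of that relation; the effective channel wavelengths and the calibration constant are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Two-color ratio pyrometry under the Wien approximation with a graybody
# assumption: emissivity cancels in the channel ratio, so temperature can be
# solved in closed form from the (line-emission-corrected) R/G ratio.
C2 = 1.4388e-2          # second radiation constant [m*K]
lam_r = 600e-9          # assumed effective wavelength of the R channel [m]
lam_g = 540e-9          # assumed effective wavelength of the G channel [m]

def temperature_from_ratio(ratio_rg, k_cal=1.0):
    """ratio_rg = I_R / I_G; k_cal lumps the relative spectral sensitivity of
    the two channels (obtained from a calibration source)."""
    # Wien: I(lam, T) ~ eps * lam^-5 * exp(-C2 / (lam * T))
    # => ln(R/G) = ln(k_cal) + 5*ln(lam_g/lam_r) + (C2/T)*(1/lam_g - 1/lam_r)
    lhs = np.log(ratio_rg) - np.log(k_cal) - 5.0 * np.log(lam_g / lam_r)
    return C2 * (1.0 / lam_g - 1.0 / lam_r) / lhs

# Example: a pixel whose corrected R/G ratio is 2.1
print(f"T = {temperature_from_ratio(2.1):.0f} K")
```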

  5. Initial laboratory evaluation of color video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publication of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips, color cameras, and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  6. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    Science.gov (United States)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  7. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family that is available with 6000, 8000, or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.

  8. Location and Classification of Moving Fruits in Real Time with a Single Color Camera

    Directory of Open Access Journals (Sweden)

    José F Reyes

    2009-06-01

    Quality control of fruits to satisfy increasingly competitive food markets requires the implementation of automatic servo-visual systems in fruit processing operations. A new and fast method for identifying and classifying moving fruits by processing single color images from a static camera in real time was developed and tested. Two algorithms were combined to classify and track moving fruits on the image plane using representative color features. The method classifies the fruit by color segmentation and estimates its position on the image plane, providing a reliable algorithm that can be implemented in robotic manipulation of fruits. To evaluate the methodology, an experimental real-time system simulating a conveyor belt and real fruit was used. Testing of the system indicates that, with natural lighting conditions and proper calibration of the system, a minimum error of 2% in the classification of fruits is feasible. The methodology allows for very simple implementation, and although the operational results are promising, even higher accuracy may be possible if structured illumination is used.
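
    The color-segmentation-plus-localization idea can be sketched in a few lines: threshold in a hue-based color space, clean the mask, and report a class label and image-plane centroid per detected blob. The example below is a minimal illustration of that approach using OpenCV; the HSV ranges, area threshold, and class labels are hypothetical and would need calibration for real conveyor lighting.

```python
import cv2
import numpy as np

def classify_and_locate(frame_bgr, hue_ranges):
    """Segment fruit by color and return (label, centroid) per detected blob.
    hue_ranges: dict mapping class label -> (lower HSV, upper HSV) bounds."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    detections = []
    for label, (lo, hi) in hue_ranges.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:      # ignore small noise blobs
                continue
            m = cv2.moments(c)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            detections.append((label, (cx, cy)))
    return detections

# Illustrative HSV ranges (would need calibration for the actual lighting):
ranges = {"red_fruit": ((0, 120, 70), (10, 255, 255)),
          "green_fruit": ((35, 80, 70), (85, 255, 255))}
# detections = classify_and_locate(cv2.imread("conveyor_frame.png"), ranges)
```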

  9. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  10. Spectral colors capture and reproduction based on digital camera

    Science.gov (United States)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. The study also provides a basis for further work on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were used. The spectrum was obtained with a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally spaced locations. Two wavelength values were obtained for each location: one calculated with the grating equation and one measured with a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths was 4.38 nm. With the polynomial fitting method, the average color difference of the test samples was 3.76. This satisfies the application needs of the spectral colors in digital devices for display and transmission.
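
    The polynomial-fitting style of camera characterization mentioned above amounts to expanding each camera RGB triplet into polynomial terms and solving a least-squares regression onto measured CIEXYZ values. The sketch below illustrates a second-order version of this idea; the term set and the variable names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of a camera RGB triplet."""
    r, g, b = rgb
    return np.array([1.0, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

def fit_characterization(rgb_samples, xyz_samples):
    """Least-squares fit of a matrix M such that XYZ ~= M @ poly_terms(RGB)."""
    A = np.array([poly_terms(rgb) for rgb in rgb_samples])    # N x 10
    M, *_ = np.linalg.lstsq(A, np.array(xyz_samples), rcond=None)
    return M.T                                                # 3 x 10

def apply_characterization(M, rgb):
    return M @ poly_terms(rgb)

# Usage with measured training pairs (camera RGB, spectrophotometer XYZ):
# M = fit_characterization(train_rgb, train_xyz)
# xyz_est = apply_characterization(M, camera_rgb_of_spectral_patch)
```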

  11. Single color and single flavor color superconductivity

    International Nuclear Information System (INIS)

    Alford, Mark G.; Cheyne, Jack M.; Cowan, Greig A.; Bowers, Jeffrey A.

    2003-01-01

    We survey the nonlocked color-flavor-spin channels for quark-quark (color superconducting) condensates in QCD, using a Nambu-Jona-Lasinio model. We also study isotropic quark-antiquark (mesonic) condensates. We make mean-field estimates of the strength and sign of the self-interaction of each condensate, using four-fermion interaction vertices based on known QCD interactions. For the attractive quark pairing channels, we solve the mean-field gap equations to obtain the size of the gap as a function of quark density. We also calculate the dispersion relations for the quasiquarks, in order to see how fully gapped the spectrum of fermionic excitations will be. We use our results to specify the likely pairing patterns in neutral quark matter, and comment on possible phenomenological consequences

  12. Dichromatic Gray Pixel for Camera-agnostic Color Constancy

    OpenAIRE

    Qian, Yanlin; Chen, Ke; Nikkanen, Jarno; Kämäräinen, Joni-Kristian; Matas, Jiri

    2018-01-01

    We propose a novel statistical color constancy method, especially suitable for the Camera-agnostic Color Constancy, i.e. the scenario where nothing is known a priori about the capturing devices. The method, called Dichromatic Gray Pixel, or DGP, relies on a novel gray pixel detection algorithm derived using the Dichromatic Reflection Model. DGP is suitable for camera-agnostic color constancy since varying devices are set to make achromatic pixels look gray under standard neutral illumination....

  13. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for
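
    The camera model listed in the abstract (gain, offset, additive and multiplicative noise, gamma correction) can be written as a short forward simulation, which is useful for seeing how each term distorts a measurement. The sketch below is a minimal illustration with purely illustrative parameter values; it is not the verification procedure used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def ccd_camera_model(irradiance, gain=1.5, offset=10.0, gamma=0.45,
                     sigma_add=2.0, sigma_mult=0.02, full_scale=255.0):
    """Simulate a raw CCD response to a scene irradiance image (arbitrary units).
    Multiplicative noise scales with the signal (photon/PRNU-like), additive
    noise does not (read noise), and gamma correction is applied at the end."""
    signal = gain * irradiance + offset
    signal = signal * (1.0 + sigma_mult * rng.standard_normal(irradiance.shape))
    signal = signal + sigma_add * rng.standard_normal(irradiance.shape)
    signal = np.clip(signal, 0.0, full_scale)
    return full_scale * (signal / full_scale) ** gamma

# Example: response of the model to a linear ramp of scene irradiance
ramp = np.linspace(0.0, 150.0, 256).reshape(1, -1)
print(ccd_camera_model(ramp).round(1))
```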

  14. True RGB line scan camera for color machine vision applications

    Science.gov (United States)

    Lemstrom, Guy F.

    1994-11-01

    In this paper a true RGB 3-chip color line scan camera is described. The camera was mainly developed for accurate color measurement in industrial applications. Due to the camera's modularity it is also possible to use it as a B/W camera. The color separation is made with an RGB beam splitter. The CCD linear arrays are fixed with high accuracy to the beam splitter's output surfaces so that the pixels of the three different CCDs are matched to each other. This makes color analysis simple compared with color line arrays, where line or pixel matching has to be done. The beam splitter can be custom made to separate spectral components other than standard RGB. The spectral range is from 200 to 1000 nm for most CCDs, and two or three spectral areas can be separately measured with the beam splitter. The camera is totally digital and has a 16-bit parallel computer interface to communicate with a signal processing board. Because of the open architecture of the camera it is possible for the customer to design a board with special functions handling the preprocessing of the data (for example RGB - HSI conversion). The camera can also be equipped with a high-speed CPU board with enough local memory to do some image processing inside the camera before sending the data forward. The camera has been used in real industrial applications and has proven that its high resolution and high dynamic range can be used to measure small color differences in order to separate or grade objects such as minerals, food, or other materials that cannot be measured with a black-and-white camera.

  15. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration method that uses area-based detection and adaptive nonlinear regression. Simple chartless color matching is achieved by exploiting the overlapping image areas between cameras. Accurate detection of common objects is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed with a nonlinear regression method. The method indicates how much each object's color contributes to the calibration, and it automatically notifies the user of the selection. Experimental results show that the accuracy of the calibration improves gradually. This method is suitable for practical multi-camera color calibration provided that enough samples are obtained.

  16. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    Science.gov (United States)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products, together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Images of tomatoes through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system throughout the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time course of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.

  17. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  18. Express Yourself: Using Color Schemes, Cameras, and Computers

    Science.gov (United States)

    Lott, Debra

    2005-01-01

    Self-portraiture is a great project to introduce the study of color schemes and Expressionism. Through this drawing project, students learn about identity, digital cameras, and creative art software. The lesson can be introduced with a study of Edvard Munch and Expressionism. Expressionism was an art movement in which the intensity of the artist's…

  19. Enhancement of low light level images using color-plus-mono dual camera.

    Science.gov (United States)

    Jung, Yong Ju

    2017-05-15

    In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
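
    The guided-filter component of such a fusion can be sketched compactly: the clean mono image steers an edge-preserving smoothing of the noisy color luminance. The code below is a minimal illustration of that single ingredient (He et al.'s guided filter in NumPy); it assumes pre-registered float images in [0, 1] and omits the paper's BJND analysis, dissimilarity masking, and selective detail transfer.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (He et al. 2010).
    Both are float arrays of the same shape, values roughly in [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    mean_g, mean_s = mean(guide), mean(src)
    var_g = mean(guide * guide) - mean_g ** 2
    cov_gs = mean(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)

def fuse_color_plus_mono(color_y, mono):
    """Denoise the noisy luminance of the color camera using the cleaner,
    registered mono image as the guide; chrominance would be kept from the
    color image, and reliability masking is omitted in this sketch."""
    return guided_filter(mono, color_y)
```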

  20. Using Single Colors and Color Pairs to Communicate Basic Tastes

    Directory of Open Access Journals (Sweden)

    Andy T. Woods

    2016-07-01

    Recently, it has been demonstrated that people associate each of the basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., red, green, black, and white). In the present study, we investigated whether pairs of colors (both associated with a particular taste or taste word) would give rise to stronger associations relative to pairs of colors that were associated with different tastes. We replicate the findings of previous studies highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. However, while there was evidence that pairs of colors could indeed communicate taste information more consistently than single colors, our participants took more than twice as long to match the color pairs with tastes than the single colors. Possible reasons for these results are discussed.

  1. Using Single Colors and Color Pairs to Communicate Basic Tastes.

    Science.gov (United States)

    Woods, Andy T; Spence, Charles

    2016-01-01

    Recently, it has been demonstrated that people associate each of the basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., red, green, black, and white). In the present study, we investigated whether pairs of colors (both associated with a particular taste or taste word) would give rise to stronger associations relative to pairs of colors that were associated with different tastes. We replicate the findings of previous studies highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. However, while there was evidence that pairs of colors could indeed communicate taste information more consistently than single colors, our participants took more than twice as long to match the color pairs with tastes than the single colors. Possible reasons for these results are discussed.

  2. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure

  3. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  4. Face validation using 3D information from single calibrated camera

    DEFF Research Database (Denmark)

    Katsarakis, N.; Pnevmatikakis, A.

    2009-01-01

    Detection of faces in cluttered scenes under arbitrary imaging conditions (pose, expression, illumination and distance) is prone to miss and false positive errors. The well-established approach of using boosted cascades of simple classifiers addresses the problem of missing faces by using fewer stages in the cascade. This constrains the misses by making detection easier, but increases the false positives. False positives can be reduced by validating the detected image regions as faces. This has been accomplished using color and pattern information of the detected image regions. In this paper we propose a novel face validation method based on 3D position estimates from a single calibrated camera. This is done by assuming a typical face width; hence the widths of the detected image regions lead to target position estimates. Detected image regions with extreme position estimates can
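
    The geometric step behind the validation is the pinhole relation between an assumed physical face width, the detected region width in pixels, and the distance from the camera; detections whose implied position is implausible can then be rejected. The sketch below illustrates that relation with illustrative focal-length and face-width values, not the authors' calibration.

```python
import numpy as np

# Pinhole model: pixel_width = f_px * real_width / depth  =>  depth = f_px * W / w
FOCAL_PX = 1200.0        # assumed focal length in pixels (from camera calibration)
FACE_WIDTH_M = 0.16      # assumed typical physical face width in meters

def face_depth(bbox_width_px):
    """Camera-to-face distance implied by a detected face bounding box width."""
    return FOCAL_PX * FACE_WIDTH_M / bbox_width_px

def validate_detection(bbox_width_px, min_depth=0.3, max_depth=8.0):
    """Reject detections whose implied position is physically implausible
    for the monitored scene (e.g., a 'face' that would be tens of meters away)."""
    z = face_depth(bbox_width_px)
    return min_depth <= z <= max_depth

print(face_depth(120.0))          # a 120-px-wide face is ~1.6 m from the camera
print(validate_detection(8.0))    # an 8-px-wide region -> implausibly far -> False
```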

  5. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight, and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  6. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD)

  7. Single-camera, three-dimensional particle tracking velocimetry

    OpenAIRE

    Peterson, K.; Regaard, B.; Heinemann, S.; Sick, V.

    2012-01-01

    This paper introduces single-camera, three-dimensional particle tracking velocimetry (SC3D-PTV), an image-based, single-camera technique for measuring 3-component, volumetric velocity fields in environments with limited optical access, in particular, optically accessible internal combustion engines. The optical components used for SC3D-PTV are similar to those used for two-camera stereoscopic-PIV, but are adapted to project two simultaneous images onto a single image sensor. A novel PTV algor...

  8. Cinematic camera emulation using two-dimensional color transforms

    Science.gov (United States)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion capture device to establish a particular look, the use of a smaller form factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics will be different between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
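
    As a baseline for comparison, a 3x3 matrix emulation transform can be fit by least squares between corresponding raw RGB responses of the two cameras (for example, color-chart patches shot with both). The sketch below shows only that matrix baseline; the 2D transforms studied in the paper are not reproduced here, and the variable names are assumptions.

```python
import numpy as np

def fit_matrix_emulation(rgb_source, rgb_target):
    """Least-squares 3x3 matrix M mapping source-camera RGB (e.g., the DSLR)
    to target-camera RGB (e.g., the cinematic camera): target ~= source @ M.
    rgb_source, rgb_target: N x 3 arrays of corresponding patch responses."""
    M, *_ = np.linalg.lstsq(rgb_source, rgb_target, rcond=None)
    return M                                   # shape (3, 3)

def emulate(rgb_source, M):
    """Apply the emulation transform to an N x 3 block of pixels."""
    return np.clip(rgb_source @ M, 0.0, None)

# M = fit_matrix_emulation(rgb_source, rgb_target)
# preview = emulate(dslr_image.reshape(-1, 3), M).reshape(dslr_image.shape)
```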

  9. Using Single Colors and Color Pairs to Communicate Basic Tastes II: Foreground-Background Color Combinations.

    Science.gov (United States)

    Woods, Andy T; Marmolejo-Ramos, Fernando; Velasco, Carlos; Spence, Charles

    2016-01-01

    People associate basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., pink or red, green or yellow, black or purple, and white or blue). In the present study, we investigated whether a color bordered by another color (either the same or different) would give rise to stronger taste associations relative to a single patch of color. We replicate previous findings, highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. On occasion, color pairs were found to communicate taste expectations more consistently than were single color patches. Furthermore, and in contrast to a recent study in which the color pairs were shown side-by-side, participants took no longer to match the color pairs with tastes than the single colors (they had taken twice as long to respond to the color pairs in the previous study). Possible reasons for these results are discussed, and potential applications for the results, and for the testing methodology developed, are outlined.

  10. Using Single Colors and Color Pairs to Communicate Basic Tastes II: Foreground–Background Color Combinations

    Science.gov (United States)

    Marmolejo-Ramos, Fernando; Velasco, Carlos; Spence, Charles

    2016-01-01

    People associate basic tastes (e.g., sweet, sour, bitter, and salty) with specific colors (e.g., pink or red, green or yellow, black or purple, and white or blue). In the present study, we investigated whether a color bordered by another color (either the same or different) would give rise to stronger taste associations relative to a single patch of color. We replicate previous findings, highlighting the existence of a robust crossmodal correspondence between individual colors and basic tastes. On occasion, color pairs were found to communicate taste expectations more consistently than were single color patches. Furthermore, and in contrast to a recent study in which the color pairs were shown side-by-side, participants took no longer to match the color pairs with tastes than the single colors (they had taken twice as long to respond to the color pairs in the previous study). Possible reasons for these results are discussed, and potential applications for the results, and for the testing methodology developed, are outlined. PMID:27708752

  11. True RGB line-scan camera for color machine vision applications

    Science.gov (United States)

    Lemstrom, Guy F.

    1994-10-01

    The design and technical capabilities of a true RGB 3-chip CCD color line scan camera are presented in this paper. The camera was developed for accurate color monitoring and analysis in industrial applications. A black-and-white line scan camera has been designed and built using the same modular architecture as the color line scan camera. Color separation is made possible with a tri-chromatic RGB beam splitter. Three CCD linear arrays are precisely mounted to the output surfaces of the prism, and the outputs of the three CCDs are exactly matched pixel by pixel. The beam splitter prism can be tailored to separate spectral components other than the standard RGB. A typical CCD can detect between 200 and 1000 nm. Either two or three spectral regions can be separated using a beam splitter prism. The camera is totally digital and has a 16-bit parallel computer interface to communicate with a signal processing board. Because of the open architecture of the camera it is possible for the customer to design a board with special functions handling the preprocessing of the data (for example RGB - HSI conversion). The camera can also be equipped with a high-speed CPU board with enough local memory to do some image processing inside the camera before sending the data forward. The camera has been used in real industrial applications and has proven that its high resolution and high dynamic range can be used to measure minute color differences, enabling the separation or grading of objects such as minerals, food or other materials that could not otherwise be measured with a black-and-white camera.

  12. Human attention filters for single colors

    Science.gov (United States)

    Sun, Peng; Chubb, Charles; Wright, Charles E.; Sperling, George

    2016-01-01

    The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid—the center of gravity—of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability. PMID:27791040

  13. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    Directory of Open Access Journals (Sweden)

    Sumeet Khanduja

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and to compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos, except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  14. Single-camera, three-dimensional particle tracking velocimetry.

    Science.gov (United States)

    Peterson, Kevin; Regaard, Boris; Heinemann, Stefan; Sick, Volker

    2012-04-09

    This paper introduces single-camera, three-dimensional particle tracking velocimetry (SC3D-PTV), an image-based, single-camera technique for measuring 3-component, volumetric velocity fields in environments with limited optical access, in particular, optically accessible internal combustion engines. The optical components used for SC3D-PTV are similar to those used for two-camera stereoscopic-µPIV, but are adapted to project two simultaneous images onto a single image sensor. A novel PTV algorithm relying on the similarity of the particle images corresponding to a single, physical particle produces 3-component, volumetric velocity fields, rather than the 3-component, planar results obtained with stereoscopic PIV, and without the reconstruction of an instantaneous 3D particle field. The hardware and software used for SC3D-PTV are described, and experimental results are presented.

  15. Volumetric particle image velocimetry with a single plenoptic camera

    Science.gov (United States)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera
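
    The MART algorithm referenced above updates each voxel multiplicatively so that the projections of the reconstructed volume match the recorded pixel values. The sketch below is a generic, toy-scale illustration of that update rule; the ray-voxel weighting matrix here is arbitrary and does not model the plenoptic camera's actual microlens geometry.

```python
import numpy as np

def mart_reconstruct(W, I, n_iters=20, mu=1.0):
    """Multiplicative Algebraic Reconstruction Technique (MART).
    W: (n_pixels, n_voxels) weights coupling each sensor pixel to each voxel.
    I: (n_pixels,) recorded pixel intensities.
    Returns the reconstructed voxel intensity field E of shape (n_voxels,)."""
    E = np.ones(W.shape[1])
    for _ in range(n_iters):
        for i in range(W.shape[0]):            # one multiplicative update per pixel
            proj = W[i] @ E                    # current projection of the volume
            if proj <= 0.0 or I[i] <= 0.0:
                continue
            E *= (I[i] / proj) ** (mu * W[i])  # exponent weighted by ray coverage
    return E

# Toy example: 3 pixels viewing 4 voxels
W = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.5, 1.0]])
truth = np.array([0.2, 1.0, 0.1, 0.7])
print(mart_reconstruct(W, W @ truth, n_iters=50).round(2))
```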

  16. Dual color single particle tracking via nanobodies

    International Nuclear Information System (INIS)

    Albrecht, David; Winterflood, Christian M; Ewers, Helge

    2015-01-01

    Single particle tracking is a powerful tool to investigate the function of biological molecules by following their motion in space. However, the simultaneous tracking of two different species of molecules is still difficult to realize without compromising the length or density of trajectories, the localization accuracy or the simplicity of the assay. Here, we demonstrate a simple dual color single particle tracking assay using small, bright, high-affinity labeling via nanobodies of accessible targets with widely available instrumentation. We furthermore apply a ratiometric step-size analysis method to visualize differences in apparent membrane viscosity. (paper)

  17. Single atom imaging with an sCMOS camera

    Science.gov (United States)

    Picken, C. J.; Legaie, R.; Pritchard, J. D.

    2017-10-01

    Single atom imaging requires discrimination of weak photon count events above the background and has typically been performed using electron-multiplying charge-coupled device cameras, photomultiplier tubes, or single photon counting modules. A scientific complementary metal-oxide semiconductor (sCMOS) camera provides a cost effective and highly scalable alternative to other single atom imaging technologies, offering fast readout and larger sensor dimensions. We demonstrate single atom resolved imaging of two site-addressable optical traps separated by 10 μm using an sCMOS camera, offering a competitive signal-to-noise ratio at intermediate count rates to allow high fidelity readout discrimination (error < 10^-6) and sub-μm spatial resolution for applications in quantum technologies.

  18. High-speed imaging using 3CCD camera and multi-color LED flashes

    Science.gov (United States)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  19. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded due to the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising step. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
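
    The PCA-shrinkage idea at the core of such denoisers can be illustrated on a single-channel image: transform patches into their principal-component basis, attenuate coefficients with a Wiener-like gain derived from the noise variance, and average the overlapping reconstructions. The sketch below is a global-PCA, grayscale simplification, not the paper's spatially adaptive CFA algorithm; sigma is the assumed noise standard deviation.

```python
import numpy as np

def pca_patch_denoise(img, patch=6, sigma=10.0):
    """Minimal PCA-shrinkage denoiser for a single-channel float image."""
    H, W = img.shape
    ys, xs = np.arange(H - patch + 1), np.arange(W - patch + 1)
    patches = np.array([img[y:y + patch, x:x + patch].ravel()
                        for y in ys for x in xs])              # N x patch^2
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)
    eigval, eigvec = np.linalg.eigh(cov)                       # PCA basis
    coeffs = X @ eigvec
    signal_var = np.maximum(eigval - sigma ** 2, 0.0)
    shrink = signal_var / (signal_var + sigma ** 2)            # Wiener-like gain
    denoised_patches = (coeffs * shrink) @ eigvec.T + mean

    # Average the overlapping denoised patches back into the image
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    k = 0
    for y in ys:
        for x in xs:
            out[y:y + patch, x:x + patch] += denoised_patches[k].reshape(patch, patch)
            weight[y:y + patch, x:x + patch] += 1.0
            k += 1
    return out / weight

# noisy = clean + np.random.normal(0.0, 10.0, clean.shape)
# denoised = pca_patch_denoise(noisy, sigma=10.0)
```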

  20. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Some time ago, Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  1. Color balancing in CCD color cameras using analog signal processors made by Kodak

    Science.gov (United States)

    Kannegundla, Ram

    1995-03-01

    The green, red, and blue color filters used for CCD sensors generally have different responses. It is often necessary to balance these three colors for displaying a high-quality image on the monitor. The color filter arrays on sensors have different architectures; a CCD with the standard G R G B pattern is considered for the present discussion. A simple method is presented that separates the colors using the CDS/H that is part of the KASPs (Analog Signal Processors made by Kodak) and uses the gain control, also part of the KASPs, for color balance. The colors are separated from the video output of the sensor by using three KASPs, one each for green, red, and blue, and by using alternate sample pulses for green and 1-in-4 pulses for red and blue. The gain of each separated color is adjusted either automatically or manually, and the signal is sent to the monitor for direct display in analog mode, or digitally through an A/D converter to memory. This method of color balancing demands high-quality ASPs. Kodak has designed four different chips with varying levels of power consumption and speed for analog signal processing of the video output of CCD sensors. The analog ASICs have been characterized for noise, clock feedthrough, acquisition time, linearity, variable gain, line rate clamp, black muxing, the effect of temperature variations on chip performance, and droop. The ASP chips have met their design specifications.

  2. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  3. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  4. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that is also variable, from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 Mbyte), 8 seconds for an image of 4096 by 4096 pixels (48 Mbyte), and 40 seconds for an image of 6144 by 6144 pixels (108 Mbyte). The eyelike can be used in various configurations. When used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  5. Improving color constancy by discounting the variation of camera spectral sensitivity

    Science.gov (United States)

    Gao, Shao-Bing; Zhang, Ming; Li, Chao-Yi; Li, Yong-Jie

    2017-08-01

    It is an ill-posed problem to recover the true scene colors from a color biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. We then propose a simple way to overcome such degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color biased images) rendered under CSS-1 into CSS-2, and the CC model is then trained and applied on the color biased images under CSS-2, without the need for burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve the inter-CC performance of traditional CC algorithms. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color constant images invariant to the changes of both the illuminant and the camera sensor.

  7. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board.

    Science.gov (United States)

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-03-17

    Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
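
    Once the board vertices are available both in the LIDAR frame (3D) and in the image (2D), estimating the camera-LIDAR extrinsics is a standard perspective-n-point problem. The following hedged sketch uses OpenCV's solvePnP on hypothetical correspondences; it is not the authors' code, and the vertex coordinates, intrinsics, and ground-truth pose are placeholders used only to make the example self-checking.

```python
import numpy as np
import cv2

# Hypothetical board vertices in the LIDAR frame (meters), e.g. corners of a
# polygonal planar board estimated from the intersections of its scanned sides.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.6, 0.0, 0.0],
                          [0.6, 0.4, 0.0],
                          [0.3, 0.6, 0.0],
                          [0.0, 0.4, 0.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],       # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume negligible lens distortion

# Synthesize image detections with a known ground-truth pose, then recover it.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([-0.2, 0.1, 2.5])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, rvec.ravel(), tvec.ravel())     # should match the ground-truth pose
```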

  8. Single-flavor color superconductivity with color-sextet pairing

    International Nuclear Information System (INIS)

    Brauner, T.

    2005-01-01

    We analyze the color superconductivity of quark matter with a single massive flavor at moderate baryon density. First, we briefly review the framework of color superconductivity. Then, we suggest a mechanism which, within QCD, can lead to the formation of a spin-zero color-sextet condensate. The most general form of the order parameter implies a complete breakdown of the SU(3) x U(1) symmetry. However, the conventional fermionic NJL-type description in the mean-field approximation seems to favor an enhanced O(3) symmetry of the ground state. This is ascribed to the use of the mean-field approximation, and possible solutions are suggested. (author)

  9. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  10. Single-exposure color digital holography

    Science.gov (United States)

    Feng, Shaotong; Wang, Yanhui; Zhu, Zhuqing; Nie, Shouping

    2010-11-01

    In this paper, we report a method for color image reconstruction by recording only a single multi-wavelength hologram. In the recording process, three lasers of different wavelengths emitting in the red, green and blue regions illuminate the object, and the object diffraction fields arrive at the hologram plane simultaneously. Three reference beams with different spatial angles interfere with the corresponding object diffraction fields on the hologram plane, respectively. Finally, a series of sub-holograms is incoherently superimposed on the CCD and recorded as a multi-wavelength hologram. Angular division multiplexing is applied to the reference beams so that the spatial spectra of the multiple recordings are separated in the Fourier plane. In the reconstruction process, the multi-wavelength hologram is Fourier transformed, and in the Fourier plane the spatial spectra of the different wavelengths are separated and can easily be extracted by frequency filtering. The extracted spectra are used to reconstruct the corresponding monochromatic complex amplitudes, which are then synthesized to reconstruct the color image. As a single-exposure recording technique, it is convenient for real-time image processing applications. However, the quality of the reconstructed images is affected by speckle noise; how to improve image quality requires further research.
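
    The reconstruction step can be illustrated with a short numpy sketch: Fourier transform the multiplexed hologram, cut out a window centered on one wavelength's carrier frequency, and inverse transform that window to obtain the corresponding monochromatic complex amplitude. The carrier positions, window radius, and random test frame below are placeholders, not values from the paper.

```python
import numpy as np

def extract_channel(hologram, carrier, radius):
    """Isolate one wavelength's spectrum from a multiplexed hologram.

    hologram : 2D real array (the recorded intensity)
    carrier  : (fy, fx) center of that channel's spectrum in the Fourier plane
    radius   : radius (in pixels) of the circular filter window
    """
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    fy, fx = np.ogrid[:ny, :nx]
    mask = (fy - carrier[0]) ** 2 + (fx - carrier[1]) ** 2 <= radius ** 2
    filtered = np.where(mask, H, 0.0)
    # Re-center the selected spectrum before inverting (removes the carrier).
    filtered = np.roll(filtered, (ny // 2 - carrier[0], nx // 2 - carrier[1]),
                       axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(filtered))   # complex amplitude

# Hypothetical usage: three carriers, one per laser wavelength.
holo = np.random.rand(512, 512)                 # stand-in for the CCD frame
red = extract_channel(holo, carrier=(180, 256), radius=40)
green = extract_channel(holo, carrier=(256, 340), radius=40)
blue = extract_channel(holo, carrier=(330, 256), radius=40)
rgb = np.dstack([np.abs(red), np.abs(green), np.abs(blue)])  # color image
```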

  11. Single camera photogrammetry system for EEG electrode identification and localization.

    Science.gov (United States)

    Baysal, Uğur; Sengül, Gökhan

    2010-04-01

    In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera positioned about 20 cm above the subject's head is used, and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled by colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested by using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine having about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.

  12. Single-flavor color superconductivity with color-sextet pairing

    Czech Academy of Sciences Publication Activity Database

    Brauner, Tomáš

    2005-01-01

    Roč. 55, č. 1 (2005), s. 9-16 ISSN 0011-4626 R&D Projects: GA ČR(CZ) GA202/02/0847 Keywords : color superconductivity * spontaneous symmetry breaking Subject RIV: BE - Theoretical Physics Impact factor: 0.360, year: 2005

  13. Multi-capability color night vision HD camera for defense, surveillance, and security

    Science.gov (United States)

    Pang, Francis; Powell, Gareth; Fereyre, Pierre

    2015-05-01

    e2v has developed a family of high performance cameras based on our next generation CMOS imagers that provide multiple features and capabilities to meet the range of challenging imaging applications in defense, surveillance, and security markets. Two resolution sizes are available: 1920x1080 with 5.3 μm pixels, and an ultra-low light level version at 1280x1024 with 10 μm pixels. Each type is available in either monochrome or e2v's unique Bayer-pattern color version. The camera is well suited to accommodate many of the high demands of defense, surveillance, and security applications: compact form factor (SWAP+C), color night vision performance (down to 10^-2 lux), ruggedized housing, global shutter, low read noise (<6 e- in global shutter mode and <2.5 e- in rolling shutter mode), 60 Hz frame rate, and high QE, especially in the enhanced NIR range (up to 1100 nm). Other capabilities include active illumination and range gating. This paper will describe all the features of the sensor and the camera. It will be followed by a presentation of the latest test data from the current developments. It will conclude with a description of how these features can be easily configured to meet many different applications. With this development, we can tune the camera rather than create a full custom design, making it more beneficial for many of our customers and their custom applications.

  14. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research is to develop computer software for automatic fault diagnosis based on images acquired from an infrared thermography camera, using a color segmentation approach. This technique detects hot spots in plant equipment. The image acquired from the camera is first represented in the RGB (Red, Green, Blue) color model and then converted to the CMYK (Cyan, Magenta, Yellow, Key for Black) color model. Assuming that yellow regions in the image represent hot spots in the equipment, the CMYK image is then segmented by color to estimate the fault. The software was implemented in the Borland Delphi 7.0 programming language. Its performance was then tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a success rate of 80% over the 10 input images. (author)
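
    The described pipeline (RGB to CMYK conversion followed by thresholding the yellow channel as the hot-spot cue) can be sketched as follows. The standard RGB-to-CMYK formulas are used, and the yellow threshold is a placeholder since the paper does not state one; this is an illustration, not the authors' Delphi implementation.

```python
import numpy as np

def rgb_to_cmyk(rgb):
    """Convert an H x W x 3 RGB image (values in [0, 1]) to CMYK channels."""
    rgb = np.clip(rgb.astype(float), 0.0, 1.0)
    k = 1.0 - rgb.max(axis=2)
    denom = np.where(k < 1.0, 1.0 - k, 1.0)        # avoid division by zero
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return c, m, y, k

def detect_hot_spots(rgb, yellow_threshold=0.6):
    """Return a boolean mask of pixels whose yellow component is high."""
    _, _, y, _ = rgb_to_cmyk(rgb)
    return y > yellow_threshold

# Hypothetical thermogram: mostly dark with one bright yellowish region.
img = np.zeros((120, 160, 3))
img[40:60, 70:90] = [0.9, 0.85, 0.1]              # yellowish "hot spot"
mask = detect_hot_spots(img)
print(mask.sum(), "hot pixels")                    # 400 hot pixels
```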

  15. Single-fiber multi-color pyrometry

    Science.gov (United States)

    Small, IV, Ward; Celliers, Peter

    2000-01-01

    This invention is a fiber-based multi-color pyrometry set-up for real-time non-contact temperature and emissivity measurement. The system includes a single optical fiber to collect radiation emitted by a target; a reflective rotating chopper to split the collected radiation into two or more paths while modulating the radiation for lock-in amplification (i.e., phase-sensitive detection); at least two detectors, possibly of different spectral bandwidths, with or without filters to limit the wavelength regions detected; and optics to direct and focus the radiation onto the sensitive areas of the detectors. A computer algorithm is used to calculate the true temperature and emissivity of a target based on blackbody calibrations. The system components are enclosed in a light-tight housing, with provision for the fiber to extend outside to collect the radiation. Radiation emitted by the target is transmitted through the fiber to the reflective chopper, which either allows the radiation to pass straight through or reflects it into one or more separate paths. Each path includes a detector, with or without filters, and corresponding optics to direct and focus the radiation onto the active area of the detector. The signals are recovered using lock-in amplification. Calibration formulas for the signals obtained using a blackbody of known temperature are used to compute the true temperature and emissivity of the target. The temperature range of the pyrometer system is determined by the spectral characteristics of the optical components.
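
    The patent computes temperature and emissivity from blackbody calibrations of the actual detectors; the simplified gray-body ratio formula below (Wien approximation, equal emissivity assumed in both bands) only illustrates why the ratio of two band signals yields temperature independently of the emissivity magnitude. The wavelengths and signals are made-up values.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(s1, s2, lam1, lam2):
    """Gray-body two-color temperature from band signals s1, s2 (Wien approx.).

    Assumes the two detectors are radiometrically calibrated so that s1/s2
    equals the ratio of spectral radiances at wavelengths lam1 and lam2 (m).
    """
    ratio = s1 / s2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(ratio))

# Synthetic check: generate Wien-law signals at T = 2000 K and recover T.
lam1, lam2 = 0.9e-6, 1.6e-6            # hypothetical detector bands (m)
T_true = 2000.0
s = lambda lam: lam ** -5 * np.exp(-C2 / (lam * T_true))
print(ratio_temperature(s(lam1), s(lam2), lam1, lam2))   # ~2000.0
```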

  16. The HydroColor App: Above Water Measurements of Remote Sensing Reflectance and Turbidity Using a Smartphone Camera.

    Science.gov (United States)

    Leeuw, Thomas; Boss, Emmanuel

    2018-01-16

    HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
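
    For context, the standard above-water protocol that HydroColor implements combines a gray-card, sky, and water image per band roughly as sketched below. The surface-reflectance factor and card reflectance are typical literature values used here as assumptions, and the app's actual per-pixel processing is more involved.

```python
import math

def remote_sensing_reflectance(water_dn, sky_dn, card_dn,
                               rho=0.028, card_reflectance=0.18):
    """Approximate Rrs (1/sr) for one band from relative radiance values.

    water_dn, sky_dn, card_dn: mean digital numbers (proportional to radiance)
    from the water, sky and gray-card images for the same band.
    rho: fraction of sky radiance reflected at the water surface (assumed).
    card_reflectance: reflectance of the gray card (assumed to be 18%).
    """
    downwelling = math.pi * card_dn / card_reflectance   # proxy for Ed
    return (water_dn - rho * sky_dn) / downwelling

# Hypothetical red-band values extracted from the three images:
print(remote_sensing_reflectance(water_dn=120.0, sky_dn=900.0, card_dn=850.0))
```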

  17. An Optical Tracking System based on Hybrid Stereo/Single-View Registration and Controlled Cameras

    OpenAIRE

    Cortes , Guillaume; Marchand , Eric; Ardouin , Jérôme; Lécuyer , Anatole

    2017-01-01

    International audience; Optical tracking is widely used in robotics applications such as unmanned aerial vehicle (UAV) localization. Unfortunately, such systems require many cameras and are, consequently, expensive. In this paper, we propose an approach to considerably increase the optical tracking volume without adding cameras. First, when the target becomes no longer visible by at least two cameras we propose a single-view tracking mode which requires only one camera. Furthermore, we propos...

  18. Formation of the color image based on the vidicon TV camera

    Science.gov (United States)

    Iureva, Radda A.; Maltseva, Nadezhda K.; Dunaev, Vadim I.

    2016-09-01

    The main goal of nuclear safety is protection against the radiation arising at a nuclear power plant (NPP) during normal operation of nuclear installations or as a result of accidents. The most important task in any activity aimed at maintaining an NPP is constantly keeping the required level of safety and reliability. Periodic non-destructive testing during operation provides the most relevant criteria for the integrity of the pressure boundary components of the primary circuit. The objective of this study is to develop a system for forming a color image with a vidicon-based television camera, which is used to conduct non-destructive testing under the increased-radiation conditions found at NPPs.

  19. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  20. Referenced dual pressure- and temperature-sensitive paint for digital color camera read out.

    Science.gov (United States)

    Fischer, Lorenz H; Karakus, Cüneyt; Meier, Robert J; Risch, Nikolaus; Wolfbeis, Otto S; Holder, Elisabeth; Schäferling, Michael

    2012-12-03

    The first fluorescent material for the referenced simultaneous RGB (red green blue) imaging of barometric pressure (oxygen partial pressure) and temperature is presented. This sensitive coating consists of two platinum(II) complexes as indicators and a reference dye, each of which is incorporated in appropriate polymer nanoparticles. These particles are dispersed in a polyurethane hydrogel and spread onto a solid support. The emission of the (oxygen) pressure indicator, PtTFPP, matches the red channel of a RGB color camera, whilst the emission of the temperature indicator [Pt(II) (Br-thq)(acac)] matches the green channel. The reference dye, 9,10-diphenylanthracene, emits in the blue channel. In contrast to other dual-sensitive materials, this new coating allows for the simultaneous imaging of both indicator signals, as well as the reference signal, in one RGB color picture without having to separate the signals with additional optical filters. All of these dyes are excitable with a 405 nm light-emitting diode (LED). With this new composite material, barometric pressure can be determined with a resolution of 22 mbar; the temperature can be determined with a resolution of 4.3 °C. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables that store the correction data at eight focal length points for the blue and red channels. When focal length data is input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs a geometrical conversion on both channels using this correction data. This paper shows that the correction function successfully reduces the lateral chromatic aberration in real time, to an amount small enough to ensure the desired image resolution over the entire zoom range of the lens.
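
    A software analogue of the described hardware pipeline is sketched below: per-pixel correction tables stored at discrete focal lengths are linearly interpolated for the current focal length and applied as a geometric remap of one color channel. The table contents, focal lengths, and the use of OpenCV's remap are illustrative assumptions, not details of the broadcast hardware.

```python
import numpy as np
import cv2

def interpolate_tables(f, focal_lengths, tables):
    """Linearly interpolate a per-pixel (dx, dy) correction table at focal length f."""
    focal_lengths = np.asarray(focal_lengths, dtype=float)
    i = np.clip(np.searchsorted(focal_lengths, f) - 1, 0, len(focal_lengths) - 2)
    t = (f - focal_lengths[i]) / (focal_lengths[i + 1] - focal_lengths[i])
    return (1.0 - t) * tables[i] + t * tables[i + 1]

def correct_channel(channel, table):
    """Warp one color channel by the interpolated displacement table."""
    h, w = channel.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + table[..., 0].astype(np.float32)
    map_y = ys + table[..., 1].astype(np.float32)
    return cv2.remap(channel, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Hypothetical setup: 8 focal lengths, small synthetic correction tables.
h, w = 240, 320
focal_lengths = np.linspace(10.0, 50.0, 8)           # mm (assumed)
tables = np.random.uniform(-1.5, 1.5, size=(8, h, w, 2))
red = np.random.rand(h, w).astype(np.float32)

table_now = interpolate_tables(23.0, focal_lengths, tables)
red_corrected = correct_channel(red, table_now)
```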

  2. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    Science.gov (United States)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels) allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  3. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for many researchers around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system to examine how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration. Results have shown that the single camera estimators provide high accuracy results of less than half a pixel, forcing the bundle adjustment to converge after very few iterations. For ICP, the relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  4. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    Science.gov (United States)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics about the CMBFs are that the passbands are staggered so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The difference will be closer if the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, simulation predicting the color mismatch is reported.

  5. Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC).

    Directory of Open Access Journals (Sweden)

    Zachary F Phillips

    Full Text Available We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel.

  6. Colors and Photometry of Bright Materials on Vesta as Seen by the Dawn Framing Camera

    Science.gov (United States)

    Schroeder, S. E.; Li, J.-Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn spacecraft has been in orbit around the asteroid Vesta since July, 2011. The on-board Framing Camera has acquired thousands of high-resolution images of the regolith-covered surface through one clear and seven narrow-band filters in the visible and near-IR wavelength range. It has observed bright and dark materials that have a range of reflectance that is unusually wide for an asteroid. Material brighter than average is predominantly found on crater walls, and in ejecta surrounding craters in the southern hemisphere. Most likely, the brightest material identified on the Vesta surface so far is located on the inside of a crater at 64.27° S, 1.54°. The apparent brightness of a regolith is influenced by factors such as particle size, mineralogical composition, and viewing geometry. As such, the presence of bright material can indicate differences in lithology and/or degree of space weathering. We retrieve the spectral and photometric properties of various bright terrains from false-color images acquired in the High Altitude Mapping Orbit (HAMO). We find that most bright material has a deeper 1-μm pyroxene band than average. However, the aforementioned brightest material appears to have a 1-μm band that is actually less deep, a result that awaits confirmation by the on-board VIR spectrometer. This site may harbor a class of material unique for Vesta. We discuss the implications of our spectral findings for the origin of bright materials.

  7. Aerial SLAM with a Single Camera using Visual Expectation

    OpenAIRE

    Milford, Michael J.; Schill, Felix Stephan; Corke, Peter; Mahony, Robert

    2011-01-01

    Micro aerial vehicles (MAVs) are a rapidly growing area of research and development in robotics. For autonomous robot operations, localization has typically been calculated using GPS, external camera arrays, or onboard range or vision sensing. In cluttered indoor or outdoor environments, onboard sensing is the only viable option. In this paper we present an appearance-based approach to visual SLAM on a flying MAV using only low quality vision. Our approach consists of a visual place recogniti...

  8. Color-filter-free spatial visible light communication using RGB-LED and mobile-phone camera.

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2014-12-15

    A novel color-filter-free visible-light communication (VLC) system using a red-green-blue (RGB) light-emitting diode (LED) and a mobile-phone camera is proposed and demonstrated for the first time. A feature matching method based on the scale-invariant feature transform (SIFT) algorithm is applied to the received grayscale image instead of a chromatic-information decoding method. The proposed method is simple and reduces computational complexity. The signal processing is based on grayscale image computation; hence neither a color filter nor chromatic channel information is required. A proof-of-concept experiment is performed and high performance channel recognition is achieved.

  9. You're on Camera---in Color; A Television Handbook for Extension Workers.

    Science.gov (United States)

    Tonkin, Joe

    Color television has brought about new concepts of programming and new production requirements. This handbook is designed to aid those Extension workers who are concerned with or will appear on Extension television programs. The book discusses how to make the most of color, what to wear and how to apply makeup for color TV, how colors appear on…

  10. Single Particle Damage Events in Candidate Star Camera Sensors

    Science.gov (United States)

    Marshall, Paul; Marshall, Cheryl; Polidan, Elizabeth; Wacyznski, Augustyn; Johnson, Scott

    2005-01-01

    Si charge coupled devices (CCDs) are currently the preeminent detector in star cameras as well as in the near ultraviolet (uv) to visible wavelength region for astronomical observations in space and in earth-observing space missions. Unfortunately, the performance of CCDs is permanently degraded by total ionizing dose (TID) and displacement damage effects. TID produces threshold voltage shifts on the CCD gates and displacement damage reduces the charge transfer efficiency (CTE), increases the dark current, produces dark current nonuniformities and creates random telegraph noise in individual pixels. In addition to these long term effects, cosmic ray and trapped proton transients also interfere with device operation on orbit. In the present paper, we investigate the dark current behavior of CCDs - in particular the formation and annealing of hot pixels. Such pixels degrade the ability of a CCD to perform science and also can present problems to the performance of star camera functions (especially if their numbers are not correctly anticipated). To date, most dark current radiation studies have been performed by irradiating the CCDs at room temperature but this can result in a significantly optimistic picture of the hot pixel count. We know from the Hubble Space Telescope (HST) that high dark current pixels (so-called hot pixels or hot spikes) accumulate as a function of time on orbit. For example, the HST Advanced Camera for Surveys/Wide Field Camera instrument performs monthly anneals despite the loss of observational time, in order to partially anneal the hot pixels. Note that the fact that significant reduction in hot pixel populations occurs for room temperature anneals is not presently understood since none of the commonly expected defects in Si (e.g. divacancy, E center, and A-center) anneal at such a low temperature. A HST Wide Field Camera 3 (WFC3) CCD manufactured by E2V was irradiated while operating at -83C and the dark current studied as a function of

  11. Detecting Flying Objects Using a Single Moving Camera.

    Science.gov (United States)

    Rozantsev, Artem; Lepetit, Vincent; Fua, Pascal

    2017-05-01

    We propose an approach for detecting flying objects such as Unmanned Aerial Vehicles (UAVs) and aircraft when they occupy a small portion of the field of view, possibly moving against complex backgrounds, and are filmed by a camera that itself moves. We argue that solving such a difficult problem requires combining both appearance and motion cues. To this end we propose a regression-based approach for object-centric motion stabilization of image patches that allows us to achieve effective classification on spatio-temporal image cubes and outperform state-of-the-art techniques. As this problem has not yet been extensively studied, no test datasets are publicly available. We therefore built our own, both for UAVs and aircraft, and will make them publicly available so they can be used to benchmark future flying object detection and collision avoidance algorithms.

  12. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    OpenAIRE

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers, is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. Carding process is the one devoted to transfo...

  13. Single-shot observation of growing streamers using an ultrafast camera

    International Nuclear Information System (INIS)

    Takahashi, E; Kato, S; Furutani, H; Sasaki, A; Kishimoto, Y; Takada, K; Matsumura, S; Sasaki, H

    2011-01-01

    A recently developed ultrafast camera that can acquire 10^8 frames per second was used to investigate positive streamer discharge. It enabled single-shot evaluation of streamer evolution without the need to consider shot-to-shot reproducibility. This camera was used to investigate streamers in argon. Growing branches, the transition when a streamer forms a return stroke, and related phenomena were clearly observed. (fast track communication)

  14. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    The multi-spectral CCD camera system, one of the earth observing instruments on the HY-1 satellite to be launched in 2001, was developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). From a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, which include the pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflat, soil and water vapor. The multi-spectral camera system is composed of four monochrome CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts field-of-view registration; that is, each camera scans the same region at the same moment. Each of them contains optics, a focal plane assembly, electrical circuits, an installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization is 12 bit.

  15. Signal-to-noise ratio of single-pixel cameras based on photodiodes.

    Science.gov (United States)

    Jauregui-Sánchez, Y; Clemente, P; Latorre-Carmona, P; Tajahuerce, E; Lancis, J

    2018-03-01

    Single-pixel cameras have been successfully used in different imaging applications in recent years. One of the key elements affecting the quality of these cameras is the photodetector. Here, we develop a numerical model of a single-pixel camera which takes into account not only the characteristics of the incident light but also the physical properties of the detector. In particular, our model considers the photocurrent, the dark current, the photocurrent shot noise, the dark-current shot noise, and the Johnson-Nyquist (thermal) noise of the photodiode used as a light detector. The model establishes a clear relationship between the electric signal and the quality of the final image. This allows us to perform a systematic study of the quality of the image obtained with single-pixel cameras in different contexts. In particular, we study the signal-to-noise ratio as a function of the optical power of the incident light, the wavelength, and the photodiode temperature. The results of the model are compared with those obtained experimentally with a single-pixel camera.
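
    The noise terms listed above combine in the usual way for a photodiode, so a much-simplified version of such an SNR model can be written down directly. This is not the authors' exact formulation, and the responsivity, dark current, bandwidth, and load resistance below are assumed values.

```python
import math

Q = 1.602176634e-19   # electron charge (C)
KB = 1.380649e-23     # Boltzmann constant (J/K)

def photodiode_snr(optical_power, responsivity, dark_current,
                   bandwidth, temperature, load_resistance):
    """SNR of a photodiode signal including shot and Johnson-Nyquist noise."""
    i_ph = responsivity * optical_power                  # photocurrent (A)
    shot = 2.0 * Q * (i_ph + dark_current) * bandwidth   # shot-noise variance (A^2)
    thermal = 4.0 * KB * temperature * bandwidth / load_resistance
    return i_ph / math.sqrt(shot + thermal)

# Hypothetical silicon photodiode at room temperature:
snr = photodiode_snr(optical_power=5e-9,      # 5 nW incident on the detector
                     responsivity=0.5,        # A/W at the working wavelength
                     dark_current=2e-9,       # 2 nA
                     bandwidth=1e4,           # 10 kHz
                     temperature=295.0,       # K
                     load_resistance=1e6)     # 1 MΩ load
print(f"SNR = {snr:.1f}")
```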

  16. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that continues to evolve today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose the appropriate camera based on their criteria. Users may turn to several aids when choosing a camera, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras using the SAW (Simple Additive Weighting) method, in order to make the decision process more effective and efficient. The system is expected to give recommendations for the camera that best matches the user's needs and criteria, based on cost, resolution, features, ISO, and sensor. The system was implemented using PHP and MySQL. Based on the results of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose the appropriate DSLR camera in accordance with the user's needs, 60% of respondents agree that this decision support system makes choosing a DSLR camera more effective, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
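
    The SAW step itself is straightforward: benefit criteria are normalized against the column maximum, cost criteria against the column minimum, and alternatives are ranked by the weighted sum of the normalized scores. The generic sketch below uses invented camera data and weights, not the paper's questionnaire results.

```python
import numpy as np

def saw_rank(scores, weights, is_cost):
    """Simple Additive Weighting.

    scores  : (n_alternatives, n_criteria) decision matrix
    weights : criteria weights (summing to 1)
    is_cost : True for cost criteria (lower is better), False for benefit
    """
    scores = np.asarray(scores, dtype=float)
    norm = np.empty_like(scores)
    for j, cost in enumerate(is_cost):
        if cost:
            norm[:, j] = scores[:, j].min() / scores[:, j]   # cost criterion
        else:
            norm[:, j] = scores[:, j] / scores[:, j].max()   # benefit criterion
    totals = norm @ np.asarray(weights, dtype=float)
    return totals, np.argsort(-totals)

# Hypothetical cameras rated on price, resolution, features, ISO, sensor.
cameras = ["A", "B", "C"]
scores = [[650, 24, 7, 25600, 8],
          [480, 18, 6, 12800, 7],
          [900, 30, 9, 51200, 9]]
weights = [0.30, 0.25, 0.15, 0.15, 0.15]
totals, order = saw_rank(scores, weights,
                         is_cost=[True, False, False, False, False])
print([cameras[i] for i in order], totals.round(3))
```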

  17. Electrolytic coloration and spectral properties of hydroxyl-doped potassium chloride single crystals

    International Nuclear Information System (INIS)

    Gu Hongen; Wu Yanru

    2011-01-01

    Hydroxyl-doped potassium chloride single crystals are colored electrolytically at various temperatures and voltages using a pointed cathode and a flat anode. A characteristic OH⁻ spectral band is observed in the absorption spectrum of the uncolored single crystal. Characteristic O⁻, OH⁻, U, V₂, V₃, O²⁻-Vₐ⁺, F, R₂ and M spectral bands are observed simultaneously in the absorption spectra of the colored single crystals. The current-time curve for the electrolytic coloration of a hydroxyl-doped potassium chloride single crystal and its relationship with the electrolytic coloration process are given. The production and conversion of color centers are explained. - Highlights: → Expanded the traditional electrolysis method. → Hydroxyl-doped potassium chloride crystals were colored electrolytically for the first time. → Useful V, F and F-aggregate color centers were produced in the colored crystals. → V color centers were produced directly and F and F-aggregate color centers indirectly.

  18. Wide-field single photon counting imaging with an ultrafast camera and an image intensifier

    Science.gov (United States)

    Zanda, Gianmarco; Sergent, Nicolas; Green, Mark; Levitt, James A.; Petrášek, Zdeněk; Suhling, Klaus

    2012-12-01

    We report a method for wide-field photon counting imaging using a CMOS camera with a 40 kHz frame rate coupled to a three-stage image intensifier mounted on a standard fluorescence microscope. This system combines high frame rates with single photon sensitivity. The output of the phosphor screen, consisting of single-photon events, is collected by the CMOS camera, allowing a wide-field image to be created with parallel positional and timing information for each photon. Using a pulsed excitation source and a luminescent sample, the arrival time of hundreds of photons can be determined simultaneously in many pixels with microsecond resolution.

  19. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images form a chicken-and-egg problem, since it is not a trivial task to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established when constructing a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondences for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which in turn boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.

  20. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    Science.gov (United States)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
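
    The ILUT construction amounts to histogram matching between the short- and long-exposure previews: each code value of the short exposure is mapped to the long-exposure value with the same cumulative frequency, and the resulting LUT is applied to the captured frame. The sketch below is a software illustration of that idea (per channel, 8-bit); the paper's hardware version operates in real time without frame memory.

```python
import numpy as np

def build_ilut(short_preview, long_preview, levels=256):
    """Build a look-up table that matches the short-exposure histogram
    to the long-exposure histogram (single channel, 8-bit values)."""
    hist_s = np.bincount(short_preview.ravel(), minlength=levels).astype(float)
    hist_l = np.bincount(long_preview.ravel(), minlength=levels).astype(float)
    cdf_s = np.cumsum(hist_s) / hist_s.sum()
    cdf_l = np.cumsum(hist_l) / hist_l.sum()
    # For each short-exposure code value, pick the smallest long-exposure
    # value whose cumulative frequency is at least as large.
    return np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)

def correct(image, ilut):
    """Apply the LUT to the captured under-exposed image."""
    return ilut[image]

# Hypothetical previews: the short exposure is roughly half as bright.
rng = np.random.default_rng(1)
long_prev = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
short_prev = (long_prev * 0.5).astype(np.uint8)
captured = (rng.integers(0, 256, size=(480, 640)) * 0.5).astype(np.uint8)
corrected = correct(captured, build_ilut(short_prev, long_prev))
```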

  1. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  2. A three-step vehicle detection framework for range estimation using a single camera

    CSIR Research Space (South Africa)

    Kanjee, R

    2015-12-01

    Full Text Available This paper proposes and validates a real-time on-road vehicle detection system, which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle...

  3. Single photon detection and localization accuracy with an ebCMOS camera

    Energy Technology Data Exchange (ETDEWEB)

    Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Dominjon, A., E-mail: agnes.dominjon@nao.ac.jp [Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France)

    2015-07-01

    CMOS sensor technologies evolve very fast and today offer very promising solutions to existing issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1e-) and their possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to Low Light Level detection and based on a 640 kPixel ebCMOS with its acquisition system. After recalling the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera to other imaging systems. We compare the identification efficiency and the localization accuracy of a point source for four different photo-detection devices: the scientific CMOS (sCMOS), the Charge Coupled Device (CCD), the Electron Multiplying CCD (emCCD) and the Electron Bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false-positive identification rate of the ebCMOS camera when identifying several hundred single photon sources in parallel. About 700 spots are identified with a detection efficiency higher than 90% and a false-positive percentage lower than 5%. With these measurements, we show that our target tracking algorithm can be implemented in real time at 500 frames per second under a photon flux of the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single photon detection and target tracking algorithm, is one of the best devices for low light and fast applications such as bioluminescence imaging, quantum dot tracking or adaptive optics.
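
    Sub-pixel (and hence sub-micrometre, after magnification) localization of a point source is commonly done with an intensity-weighted centroid over the pixels belonging to a spot. The minimal sketch below illustrates that step only; it is not the IPNL target-tracking algorithm, and the spot parameters and threshold are invented.

```python
import numpy as np

def spot_centroid(frame, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame[rows, cols].astype(float)
    return (np.average(rows, weights=weights), np.average(cols, weights=weights))

# Hypothetical frame with one photon spot centered near (12.3, 20.7).
frame = np.zeros((32, 32))
yy, xx = np.mgrid[:32, :32]
frame += 40.0 * np.exp(-((yy - 12.3) ** 2 + (xx - 20.7) ** 2) / (2 * 1.2 ** 2))
frame += np.random.default_rng(0).normal(0, 1.0, frame.shape)  # ~1 e- read noise
print(spot_centroid(frame, threshold=5.0))  # close to (12.3, 20.7)
```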

  4. Recognition and Matching of Clustered Mature Litchi Fruits Using Binocular Charge-Coupled Device (CCD) Color Cameras

    Directory of Open Access Journals (Sweden)

    Chenglin Wang

    2017-11-01

    Full Text Available Recognition and matching of litchi fruits are critical steps for litchi harvesting robots to successfully grasp litchi. However, due to the randomness of litchi growth, such as clustered growth with an uncertain number of fruits and random occlusion by leaves, branches and other fruits, the recognition and matching of the fruit become a challenge. Therefore, this study firstly defined mature litchi fruit as three clustered categories. Then an approach for recognition and matching of clustered mature litchi fruit was developed based on litchi color images acquired by binocular charge-coupled device (CCD) color cameras. The approach mainly included three steps: (1) calibration of the binocular color cameras and litchi image acquisition; (2) segmentation of litchi fruits using four kinds of supervised classifiers, and recognition of the pre-defined categories of clustered litchi fruit using a pixel threshold method; and (3) matching the recognized clustered fruit using a geometric center-based matching method. The experimental results showed that the proposed recognition method could be robust against the influences of varying illumination and occlusion conditions, and precisely recognize clustered litchi fruit. In the tested 432 clustered litchi fruits, the highest and lowest average recognition rates were 94.17% and 92.00% under sunny back-lighting and partial occlusion, and sunny front-lighting and non-occlusion conditions, respectively. From 50 pairs of tested images, the highest and lowest matching success rates were 97.37% and 91.96% under sunny back-lighting and non-occlusion, and sunny front-lighting and partial occlusion conditions, respectively.

  5. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    Science.gov (United States)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.

  6. Two-dimensional displacement measurement using static close range photogrammetry and a single fixed camera

    Directory of Open Access Journals (Sweden)

    Abdallah M. Khalil

    2011-09-01

    Full Text Available This work describes a simple approach to measure the displacement of a moving object in two directions simultaneously. The proposed approach is based on static close range photogrammetry with a single camera and the well-known collinearity equations. The proposed approach requires neither multi-camera synchronization nor mutual camera calibration. It requires no prior knowledge of the kinematic and kinetic data of the moving object. The proposed approach was used to evaluate predefined two-dimensional displacements of a moving object. The root mean square values of the differences between the predefined and evaluated displacements in the two directions are 0.11 and 0.02 mm.
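
    For reference, the collinearity equations mentioned above relate an object point (X, Y, Z) to its image coordinates (x, y) through the interior orientation (principal point x0, y0 and focal length f) and the exterior orientation (projection centre Xs, Ys, Zs and rotation matrix elements r_ij). This is the standard textbook form, not a quotation from the paper:

```latex
x - x_0 = -f\,\frac{r_{11}(X - X_s) + r_{12}(Y - Y_s) + r_{13}(Z - Z_s)}
                   {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}, \qquad
y - y_0 = -f\,\frac{r_{21}(X - X_s) + r_{22}(Y - Y_s) + r_{23}(Z - Z_s)}
                   {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}
```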

  7. Charon's Light Curves, as Observed by New Horizons' Ralph Color Camera (MVIC) on Approach to the Pluto System.

    Science.gov (United States)

    Howett, C. J. A.; Ennico, K.; Olkin, C. B.; Buie, M. W.; Verbiscer, A. J.; Zangari, A. M.; Parker, A. H.; Reuter, D. C.; Grundy, W. M.; Weaver, H. A.

    2016-01-01

    Light curves produced from color observations taken during New Horizons approach to the Pluto-system by its Multi-spectral Visible Imaging Camera (MVIC, part of the Ralph instrument) are analyzed. Fifty seven observations were analyzed, they were obtained between 9th April and 3rd July 2015, at a phase angle of 14.5 degrees to 15.1 degrees, sub-observer latitude of 51.2 degrees North to 51.5 degrees North, and a sub-solar latitude of 41.2 degrees North. MVIC has four color channels; all are discussed for completeness but only two were found to produce reliable light curves: Blue (400-550 nm) and Red (540-700 nm). The other two channels, Near Infrared (780-975 nm) and Methane-Band (860-910 nm), were found to be potentially erroneous and too noisy respectively. The Blue and Red light curves show that Charon's surface is neutral in color, but slightly brighter on its Pluto-facing hemisphere. This is consistent with previous studies made with the Johnson B and V bands, which are at shorter wavelengths than that of the MVIC Blue and Red channel respectively.

  8. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images in a single camera position. Aligning two planar mirrors with the angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by the digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min, after the measurement of calibration points. The positioning accuracy with the maximum error of 1.19 mm is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for the EEG positioning.

  9. Single vs. dual color fire detection systems: operational tradeoffs

    Science.gov (United States)

    Danino, Meir; Danan, Yossef; Sinvani, Moshe

    2017-10-01

    In an attempt to provide reasonable fire plume detection, multinational cooperation and significant capital have been invested in the development of two major infrared (IR) based fire detection alternatives: single-color IR (SCIR) and dual-color IR (DCIR). The false alarm rate was expected to be high, not only as a result of real heat sources but mainly due to natural IR clutter, especially solar reflection clutter. SCIR uses state-of-the-art technology and sophisticated algorithms to separate threats from clutter. DCIR, on the other hand, aims at using an additional spectral band measurement (acting as a guard) to allow the implementation of a simpler and more robust approach to performing the same task. In this paper we present the basics of the SCIR and DCIR architectures and the main differences between them. In addition, we present the results of a thorough study conducted for the purpose of learning about the added value of the additional data available from the second spectral band. Here we consider the two CO2 bands, at 4-5 micron and at 2.5-3 micron, as well as an off-peak band (guard). The findings of this study also apply to missile warning system (MWS) efficacy in terms of operational value. We also present a new approach for a tunable filter for such a sensor.

  10. Single chip system LSI for digital still camera signal processing; Doga taio digital still camera yo shingo shori one chip

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, T.; Okada, S.; Kobayashi, A.; Komura, Y.; Kiyozaki, K. [Sanyo Electric Co. Ltd., Osaka (Japan)

    1998-11-01

    This paper introduces the development of a single-chip system LSI for real-time digital still camera (DSC) signal processing that can also handle moving images. In developing the LSI, the DSC was treated as a system device, and the target was set at a system LSI capable of processing all of the signals from the DSC. For real-time signal processing, handling of both moving and still images with short shutter lag was realized by integrating a dedicated M-JPEG core and by performing JPEG compression and decompression at high speed in hardware. Higher-speed writing and reading of the image buffer memories, to reduce shutter lag, and higher-speed transfer of image data were realized by adopting a dual-path architecture inside the LSI. Other functions performed by software on the built-in RISC core include voice recording and playback, preparation of AVI files for replaying images on home TV sets, and a window function for compositing still images in the DSC. 7 refs., 8 figs., 2 tabs.

  11. Dual-Colored DNA Comb Polymers for Single Molecule Rheology

    Science.gov (United States)

    Mai, Danielle; Marciel, Amanda; Schroeder, Charles

    2014-03-01

    We report the synthesis and characterization of branched biopolymers for single molecule rheology. In our work, we utilize a hybrid enzymatic-synthetic approach to graft ``short'' DNA branches to ``long'' DNA backbones, thereby producing macromolecular DNA comb polymers. The branches and backbones are synthesized via polymerase chain reaction with chemically modified deoxyribonucleotides (dNTPs): ``short'' branches consist of Cy5-labeled dNTPs and a terminal azide group, and ``long'' backbones contain dibenzylcyclooctyne-modified (DBCO) dNTPs. In this way, we utilize strain-promoted, copper-free cycloaddition ``click'' reactions for facile grafting of azide-terminated branches at DBCO sites along backbones. Copper-free click reactions are bio-orthogonal and nearly quantitative when carried out under mild conditions. Moreover, comb polymers can be labeled with an intercalating dye (e.g., YOYO) for dual-color fluorescence imaging. We characterized these materials using gel electrophoresis, HPLC, and optical microscopy, with atomic force microscopy in progress. Overall, DNA combs are suitable for single molecule dynamics, and in this way, our work holds the potential to improve our understanding of topologically complex polymer melts and solutions.

  12. Perception of color emotions for single colors in red-green defective observers.

    Science.gov (United States)

    Sato, Keiko; Inoue, Takaaki

    2016-01-01

    It is estimated that inherited red-green color deficiency, which involves both the protan and deutan deficiency types, is common in men. For red-green defective observers, some reddish colors appear desaturated and brownish, unlike those seen by normal observers. Despite its prevalence, few studies have investigated the effects that red-green color deficiency has on the psychological properties of colors (color emotions). The current study investigated the influence of red-green color deficiency on the following six color emotions: cleanliness, freshness, hardness, preference, warmth, and weight. Specifically, this study aimed to: (1) reveal differences between normal and red-green defective observers in rating patterns of six color emotions; (2) examine differences in color emotions related to the three cardinal channels in human color vision; and (3) explore relationships between color emotions and color naming behavior. Thirteen men and 10 women with normal vision and 13 men who were red-green defective performed both a color naming task and an emotion rating task with 32 colors from the Berkeley Color Project (BCP). Results revealed noticeable differences in the cleanliness and hardness ratings between the normal vision observers, particularly in women, and red-green defective observers, which appeared mainly for colors in the orange to cyan range, and in the preference and warmth ratings for colors with cyan and purple hues. Similarly, naming errors also mainly occurred in the cyan colors. A regression analysis that included the three cone-contrasts (i.e., red-green, blue-yellow, and luminance) as predictors significantly accounted for variability in color emotion ratings for the red-green defective observers as much as the normal individuals. Expressly, for warmth ratings, the weight of the red-green opponent channel was significantly lower in color defective observers than in normal participants. In addition, the analyses for individual warmth ratings in

  13. Perception of color emotions for single colors in red-green defective observers

    Directory of Open Access Journals (Sweden)

    Keiko Sato

    2016-12-01

    Full Text Available It is estimated that inherited red-green color deficiency, which involves both the protan and deutan deficiency types, is common in men. For red-green defective observers, some reddish colors appear desaturated and brownish, unlike those seen by normal observers. Despite its prevalence, few studies have investigated the effects that red-green color deficiency has on the psychological properties of colors (color emotions). The current study investigated the influence of red-green color deficiency on the following six color emotions: cleanliness, freshness, hardness, preference, warmth, and weight. Specifically, this study aimed to: (1) reveal differences between normal and red-green defective observers in rating patterns of six color emotions; (2) examine differences in color emotions related to the three cardinal channels in human color vision; and (3) explore relationships between color emotions and color naming behavior. Thirteen men and 10 women with normal vision and 13 men who were red-green defective performed both a color naming task and an emotion rating task with 32 colors from the Berkeley Color Project (BCP). Results revealed noticeable differences in the cleanliness and hardness ratings between the normal vision observers, particularly in women, and red-green defective observers, which appeared mainly for colors in the orange to cyan range, and in the preference and warmth ratings for colors with cyan and purple hues. Similarly, naming errors also mainly occurred in the cyan colors. A regression analysis that included the three cone-contrasts (i.e., red-green, blue-yellow, and luminance) as predictors significantly accounted for variability in color emotion ratings for the red-green defective observers as much as the normal individuals. Expressly, for warmth ratings, the weight of the red-green opponent channel was significantly lower in color defective observers than in normal participants. In addition, the analyses for individual warmth

  14. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  15. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  16. Single sensor processing to obtain high resolution color component signals

    Science.gov (United States)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
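    The signal flow described in this record reduces to a ratio-based luminance boost of the low-resolution color planes. The following is a minimal sketch of that idea, assuming the high-resolution luminance plane and the low-resolution color planes are already available as NumPy arrays; the function name, the uniform smoothing kernel, and the epsilon guard are illustrative assumptions, not part of the patent.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sharpen_color_planes(y_hi, r_lo, g_lo, b_lo, kernel=5, eps=1e-6):
        """Combine low-resolution color planes with a high-band luminance ratio.

        y_hi             : high-resolution luminance plane
        r_lo, g_lo, b_lo : low-resolution color component planes
        """
        # Low-resolution luminance, obtained here by smoothing the high-resolution plane
        y_lo = uniform_filter(y_hi, size=kernel)
        # High-band luminance component: ratio of high-res to low-res luminance
        high_band = y_hi / (y_lo + eps)
        # Each low-resolution color plane is scaled by the high-band component
        return r_lo * high_band, g_lo * high_band, b_lo * high_band
    ```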

  17. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCB) play an important role in the manufacturing of electronic devices. To ensure correct functioning of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows for determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured, which allows for single image camera calibration.

  18. A Simple Setup to Perform 3D Locomotion Tracking in Zebrafish by Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Gilbert Audira

    2018-02-01

    Full Text Available Generally, the measurement of three-dimensional (3D) swimming behavior in zebrafish relies on commercial software or requires sophisticated scripts, and depends on more than two cameras to capture the video. Here, we establish a simple and economic apparatus to detect 3D locomotion in zebrafish, which involves a single-camera capture system that records zebrafish movement in a specially designed water tank with a mirror tilted at 45 degrees. The recorded videos are analyzed using idTracker, while spatial positions are calibrated by ImageJ software and 3D trajectories are plotted by Origin 9.1 software. This simple setup allows scientists to track the 3D swimming behavior of multiple zebrafish at low cost and with precise spatial positioning, showing great potential for fish behavioral research in the future.
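    With the mirror at 45 degrees, the direct view supplies two coordinates of each fish and the mirrored view supplies the third, so the two 2D tracks exported by idTracker can be merged frame by frame. The sketch below illustrates this merging under the assumption that both tracks are already time-aligned and calibrated to the same pixel-to-millimetre scale; the function and argument names are illustrative, not from the paper.

    ```python
    import numpy as np

    def combine_views(direct_xy, mirror_xz, scale_mm_per_px=1.0):
        """Merge per-frame 2D positions from the direct view and the 45-degree
        mirror view into 3D trajectories (one row per frame).

        direct_xy : (N, 2) pixel coordinates from the direct view  -> x, y
        mirror_xz : (N, 2) pixel coordinates from the mirror view  -> x, z
        """
        x = direct_xy[:, 0] * scale_mm_per_px
        y = direct_xy[:, 1] * scale_mm_per_px
        z = mirror_xz[:, 1] * scale_mm_per_px  # depth appears as the vertical axis of the mirror view
        return np.column_stack([x, y, z])
    ```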

  19. Application of colon capsule endoscopy (CCE) to evaluate the whole gastrointestinal tract: a comparative study of single-camera and dual-camera analysis

    Directory of Open Access Journals (Sweden)

    Remes-Troche JM

    2013-09-01

    Full Text Available José María Remes-Troche,1 Victoria Alejandra Jiménez-García,2 Josefa María García-Montes,2 Pedro Hergueta-Delgado,2 Federico Roesch-Dietlen,1 Juan Manuel Herrerías-Gutiérrez2 1Digestive Physiology and Motility Lab, Medical Biological Research Institute, Universidad Veracruzana, Veracruz, México; 2Gastroenterology Service, Virgen Macarena University Hospital, Seville, Spain Background and study aims: Colon capsule endoscopy (CCE) was developed for the evaluation of colorectal pathology. In this study, our aim was to assess if a dual-camera analysis using CCE allows better evaluation of the whole gastrointestinal (GI) tract compared to a single-camera analysis. Patients and methods: We included 21 patients (12 males, mean age 56.20 years) submitted for a CCE examination. After standard colon preparation, the colon capsule endoscope (PillCam Colon™) was swallowed after reinitiation from its “sleep” mode. Four physicians performed the analysis: two reviewed both video streams at the same time (dual-camera analysis); one analyzed images from one side of the device (“camera 1”); and the other reviewed the opposite side (“camera 2”). We compared numbers of findings from different parts of the entire GI tract and level of agreement among reviewers. Results: A complete evaluation of the GI tract was possible in all patients. Dual-camera analysis provided 16% and 5% more findings compared to camera 1 and camera 2 analysis, respectively. Overall agreement was 62.7% (kappa = 0.44, 95% CI: 0.373–0.510). Esophageal (kappa = 0.611) and colorectal (kappa = 0.595) findings had a good level of agreement, while small bowel (kappa = 0.405) showed moderate agreement. Conclusion: The use of dual-camera analysis with CCE for the evaluation of the GI tract is feasible and detects more abnormalities when compared with single-camera analysis. Keywords: capsule endoscopy, colon, gastrointestinal tract, small bowel

  20. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a common color model, the Hue-Saturation-Intensity (HSI) model is widely used in image processing. A new single channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationship of pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
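    The RGB-to-HSI conversion on which the scheme operates is the classical one; a minimal NumPy sketch of just that conversion (not the quantum encryption itself) is given below, with the usual convention that hue is measured in radians.

    ```python
    import numpy as np

    def rgb_to_hsi(rgb):
        """Classical RGB -> HSI conversion; rgb is a float array in [0, 1] of shape (..., 3)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0                                             # intensity
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)  # saturation
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
        theta = np.arccos(np.clip(num / den, -1.0, 1.0))
        h = np.where(b > g, 2.0 * np.pi - theta, theta)                   # hue in radians
        return np.stack([h, s, i], axis=-1)
    ```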

  1. Optical determination and magnetic manipulation of a single nitrogen-vacancy color center in diamond nanocrystal

    International Nuclear Information System (INIS)

    Diep Lai, Ngoc; Zheng, Dingwei; Treussart, François; Roch, Jean-François

    2010-01-01

    The controlled and coherent manipulation of individual quantum systems is fundamental for the development of quantum information processing. The nitrogen-vacancy (NV) color center in diamond is a promising system since its photoluminescence is perfectly stable at room temperature and its electron spin can be optically read out at the individual level. We review here the experiments currently realized in our laboratory concerning the use of a single NV color center as a single photon source and the coherent magnetic manipulation of the electron spin associated with a single NV color center. Furthermore, we demonstrate a nanoscopy experiment based on the saturation absorption effect, which allows a single NV color center to be optically pin-pointed at sub-λ resolution. This offers the possibility to independently address two or multiple magnetically coupled single NV color centers, which is a necessary step towards the realization of a diamond-based quantum computer.

  2. Single camera volumetric velocimetry in aortic sinus with a percutaneous valve

    Science.gov (United States)

    Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit

    2016-11-01

    Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.

  3. Single-Camera Trap Survey Designs Miss Detections: Impacts on Estimates of Occupancy and Community Metrics

    OpenAIRE

    Pease, Brent S.; Nielsen, Clayton K.; Holzmueller, Eric J.

    2016-01-01

    The use of camera traps as a tool for studying wildlife populations is commonplace. However, few have considered how the number of detections of wildlife differ depending upon the number of camera traps placed at cameras-sites, and how this impacts estimates of occupancy and community composition. During December 2015-February 2016, we deployed four camera traps per camera-site, separated into treatment groups of one, two, and four camera traps, in southern Illinois to compare whether estimat...

  4. Biomimetic plasmonic color generated by the single-layer coaxial honeycomb nanostructure arrays

    Science.gov (United States)

    Zhao, Jiancun; Gao, Bo; Li, Haoyong; Yu, Xiaochang; Yang, Xiaoming; Yu, Yiting

    2017-07-01

    We proposed a periodic coaxial honeycomb nanostructure array patterned in a silver film to realize plasmonic structural color, inspired by natural honeybee hives. The spectral characteristics of the structure with variant geometrical parameters are investigated by employing a finite-difference time-domain method, and the corresponding colors are then derived by calculating the XYZ tristimulus values from the transmission spectra. The study demonstrates that the suggested structure, with only a single layer, has high transmission, a narrow full-width at half-maximum, and wide color tunability obtained by changing geometrical parameters. Therefore, the plasmonic colors realized possess high color brightness and saturation, as well as a wide color gamut. In addition, the strong polarization independence makes it more attractive for practical applications. These results indicate that the recommended color-generating plasmonic structure has various potential applications in highly integrated optoelectronic devices, such as color filters and high-definition displays.

  5. Electrolytic coloration and spectral properties of hydroxyl-doped potassium bromide single crystals

    International Nuclear Information System (INIS)

    Qi, Lan; Song, Cuiying; Gu, Hongen

    2013-01-01

    Hydroxyl-doped potassium bromide single crystals are colored electrolytically at various temperatures and voltages by using a pointed cathode and a flat anode. The characteristic OH⁻ spectral band is observed in the absorption spectrum of the uncolored single crystal. The characteristic O⁻, OH⁻, U, V₂, O²⁻−Vₐ⁺, ML1, F and M spectral bands are observed simultaneously in the absorption spectra of colored single crystals. The current–time curve for electrolytic coloration of a hydroxyl-doped potassium bromide single crystal and its relationship with the electrolytic coloration processes are given. Production and conversion of color centers are explained. - Highlights: ► We expanded the traditional electrolysis method. ► Hydroxyl-doped potassium bromide crystals were colored electrolytically for the first time. ► Useful V, F and F-aggregate color centers were produced in colored crystals. ► V color centers were produced directly and F as well as F-aggregate color centers indirectly.

  6. Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera

    International Nuclear Information System (INIS)

    Uesaka, M.; Ueda, T.; Kozawa, T.; Kobayashi, T.

    1998-01-01

    Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera is presented. The subpicosecond electron single bunch of energy 35 MeV was generated by the achromatic magnetic pulse compressor at the S-band linear accelerator of the nuclear engineering research laboratory (NERL), University of Tokyo. The electric charge per bunch is 0.5 nC, and the horizontal and vertical beam sizes are 3.3 and 5.5 mm (full width at half maximum; FWHM), respectively. The pulse shape of the electron single bunch is measured via Cherenkov radiation emitted in air by the femtosecond streak camera. Optical parameters of the measurement system were optimized based on extensive experiments and numerical analysis in order to achieve a subpicosecond time resolution. By using the optimized optical measurement system, the subpicosecond pulse shape, its variation for different rf phases in the accelerating tube, the jitter of the total system and the correlation between measured streak images and calculated longitudinal phase space distributions were precisely evaluated. This measurement system is going to be utilized in several subpicosecond analyses for radiation physics and chemistry. (orig.)

  7. Calibration for 3D imaging with a single-pixel camera

    Science.gov (United States)

    Gribben, Jeremy; Boate, Alan R.; Boukerche, Azzedine

    2017-02-01

    Traditional methods for calibrating structured light 3D imaging systems often suffer from various sources of error. By enabling our projector to both project images as well as capture them using the same optical path, we turn our DMD based projector into a dual-purpose projector and single-pixel camera (SPC). A coarse-to-fine SPC scanning technique based on coded apertures was developed to detect calibration target points with sub-pixel accuracy. Our new calibration approach shows improved depth measurement accuracy when used in structured light 3D imaging by reducing cumulative errors caused by multiple imaging paths.

  8. ROS wrapper for real-time multi-person pose estimation with a single camera

    OpenAIRE

    Arduengo García, Miguel; Jorgensen, Steven Jens; Hambuchen, Kimberly; Sentis, Luis; Moreno-Noguer, Francesc; Alenyà Ribas, Guillem

    2017-01-01

    For robots to be deployable in human-occupied environments, the robots must have human-awareness and generate human-aware behaviors and policies. OpenPose is a library for real-time multi-person keypoint detection. We have considered the implementation of a ROS package that would allow the estimation of 2D pose from simple RGB images, for which we have introduced a ROS wrapper that automatically recovers the pose of several people from a single camera using OpenPose. Additionally, a ROS node ...

  9. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
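    For reference, depth in an indirect time-of-flight scheme follows from the phase shift of the modulated illumination. The snippet below is just the generic iTOF relation d = c·φ/(4π·f_mod), not anything specific to the SPADAS firmware; the 25 MHz example value is taken from the modulation frequency quoted above.

    ```python
    from math import pi

    C = 299_792_458.0  # speed of light in m/s

    def itof_depth(phase_rad, f_mod_hz):
        """Distance from the measured phase shift of a continuous-wave
        modulated source: d = c * phi / (4 * pi * f_mod)."""
        return C * phase_rad / (4.0 * pi * f_mod_hz)

    # Example: a 90-degree phase shift at 25 MHz modulation corresponds to ~1.5 m
    print(itof_depth(pi / 2, 25e6))
    ```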

  10. Dual versus single Scheimpflug camera for anterior segment analysis: Precision and agreement.

    Science.gov (United States)

    Aramberri, Jaime; Araiz, Luis; Garcia, Ane; Illarramendi, Igor; Olmos, Jaione; Oyanarte, Izaskun; Romay, Amaya; Vigara, Itxaso

    2012-11-01

    To assess the repeatability, reproducibility, and agreement of the Pentacam HR single-camera and Galilei G2 dual-camera Scheimpflug devices in anterior segment analysis. Begitek Clínica Oftalmológica, San Sebastián, Spain. Prospective randomized observational study. Healthy young individuals had 3 consecutive tests by 2 examiners. Analyzed parameters were anterior and posterior cornea simulated keratometry (K), K flat, K steep, astigmatism magnitude and axis, J(0) and J(45) vectors, asphericity, total corneal higher-order wavefront aberrations (root mean square [RMS], coma, trefoil, spherical aberration), central cornea and thinnest-point thicknesses, and anterior chamber depth. Repeatability and reproducibility were evaluated by calculating the within-subject standard deviation (S(w)), some derived coefficients, and the intraclass correlation coefficient. Agreement was assessed with the Bland-Altman method. The single-camera device reproducibility (S(w)) was: simulated K, 0.04 diopter (D); J(0), 0.03 D; J(45), 0.04 D; total power, 0.04 D; spherical aberration, 0.02 μm; higher-order aberrations (HOAs), 0.02 μm; central corneal thickness (CCT), 3.39 μm. The dual-camera device S(w) was: simulated K, 0.07 D; J(0), 0.13 D; J(45), 0.04 D; total power, 0.08 D; spherical aberration, 0.02 μm; HOAs, 0.11 μm; CCT, 1.36 μm. Agreement was good for most parameters except total corneal power (mean difference 1.58 ± 0.22 [SD] D) and HOA RMS (mean difference 0.48 ± 0.19 μm) (both P). Aramberri is consultant to Costruzione Strumenti Oftalmici, Firenze, Italy. No other author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Cloud Height Estimation with a Single Digital Camera and Artificial Neural Networks

    Science.gov (United States)

    Carretas, Filipe; Janeiro, Fernando M.

    2014-05-01

    parameter in the output layer is the cloud height estimated by the ANN. The training procedure was performed, using the back-propagation method, in a set of 260 different clouds with heights in the range [1000, 5000] m. The training of the ANN has resulted in a correlation ratio of 0.74. This trained ANN can therefore be used to estimate the cloud height. The previously described system can also measure the wind speed and direction at cloud height by measuring the displacement, in pixels, of a cloud feature between consecutively acquired photos. Also, the geographical north direction can be estimated using this setup through sequential night images with high exposure times. A further advantage of this single camera system is that no camera calibration or synchronization is needed. This significantly reduces the cost and complexity of field deployment of cloud height measurement systems based on digital photography.

  12. Multistabilities and symmetry-broken one-color and two-color states in closely coupled single-mode lasers

    Science.gov (United States)

    Clerkin, Eoin; O'Brien, Stephen; Amann, Andreas

    2014-03-01

    We theoretically investigate the dynamics of two mutually coupled, identical single-mode semiconductor lasers. For small separation and large coupling between the lasers, symmetry-broken one-color states are shown to be stable. In this case the light outputs of the lasers have significantly different intensities while at the same time the lasers are locked to a single common frequency. For intermediate coupling we observe stable symmetry-broken two-color states, where both lasers lase simultaneously at two optical frequencies which are separated by up to 150 GHz. Using a five-dimensional model, we identify the bifurcation structure which is responsible for the appearance of symmetric and symmetry-broken one-color and two-color states. Several of these states give rise to multistabilities and therefore allow for the design of all-optical memory elements on the basis of two coupled single-mode lasers. The switching performance of selected designs of optical memory elements is studied numerically.

  13. Single-camera visual odometry to track a surgical X-ray C-arm base.

    Science.gov (United States)

    Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn

    2017-12-01

    This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
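    The dead-reckoning idea in this record, chaining frame-to-frame homographies estimated from optical flow of a downward-looking camera, can be sketched with standard OpenCV calls. The code below is a minimal illustration under that assumption and is not the authors' implementation; converting the accumulated pixel translation to millimetres still requires a ground-scale calibration.

    ```python
    import cv2
    import numpy as np

    def accumulate_planar_motion(frames):
        """Chain frame-to-frame homographies (estimated from sparse optical flow)
        to dead-reckon the cumulative in-plane image motion of a downward-looking
        camera. Yields the cumulative pixel translation after each frame; the
        camera motion is the inverse of the scene motion seen in the image."""
        pose = np.eye(3)  # cumulative homography from the first frame to the current one
        prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01, minDistance=7)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, p0, None)
            good0, good1 = p0[st.flatten() == 1], p1[st.flatten() == 1]
            H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
            if H is not None:
                pose = H @ pose  # compose the new frame-to-frame motion estimate
            prev = gray
            p0 = cv2.goodFeaturesToTrack(gray, maxCorners=400, qualityLevel=0.01, minDistance=7)
            yield pose[0, 2], pose[1, 2]
    ```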

  14. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
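    One common piece of "simple analytic geometry" for this kind of single-camera ranging is to back-project the clicked pixel through the calibrated camera and intersect the viewing ray with a known plane (for example, the floor on which the target rests). The sketch below shows that construction; it is an assumption about how such an estimate can be formed, not necessarily the authors' exact derivation.

    ```python
    import numpy as np

    def pixel_to_ground_point(u, v, K, R, t, ground_z=0.0):
        """Back-project image pixel (u, v) through a calibrated camera and
        intersect the viewing ray with the world plane z = ground_z.

        K : 3x3 intrinsic matrix; R, t : world-to-camera rotation and translation,
        i.e. x_cam = R @ X_world + t.
        """
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
        ray_world = R.T @ ray_cam                           # same ray expressed in world axes
        cam_center = -R.T @ t                               # camera center in world coordinates
        s = (ground_z - cam_center[2]) / ray_world[2]       # scale factor that reaches the plane
        return cam_center + s * ray_world                   # 3D point on the ground plane
    ```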

  15. Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System

    Directory of Open Access Journals (Sweden)

    Cheng Yang

    2016-01-01

    Full Text Available Laboratory-based nonwearable motion analysis systems have significantly advanced with robust objective measurement of the limb motion, resulting in quantified, standardized, and reliable outcome measures compared with traditional, semisubjective, observational gait analysis. However, the requirement for large laboratory space and operational expertise makes these systems impractical for gait analysis at local clinics and homes. In this paper, we focus on autonomous gait event detection with our bespoke, relatively inexpensive, and portable, single-camera gait kinematics analysis system. Our proposed system includes video acquisition with camera calibration, Kalman filter + Structural-Similarity-based marker tracking, autonomous knee angle calculation, video-frame-identification-based autonomous gait event detection, and result visualization. The only operational effort required is the marker-template selection for tracking initialization, aided by an easy-to-use graphic user interface. The knee angle validation on 10 stroke patients and 5 healthy volunteers against a gold standard optical motion analysis system indicates very good agreement. The autonomous gait event detection shows high detection rates for all gait events. Experimental results demonstrate that the proposed system can automatically measure the knee angle and detect gait events with good accuracy and thus offer an alternative, cost-effective, and convenient solution for clinical gait kinematics analysis.
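    Once the hip, knee, and ankle markers are tracked, the knee-angle computation itself reduces to the angle between the two limb segments at the knee. A minimal sketch of that step (marker names and calibration assumed, not taken from the paper) is:

    ```python
    import numpy as np

    def knee_angle(hip, knee, ankle):
        """Angle in degrees at the knee marker, formed by the hip-knee and
        ankle-knee vectors; each argument is a calibrated 2D image coordinate."""
        v1 = np.asarray(hip, float) - np.asarray(knee, float)
        v2 = np.asarray(ankle, float) - np.asarray(knee, float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    ```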

  16. Towards a better understanding of the overall health impact of the game of squash: automatic and high-resolution motion analysis from a single camera view

    Directory of Open Access Journals (Sweden)

    Brumann Christopher

    2017-09-01

    Full Text Available In this paper, we present a method for locating and tracking players in the game of squash using Gaussian mixture model background subtraction and agglomerative contour clustering from a calibrated single camera view. Furthermore, we describe a method for player re-identification after near-total occlusion, based on stored color and region descriptors. For camera calibration, no additional pattern is needed, as the squash court itself can serve as a 3D calibration object. In order to exclude non-rally situations from motion analysis, we further classify each video frame into game phases using a multilayer perceptron. By considering a player's position as well as the current game phase we are able to visualize player-individual motion patterns expressed as court coverage using pseudo-colored heat-maps. In total, we analyzed two matches (six games, 1:28 h) of high-quality commercial videos used in sports broadcasting and computed high-resolution (1 cm per pixel) heat-maps. 130184 manually labeled frames (game phases and player identification) show an identification correctness of 79.28 ± 8.99% (mean ± std). Game phase classification is correct in 60.87 ± 7.62% and the heat-map visualization correctness is 72.47 ± 7.27%.
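    The detection front end described here, Gaussian mixture model background subtraction followed by grouping of the resulting contours, maps closely onto OpenCV primitives. The following is a minimal sketch of that front end only (thresholds, kernel size, and the shadow cut-off are assumptions), not the full tracking and re-identification pipeline.

    ```python
    import cv2

    def player_blobs(video_path, min_area=800):
        """Yield per-frame bounding boxes of candidate player regions found with
        MOG2 background subtraction; small contours are discarded as noise."""
        cap = cv2.VideoCapture(video_path)
        mog = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = mog.apply(frame)
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            yield [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
        cap.release()
    ```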

  17. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
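    The block-matching step used to speed up the motion estimation is the classical exhaustive search over a small window, minimizing the sum of absolute differences (SAD). Below is a compact reference sketch of that search (block size and search radius are illustrative, not the paper's settings).

    ```python
    import numpy as np

    def block_match(prev, curr, block=16, search=8):
        """For each block of `curr`, find the offset (dx, dy) of its best-matching
        block in `prev` (minimum SAD) within +/- `search` pixels, and return the
        resulting motion-vector field."""
        h, w = curr.shape
        mv = np.zeros((h // block, w // block, 2), int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                ref = curr[by:by + block, bx:bx + block].astype(int)
                best_sad, best_offset = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y0, x0 = by + dy, bx + dx
                        if 0 <= y0 <= h - block and 0 <= x0 <= w - block:
                            cand = prev[y0:y0 + block, x0:x0 + block].astype(int)
                            sad = np.abs(ref - cand).sum()
                            if best_sad is None or sad < best_sad:
                                best_sad, best_offset = sad, (dx, dy)
                mv[by // block, bx // block] = best_offset
        return mv
    ```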

  18. Single-acquisition method for simultaneous determination of extrinsic gamma-camera sensitivity and spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Sarmento, S. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Alves, P.; Torres, M.C. [Departamento de Fisica da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Ponte, F. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)

    2008-01-15

    A new method for measuring simultaneously both the extrinsic sensitivity and spatial resolution of a gamma-camera in a single planar acquisition was implemented. A dual-purpose phantom (SR phantom; sensitivity/resolution) was developed and tested, and the results were compared with other conventional methods used for separate determination of these two important image quality parameters. The SR phantom yielded reproducible and accurate results, allowing an immediate visual inspection of the spatial resolution as well as the quantitative determination of the contrast for six different spatial frequencies. It also proved to be useful in the estimation of the modulation transfer function (MTF) of the image-formation collimator/detector system at six different frequencies and can be used to estimate the spatial resolution as a function of the direction relative to the digital matrix of the detector.

  19. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e. the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  20. Variational Histogram Equalization for Single Color Image Defogging

    Directory of Open Access Journals (Sweden)

    Li Zhou

    2016-01-01

    Full Text Available Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods concentrate solely on recovering an accurate scene transmission, ignoring the distortion they introduce and their high complexity. Different from previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of a fog-free image can be estimated in HSI color space, since the airlight is inferred in advance through a color attenuation prior. To cut down the time consumption, a general variation filter is proposed to obtain a numerical solution from the revised framework. After obtaining the estimated intensity component, it is easy to infer the saturation component from the physical degradation model in the saturation channel. Accordingly, the fog-free image can be restored with the estimated intensity and saturation components. In the end, the proposed method is tested on several foggy images and assessed by two no-reference indexes. Experimental results reveal that our method is relatively superior to three groups of relevant and state-of-the-art defogging methods.
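    As a point of reference for the intensity-channel idea, the crude non-variational baseline, equalizing only the intensity-like channel while leaving hue untouched, can be written in a few lines. This is only a baseline sketch and not the variational framework of the record, which additionally re-estimates saturation from the degradation model.

    ```python
    import cv2

    def equalize_intensity(foggy_bgr):
        """Crude defogging baseline: histogram-equalize only the value channel in
        HSV (a stand-in for the intensity channel), keeping hue unchanged."""
        hsv = cv2.cvtColor(foggy_bgr, cv2.COLOR_BGR2HSV)
        hsv[..., 2] = cv2.equalizeHist(hsv[..., 2])
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    ```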

  1. Single-Camera Trap Survey Designs Miss Detections: Impacts on Estimates of Occupancy and Community Metrics.

    Science.gov (United States)

    Pease, Brent S; Nielsen, Clayton K; Holzmueller, Eric J

    2016-01-01

    The use of camera traps as a tool for studying wildlife populations is commonplace. However, few have considered how the number of detections of wildlife differs depending upon the number of camera traps placed at camera-sites, and how this impacts estimates of occupancy and community composition. During December 2015-February 2016, we deployed four camera traps per camera-site, separated into treatment groups of one, two, and four camera traps, in southern Illinois to compare whether estimates of wildlife community metrics and occupancy probabilities differed among survey methods. The overall number of species detected per camera-site was greatest with the four-camera survey method (P); the four-camera survey method detected 1.25 additional species per camera-site compared with the one-camera survey method, and was the only survey method to completely detect the ground-dwelling silvicolous community. The four-camera survey method recorded individual species at 3.57 additional camera-sites (P = 0.003) and nearly doubled the number of camera-sites where white-tailed deer (Odocoileus virginianus) were detected compared to one- and two-camera survey methods. We also compared occupancy rates estimated by survey methods; as the number of cameras deployed per camera-site increased, occupancy estimates were closer to naïve estimates, detection probabilities increased, and standard errors of detection probabilities decreased. Additionally, each survey method resulted in differing top-ranked, species-specific occupancy models when habitat covariates were included. Underestimates of occurrence and misrepresented community metrics can have significant impacts on species of conservation concern, particularly in areas where habitat manipulation is likely. Having multiple camera traps per site revealed significant shortcomings with the common one-camera trap survey method. While we realize survey design is often constrained logistically, we suggest increasing effort to at least two camera traps

  2. 3D velocity measurement by a single camera using Doppler phase-shifting holography

    International Nuclear Information System (INIS)

    Ninomiya, Nao; Kubo, Yamato; Barada, Daisuke; Kiire, Tomohiro

    2016-01-01

    In order to understand the details of the flow field in micro- and nano-fluidic devices, it is necessary to measure 3D velocities under a microscope. Thus, there is a strong need for the development of a new technique for measuring 3D velocity with a single camera. One solution is the use of holography, but it is well known that the accuracy in the depth direction is very poor for the commonly used in-line holography. Here, Doppler phase-shifting holography is used for the 3D measurement of an object. This method extracts the signal of a fixed frequency caused by the Doppler beat between the object light and the reference light. It can measure the 3D shape precisely. The frequency of the Doppler beat is determined by the velocity difference between the object light and the reference light. This implies that the velocity of an object can be calculated from the Doppler frequency. In this study, a Japanese 5 yen coin was traversed at a constant speed and its hologram was recorded by a high-speed camera. By extracting only the first-order diffraction signal at the Doppler frequency, a precise measurement of the shape and the position of the 5 yen coin has been achieved. At the same time, the longitudinal velocity of the coin can be measured from the Doppler frequency. Furthermore, the lateral velocities are obtained by the particle image velocimetry (PIV) method. The 5 yen coin was traversed at different angles and its shape and 3D velocities were measured accurately. This method can be applied to particle flows in micro- or nano-devices, and 3D velocities will be measured under microscopes. (paper)

  3. Derivation of the horizontal wind field in the polar mesopause region by using successive images of noctilucent clouds observed by a color digital camera in Iceland

    Science.gov (United States)

    Suzuki, H.; Yamashita, R.

    2017-12-01

    It is important to quantify the amplitude of turbulent motion in order to understand the energy and momentum budgets and the distribution of minor constituents in the upper mesosphere. In particular, determining the eddy diffusion coefficient of minor constituents that are produced locally and impulsively by energetic particle precipitation in the polar mesopause is one of the most important subjects in upper atmospheric science. One of the most direct ways to determine the amplitude of the eddy motion is to measure the wind field in both the spatial and temporal domains. However, observation techniques satisfying such requirements are limited in this region. In this study, the horizontal wind field in the polar mesopause region is derived by tracking the motion of noctilucent clouds (NLCs). NLCs are the highest clouds on Earth and appear in the mesopause region during the summer season in both polar regions. Since the vertical extent of an NLC is sufficiently thin (typically within several hundred meters), the apparent horizontal motion observed from the ground can be regarded as the result of transport by the horizontal wind at a single altitude. In this presentation, initial results of wind field derivation by tracking the motion of noctilucent clouds observed by a ground-based color digital camera in Iceland are reported. The procedure for wind field estimation consists of three steps: (1) project the raw images onto a geographical map; (2) enhance NLC structures using an FFT-based method; (3) determine horizontal velocity vectors by applying a template matching method to two sequential images. In this talk, a result of the wind derivation using successive NLC images with a 3-minute interval and 1.5 h duration, observed on the night of Aug 1st, 2013, will be reported as a case study.
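    Step (3) of the procedure above, template matching between two map-projected images, translates directly into a pixel displacement that becomes a wind vector once the map scale and the image interval are known. The sketch below illustrates this for a single NLC feature; the ground-sampling-distance value is a placeholder assumption, while the 180 s interval corresponds to the 3-minute cadence quoted in the record.

    ```python
    import cv2
    import numpy as np

    def wind_vector(img0, img1, box, gsd_km=1.0, dt_s=180.0):
        """Horizontal wind (m/s, along the map axes) from the displacement of one
        NLC feature between two map-projected grayscale images of the same size.

        box    : (x, y, w, h) template region around the feature in img0
        gsd_km : ground sampling distance of the projected map, km per pixel
        dt_s   : time between the two images in seconds
        """
        x, y, w, h = box
        template = img0[y:y + h, x:x + w]
        res = cv2.matchTemplate(img1, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)          # best-match top-left corner in img1
        dx, dy = max_loc[0] - x, max_loc[1] - y        # displacement in pixels
        return np.array([dx, dy]) * gsd_km * 1000.0 / dt_s
    ```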

  4. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.

  5. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).

    Science.gov (United States)

    Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong

    2016-02-06

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision setups under different circumstances.

  6. The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory.

    Science.gov (United States)

    Grubert, Anna; Carlisle, Nancy B; Eimer, Martin

    2016-12-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.

  7. Observation of X-ray shadings in synchrotron radiation-total reflection X-ray fluorescence using a color X-ray camera

    Energy Technology Data Exchange (ETDEWEB)

    Fittschen, Ursula Elisabeth Adriane, E-mail: ursula.fittschen@chemie.uni-hamburg.de [Institut für Anorganische und Angewandte Chemie, Universität Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Menzel, Magnus [Institut für Anorganische und Angewandte Chemie, Universität Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Scharf, Oliver [IfG Institute for Scientific Instruments GmbH, Berlin (Germany); Radtke, Martin; Reinholz, Uwe; Buzanich, Günther [BAM Federal Institute of Materials Research and Testing, Berlin (Germany); Lopez, Velma M.; McIntosh, Kathryn [Los Alamos National Laboratory, Los Alamos, NM (United States); Streli, Christina [Atominstitut, TU Wien, Vienna (Austria); Havrilla, George Joseph [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2014-09-01

    Absorption effects and the impact of specimen shape on TXRF analysis have been discussed intensively. Model calculations indicated that ring-shaped specimens should give better results, in terms of higher counts-per-mass signals, than filled rectangle- or circle-shaped specimens. One major reason for the difference in signal is shading effects. Full-field micro-XRF with a color X-ray camera (CXC) was used to investigate the shading that occurs when working with small angles of excitation, as in TXRF. The device allows monitoring of the illuminated parts of the sample and the shaded parts at the same time. It is expected that sample material hit first by the primary beam shades material behind it. Using the CXC, shading could be directly visualized for the high-concentration specimens. In order to compare the experimental results with calculations of the shading effect, the generation of controlled specimens is crucial. This was achieved by “drop on demand” technology, which allows the generation of uniform, microscopic deposits of elements. The experimentally measured shadings match well with those expected from calculation. - Highlights: • Use of a color X-ray camera and drop-on-demand printing to diagnose X-ray shading • Specimens were obtained uniform and well-defined in shape and concentration by printing. • Direct visualization and determination of shading in such specimens using the camera.

  8. A natural-color mapping for single-band night-time image based on FPGA

    Science.gov (United States)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    An FPGA-based natural-color mapping method for single-band night-time images can transfer the color of a reference image to the single-band night-time image, producing results that are consistent with human visual habits and can help observers identify targets. This paper introduces the processing flow of the FPGA-based natural-color mapping algorithm. Firstly, the image is transformed by histogram equalization, and the intensity features and standard-deviation features of the reference image are stored in SRAM. Then, the intensity features and standard-deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features in the luminance channel.
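    A software analogue of this feature-matching color transfer is to build a lookup table from reference-image luminance to reference-image color and apply it to the (equalized) night-time intensities. The sketch below is that simplified, global lookup-table version, offered only as an illustration; the record's hardware pipeline matches intensity and standard-deviation features rather than a plain per-level average.

    ```python
    import numpy as np

    def colorize_by_luminance_lut(night_gray, ref_rgb):
        """Assign natural colors to a single-band (uint8) night-time image by
        mapping each intensity level to the average color of reference pixels
        that share the same luminance."""
        ref_lum = ref_rgb.mean(axis=2).astype(np.uint8)
        lut = np.zeros((256, 3), np.float32)
        for level in range(256):
            sel = ref_lum == level
            # Fall back to a neutral gray when the reference has no pixel at this level
            lut[level] = ref_rgb[sel].mean(axis=0) if sel.any() else level
        return lut[night_gray].astype(np.uint8)
    ```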

  9. Multi-Color Single Particle Tracking with Quantum Dots

    DEFF Research Database (Denmark)

    Christensen, Eva Arnspang; Brewer, J. R.; Lagerholm, B. C.

    2012-01-01

    Quantum dots (QDs) have long promised to revolutionize fluorescence detection to include even applications requiring simultaneous multi-species detection at single molecule sensitivity. Despite the early promise, the unique optical properties of QDs have not yet been fully exploited in e. g...

  10. Characterization of Necking Phenomena in High-Speed Experiments by Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Hild François

    2010-01-01

    Full Text Available The purpose of the experiment described herein is the study of material deformation (here a cylinder) induced by explosives. During its expansion, the cylinder (initially 3 mm thick) thins until fracture appears. Some tens of microseconds before destruction, strain localizations occur and induce mechanical necking. To characterize the time of the first localizations, 25 stereoscopic acquisitions at about 500,000 frames per second are obtained with a single ultra-high-speed camera. The 3D reconstruction from the stereoscopic movies is described. A special calibration procedure is followed, namely, the calibration target is imaged during the experiment itself. To characterize the performance of the present procedure, resolution and optical distortions are estimated. The principle of stereoscopic reconstruction of an object subjected to a high-speed experiment is then developed. This reconstruction is achieved by using a global image correlation code that exploits random markings on the object's outer surface. The spatial resolution of the estimated surface is evaluated thanks to a realistic image pair synthesis. Last, the time evolution of surface roughness is estimated. It gives access to the onset of necking.

  11. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras found in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method therefore enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed algorithm allows grey-scale and colour images as watermark patterns. Besides copyright protection, it is suitable for advertisement purposes such as digital libraries and e-commerce.
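
    A rough sketch of embedding in the CFA domain is given below: a grey-scale watermark is alpha-blended directly into the mosaicked raw values. The constant strength alpha stands in for the paper's HVS-adaptive weighting, and the function name and defaults are illustrative only.

```python
import numpy as np

def embed_visible_watermark_bayer(raw_bayer, watermark, alpha=0.15):
    """Alpha-blend a grey-scale watermark into a Bayer CFA raw image.

    raw_bayer : 2-D array of mosaicked sensor values (any RGGB-like layout).
    watermark : 2-D array of the same shape with values in [0, 1]; 0 = no mark.
    alpha     : constant embedding strength -- a stand-in for the HVS-adaptive
                weighting of the record above, which is not reproduced here.
    """
    raw = raw_bayer.astype(np.float64)
    peak = raw.max() if raw.max() > 0 else 1.0
    # Blend each raw sample towards the sensor peak where the watermark is set.
    marked = (1.0 - alpha * watermark) * raw + alpha * watermark * peak
    return np.clip(marked, 0, peak).astype(raw_bayer.dtype)
```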

  12. Clustering method for counting passengers getting in a bus with single camera

    Science.gov (United States)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a public bus. The unique characteristics of the proposed system are as follows. First, a novel passenger-counting framework based on feature-point tracking and online clustering, which performs much better than background-modeling- and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm that projects the high-dimensional feature-point trajectories into a 2D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real public bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
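
    The appearance/disappearance-time clustering idea can be sketched offline in a few lines; the paper clusters online, so DBSCAN and the parameter values below are illustrative substitutes rather than the published algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_passengers(trajectories, eps_frames=10.0, min_points=5):
    """Count passengers from feature-point trajectories.

    Each trajectory (a sequence of frame indices) is reduced to its
    (appearance frame, disappearance frame) pair; points belonging to the
    same person tend to appear and vanish together, so clusters in this
    2-D space approximate individual passengers.  Offline simplification
    of the online clustering in the record above; eps_frames and
    min_points are illustrative values.
    """
    features = np.array([(t[0], t[-1]) for t in trajectories], dtype=float)
    labels = DBSCAN(eps=eps_frames, min_samples=min_points).fit_predict(features)
    # Number of clusters, ignoring noise points labelled -1.
    return len(set(labels) - {-1})
```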

  13. Capturing complex human behaviors in representative sports contexts with a single camera.

    Science.gov (United States)

    Duarte, Ricardo; Araújo, Duarte; Fernandes, Orlando; Fonseca, Cristina; Correia, Vanda; Gazimba, Vítor; Travassos, Bruno; Esteves, Pedro; Vilar, Luís; Lopes, José

    2010-01-01

    In recent years, several motion analysis methods have been developed without considering representative contexts for sports performance. The purpose of this paper is to explain and underscore a straightforward method to measure human behavior in such contexts. Procedures combining manual video tracking (with the TACTO device) and two-dimensional reconstruction (through direct linear transformation) using a single camera were used in order to capture the kinematic data required to compute collective variable(s) and control parameter(s). These procedures were applied to a 1 vs. 1 association football task as an illustrative subphase of team sports and are presented in a tutorial fashion. Preliminary analysis of distance and velocity data identified a collective variable (the difference between the distances of the attacker and the defender to a target defensive area) and two nested control parameters (interpersonal distance and relative velocity). The findings demonstrate that the complementary use of the TACTO software and direct linear transformation permits capturing and reconstructing complex human actions in their context in a low-dimensional space (information reduction).

  14. Single-molecule three-color FRET with both negligible spectral overlap and long observation time.

    Directory of Open Access Journals (Sweden)

    Sanghwa Lee

    Full Text Available Full understanding of complex biological interactions frequently requires multi-color detection capability when performing single-molecule fluorescence resonance energy transfer (FRET) experiments. Existing single-molecule three-color FRET techniques, however, suffer from severe photobleaching of Alexa 488, or its alternative dyes, and have seen only limited use in kinetics studies. In this work, we developed a single-molecule three-color FRET technique based on the Cy3-Cy5-Cy7 dye trio, thus providing an enhanced observation time and improved data quality. Because the absorption spectra of the three fluorophores are well separated, real-time monitoring of the three FRET efficiencies was possible by incorporating the alternating laser excitation (ALEX) technique both in confocal microscopy and in total-internal-reflection fluorescence (TIRF) microscopy.

  15. Single-shot color fringe projection for three-dimensional shape measurement of objects with discontinuities.

    Science.gov (United States)

    Dai, Meiling; Yang, Fujun; He, Xiaoyuan

    2012-04-20

    A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color fringe pattern encoding a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to remove the effect of the variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used for correcting the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
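
    A minimal NumPy sketch of the reflectivity normalization and arcsine phase decoding described above follows; the paper's offset handling, blue-channel binarization for discontinuities, and projector gamma compensation are omitted, and the rescaling step is an assumption.

```python
import numpy as np

def arcsine_phase(color_fringe):
    """Recover a wrapped phase map from a single color fringe image.

    color_fringe : HxWx3 array whose red channel carries the sinusoidal
    fringe and whose blue channel carries uniform illumination.  Dividing
    red by blue removes the spatially varying surface reflectivity; the
    arcsine then maps the normalized fringe back to phase.
    """
    red = color_fringe[..., 0].astype(np.float64)
    blue = color_fringe[..., 2].astype(np.float64)

    normalized = red / (blue + 1e-12)   # reflectivity-compensated fringe
    # Rescale to [-1, 1] before the arcsine (illustrative normalization).
    normalized = 2.0 * (normalized - normalized.min()) / (np.ptp(normalized) + 1e-12) - 1.0
    return np.arcsin(np.clip(normalized, -1.0, 1.0))
```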

  16. Enhancing the brightness of electrically driven single-photon sources using color centers in silicon carbide

    Science.gov (United States)

    Khramtsov, Igor A.; Vyshnevyy, Andrey A.; Fedyanin, Dmitry Yu.

    2018-03-01

    Practical applications of quantum information technologies exploiting the quantum nature of light require efficient and bright true single-photon sources which operate under ambient conditions. Currently, point defects in the crystal lattice of diamond known as color centers have taken the lead in the race for the most promising quantum system for practical non-classical light sources. This work is focused on a different quantum optoelectronic material, namely a color center in silicon carbide, and reveals the physics behind the process of single-photon emission from color centers in SiC under electrical pumping. We show that color centers in silicon carbide can be far superior to any other quantum light emitter under electrical control at room temperature. Using a comprehensive theoretical approach and rigorous numerical simulations, we demonstrate that at room temperature, the photon emission rate from a p-i-n silicon carbide single-photon emitting diode can exceed 5 Gcounts/s, which is higher than what can be achieved with electrically driven color centers in diamond or epitaxial quantum dots. These findings lay the foundation for the development of practical photonic quantum devices which can be produced in a well-developed CMOS compatible process flow.

  17. Development of a pixelated GSO gamma camera system with tungsten parallel hole collimator for single photon imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, S.; Watabe, H.; Kanai, Y.; Shimosegawa, E.; Hatazawa, J. [Kobe City College of Technology, 8-3 Gakuen-Higashi-machi, Nishi-ku, Kobe 651-2194 (Japan); Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan); Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan); Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan) and Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan)

    2012-02-15

    Purpose: In small animal imaging using a single photon emitting radionuclide, a high resolution gamma camera is required. Recently, position sensitive photomultiplier tubes (PSPMTs) with high quantum efficiency have been developed. By combining these with nonhygroscopic scintillators with a relatively low light output, a high resolution gamma camera can become useful for low energy gamma photons. Therefore, the authors developed a gamma camera by combining a pixelated Ce-doped Gd₂SiO₅ (GSO) block with a high quantum efficiency PSPMT. Methods: GSO was selected for the scintillator, because it is not hygroscopic and does not contain any natural radioactivity. An array of 1.9 mm × 1.9 mm × 7 mm individual GSO crystal elements was constructed. These GSOs were combined with a 0.1-mm thick reflector to form a 22 × 22 matrix and optically coupled to a high quantum efficiency PSPMT (H8500C-100 MOD8). The GSO gamma camera was encased in a tungsten gamma-ray shield with a tungsten pixelated parallel hole collimator, and the basic performance was measured for Co-57 gamma photons (122 keV). Results: In a two-dimensional position histogram, all pixels were clearly resolved. The energy resolution was ~15% FWHM. With the 20-mm thick tungsten pixelated collimator, the spatial resolution was 4.4-mm FWHM at 40 mm from the collimator surface, and the sensitivity was ~0.05%. Phantom and small animal images were successfully obtained with our developed gamma camera. Conclusions: These results confirmed that the developed pixelated GSO gamma camera has potential as an effective instrument for low energy gamma photon imaging.

  18. Benchmarking of depth of field for large out-of-plane deformations with single camera digital image correlation

    Science.gov (United States)

    Van Mieghem, Bart; Ivens, Jan; Van Bael, Albert

    2017-04-01

    A problem that arises when performing stereo digital image correlation in applications with large out-of-plane displacements is that the images may become unfocused. This unfocusing can result in correlation instabilities or inaccuracies. When performing DIC measurements in which large out-of-plane displacements are expected, researchers either rely on their experience or use the equations from photography to estimate the parameters affecting the depth of field (DOF) of the camera. A limitation of the latter approach is that the definition of sharpness is a human-defined parameter and does not reflect the performance of the digital image correlation system. To get a more representative DOF value for DIC applications, a standardised testing method is presented here, making use of real camera and lens combinations as well as actual image correlation results. The method is based on experimental single-camera DIC measurements of a backwards-moving target. Correlation results from focused and unfocused images are compared, and a threshold value defines whether or not the correlation results are acceptable even if the images are (slightly) unfocused. By following the proposed approach, the complete DOF of a specific camera/lens combination as a function of the aperture setting and the distance from the camera to the target can be defined. The comparison between the theoretical and the experimental DOF results shows that the achievable DOF for DIC applications is larger than what theoretical calculations predict. Practically, this means that the cameras can be positioned closer to the target than expected from the theoretical approach. This leads to a gain in resolution and measurement accuracy.
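
    For reference, the photographic depth-of-field equations that the experimental results are benchmarked against can be written as a short routine; the default circle of confusion below is only an illustrative assumption, which is precisely the human-defined sharpness criterion the record above questions for DIC use.

```python
def photographic_dof(focal_length_mm, f_number, subject_distance_mm,
                     circle_of_confusion_mm=0.02):
    """Textbook depth-of-field estimate (near limit, far limit, DOF), in mm.

    Uses the standard photography relations via the hyperfocal distance;
    the default circle of confusion is an illustrative assumption.
    """
    f, N, s, c = focal_length_mm, f_number, subject_distance_mm, circle_of_confusion_mm

    hyperfocal = f * f / (N * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near, far, far - near

# Example: 50 mm lens at f/8, target 1 m away.
print(photographic_dof(50.0, 8.0, 1000.0))
```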

  19. An optical fiber coupled streak camera system for multichannel recording of simultaneous emission from a single plasma producing event

    International Nuclear Information System (INIS)

    Tan, T.H.; Williams, A.H.

    1983-01-01

    A streak camera system capable of multichannel sub-nanosecond recording of simultaneous emissions (photons and particles) from a single plasma interaction event (laser or particle beam) has been developed. In this system ultra-fast quenched (benzophenone) plastic scintillator detectors are coupled via optical fibers to a visible streak camera. The use of optical fibers presents two attractive features: miniaturization of the detectors permits improved flexibility in placing detectors at the most desirable locations and in greater number than can normally be accommodated; and the detectors are insensitive to electromagnetic noise generated both from the interacting plasmas and from the high voltage components associated with the laser or particle beam system. The fibers can be directly routed through vacuum-tight couplers at the target chamber wall and brought into direct contact with the photocathode of the camera in most applications. In fusion experiments, however, the fiber fluorescence and Cerenkov radiation due to the copious emissions of energetic electrons and x-rays can present serious problems in the use of long fibers. Here, short fibers can be used and the visible streak camera is then focused through a glass port of the target chamber onto the open ends of these optical fibers.

  20. Camera Embedded Single Lumen Tube as a Rescue Device for Airway Handling during Lung Separation

    DEFF Research Database (Denmark)

    Højberg Holm, Jimmy; Andersen, Claus

    2016-01-01

    endotracheal tube (SLT, ID 7.0 mm, OD 10.0 mm) with an embedded camera (VivaSight-SL™, ET-View Ltd, Misgav, Israel) (Figure 1), and with this secured in the trachea, lung isolation was obtained with the use of a bronchial blocker (VivaSight-EB, 9 Fr) on the left side, resulting in total lung collapse, allowing...

  1. Multi-illumination Gabor holography recorded in a single camera snap-shot for high-resolution phase retrieval in digital in-line holographic microscopy

    Science.gov (United States)

    Sanz, Martin; Picazo-Bueno, Jose A.; Garcia, Javier; Micó, Vicente

    2015-05-01

    In this contribution we introduce MISHELF microscopy, a new concept and design of a lensless holographic microscope based on wavelength multiplexing, single-hologram acquisition and digital image processing. The technique, whose name comes from Multi-Illumination Single-Holographic-Exposure Lensless Fresnel microscopy, is based on the simultaneous illumination and recording of three diffraction patterns in the Fresnel domain. In combination with a novel and fast iterative phase retrieval algorithm, MISHELF microscopy is capable of high-resolution (micron range), phase-retrieved (twin-image elimination) biological imaging of dynamic events (video-rate recording speed), since it avoids the time multiplexing needed for in-line hologram sequence recording when using conventional phase-shifting or phase retrieval algorithms. MISHELF microscopy is validated using two different experimental layouts: one using RGB illumination and detection schemes and another using IRRB illumination while keeping the RGB color camera as the detection device. Preliminary experimental results are provided for both experimental layouts using a synthetic object (USAF resolution test target).

  2. Versatile single-molecule multi-color excitation and detection fluorescence setup for studying biomolecular dynamics

    KAUST Repository

    Sobhy, M. A.

    2011-11-07

    Single-molecule fluorescence imaging is at the forefront of tools applied to study biomolecular dynamics both in vitro and in vivo. The ability of the single-molecule fluorescence microscope to conduct simultaneous multi-color excitation and detection is a key experimental feature that is under continuous development. In this paper, we describe in detail the design and construction of a sophisticated and versatile multi-color excitation and emission fluorescence instrument for studying biomolecular dynamics at the single-molecule level. The setup is novel, economical and compact: two inverted microscopes share a laser combiner module with six individual laser sources that extend from 400 to 640 nm. Nonetheless, each microscope can independently and flexibly select the combinations, sequences, and intensities of the excitation wavelengths. This high flexibility is achieved by replacing conventional mechanical shutters with an acousto-optic tunable filter (AOTF). The use of the AOTF provides a major advancement by controlling the intensities, duration, and selection of up to eight different wavelengths with microsecond alternation time in a manner that is transparent and easy for the end user. To our knowledge this is the first time an AOTF has been applied to wide-field total internal reflection fluorescence (TIRF) microscopy, even though it has been commonly used in multi-wavelength confocal microscopy. The laser outputs from the combiner module are coupled to the microscopes by two sets of four single-mode optical fibers in order to allow the TIRF angle to be optimized for each wavelength independently. The emission is split into two or four spectral channels to allow the simultaneous detection of up to four different fluorophores from a wide selection and using many possible excitation and photoactivation schemes. We demonstrate the performance of this new setup by conducting two-color alternating excitation single-molecule fluorescence resonance energy

  3. Coloration of chromium-doped yttrium aluminum garnet single-crystal fibers using a divalent codopant

    International Nuclear Information System (INIS)

    Tissue, B.M.; Jia, W.; Lu, L.; Yen, W.M.

    1991-01-01

    We have grown single-crystal fibers of Cr:YAG and Cr,Ca:YAG under oxidizing and reducing conditions by the laser-heated-pedestal-growth method. The Cr:YAG crystals were light green due to Cr³⁺ in octahedral sites, while the Cr,Ca:YAG crystals were brown. The presence of the divalent codopant was the dominant factor determining the coloration in these single-crystal fibers, while the oxidizing power of the growth atmosphere had little effect on the coloration. The Cr,Ca:YAG had a broad absorption band centered at 1.03 μm and fluoresced from 1.1 to 1.7 μm, with a room-temperature lifetime of 3.5 μs. The presence of both chromium and a divalent codopant was necessary to create the optically active center which produces the near-infrared emission. Doping with only Ca²⁺ created a different coloration, with absorption in the blue and ultraviolet. The coloration in the Cr,Ca:YAG is attributed to Cr⁴⁺ and is produced in as-grown crystals without irradiation or annealing, as has been necessary in previous work.

  4. Understanding single-color multiphoton ionization spectra by pump-probe technique

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, K.; Manohar, K.G.; Bajaj, P.N.; Suri, B.M.; Talukdar, R.K.; Chakraborti, P.K.; Rao, P.R.K.

    1988-06-01

    A simple but elegant spectroscopic technique using two narrow-band dye lasers has been demonstrated for analyzing single-color resonant multi-photon-ionization spectra of atoms. This technique provides a direct identification of the starting level of the multi-photon-ionization pathway. This method can also be used to determine intermediate levels, which play an important role in the ionization process. Some typical results for uranium are presented.

  5. Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A; Schloss, Karen B

    2016-01-01

    Prior research has shown that non-synesthetes' color associations to classical orchestral music are strongly mediated by emotion. The present study examines similar cross-modal music-to-color associations for much better controlled musical stimuli: 64 single-line piano melodies that were generated from four basic melodies by Mozart, whose global musical parameters were manipulated in tempo (slow/fast), note density (sparse/dense), mode (major/minor) and pitch height (low/high). Participants first chose the three colors (from 37) that they judged to be most consistent with (and, later, the three that were most inconsistent with) the music they were hearing. They later rated each melody and each color for the strength of its association along four emotional dimensions: happy/sad, agitated/calm, angry/not-angry and strong/weak. The cross-modal choices showed that faster music in the major mode was associated with lighter, more saturated, yellower (warmer) colors than slower music in the minor mode. These results replicate and extend those of Palmer et al. (2013, Proc. Natl Acad. Sci. 110, 8836-8841) with more precisely controlled musical stimuli. Further results replicated strong evidence for emotional mediation of these cross-modal associations, in that the emotional ratings of the melodies were very highly correlated with the emotional associations of the colors chosen as going best/worst with the melodies (r = 0.92, 0.85, 0.82 and 0.70 for happy/sad, strong/weak, angry/not-angry and agitated/calm, respectively). The results are discussed in terms of common emotional associations forming a cross-modal bridge between highly disparate sensory inputs.

  6. ⁹⁹ᵐTc-DTPA scintillation-camera renography: a new method for estimation of single-kidney function

    International Nuclear Information System (INIS)

    Nielsen, S.P.; Moeller, M.L.; Trap-Jensen, J.

    1977-01-01

    A new method of combined serial scintigraphy and renography, using a scintillation camera and ⁹⁹ᵐTc-DTPA, is evaluated. Renographic curves, corresponding to light-pen "areas of interest" over the renal parenchyma, were processed. "Blood-background" curves were recorded from an external detector over the temporal region of the head and also from an "area of interest" corresponding to the aorta and inferior vena cava. The uptake phase of the renogram was always linear. The sum of the slopes of the uptake phase of both kidneys correlated well with the measured glomerular filtration rate in 25 patients with renal insufficiency of various degrees. Single-kidney function estimated from the slopes correlated reasonably well with single-kidney function estimated from ¹³¹I-Hippuran renography with external detectors. The method described minimizes errors in the estimation of single-kidney function, and both anatomic and functional information is obtained.

  7. Optical low-cost and portable arrangement for full field 3D displacement measurement using a single camera

    Science.gov (United States)

    López-Alba, E.; Felipe-Sesé, L.; Schmeer, S.; Díaz, F. A.

    2016-11-01

    In the current paper, a low-cost optical system for 3D displacement measurement based on a single camera and 3D digital image correlation is presented. The conventional 3D-DIC set-up, based on two synchronized cameras, is compared with a proposed portable pseudo-stereo system that integrates a mirror arrangement into a single device, yielding a novel, handy and flexible instrument that can be used in many scenarios with a straightforward set-up. The proposed optical system splits the image captured by the camera into two stereo views of the object. In order to validate this new approach and quantify its uncertainty compared to traditional 3D-DIC systems, rigid-body in-plane and out-of-plane displacement experiments have been performed and analyzed. The differences between both systems have been studied employing an image decomposition technique which performs a full-image comparison. Results over the whole field of view are therefore compared with those obtained using a stereoscopic 3D-DIC system, showing that the accuracy of the proposed device is not affected by any distortion or aberration produced by the mirrors. Finally, the adaptability of the proposed system and its accuracy have been tested by performing quasi-static and dynamic experiments using a silicon specimen under high deformation. Results have been compared and validated with those obtained from a conventional stereoscopic system, showing an excellent level of agreement.

  8. Two-color monochromatic x-ray imaging with a single short-pulse laser

    Science.gov (United States)

    Sawada, H.; Daykin, T.; McLean, H. S.; Chen, H.; Patel, P. K.; Ping, Y.; Pérez, F.

    2017-06-01

    Simultaneous monochromatic crystal imaging at 4.5 and 8.0 keV with x-rays produced by a single short-pulse laser is presented. A layered target consisting of thin foils of titanium and copper glued together is irradiated by the 50 TW Leopard short-pulse laser housed at the Nevada Terawatt Facility. Laser-accelerated MeV fast electrons transmitting through the target induce Kα fluorescence from both foils. Two energy-selective curved crystals in the imaging diagnostic form separate monochromatic images on a single imaging detector. The experiment demonstrates simultaneous two-color monochromatic imaging of the foils on a single detector as well as Kα x-ray production at two different photon energies with a single laser beam. Application of the diagnostic technique to x-ray radiography of a high density plasma is also presented.

  9. Using Color, Texture and Object-Based Image Analysis of Multi-Temporal Camera Data to Monitor Soil Aggregate Breakdown

    Directory of Open Access Journals (Sweden)

    Irena Ymeti

    2017-05-01

    Full Text Available Remote sensing has shown its potential to assess soil properties and is a fast and non-destructive method for monitoring soil surface changes. In this paper, we monitor soil aggregate breakdown under natural conditions. From November 2014 to February 2015, images and weather data were collected on a daily basis from five soils susceptible to detachment (Silty Loam with various organic matter content, Loam and Sandy Loam). Three techniques that vary in image processing complexity and user interaction were tested for the ability of monitoring aggregate breakdown. Considering that the soil surface roughness causes shadow cast, the blue/red band ratio is utilized to observe the soil aggregate changes. Dealing with images with high spatial resolution, image texture entropy, which reflects the process of soil aggregate breakdown, is used. In addition, the Huang thresholding technique, which allows estimation of the image area occupied by soil aggregates, is performed. Our results show that all three techniques indicate soil aggregate breakdown over time. The shadow ratio shows a gradual change over time with no details related to weather conditions. Both the entropy and the Huang thresholding technique show variations of soil aggregate breakdown responding to weather conditions. Using data obtained with a regular camera, we found that freezing–thawing cycles are the cause of soil aggregate breakdown.

  10. Using Color, Texture and Object-Based Image Analysis of Multi-Temporal Camera Data to Monitor Soil Aggregate Breakdown.

    Science.gov (United States)

    Ymeti, Irena; van der Werff, Harald; Shrestha, Dhruba Pikha; Jetten, Victor G; Lievens, Caroline; van der Meer, Freek

    2017-05-30

    Remote sensing has shown its potential to assess soil properties and is a fast and non-destructive method for monitoring soil surface changes. In this paper, we monitor soil aggregate breakdown under natural conditions. From November 2014 to February 2015, images and weather data were collected on a daily basis from five soils susceptible to detachment (Silty Loam with various organic matter content, Loam and Sandy Loam). Three techniques that vary in image processing complexity and user interaction were tested for the ability of monitoring aggregate breakdown. Considering that the soil surface roughness causes shadow cast, the blue/red band ratio is utilized to observe the soil aggregate changes. Dealing with images with high spatial resolution, image texture entropy, which reflects the process of soil aggregate breakdown, is used. In addition, the Huang thresholding technique, which allows estimation of the image area occupied by soil aggregate, is performed. Our results show that all three techniques indicate soil aggregate breakdown over time. The shadow ratio shows a gradual change over time with no details related to weather conditions. Both the entropy and the Huang thresholding technique show variations of soil aggregate breakdown responding to weather conditions. Using data obtained with a regular camera, we found that freezing-thawing cycles are the cause of soil aggregate breakdown.
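
    Two of the three indicators described in the records above, the blue/red shadow ratio and a histogram-based texture entropy, can be sketched with NumPy as follows; the Huang-threshold area fraction is omitted, and the entropy here is a simple stand-in for the texture measure actually used.

```python
import numpy as np

def shadow_ratio(rgb):
    """Mean blue/red band ratio; shadows cast by rough aggregates change
    this ratio as the surface evolves."""
    r = rgb[..., 0].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return float(np.mean(b / (r + 1e-12)))

def texture_entropy(gray, bins=256):
    """Shannon entropy of the grey-level histogram, a simple stand-in for
    the image texture entropy mentioned above."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```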

  11. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  12. Design and first tests of miniature K010X soft x-ray streak and single-frame camera

    Science.gov (United States)

    Lebedev, V. B.; Feldman, G. G.; Myasnikov, A. F.; Chernyshev, N. V.; Shubski, I. I.; Liu, Jingru; Wang, Lijun; Zhang, Yongsheng; Zhao, Xueqin; Zheng, Guoxin; Xiao, Weiwei

    2007-01-01

    A description of the new K010X soft X-ray camera and the first results of its tests carried out in Russia and China are presented. In streak mode, the full sweep time for a 2 cm sweep length on the image converter screen is from 2 ns up to 600 microseconds. In single-frame mode, the corresponding frame duration is from ~10 ns up to ~660 microseconds. Spatial resolution in single-frame mode was not less than 5 l.p./mm for soft X-ray radiation and not less than 10 l.p./mm for UV radiation. Spatial resolution in streak mode for soft X-ray radiation was from 5 up to 10 l.p./mm, and for UV radiation it was not less than 10 l.p./mm on all the sweep ranges, except for the range of 1 ns/cm where it was 5 l.p./mm. The limiting temporal resolution for UV radiation was near 10 ps and the dynamic range was 200 when the full sweep time was 60 ns. The camera has compact dimensions of 430×115×200 mm, a weight of 5.0 kg and a power consumption of 10 VA.

  13. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    International Nuclear Information System (INIS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-01-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data needs to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from the 65k pixel camera to the personal computer.

  14. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    Science.gov (United States)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the number of frames per second. The data needs to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from the 65k pixel camera to the personal computer.

  15. Two-Color Single-Photon Photoinitiation and Photoinhibition for Subdiffraction Photolithography

    Science.gov (United States)

    Scott, Timothy F.; Kowalski, Benjamin A.; Sullivan, Amy C.; Bowman, Christopher N.; McLeod, Robert R.

    2009-05-01

    Controlling and reducing the developed region initiated by photoexposure is one of the fundamental goals of optical lithography. Here, we demonstrate a two-color irradiation scheme whereby initiating species are generated by single-photon absorption at one wavelength while inhibiting species are generated by single-photon absorption at a second, independent wavelength. Co-irradiation at the second wavelength thus reduces the polymerization rate, delaying gelation of the material and facilitating enhanced spatial control over the polymerization. Appropriate overlapping of the two beams produces structures with both feature sizes and monomer conversions otherwise unobtainable with use of single- or two-photon absorption photopolymerization. Additionally, the generated inhibiting species rapidly recombine when irradiation with the second wavelength ceases, allowing for fast sequential exposures not limited by memory effects in the material and thus enabling fabrication of complex two- or three-dimensional structures.

  16. 3D color reconstructions in single DMD holographic display with LED source and complex coding scheme

    Science.gov (United States)

    Chlipała, Maksymilian; Kozacki, Tomasz

    2017-06-01

    In this paper we investigate the possibility of color reconstructions of holograms with a single DMD and incoherent LED illumination. The holographic display is built with a 4F imaging system that centers the reconstruction volume around the DMD surface. The display design employs a complex coding scheme, which allows reconstructing a complex wave from a binary hologram. In order to improve the quality of the reconstructed holograms, a time-multiplexing method is used. During the optical reconstructions we analyze the quality of holograms reconstructed with incoherent RGB light sources as a function of reconstruction distance, demonstrate the possibility of 3D hologram reconstruction, and investigate temporal coherence effects in the DMD-based holographic display.

  17. Characterization of an industry-grade CMOS camera well suited for single molecule localization microscopy - high performance super-resolution at low cost.

    Science.gov (United States)

    Diekmann, Robin; Till, Katharina; Müller, Marcel; Simonis, Matthias; Schüttpelz, Mark; Huser, Thomas

    2017-10-31

    Many commercial as well as custom-built fluorescence microscopes use scientific-grade cameras that represent a substantial share of the instrument's cost. This holds particularly true for super-resolution localization microscopy where high demands are placed especially on the detector with respect to sensitivity, noise, and also image acquisition speed. Here, we present and carefully characterize an industry-grade CMOS camera as a cost-efficient alternative to commonly used scientific cameras. Direct experimental comparison of these two detector types shows widely similar performance for imaging by single molecule localization microscopy (SMLM). Furthermore, high image acquisition speeds are demonstrated for the CMOS detector by ultra-fast SMLM imaging.

  18. Monitoring of Wheat Growth Status and Mapping of Wheat Yield’s within-Field Spatial Variations Using Color Images Acquired from UAV-camera System

    Directory of Open Access Journals (Sweden)

    Mengmeng Du

    2017-03-01

    Full Text Available Applications of remote sensing using unmanned aerial vehicles (UAVs) in agriculture have proved to be an effective and efficient way of obtaining field information. In this study, we validated the feasibility of utilizing multi-temporal color images acquired from a low-altitude UAV-camera system to monitor real-time wheat growth status and to map within-field spatial variations of wheat yield for smallholder wheat growers, which could serve as references for site-specific operations. Firstly, eight orthomosaic images covering a small winter wheat field were generated to monitor wheat growth status from the heading stage to the ripening stage in Hokkaido, Japan. The multi-temporal orthomosaic images gave a straightforward sense of canopy color changes and spatial variations in tiller density. In addition, the last two orthomosaic images, taken about two weeks prior to harvesting, also indicated the occurrence of lodging on visual inspection, which could be used to generate navigation maps that guide drivers or autonomous harvesting vehicles to adjust operation speed according to specific lodging situations for less harvesting loss. Subsequently, the orthomosaic images were geo-referenced so that a stepwise regression analysis between nine wheat yield samples and five color vegetation indices (CVIs) could be conducted, which showed that wheat yield correlated with four accumulative CVIs: the visible-band difference vegetation index (VDVI), normalized green-blue difference index (NGBDI), green-red ratio index (GRRI), and excess green vegetation index (ExG), with the coefficient of determination and RMSE being 0.94 and 0.02, respectively. The average value of sampled wheat yield was 8.6 t/ha. The regression model was also validated by using the leave-one-out cross validation (LOOCV) method, for which the root-mean-square error of prediction (RMSEP) was 0.06. Finally, based on the stepwise regression model, a map of estimated wheat yield was generated, so that within
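
    The color vegetation indices used in the regression can be computed from an RGB orthomosaic as sketched below. The index definitions are the ones commonly quoted in the UAV literature and may differ in detail from those used in the paper, so treat them as assumptions.

```python
import numpy as np

def color_vegetation_indices(rgb):
    """Per-image means of four visible-band vegetation indices.

    Commonly quoted definitions (possibly differing in detail from the paper):
        VDVI  = (2G - R - B) / (2G + R + B)
        NGBDI = (G - B) / (G + B)
        GRRI  = G / R
        ExG   = 2g - r - b, with r, g, b the chromatic coordinates
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    eps = 1e-12

    total = r + g + b + eps
    rn, gn, bn = r / total, g / total, b / total   # chromatic coordinates

    indices = {
        "VDVI": (2 * g - r - b) / (2 * g + r + b + eps),
        "NGBDI": (g - b) / (g + b + eps),
        "GRRI": g / (r + eps),
        "ExG": 2 * gn - rn - bn,
    }
    return {name: float(np.mean(v)) for name, v in indices.items()}
```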

  19. Observation of X-ray shadings in synchrotron radiation-total reflection X-ray fluorescence using a color X-ray camera

    Science.gov (United States)

    Fittschen, Ursula Elisabeth Adriane; Menzel, Magnus; Scharf, Oliver; Radtke, Martin; Reinholz, Uwe; Buzanich, Günther; Lopez, Velma M.; McIntosh, Kathryn; Streli, Christina; Havrilla, George Joseph

    2014-09-01

    Absorption effects and the impact of specimen shape on TXRF analysis have been discussed intensively. Model calculations indicated that ring-shaped specimens should give better results in terms of higher counts-per-mass signals than filled rectangle- or circle-shaped specimens. One major reason for the difference in signal is shading effects. Full-field micro-XRF with a color X-ray camera (CXC) was used to investigate shading, which occurs when working with small angles of excitation as in TXRF. The device allows monitoring the illuminated parts of the sample and the shaded parts at the same time. It is expected that sample material hit first by the primary beam shades material behind it. Using the CXC, shading could be directly visualized for the high-concentration specimens. In order to compare the experimental results with calculations of the shading effect, the generation of controlled specimens is crucial. This was achieved by “drop on demand” technology. It allows generating uniform, microscopic deposits of elements. The experimentally measured shadings match well with those expected from calculation.

  20. Single-channel color image encryption based on iterative fractional Fourier transform and chaos

    Science.gov (United States)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A single-channel color image encryption scheme is proposed based on the iterative fractional Fourier transform and a two-coupled logistic map. Firstly, a gray-scale image is constituted from the three channels of the color image and permuted by a sequence of chaotic pairs generated by the two-coupled logistic map; the permuted image is then decomposed into three components again. Secondly, the first two components are encrypted into a single one based on the iterative fractional Fourier transform. Similarly, the interim image and the third component are encrypted into the final gray-scale ciphertext with a stationary white-noise distribution, which has a camouflage property to some extent. In the processes of encryption and decryption, the chaotic permutation makes the resulting image nonlinear and disordered in both the spatial and frequency domains, and the proposed iterative fractional Fourier transform algorithm has a faster convergence speed. Additionally, the encryption scheme enlarges the key space of the cryptosystem. Simulation results and security analysis verify the feasibility and effectiveness of this method.
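
    A heavily simplified sketch of the chaotic permutation stage is shown below, using a single logistic map in place of the paper's two-coupled logistic map and omitting the fractional Fourier encryption steps entirely; the key values are arbitrary examples.

```python
import numpy as np

def logistic_permutation(image, x0=0.3141, r=3.99):
    """Keyed pixel permutation derived from a single logistic map
    x_{n+1} = r * x_n * (1 - x_n); sorting the chaotic sequence yields a
    key-dependent permutation of the pixel indices.  Returns the scrambled
    image and the permutation needed to undo it."""
    flat = image.reshape(-1)
    x = np.empty(flat.size)
    x[0] = x0
    for n in range(1, flat.size):
        x[n] = r * x[n - 1] * (1.0 - x[n - 1])
    order = np.argsort(x)                      # permutation keyed by (x0, r)
    return flat[order].reshape(image.shape), order

def inverse_permutation(scrambled, order):
    """Undo the permutation during decryption."""
    flat = np.empty_like(scrambled.reshape(-1))
    flat[order] = scrambled.reshape(-1)
    return flat.reshape(scrambled.shape)
```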

  1. Kaleido: Visualizing Big Brain Data with Automatic Color Assignment for Single-Neuron Images.

    Science.gov (United States)

    Wang, Ting-Yuan; Chen, Nan-Yow; He, Guan-Wei; Wang, Guo-Tzau; Shih, Chi-Tin; Chiang, Ann-Shyn

    2018-03-03

    Effective 3D visualization is essential for connectomics analysis, where the number of neural images easily reaches over tens of thousands. A formidable challenge is to simultaneously visualize a large number of distinguishable single-neuron images, with reasonable processing time and memory for file management and 3D rendering. In the present study, we proposed an algorithm named "Kaleido" that can visualize up to at least ten thousand single neurons from the Drosophila brain using only a fraction of the memory traditionally required, without increasing computing time. Adding more brain neurons increases memory only nominally. Importantly, Kaleido maximizes color contrast between neighboring neurons so that individual neurons can be easily distinguished. Colors can also be assigned to neurons based on biological relevance, such as gene expression, neurotransmitters, and/or development history. For cross-lab examination, the identity of every neuron is retrievable from the displayed image. To demonstrate the effectiveness and tractability of the method, we applied Kaleido to visualize the 10,000 Drosophila brain neurons obtained from the FlyCircuit database ( http://www.flycircuit.tw/modules.php?name=kaleido ). Thus, Kaleido visualization requires only sensible computer memory for manual examination of big connectomics data.
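
    One simple way to pursue the contrast-maximization goal described above is a greedy assignment of maximally separated hues over a neuron neighborhood graph, sketched below; this is an illustration only and not the published Kaleido algorithm.

```python
import colorsys
import numpy as np

def assign_contrasting_colors(neighbors, n_hues=24):
    """Greedy hue assignment over a neuron neighborhood graph.

    neighbors : dict mapping neuron id -> set of neighboring neuron ids.
    Each neuron gets the palette hue farthest (on the hue circle) from the
    hues already taken by its neighbors, which keeps adjacent neurons
    visually distinct.  Returns neuron id -> RGB tuple.
    """
    palette = np.linspace(0.0, 1.0, n_hues, endpoint=False)
    hue_of = {}
    for nid in sorted(neighbors):
        taken = np.array([hue_of[m] for m in neighbors[nid] if m in hue_of])
        if taken.size == 0:
            hue_of[nid] = float(palette[0])
            continue
        # Circular distance from each candidate hue to its nearest taken hue.
        diff = np.abs(palette[:, None] - taken[None, :])
        dist = np.minimum(diff, 1.0 - diff).min(axis=1)
        hue_of[nid] = float(palette[int(np.argmax(dist))])
    return {nid: colorsys.hsv_to_rgb(h, 1.0, 1.0) for nid, h in hue_of.items()}
```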

  2. Color optimization of single emissive white OLEDs via energy transfer between RGB fluorescent dopants

    International Nuclear Information System (INIS)

    Kim, Nam Ho; Kim, You-Hyun; Yoon, Ju-An; Lee, Sang Youn; Ryu, Dae Hyun; Wood, Richard; Moon, C.-B.; Kim, Woo Young

    2013-01-01

    The electroluminescent characteristics of white organic light-emitting diodes (WOLEDs) were investigated, including a single emitting layer (SEL) with an ADN host and the dopants BCzVBi, C545T, and DCJTB for blue, green and red emission, respectively. The structure of the high efficiency WOLED device was ITO/NPB(700 Å)/ADN:BCzVBi-7%:C545T-0.05%:DCJTB-0.1%(300 Å)/Bphen(300 Å)/Liq(20 Å)/Al(1200 Å) for mixing the three primary colors. The luminous efficiency was 9.08 cd/A at 3.5 V, and the Commission Internationale de l'Eclairage (CIE x,y) coordinates of the white emission were measured as (0.320, 0.338) at 8 V, while the simulated CIE x,y coordinates were (0.336, 0.324), estimated from each dopant's PL spectrum. -- Highlights: • This paper observes a single-emissive-layered white OLED using fluorescent dopants. • Electrical and optical properties are analyzed. • Color stability of the white OLED is confirmed for a new planar light source.

  3. Color optimization of single emissive white OLEDs via energy transfer between RGB fluorescent dopants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Nam Ho; Kim, You-Hyun; Yoon, Ju-An; Lee, Sang Youn [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Ryu, Dae Hyun [Department of Information Technology, Hansei University, Gunpo (Korea, Republic of); Wood, Richard [Department of Engineering Physics, McMaster University, Hamilton, Ontario, Canada L8S 4L7 (Canada); Moon, C.-B. [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Kim, Woo Young, E-mail: wykim@hoseo.edu [Department of Green Energy and Semiconductor Engineering, Hoseo University, Asan (Korea, Republic of); Department of Engineering Physics, McMaster University, Hamilton, Ontario, Canada L8S 4L7 (Canada)

    2013-11-15

    The electroluminescent characteristics of white organic light-emitting diodes (WOLEDs) were investigated, including a single emitting layer (SEL) with an ADN host and the dopants BCzVBi, C545T, and DCJTB for blue, green and red emission, respectively. The structure of the high efficiency WOLED device was ITO/NPB(700 Å)/ADN:BCzVBi-7%:C545T-0.05%:DCJTB-0.1%(300 Å)/Bphen(300 Å)/Liq(20 Å)/Al(1200 Å) for mixing the three primary colors. The luminous efficiency was 9.08 cd/A at 3.5 V, and the Commission Internationale de l'Eclairage (CIE x,y) coordinates of the white emission were measured as (0.320, 0.338) at 8 V, while the simulated CIE x,y coordinates were (0.336, 0.324), estimated from each dopant's PL spectrum. -- Highlights: • This paper observes a single-emissive-layered white OLED using fluorescent dopants. • Electrical and optical properties are analyzed. • Color stability of the white OLED is confirmed for a new planar light source.

  4. THE DEVELOPMENT OF A FAMILY OF LIGHTWEIGHT AND WIDE SWATH UAV CAMERA SYSTEMS AROUND AN INNOVATIVE DUAL-SENSOR ON-SINGLE-CHIP DETECTOR

    Directory of Open Access Journals (Sweden)

    B. Delauré

    2013-08-01

    Full Text Available Together with a Belgian industrial consortium, VITO has developed the lightweight camera system MEDUSA. It combines high spatial resolution with a wide swath to support missions for large-scale mapping and disaster monitoring applications. MEDUSA has been designed to be operated on a solar-powered unmanned aerial vehicle flying in the stratosphere. The camera system contains a custom-designed CMOS imager with 2 sensors (each having 10000 × 1200 pixels) on one chip. One sensor is panchromatic, one is equipped with colour filters. The MEDUSA flight model camera has passed an extensive test campaign and is ready to conduct its maiden flight. First airborne test flights with an engineering model version of the camera have been executed to validate the functionality and the performance of the camera. An image stitching workflow has been developed in order to generate a composite of the acquired images in near real time. The unique properties of the dual-sensor-on-single-chip detector triggered the development of 2 new camera designs which are currently in preparation. MEDUSA-low is a modified camera system optimised for compatibility with more conventional UAV systems with a payload capacity of 5–10 kg flying at an altitude around 1 km. Its camera acquires both panchromatic and colour images. The MEDUSA geospectral camera is an innovative hyperspectral imager which is equipped with a spatially varying spectral filter installed in front of one of the two sensors. It acquires both hyperspectral and broad-band high-spatial-resolution image data from one and the same camera.

  5. Real-time Human Pose and Shape Estimation for Virtual Try-On Using a Single Commodity Depth Camera.

    Science.gov (United States)

    Ye, Mao; Wang, Huamin; Deng, Nianchen; Yang, Xubo; Yang, Ruigang

    2014-04-01

    We present a system that allows the user to virtually try on new clothes. It uses a single commodity depth camera to capture the user in 3D. Both the pose and the shape of the user are estimated with a novel real-time template-based approach that performs tracking and shape adaptation jointly. The result is then used to drive realistic cloth simulation, in which the synthesized clothes are overlayed on the input image. The main challenge is to handle missing data and pose ambiguities due to the monocular setup, which captures less than 50 percent of the full body. Our solution is to incorporate automatic shape adaptation and novel constraints in pose tracking. The effectiveness of our system is demonstrated with a number of examples.

  6. Development of single frame X-ray framing camera for pulsed ...

    Indian Academy of Sciences (India)

    X-ray emission from a laser-produced copper plasma. A reduction factor of ∼6·5 is seen in the dark ... 30 J, 2 ns (FWHM) Nd:glass laser on a copper target. 2. System description. A schematic diagram of the .... using this construction, we can get clean single pulse width up to τ. Pulse amplitude can be changed by changing ...

  7. Color tuning in alert macaque V1 assessed with fMRI and single-unit recording shows a bias toward daylight colors.

    Science.gov (United States)

    Lafer-Sousa, Rosa; Liu, Yang O; Lafer-Sousa, Luis; Wiest, Michael C; Conway, Bevil R

    2012-05-01

    Colors defined by the two intermediate directions in color space, "orange-cyan" and "lime-magenta," elicit the same spatiotemporal average response from the two cardinal chromatic channels in the lateral geniculate nucleus (LGN). While we found LGN functional magnetic resonance imaging (fMRI) responses to these pairs of colors were statistically indistinguishable, primary visual cortex (V1) fMRI responses were stronger to orange-cyan. Moreover, linear combinations of single-cell responses to cone-isolating stimuli of V1 cone-opponent cells also yielded stronger predicted responses to orange-cyan over lime-magenta, suggesting these neurons underlie the fMRI result. These observations are consistent with the hypothesis that V1 recombines LGN signals into "higher-order" mechanisms tuned to noncardinal color directions. In light of work showing that natural images and daylight samples are biased toward orange-cyan, our findings further suggest that V1 is adapted to daylight. V1, especially double-opponent cells, may function to extract spatial information from color boundaries correlated with scene-structure cues, such as shadows lit by ambient blue sky juxtaposed with surfaces reflecting sunshine. © 2012 Optical Society of America

  8. Time-of-flight camera via a single-pixel correlation image sensor

    Science.gov (United States)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
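
    The 'four bucket principle' mentioned above refers to the standard continuous-wave time-of-flight relations, sketched below for four correlation samples taken at 0°, 90°, 180° and 270°; the modulation frequency is an illustrative value, and the single-pixel compressed-sensing reconstruction of the paper is not reproduced.

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light, m/s

def four_bucket_depth(c0, c1, c2, c3, mod_freq_hz=20e6):
    """Depth and amplitude from four correlation samples (0°, 90°, 180°, 270°).

        phase = atan2(c3 - c1, c0 - c2)
        depth = c * phase / (4 * pi * f_mod)
    """
    phase = np.mod(np.arctan2(c3 - c1, c0 - c2), 2.0 * np.pi)   # wrap to [0, 2*pi)
    depth = C_LIGHT * phase / (4.0 * np.pi * mod_freq_hz)
    amplitude = 0.5 * np.hypot(c3 - c1, c0 - c2)
    return depth, amplitude
```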

  9. Realtime Color Stereovision Processing

    National Research Council Canada - National Science Library

    Formwalt, Bryon

    2000-01-01

    .... This research takes a step forward in real time machine vision processing. It investigates techniques for implementing a real time stereovision processing system using two miniature color cameras...

  10. Single-channel color image encryption using phase retrieve algorithm in fractional Fourier domain

    Science.gov (United States)

    Sui, Liansheng; Xin, Meiting; Tian, Ailing; Jin, Haiyan

    2013-12-01

    A single-channel color image encryption scheme is proposed based on a phase retrieval algorithm and a two-coupled logistic map. Firstly, a gray-scale image is constituted from the three channels of the color image and then permuted by a sequence of chaotic pairs generated by the two-coupled logistic map. Secondly, the permuted image is decomposed into three new components, where each component is encoded into a phase-only function in the fractional Fourier domain with a phase retrieval algorithm based on the iterative fractional Fourier transform. Finally, an interim image is formed by the combination of these phase-only functions and encrypted into the final gray-scale ciphertext, with a stationary white-noise distribution, by using chaotic diffusion; the ciphertext has a camouflage property to some extent. In the processes of encryption and decryption, the chaotic permutation and diffusion make the resultant image nonlinear and disordered in both the spatial and frequency domains, and the proposed iterative phase algorithm has a faster convergence speed. Additionally, the encryption scheme enlarges the key space of the cryptosystem. Simulation results and security analysis verify the feasibility and effectiveness of this method.

  11. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs) †

    Science.gov (United States)

    Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong

    2016-01-01

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances. PMID:26861351

  12. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms that integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS: the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame in directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the traditional two-step procedure, which makes use of a bundle adjustment, is also presented.

  13. Influence of Restorative Materials on Color of Implant-Supported Single Crowns in Esthetic Zone: A Spectrophotometric Evaluation

    Science.gov (United States)

    Zhao, Wei-Jie; Hosseini, Mandana; Zhou, Wen-Juan; Xiao, Ting

    2017-01-01

    Restorations of 98 implant-supported single crowns in anterior maxillary area were divided into 5 groups: zirconia abutment, titanium abutment, and gold/gold hue abutment with zirconia coping, respectively, and titanium abutment with metal coping as well as gold/gold hue abutment with metal coping. A reflectance spectrophotometer was used to evaluate the color difference between the implant crowns and contralateral/neighboring teeth, as well as the color difference between the peri-implant soft tissue and the natural marginal mucosa. The mucosal discoloration score was used for subjective evaluation of the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone, and the crown color match score was used for subjective evaluation of the esthetic outcome of implant-supported restoration. ANOVA analysis was used to compare the differences among groups and Spearman correlation was used to test the relationships. A gold/gold hue abutment with zirconia coping was the best choice for an esthetic crown and the all-ceramic combination was the best for peri-implant soft tissue. Significant correlation was found between the spectrophotometric color difference of peri-implant soft tissue and mucosal discoloration score, while no significant correlation was found between the total spectrophotometric color difference of implant crown and crown color match score. PMID:29349075

  14. Influence of Restorative Materials on Color of Implant-Supported Single Crowns in Esthetic Zone: A Spectrophotometric Evaluation

    Directory of Open Access Journals (Sweden)

    Min Peng

    2017-01-01

    Full Text Available Restorations of 98 implant-supported single crowns in anterior maxillary area were divided into 5 groups: zirconia abutment, titanium abutment, and gold/gold hue abutment with zirconia coping, respectively, and titanium abutment with metal coping as well as gold/gold hue abutment with metal coping. A reflectance spectrophotometer was used to evaluate the color difference between the implant crowns and contralateral/neighboring teeth, as well as the color difference between the peri-implant soft tissue and the natural marginal mucosa. The mucosal discoloration score was used for subjective evaluation of the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone, and the crown color match score was used for subjective evaluation of the esthetic outcome of implant-supported restoration. ANOVA analysis was used to compare the differences among groups and Spearman correlation was used to test the relationships. A gold/gold hue abutment with zirconia coping was the best choice for an esthetic crown and the all-ceramic combination was the best for peri-implant soft tissue. Significant correlation was found between the spectrophotometric color difference of peri-implant soft tissue and mucosal discoloration score, while no significant correlation was found between the total spectrophotometric color difference of implant crown and crown color match score.

  15. A compact single-camera system for high-speed, simultaneous 3-D velocity and temperature measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Louise; Sick, Volker; Frank, Jonathan H.

    2013-09-01

    The University of Michigan and Sandia National Laboratories collaborated on the initial development of a compact single-camera approach for simultaneously measuring 3-D gas-phase velocity and temperature fields at high frame rates. A compact diagnostic tool is desired to enable investigations of flows with limited optical access, such as near-wall flows in an internal combustion engine. These in-cylinder flows play a crucial role in improving engine performance. Thermographic phosphors were proposed as flow and temperature tracers to extend the capabilities of a novel, compact 3-D velocimetry diagnostic to include high-speed thermometry. Ratiometric measurements were performed using two spectral bands of laser-induced phosphorescence emission from BaMg2Al10O17:Eu (BAM) phosphors in a heated air flow to determine the optimal optical configuration for accurate temperature measurements. The originally planned multi-year research project ended prematurely after the first year because the Sandia-sponsored student left the research group at the University of Michigan.
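
    A minimal sketch of the ratiometric (two-color) thermometry step, assuming a monotonic calibration of the band-intensity ratio against temperature has been measured beforehand; the calibration values below are placeholders, not BAM data.

    ```python
    import numpy as np

    cal_ratio = np.array([0.45, 0.60, 0.78, 1.00, 1.25])    # placeholder calibration ratios
    cal_temp_K = np.array([300.0, 350.0, 400.0, 450.0, 500.0])

    def temperature_from_bands(i_band1, i_band2):
        """Map the per-pixel intensity ratio of two spectral bands to temperature (K)."""
        ratio = i_band1 / np.maximum(i_band2, 1e-12)         # avoid division by zero
        return np.interp(ratio, cal_ratio, cal_temp_K)       # piecewise-linear calibration lookup
    ```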

  16. Reliability of a 99mTc-DTPA gamma camera technique for determination of single kidney glomerular filtration rate

    International Nuclear Information System (INIS)

    Rehling, M.; Moeller, M.L.; Thamdrup, B.; Lund, J.O.; Trap-Jensen, J.

    1986-01-01

    In a recent paper we described a method for calculation of single kidney glomerular filtration rate (SKGFR) from the 99mTc-DTPA renogram obtained by gamma camera. In this paper the reliability of the method was compared to other methods for estimation of GFR in 20 unilaterally nephrectomized patients. The values for SKGFR obtained from the renograms and from the endogenous creatinine clearances estimated from serum creatinine concentration and a nomogram were both accurate. The reliability of the renography method was significantly better, as judged by the smaller variance of its estimates. SKGFR calculated from the plasma clearance of 51Cr-EDTA overestimated the renal clearance of inulin by 11.3% on average. No difference was found in the variance of the values obtained from the renograms and from the plasma clearances of 51Cr-EDTA compared with the renal clearance of inulin. Apart from the inaccuracy in the GFR values calculated from the plasma clearance of 51Cr-EDTA, the reliability of these two methods was equal. (author)

  17. Reliability of single kidney glomerular filtration rate measured by a 99mTc-DTPA gamma camera technique

    International Nuclear Information System (INIS)

    Rehling, M.; Moller, M.L.; Jensen, J.J.; Thamdrup, B.; Lund, J.O.; Trap-Jensen, J.

    1986-01-01

    The reliability of a previously published method for determination of single kidney glomerular filtration rate (SKGFR) by means of technetium-99m diethylenetriaminepenta-acetate (99mTc-DTPA) gamma camera renography was evaluated. The day-to-day variation in the calculated SKGFR values had earlier been found to be 8.8%. The technique was compared to the simultaneously measured renal clearance of inulin in 19 unilaterally nephrectomized patients with GFR varying from 11 to 76 ml/min. The regression line (y = 1.04x - 2.5) did not differ significantly from the line of identity. The standard error of the estimate was 4.3 ml/min. In 17 patients the inter- and intraobserver variations of the calculated SKGFR values were 1.2 ml/min and 1.3 ml/min, respectively. In 21 of 25 healthy subjects studied (age range 27-29 years), total GFR calculated from the renograms was within an established age-dependent normal range of GFR.

  18. Single-Camera Closed-Form Real-Time Needle Tracking for Ultrasound-Guided Needle Insertion.

    Science.gov (United States)

    Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2015-10-01

    Many common needle intervention procedures are performed with ultrasound guidance because it is a flexible, cost-effective and widely available intra-operative imaging modality. In a needle insertion procedure with ultrasound guidance, real-time calculation and visualization of the needle trajectory can help to guide the choice of puncture site and needle angle to reach the target depicted in the ultrasound image. We found that it is feasible to calculate the needle trajectory with a single camera mounted directly on the ultrasound transducer by using the needle markings. Higher accuracy is achieved compared with other similar transducer-mounted needle trackers. We used an inexpensive, real-time and easy-to-use tracking method based on an automatic feature extraction algorithm and a closed-form method for pose estimation of the needle. The overall accuracy was 0.94 ± 0.46 mm. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  19. Validation of quantitative brain dopamine D2 receptor imaging with a conventional single-head SPET camera

    International Nuclear Information System (INIS)

    Nikkinen, P.; Liewendahl, K.; Savolainen, S.; Launes, J.

    1993-01-01

    Phantom measurements were performed with a conventional single-head single-photon emission tomography (SPET) camera in order to validate the relevance of the basal ganglia/frontal cortex iodine-123 iodobenzamide (IBZM) uptake ratios measured in patients. Inside a cylindrical phantom (diameter 22 cm), two cylinders with a diameter of 3.3 cm were inserted. The activity concentrations of the cylinders ranged from 6.0 to 22.6 kBq/ml and the cylinder/background activity ratios varied from 1.4 to 3.8. From reconstructed SPET images the cylinder/background activity ratios were calculated using three different regions of interest (ROIs). A linear relationship between the measured activity ratio and the true activity ratio was obtained. In patient studies, basal ganglia/frontal cortex IBZM uptake ratios determined from the reconstructed slices, using attenuation correction prior to reconstruction, were 1.30 ± 0.03 in idiopathic Parkinson's disease (n = 9), 1.33 ± 0.09 in infantile and juvenile neuronal ceroid lipofuscinosis (n = 7) and 1.34 ± 0.05 in narcolepsy (n = 8). Patients with Huntington's disease had significantly lower ratios (1.09 ± 0.04, n = 5). The corrected basal ganglia/frontal cortex ratios, determined using linear regression, were about 80% higher. The use of dual-window scatter correction increased the measured ratios by about 10%. Although comprehensive correction methods can further improve the resolution in SPET images, the resolution of the SPET system used by us (1.5-2 cm) will determine what is achievable in basal ganglia D2 receptor imaging. (orig.)

  20. A standardized approach for iris color determination.

    Science.gov (United States)

    Niggemann, Birgit; Weinbauer, Gerhard; Vogel, Friedhelm; Korte, Rainhart

    2003-01-01

    Latanoprost, the phenyl-substituted prostaglandin F2alpha, has been found to be an effective agent for glaucoma therapy. This prostaglandin derivative exerts ocular hypotensive activity but is also associated with an untoward side effect, namely iris color changes. Latanoprost provoked iris color changes in cynomolgus monkeys and in multicenter clinical trials. Until now, photographs were taken and compared with color plates to document these changes. The disadvantage of this method is obvious: the color luminance varies between measurements due to changes in the developer. Furthermore, subjective comparison of color changes relative to color plates rendered judgment subject to impression and opinion rather than to objective data. Therefore, a computerized method using a 3-CCD video camera attached to a slit lamp was developed. The signals were transferred to a computer, where a single frame was "frozen" by means of a frame-grabber card. The camera and the computer had previously been calibrated, and color plates were measured to check the standard conditions. The frames were evaluated by a software program that reports the average color (as red, green, and blue values) of a selected area. This method provides a fast and accurate way to quantify color changes in the iris of both experimental animals and clinical trial subjects.
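
    A minimal sketch of the kind of measurement described: averaging the red, green and blue values over a selected region of a captured video frame; the array shape and the ROI coordinates are assumptions.

    ```python
    import numpy as np

    def mean_rgb(frame, top, left, height, width):
        """frame: H x W x 3 uint8 array; returns (mean_R, mean_G, mean_B) over the ROI."""
        roi = frame[top:top + height, left:left + width, :].astype(np.float64)
        return tuple(roi.reshape(-1, 3).mean(axis=0))
    ```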

  1. Applicability of single-camera photogrammetry to determine body dimensions of pinnipeds: Galapagos sea lions as an example.

    Science.gov (United States)

    Meise, Kristine; Mueller, Birte; Zein, Beate; Trillmich, Fritz

    2014-01-01

    Morphological features correlate with many life history traits and are therefore of high interest to behavioral and evolutionary biologists. Photogrammetry provides a useful tool to collect morphological data from species for which measurements are otherwise difficult to obtain. This method reduces disturbance and avoids capture stress. Using the Galapagos sea lion (Zalophus wollebaeki) as a model system, we tested the applicability of single-camera photogrammetry in combination with laser distance measurement to estimate morphological traits which may vary with an animal's body position. We assessed whether linear morphological traits estimated by photogrammetry can be used to estimate body length and mass. We show that accurate estimates of body length (males: ±2.0%, females: ±2.6%) and reliable estimates of body mass are possible (males: ±6.8%, females: 14.5%). Furthermore, we developed correction factors that allow the use of animal photos that diverge somewhat from a flat-out position. The product of estimated body length and girth produced sufficiently reliable estimates of mass to categorize individuals into 10 kg-classes of body mass. Data of individuals repeatedly photographed within one season suggested relatively low measurement errors (body length: 2.9%, body mass: 8.1%). In order to develop accurate sex- and age-specific correction factors, a sufficient number of individuals from both sexes and from all desired age classes have to be captured for baseline measurements. Given proper validation, this method provides an excellent opportunity to collect morphological data for large numbers of individuals with minimal disturbance.
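
    As a rough illustration of the underlying geometry (a simplification, not the authors' calibrated procedure), a length spanning p pixels in an image taken from a laser-measured distance d corresponds to a real-world length of about p·d/f for an object roughly parallel to the image plane, where f is the focal length expressed in pixels; all numbers below are placeholders.

    ```python
    def length_from_pixels(pixel_span, distance_m, focal_length_px):
        """Approximate object length (m) under the pinhole model, object parallel to the sensor."""
        return pixel_span * distance_m / focal_length_px

    # Example: an 850-pixel body span, laser distance 12.0 m, 3800-px focal length -> about 2.68 m
    body_length_m = length_from_pixels(850, 12.0, 3800.0)
    ```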

  2. Applicability of single-camera photogrammetry to determine body dimensions of pinnipeds: Galapagos sea lions as an example.

    Directory of Open Access Journals (Sweden)

    Kristine Meise

    Full Text Available Morphological features correlate with many life history traits and are therefore of high interest to behavioral and evolutionary biologists. Photogrammetry provides a useful tool to collect morphological data from species for which measurements are otherwise difficult to obtain. This method reduces disturbance and avoids capture stress. Using the Galapagos sea lion (Zalophus wollebaeki) as a model system, we tested the applicability of single-camera photogrammetry in combination with laser distance measurement to estimate morphological traits which may vary with an animal's body position. We assessed whether linear morphological traits estimated by photogrammetry can be used to estimate body length and mass. We show that accurate estimates of body length (males: ±2.0%, females: ±2.6%) and reliable estimates of body mass are possible (males: ±6.8%, females: 14.5%). Furthermore, we developed correction factors that allow the use of animal photos that diverge somewhat from a flat-out position. The product of estimated body length and girth produced sufficiently reliable estimates of mass to categorize individuals into 10 kg-classes of body mass. Data of individuals repeatedly photographed within one season suggested relatively low measurement errors (body length: 2.9%, body mass: 8.1%). In order to develop accurate sex- and age-specific correction factors, a sufficient number of individuals from both sexes and from all desired age classes have to be captured for baseline measurements. Given proper validation, this method provides an excellent opportunity to collect morphological data for large numbers of individuals with minimal disturbance.

  3. Efficient selection of a single harmonic emission using a multi-color laser field with an aperture-iris diaphragm

    Science.gov (United States)

    Wei, Pengfei; Tian, Qili; Zeng, Zhinan; Jiang, Jiaming; Miao, Jing; Zheng, Yinghui; Ge, Xiaochun; Li, Chuang; Li, Ruxin; Xu, Zhizhan

    2014-08-01

    The efficient selection of an almost pure single harmonic emission (the 14th harmonic at 57 nm) from the harmonic comb has been experimentally achieved in an argon gas cell using a multi-color laser field with an aperture-iris diaphragm. When compared with the non-diaphragm case, the purity of the selected single harmonic emission (i.e. the contrast ratio) in the case of using the aperture-iris diaphragm is dramatically increased, by approximately an order of magnitude. Therefore, the modification of the multi-color laser field by such an aperture-iris diaphragm before focusing is demonstrated to be an effective way of improving the phase-matching conditions for selective enhancement of a single harmonic emission.

  4. Efficient selection of a single harmonic emission using a multi-color laser field with an aperture-iris diaphragm

    International Nuclear Information System (INIS)

    Wei, Pengfei; Tian, Qili; Zeng, Zhinan; Jiang, Jiaming; Miao, Jing; Zheng, Yinghui; Ge, Xiaochun; Li, Chuang; Li, Ruxin; Xu, Zhizhan

    2014-01-01

    The efficient selection of an almost pure single harmonic emission (the 14th harmonic at 57 nm) from the harmonic comb has been experimentally achieved in an argon gas cell using a multi-color laser field with an aperture-iris diaphragm. When compared with the non-diaphragm case, the purity of the selected single harmonic emission (i.e. the contrast ratio) in the case of using the aperture-iris diaphragm is dramatically increased, by approximately an order of magnitude. Therefore, the modification of the multi-color laser field by such an aperture-iris diaphragm before focusing is demonstrated to be an effective way of improving the phase-matching conditions for selective enhancement of a single harmonic emission. (paper)

  5. Single underwater image enhancement based on color cast removal and visibility restoration

    Science.gov (United States)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken underwater usually suffer from color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship among the medium transmission maps of the three color channels of an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
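
    The color cast removal in the paper is optimization-based; as a much simpler stand-in that illustrates the general idea of rebalancing channel gains, the classic gray-world assumption can be sketched as follows.

    ```python
    import numpy as np

    def gray_world_balance(img):
        """img: H x W x 3 float array in [0, 1]; returns a color-cast-reduced copy."""
        means = img.reshape(-1, 3).mean(axis=0)           # per-channel averages
        gains = means.mean() / np.maximum(means, 1e-6)    # push each channel toward neutral gray
        return np.clip(img * gains, 0.0, 1.0)
    ```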

  6. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  7. Robotic versus human camera holding in video-assisted thoracic sympathectomy: a single blind randomized trial of efficacy and safety.

    Science.gov (United States)

    Martins Rua, Joaquim Fernando; Jatene, Fabio Biscegli; de Campos, José Ribas Milanez; Monteiro, Rosangela; Tedde, Miguel Lia; Samano, Marcos Naoyuki; Bernardo, Wanderley M; Das-Neves-Pereira, João Carlos

    2009-02-01

    Our objective is to compare surgical safety and efficacy between robotic and human camera control in video-assisted thoracic sympathectomy. A randomized controlled trial was performed. The surgical operation was VATS sympathectomy for hyperhidrosis. The trial compared a voice-controlled robot holding the endoscopic camera (robotic group, Ro) with a human assistant (human group, Hu). Each group included 19 patients. Sympathectomy was achieved by electrodessication of the third ganglion. Operations were filmed and the images stored. Two observers quantified the number of involuntary and inappropriate movements and how many times the camera was cleaned. Safety criteria were surgical accidents, pain and aesthetic results; efficacy criteria were surgical and camera-use duration, anhydrosis, length of hospitalization, compensatory hyperhidrosis and patient satisfaction. There was no difference between groups regarding surgical accidents, number of involuntary movements, pain, aesthetic results, general satisfaction, number of lens cleanings, anhydrosis, length of hospitalization, and compensatory hyperhidrosis. The number of contacts of the laparoscopic lens with mediastinal structures was significantly lower in the Ro group. The robotic arm in VATS sympathectomy for hyperhidrosis is as safe as, but less efficient than, a human camera-holding assistant.

  8. Single color attribute index for shade conformity judgment of dental resin composites

    Directory of Open Access Journals (Sweden)

    Yong-Keun Lee

    2015-01-01

    Full Text Available Introduction: Commercial dental resin composites under the same shade designations show color discrepancies by brand. Moreover, the three Commission Internationale de l'Eclairage (CIE) color coordinates show significant variations by measurement method; therefore, direct comparisons of color coordinates based on different methods are meaningless. This study aimed to assess the hypothesis that a new color attribute index (CAI), which could reduce the color coordinate variations by measurement method, is applicable to the shade conformity judgment of dental resin composites. The Hypothesis: The CAI is applicable to the shade conformity judgment of commercial dental resin composites. Using the CIE color coordinates of shade guide tabs and resin composites, combined color indices Wa = CIE a* × ΔE*ab/C*ab and Wb = CIE b* × ΔE*ab/C*ab were defined, in which ΔE*ab is the color difference with respect to a standard white tile. The ratio of Wa/Wb to that of an arbitrary reference shade (A2) within the same brand and measurement was defined as the CAI. The CAI values were significantly different by shade designation and showed a logical trend with the shade designation number. The CAI of commercial resin composites and the keyed shade guide tabs showed overlaps. Evaluation of the Hypothesis: The CAI might be used to judge the shade conformity of resin composites using values based on different measurement methods. The application of the CAI, instead of the conventional three color coordinates, could efficiently simplify the shade conformity judgment of commercial resin composites. Although the hypothesis of the present study was partially confirmed, further studies on the practical application of this index are highly recommended.
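
    Written with standard CIELAB notation, the combined indices and the resulting CAI described above take the following form (a restatement of the definitions in the record, with ΔE*ab measured against a standard white tile):

    ```latex
    W_a = \frac{a^{*}\,\Delta E^{*}_{ab}}{C^{*}_{ab}}, \qquad
    W_b = \frac{b^{*}\,\Delta E^{*}_{ab}}{C^{*}_{ab}}, \qquad
    \mathrm{CAI} = \frac{W_a / W_b}{\left(W_a / W_b\right)_{\mathrm{A2\ reference}}}
    ```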

  9. COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Dominique Lafon

    2011-05-01

    Full Text Available The goal of this article is to present the specific capabilities and limitations of the use of color digital images in a characterization process. The whole process is investigated, from the acquisition of digital color images to the analysis of the information relevant to various applications in the field of material characterization. A digital color image can be considered as a matrix of pixels with values expressed in a vector space (commonly a 3-dimensional space) whose specificity, compared to grey-scale images, is to ensure a coding and a representation of the output image (visualisation, printing) that fits the human visual reality. In a characterization process, it is interesting to regard color image attributes as a set of visual aspect measurements on a material surface. Color measurement systems (spectrocolorimeters, colorimeters and radiometers) and cameras use the same type of light detectors: most of them use charge-coupled device (CCD) sensors. The difference between the two types of color data acquisition systems is that color measurement systems provide global information on the observed surface (the average aspect of the surface); the color texture is not taken into account. Thus, it seems interesting to use imaging systems as measuring instruments for the quantitative characterization of color texture.

  10. Multibeam scanning optics with single laser source for full-color printers.

    Science.gov (United States)

    Maruo, S; Arimoto, A; Kobayashi, S

    1997-10-01

    In the novel optical system described here, four-color toners can be developed in one rotation of the photoconductor, and the color control information is given when the intensities of the laser power levels are changed and the two polarization directions are switched. A polarizing beam splitter between the common scanning optics and the photoconductor enables the laser beam to pass through a common scanning system and to illuminate two positions on the photoconductive material. The laser beam polarization direction is controlled by an electro-optical device immediately behind the laser. In each illuminated position, two-color toners are developed by a three-level (trilevel) photographic process. This simplified optical system eliminates the registration errors that occur with four-color information items and can be useful in high-speed printing systems.

  11. White Rock in False Color

    Science.gov (United States)

    2005-01-01

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced by using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image shows the wind-eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall Season. Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution. Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track directions to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington
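
    A minimal sketch of the compositing step described above, assuming the three filter bands are available as floating-point arrays; the percentile stretch limits are illustrative choices, not THEMIS processing parameters.

    ```python
    import numpy as np

    def stretch(band, lo_pct=2, hi_pct=98):
        """Contrast-enhance one grayscale filter image to the [0, 1] range."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

    def false_color(band_r, band_g, band_b):
        """Each band: 2-D float array from one filter; returns an H x W x 3 false-color composite."""
        return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
    ```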

  12. Single nucleotide polymorphisms in CAPN and leptin genes associated with meat color and tenderness in Nellore cattle.

    Science.gov (United States)

    Pinto, L F B; Ferraz, J B S; Pedrosa, V B; Eler, J P; Meirelles, F V; Bonin, M N; Rezende, F M; Carvalho, M E; Cucco, D C; Silva, R C G

    2011-09-15

    We analyzed single nucleotide polymorphisms in calpain, leptin, leptin receptor, and growth hormone receptor genes and their association with color, drip and cooking losses of longissimus muscle at 7, 14 and 21 days postmortem in 638 purebred Nellore bulls slaughtered between 22 and 26 months of age. Meat samples were vacuum-packed and aged at 4°C. The single nucleotide polymorphisms T945M, GHR2, E2FB, and CAPN4751 were evaluated. All genotypic classes were observed; however, the T/T genotype of T945M and E2FB was found at a low frequency. A significant association of E2FB with drip loss (a measure of water-holding capacity) was detected at seven days of meat aging. CAPN4751 had an additive effect on red and yellow color intensities. The T allele of CAPN4751 was found to be positively associated with improved meat color, but not with meat tenderness, differing from a previous report indicating that it is associated with meat tenderness. We conclude that the potential for use of CAPN4751 as a marker for these meat quality traits requires further research.

  13. A Proposal of a Color Music Notation System on a Single Melody for Music Beginners

    Science.gov (United States)

    Kuo, Yi-Ting; Chuang, Ming-Chuen

    2013-01-01

    Music teachers often encounter obstructions in teaching beginners in music reading. Conventional notational symbols require beginners to spend significant amount of time in memorizing, which discourages learning at early stage. This article proposes a newly-developed color music notation system that may improve the recognition of the staff and the…

  14. A Real-Time Method to Detect and Track Moving Objects (DATMO) from Unmanned Aerial Vehicles (UAVs) Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Bruce MacDonald

    2012-04-01

    Full Text Available We develop a real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera. To address the challenging characteristics of these vehicles, such as continuous unrestricted pose variation and low-frequency vibrations, new approaches must be developed. The main concept proposed in this work is to create an artificial optical flow field by estimating the camera motion between two subsequent video frames. The core of the methodology consists of comparing this artificial flow with the real optical flow directly calculated from the video feed. The motion of the UAV between frames is estimated with available parallel tracking and mapping techniques that identify good static features in the images and follow them between frames. By comparing the two optical flows, a list of dynamic pixels is obtained and then grouped into dynamic objects. Tracking these dynamic objects through time and space provides a filtering procedure to eliminate spurious events and misdetections. The algorithms have been tested with a quadrotor platform using a commercial camera.
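
    An assumption-laden simplification of the approach described above: predict the flow induced purely by camera motion from an estimated inter-frame homography H (here taken as given), compare it with the measured dense flow, and mark pixels with a large residual as dynamic; the threshold is an illustrative choice.

    ```python
    import cv2
    import numpy as np

    def dynamic_pixel_mask(prev_gray, next_gray, H, threshold_px=3.0):
        """prev_gray, next_gray: uint8 frames; H: 3x3 homography mapping prev -> next pixels."""
        h, w = prev_gray.shape
        measured = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float64),
                             np.arange(h, dtype=np.float64))
        pts = np.stack([xs.ravel(), ys.ravel()], axis=1).reshape(-1, 1, 2)
        warped = cv2.perspectiveTransform(pts, np.asarray(H, dtype=np.float64))
        predicted = warped.reshape(h, w, 2) - np.dstack([xs, ys])   # camera-induced ("artificial") flow
        residual = np.linalg.norm(measured - predicted, axis=2)
        return residual > threshold_px   # True where the measured flow is not explained by camera motion
    ```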

  15. A Fluorescence Light-Up Ag Nanocluster Probe that Discriminates Single-Nucleotide Variants by Emission Color

    OpenAIRE

    Yeh, Hsin-Chih; Sharma, Jaswinder; Shih, Ie-Ming; Vu, Dung M.; Martinez, Jennifer S.; Werner, James H.

    2012-01-01

    Rapid and precise screening of small genetic variations, such as single-nucleotide polymorphisms (SNPs), among an individual’s genome is still an unmet challenge at point-of-care settings. One crucial step towards this goal is the development of discrimination probes that require no enzymatic reaction and are easy to use. Here we report a new type of fluorescent molecular probe, termed a chameleon NanoCluster Beacon (cNCB), that lights up into different colors upon binding SNP targets. NanoCl...

  16. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  17. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  18. Toward a single-chip TECless/NUCless InGaAs SWIR camera with 120-dB intrinsic operation dynamic range

    Science.gov (United States)

    Ni, Y.; Arion, B.; Zhu, Y. M.; Potet, P.; Huet, Odile; Reverchon, Jean Luc; Truffer, Jean Patrick; Robo, Jean Alexandre; Costard, Eric

    2011-06-01

    This paper describes a single-chip InGaAs SWIR camera with more than 120 dB of instantaneous operational dynamic range, based on an innovative CMOS ROIC technology called MAGIC, invented and patented by New Imaging Technologies. A 320x256-pixel, 25 μm pitch InGaAs photodiode array, designed and fabricated by III-V Lab/Thales Research & Technology (TRT), has been hybridized on this new-generation CMOS ROIC. With NIT's MAGIC technology, the sensor's output follows a precise logarithmic law as a function of the incoming photon flux and gives an instantaneous operational dynamic range (DR) better than 120 dB. The ROIC incorporates the entire video signal processing chain, including a CCIR TV encoder, so a complete SWIR InGaAs camera with standard video output has been realized on a single 30x30 mm2 PCB with 1/4 W power consumption. Neither a thermoelectric cooler (TEC) nor non-uniformity correction (NUC) is needed for room-temperature operation. The camera can be switched on and off instantly, which is ideal for portable, battery-operated SWIR-band observation applications. The measured RMS noise and FPN noise of the prototype sensor in dark conditions are 0.4 mV and 0.27 mV, respectively. The signal excursion per pixel is about 300 mV over the 120 dB dynamic range. The FPN remains almost constant over the whole operational dynamic range. The noise-equivalent irradiance (NEI) has been measured at 3.71E+09 ph/s/cm2, corresponding to 92 equivalent noise photons at a 25 Hz frame rate; this is better than the same InGaAs photodiode array architecture hybridized on an Indigo ISC9809 ROIC with a pitch of 30 μm, for which a readout noise of 120 electrons is observed.
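
    For orientation only, an operational dynamic range quoted in decibels maps to a ratio of largest to smallest resolvable photon flux through the usual convention:

    ```latex
    \mathrm{DR}_{\mathrm{dB}} = 20\,\log_{10}\!\left(\frac{\Phi_{\max}}{\Phi_{\min}}\right),
    \qquad 120\ \mathrm{dB} \;\Longleftrightarrow\; \frac{\Phi_{\max}}{\Phi_{\min}} = 10^{6}.
    ```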

  19. sup(99m)Tc-DTPA gamma-camera renography: Normal values and rapid determination of single-kidney glomerular filtration rate

    International Nuclear Information System (INIS)

    Rehling, M.; Moller, M.L.; Lund, J.O.; Trap-Jensen, J.; Jensen, K.B.; Thamdrup, B.

    1985-01-01

    A method for sup(99m)Tc-diethylenetriaminepentaacetate (DTPA) gamma-camera renography is presented. From each renogram an uptake index (UI) proportional to the single-kidney glomerular filtration rate (SKGFR) is defined. If the proportionality factor between UI and SKGFR is the same in all patients, UI can be used as an accurate measure of SKGFR. In order to test this, sup(99m)Tc-DTPA renography was performed in 101 patients with glomerular filtration rates (GFR) varying between 4 and 172 ml/min. The sum of the right- and left-kidney UIs correlated well with the total GFR calculated from the simultaneously measured plasma clearance of sup(99m)Tc-DTPA after a single injection. The correlation coefficient was 0.97. The method was tested in a prospective study of 57 patients. The total GFR estimated from the renograms was not significantly different from the GFR calculated from the plasma clearance of sup(99m)Tc-DTPA. The coefficient of variation -a combination of inaccuracy and imprecision in the estimates as well as in the reference values - was 11.8% at a GFR of 100 ml/min. It is concluded that, in adults, the SKGFR can be calculated as part of the clinical routine from sup(99m)Tc-DTPA gamma-camera renography without determining the injected dose or collecting urine or blood samples. Normal values for some parameters of the renogram obtained in 25 normal subjects are given. (orig.)

  20. A low-cost phantom for simple routine testing of single photon emission computed tomography (SPECT) cameras

    International Nuclear Information System (INIS)

    Ng, A.H.; Ng, K.H.; Dharmendra, H.; Perkins, A.C.

    2009-01-01

    A simple sphere test phantom has been developed for routine performance testing of SPECT systems in situations where expensive commercial phantoms may not be available. The phantom was based on a design with six universal syringe hubs set in a frame to support a circular array of six glass-blown spheres of different sizes. The frame was then placed into a water-filled CT abdomen phantom and scanned with a triple-head camera system (Philips IRIX™, USA). Comparison was made with a commercially available phantom (Deluxe Jaszczak phantom). Whereas the commercial phantom demonstrates cold-spot resolution, an important advantage of the sphere test phantom was that hot-spot resolution could be easily measured using almost half (370 MBq) of the activity recommended for use with the commercial phantom. Results showed that the contrast increased non-linearly with sphere volume and radionuclide concentration. The phantom was found to be suitable as an inexpensive option for daily performance tests.

  1. Gold nanoshell photomodification under a single-nanosecond laser pulse accompanied by color-shifting and bubble formation phenomena

    International Nuclear Information System (INIS)

    Akchurin, Garif; Khlebtsov, Boris; Akchurin, Georgy; Tuchin, Valery; Zharov, Vladimir; Khlebtsov, Nikolai

    2008-01-01

    Laser-nanoparticle interaction is crucial for biomedical applications of lasers and nanotechnology to the treatment of cancer or pathogenic microorganisms. We report on the first observation of laser-induced coloring of a gold nanoshell solution after a one-nanosecond pulse and an unprecedentedly low bubble-formation threshold (bubble formation being the main mechanism of cancer cell killing) at a laser fluence of about 4 mJ cm⁻², which is safe for normal tissue. Specifically, silica/gold nanoshell (140/15 nm) suspensions were irradiated with a single 4 ns (1064 nm) or 8 ns (900 nm) laser pulse at fluences ranging from 0.1 mJ cm⁻² to 50 J cm⁻². Red coloring of the solution was observed by the naked eye and confirmed by blue-shifting of the absorption spectrum maximum from the initial 900 nm for nanoshells to 530 nm, typical of conventional colloidal gold nanospheres. TEM images revealed significant photomodification of the nanoparticles, including complete fragmentation of gold shells, changes in silica core structure, formation of small 20-30 nm isolated spherical gold nanoparticles, gold nanoshells with central holes, and large and small spherical gold particles attached to a silica core. Time-resolved monitoring of bubble formation with the photothermal (PT) thermolens technique demonstrated that after application of a single 8 ns pulse at fluences of 5-10 mJ cm⁻² and higher, the next pulse did not produce any PT response, indicating a dramatic decrease in absorption because of gold shell modification. We also observed a dependence of the bubble expansion time on the laser energy, with an unusually fast PT signal rise (on a ∼3.5 ns scale at 0.2 J cm⁻²). The relevance of the observed phenomena to medical applications is discussed, including a simple visual color test for laser-nanoparticle interaction.

  2. Impact of environmental colored noise in single-species population dynamics

    Science.gov (United States)

    Spanio, Tommaso; Hidalgo, Jorge; Muñoz, Miguel A.

    2017-10-01

    Variability in external conditions has important consequences for the dynamics and the organization of biological systems. In many cases, the characteristic timescale of environmental changes, as well as their correlations, plays a fundamental role in the way living systems adapt and respond to them. A proper mathematical approach to understanding population dynamics thus requires approaches more refined than, e.g., simple white-noise approximations. To shed further light on this problem, in this paper we propose a unifying framework based on different analytical and numerical tools available to deal with "colored" environmental noise. In particular, we employ a "unified colored noise approximation" to map the original problem into an effective one with white noise, and then we apply a standard path-integral approach to gain analytical understanding. For the sake of specificity, we present our approach using as a guideline a variation of the contact process (which can also be seen as a birth-death process of the Malthus-Verhulst class) where the propagation or birth rate varies stochastically in time. Our approach allows us to tackle in a systematic manner some of the relevant questions concerning population dynamics under environmental variability, such as determining the stationary population density, establishing the conditions under which a population may become extinct, and estimating extinction times. We focus on the emerging phase diagram and its possible phase transitions, underlining how these are affected by the presence of environmental noise time-correlations.
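
    As an illustration of the kind of model discussed (an assumption, not the paper's exact formulation), the sketch below integrates a Malthus-Verhulst-type population whose birth rate is modulated by Ornstein-Uhlenbeck colored noise with correlation time tau, using a simple Euler-Maruyama scheme; all parameter values are placeholders.

    ```python
    import numpy as np

    def simulate(T=1000.0, dt=0.01, b0=1.0, d=0.5, K=100.0, tau=5.0, sigma=0.3, seed=0):
        """Return the population trajectory under a colored-noise-modulated birth rate."""
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        n, eta = 10.0, 0.0                      # initial population and noise value
        traj = np.empty(n_steps)
        for i in range(n_steps):
            # Ornstein-Uhlenbeck step: correlated fluctuation of the birth rate around b0
            eta += (-eta / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
            birth = max(b0 + eta, 0.0)
            n += (birth * n * (1.0 - n / K) - d * n) * dt
            n = max(n, 0.0)                     # extinction is absorbing
            traj[i] = n
        return traj
    ```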

  3. Comparison of digital intraoral scanners by single-image capture system and full-color movie system.

    Science.gov (United States)

    Yamamoto, Meguru; Kataoka, Yu; Manabe, Atsufumi

    2017-01-01

    The use of dental computer-aided design/computer-aided manufacturing (CAD/CAM) restoration is rapidly increasing. This study was performed to evaluate the marginal and internal cement thickness and the adhesive gap of internal cavities restored with CAD/CAM materials, using two digital impression acquisition methods and micro-computed tomography. Images obtained by a single-image acquisition system (Bluecam Ver. 4.0) and a full-color video acquisition system (Omnicam Ver. 4.2) were assigned to the BL and OM groups, respectively. Silicone impressions were prepared from an ISO-standard metal mold, and CEREC Stone BC and New Fuji Rock IMP were used to create the working models (n=20) of the BL and OM groups (n=10 per group), respectively. Individual inlays were designed in a conventional manner using the designated software, and all restorations were prepared using CEREC inLab MC XL. These were assembled with the corresponding working models and used for measurement, and the level of fit was examined by three-dimensional analysis based on micro-computed tomography. Significant differences in marginal and internal cement thickness and adhesive gap spacing were found between the OM and BL groups. The full-color movie capture system appears to be a better option for restorations than the single-image capture system.

  4. Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression.

    Science.gov (United States)

    Liu, Yingying; Sowmya, Arcot; Khamis, Heba

    2018-01-01

    Manually measured anthropometric quantities are used in many applications, including human malnutrition assessment. Training is required to collect anthropometric measurements manually, which is not ideal in resource-constrained environments. Photogrammetric methods have been gaining attention in recent years due to the availability and affordability of digital cameras. The primary goal is to demonstrate that height and mid-upper arm circumference (MUAC), both indicators of malnutrition, can be accurately estimated by applying linear regression to distance measurements from photographs of participants taken from five views, and to determine the optimal view combinations. A secondary goal is to observe the effect on estimate error of two approaches which reduce the complexity of the setup, the computational requirements and the expertise required of the observer. Thirty-one participants (11 female, 20 male; 18-37 years) were photographed from five views. Distances were computed using both camera calibration and reference object techniques from manually annotated photos. To estimate height, linear regression was applied to the distance between the top of the participant's head and the floor, as well as to the height of a bounding box enclosing the participant's silhouette, which eliminates the need to identify the floor. To estimate MUAC, linear regression was applied to the mid-upper arm width. Estimates were computed for all view combinations and performance was compared to other photogrammetric methods from the literature: a linear distance method for height and shape models for MUAC. The mean absolute difference (MAD) between the linear regression estimates and the manual measurements was smaller compared with the other methods. For the optimal view combinations (smallest MAD), the technical error of measurement and the coefficient of reliability also indicate that the linear regression methods are more reliable. The optimal view combination was the front and side views. When estimating height by linear
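
    A minimal sketch of the estimation step described above: fit a linear model mapping a photo-derived distance (for example, the head-to-floor distance or the silhouette bounding-box height) to the manually measured heights of training participants, then apply it to new photographs; the data below are placeholders, not the study's measurements.

    ```python
    import numpy as np

    photo_distance = np.array([1.58, 1.70, 1.64, 1.81, 1.75])    # placeholder photo-derived distances (m)
    measured_height = np.array([1.60, 1.73, 1.66, 1.84, 1.78])   # placeholder manual measurements (m)

    slope, intercept = np.polyfit(photo_distance, measured_height, deg=1)

    def estimate_height(distance):
        """Predict height (m) from a new photo-derived distance."""
        return slope * distance + intercept
    ```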

  5. Encyclopedia of color science and technology

    CERN Document Server

    2016-01-01

    The Encyclopedia of Color Science and Technology provides an authoritative single source for understanding and applying the concepts of color to all fields of science and technology, including artistic and historical aspects of color. Many topics are discussed in this timely reference, including an introduction to the science of color, and entries on the physics, chemistry and perception of color. Color is described as it relates to optical phenomena of color and continues on through colorants and materials used to modulate color and also to human vision of color. The measurement of color is provided as is colorimetry, color spaces, color difference metrics, color appearance models, color order systems and cognitive color. Other topics discussed include industrial color, color imaging, capturing color, displaying color and printing color. Descriptions of color encodings, color management, processing color and applications relating to color synthesis for computer graphics are included in this work. The Encyclo...

  6. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    aerobot or a spacecraft onto a comet or asteroid. A system of 20 of these penetrators could be designed and built in a 1- to 2-kg mass envelope. Possible future modifications of the camera penetrators, such as the addition of a chemical spray device, would allow the study of simple chemical reactions of reagents sprayed at the landing site and looking at the color changes. Zoom lenses also could be added for future use.

  7. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  8. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or increased (hot) area in the radioactive distribution recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy in detecting smaller and deeply seated lesions, which otherwise may not be detected in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation, as triggered by ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended specially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICOND acquisition and processing computers. Added to the hardware is the ICON processing software, which allows total simultaneous acquisition and processing capabilities at the same operator's terminal. Video film and color printers are also provided. Together

  9. Water Detection Based on Color Variation

    Science.gov (United States)

    Rankin, Arturo L.

    2012-01-01

    This software has been designed to detect water bodies that are out in the open on cross-country terrain at close range (out to 30 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). This detector exploits the fact that the color variation across water bodies is generally larger and more uniform than that of other naturally occurring types of terrain, such as soil and vegetation. Non-traversable water bodies, such as large puddles, ponds, and lakes, are detected based on color variation, image intensity variance, image intensity gradient, size, and shape. At ranges beyond 20 meters, water bodies out in the open can be indirectly detected by detecting reflections of the sky below the horizon in color imagery. But at closer range, the color coming out of a water body dominates sky reflections, and the water cue from sky reflections is of marginal use. Since there may be times during UGV autonomous navigation when a water body does not come into a perception system's field of view until it is at close range, the ability to detect water bodies at close range is critical. Factors that influence the perceived color of a water body at close range are the amount and type of sediment in the water, the water's depth, and the angle of incidence to the water body. Developing a single model of the mixture ratio of light reflected off the water surface (to the camera) to light coming out of the water body (to the camera) for all water bodies would be fairly difficult. Instead, this software detects close water bodies based on local terrain features and the natural, uniform change in color that occurs across the surface from the leading edge to the trailing edge.
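
    An illustrative simplification (not the flight software): compute a per-pixel measure of local color variation from windowed channel standard deviations and flag pixels above a threshold as water candidates; the window size and threshold are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def color_variation_map(img, window=15):
        """img: H x W x 3 float array in [0, 1]; returns a per-pixel local color variation measure."""
        variation = np.zeros(img.shape[:2])
        for c in range(3):
            band = img[..., c]
            mean = uniform_filter(band, size=window)
            mean_sq = uniform_filter(band * band, size=window)
            variation += np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))  # windowed std per channel
        return variation / 3.0

    def water_candidates(img, threshold=0.08):
        """Boolean mask of pixels whose local color variation exceeds the threshold."""
        return color_variation_map(img) > threshold
    ```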

  10. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D . Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
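
    A minimal sketch of the rigid surface-matching step, assuming corresponding 3-D points have already been extracted from the photo-based model and the MRI-based head reconstruction; the Kabsch algorithm then gives the least-squares rotation and translation between the two point sets (a generic formulation, not necessarily what janus3D implements).

    ```python
    import numpy as np

    def rigid_align(source, target):
        """source, target: N x 3 arrays of corresponding points; returns R (3x3), t (3,) with target ~ R @ source + t."""
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)                     # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c
        return R, t
    ```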

  11. Prediction of left main or 3-vessel disease using myocardial perfusion reserve on dynamic thallium-201 single-photon emission computed tomography with a semiconductor gamma camera.

    Science.gov (United States)

    Shiraishi, Shinya; Sakamoto, Fumi; Tsuda, Noriko; Yoshida, Morikatsu; Tomiguchi, Seiji; Utsunomiya, Daisuke; Ogawa, Hisao; Yamashita, Yasuyuki

    2015-01-01

    Myocardial perfusion imaging (MPI) may fail to detect balanced ischemia. We evaluated myocardial perfusion reserve (MPR) using thallium-201 (201Tl) dynamic single-photon emission computed tomography (SPECT) and a novel cadmium zinc telluride (CZT) camera for predicting 3-vessel or left main coronary artery disease (CAD). METHODS AND RESULTS: A total of 55 consecutive patients with suspected CAD underwent SPECT-MPI and coronary angiography. The MPR index was calculated using the standard 2-compartment kinetic model. We analyzed the utility of the MPR index, other SPECT findings, and various clinical variables. On multivariate analysis, the MPR index and a history of previous myocardial infarction (MI) predicted left main and 3-vessel disease. The area under the receiver operating characteristic curve was 0.81 for the MPR index, 0.699 for history of previous MI, and 0.86 for the MPR index plus history of previous MI. An MPR index ≤1.5 yielded the highest diagnostic accuracy. Sensitivity, specificity, and accuracy were 86%, 78%, and 80%, respectively, for the MPR index; 64%, 76%, and 73% for previous MI; and 57%, 93%, and 84% for the MPR index plus history of previous MI. Quantification of MPR using dynamic SPECT and a novel CZT camera may identify balanced ischemia in patients with left main or 3-vessel disease.

  12. Scale Closure in Upper Ocean Optical Properties: From Single Particles to Ocean Color

    Science.gov (United States)

    Green, Rebecca E.

    2002-01-01

    Predictions of chlorophyll concentration from satellite ocean color are an indicator of primary productivity, with implications for food webs, fisheries, and the global carbon cycle. Models describing the relationship between optical properties and chlorophyll do not account for much of the optical variability observed in natural waters, because of the presence of seawater constituents that do not covary with phytoplankton pigments. In order to understand variability in these models, the optical contributions of seawater constituents were investigated. A combination of Mie theory and flow cytometry was used to determine the diameter, complex refractive index, and optical cross-sections of individual particles. In New England continental shelf waters, eukaryotic phytoplankton were the main particle contributors to absorption and scattering. Minerals were the main contributor to backscattering (bb) in the spring, whereas in the summer both minerals and detritus contributed to bb. Synechococcus and heterotrophic bacteria were relatively unimportant optically. Eukaryotic phytoplankton absorption, dissolved absorption, and non-phytoplankton bb contributed approximately equally to seasonal differences in the spectral shape of remote sensing reflectance, Rrs. Differences between measurements of bb and Rrs and modeled values based on chlorophyll concentration were caused by higher dissolved absorption and non-phytoplankton bb than were assumed by the model.

  13. Efficient coupling of a single diamond color center to propagating plasmonic gap modes

    DEFF Research Database (Denmark)

    Kumar, Shailesh; Huck, Alexander; Andersen, Ulrik L

    2013-01-01

    We report on the coupling of a single nitrogen-vacancy (NV) center in a nanodiamond to the propagating gap mode of two chemically grown silver nanowires placed in parallel. The coupled NV-center nanowire system is made by manipulating nanodiamonds and nanowires with the tip of an atomic force microscope...

  14. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov [Pacific Northwest National Laboratory, Richland, Washington 99354 and College of Optical Sciences, The University of Arizona, Tucson, Arizona 85719 (United States); Frost, Sofia H. L.; Frayo, Shani L.; Kenoyer, Aimee L.; Santos, Erlinda; Jones, Jon C.; Orozco, Johnnie J. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 (United States); Green, Damian J.; Press, Oliver W.; Pagel, John M.; Sandmaier, Brenda M. [Fred Hutchinson Cancer Research Center, Seattle, Washington 98109 and Department of Medicine, University of Washington, Seattle, Washington 98195 (United States); Hamlin, Donald K.; Wilbur, D. Scott [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195 (United States); Fisher, Darrell R. [Dade Moeller Health Group, Richland, Washington 99354 (United States)

    2015-07-15

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area)

  15. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera.

    Science.gov (United States)

    Miller, Brian W; Frost, Sofia H L; Frayo, Shani L; Kenoyer, Aimee L; Santos, Erlinda; Jones, Jon C; Green, Damian J; Hamlin, Donald K; Wilbur, D Scott; Fisher, Darrell R; Orozco, Johnnie J; Press, Oliver W; Pagel, John M; Sandmaier, Brenda M

    2015-07-01

    Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50-80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was performed using a large-area iQID configuration.
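
    The event-by-event processing described above can be illustrated with a generic frame-based sketch: threshold each CCD/CMOS frame, group connected bright pixels into individual scintillation flashes, and record their centroids in list mode. The threshold and frame source are placeholders, and this is not the actual iQID processing pipeline.

    ```python
    import numpy as np
    from scipy import ndimage

    def locate_events(frame, threshold):
        """Detect isolated particle flashes in one camera frame and return their
        intensity-weighted centroids (one row/col pair per event)."""
        mask = frame > threshold                     # separate flashes from dark noise
        labels, n = ndimage.label(mask)              # group connected bright pixels
        if n == 0:
            return np.empty((0, 2))
        centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
        return np.asarray(centroids)

    # Accumulating centroids over many frames builds the spatial activity map;
    # event counts per unit area over the acquisition time give activity estimates.
    ```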

  16. Bixin and norbixin in human plasma: determination and study of the absorption of a single dose of Annatto food color.

    Science.gov (United States)

    Levy, L W; Regalado, E; Navarrete, S; Watkins, R H

    1997-09-01

    A procedure was developed for the detection and determination of bixin and norbixin in human plasma by reversed-phase HPLC with a sensitivity limit of 5 micrograms l⁻¹. A group of seven volunteers ingested a single dose of 1 ml of a commercial Annatto Food Color (16 mg of cis-bixin in soybean oil). The presence of bixin (cis and trans) and norbixin (cis and trans) was demonstrated in the plasma at average levels of 11.6, 10.1, 2.8 and 0 micrograms l⁻¹ of bixin and 48, 58, 53 and 29 micrograms l⁻¹ of norbixin after 2, 4, 6 and 8 h, respectively. Considerable individual variations were observed. Complete plasma clearance generally occurred for bixin by 8 h and for norbixin by 24 h after ingestion of cis-bixin.

  17. Temperature dependence of CIE-x,y color coordinates in YAG:Ce single crystal phosphor

    Czech Academy of Sciences Publication Activity Database

    Rejman, M.; Babin, Vladimir; Kučerková, Romana; Nikl, Martin

    2017-01-01

    Vol. 187, Jul (2017), pp. 20-25. ISSN 0022-2313. R&D Projects: GA TA ČR TA04010135. Institutional support: RVO:68378271. Keywords: YAG:Ce * single-crystal * simulation * energy level lifetime * white LED * CIE * temperature dependence. Subject RIV: BM - Solid Matter Physics; Magnetism. OECD field: Condensed matter physics (including formerly solid state physics, supercond.). Impact factor: 2.686, year: 2016

  18. Single neurons with both form/color differential responses and saccade-related responses in the nonretinotopic pulvinar of the behaving macaque monkey.

    Science.gov (United States)

    Benevento, L A; Port, J D

    1995-01-01

    The nonretinotopic portion of the macaque pulvinar complex is interconnected with the occipitoparietal and occipitotemporal transcortical visual systems, where information about the location and motion of a visual object or its form and color is modulated by eye movements and attention. We recorded from single cells in and about the border of the dorsal portion of the lateral pulvinar and the adjacent medial pulvinar of awake behaving Macaca mulatta in order to determine how the properties of these two functionally dichotomous cortical systems were represented. We found a class of pulvinar neurons that responded differentially to ten different patterns or broadband wavelengths (colors). Thirty-four percent of cells tested responded to the presentation of at least one of the pattern or color stimuli. These cells often discharged to several of the patterns or colors, but responded best to only one or two of them, and 86% were found to have statistically significant pattern and/or color preferences. Pattern/color preferential cells had an average latency of 79.1 +/- 46.0 ms (range 31-186 ms), responding well before most inferotemporal cortical cell responses. Visually guided and memory-guided saccade tasks showed that 58% of pattern/color preferential cells also had saccade-related properties, e.g. directional presaccadic and postsaccadic discharges, and inhibition of activity during the saccade. In the pulvinar, the mean presaccadic response latency was earlier, and the mean postsaccadic response latency was later, than those reported for parietal cortex. We also discovered that the strength of response to patterns or colors changed depending upon the behavioral setting. In comparison to trials in which the monkey fixated dead ahead during passive presentations of pattern and color stimuli, 92% of the cells showed attenuated responses to the same passive presentation of patterns and colors during fixation when these trials were interleaved with trials which also

  19. Quantitative Single-Particle Digital Autoradiography with α-Particle Emitters for Targeted Radionuclide Therapy using the iQID Camera

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Brian W.; Frost, Sophia; Frayo, Shani; Kenoyer, Aimee L.; Santos, E. B.; Jones, Jon C.; Green, Damian J.; Hamlin, Donald K.; Wilbur, D. Scott; Fisher, Darrell R.; Orozco, Johnnie J.; Press, Oliver W.; Pagel, John M.; Sandmaier, B. M.

    2015-07-01

    Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with alpha emitters may inactivate targeted cells with minimal radiation damage to surrounding tissues. For accurate dosimetry in alpha-RIT, tools are needed to visualize and quantify the radioactivity distribution and absorbed dose to targeted and non-targeted cells, especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, iQID (ionizing-radiation Quantum Imaging Detector), for use in alpha-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection technology that images and identifies charged-particle and gamma-ray/X-ray emissions spatially and temporally on an event-by-event basis. It employs recent advances in CCD/CMOS cameras and computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, we evaluated this system's characteristics for alpha particle imaging including measurements of spatial resolution and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ~20 μm full width at half maximum (FWHM) and the alpha particle background was measured at a rate of (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was performed using a large-area iQID configuration (ø 11.5 cm)

  20. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the current state of engineering is described that permits not only good localization but also energy discrimination. A multiwire proportional chamber filled with bromotrifluoromethane is connected in series behind the usual vacuum image intensifier. The signals are localized by means of a delay line, and the energy is determined by a pulse-height discriminator. The setup and mode of operation are explained with the aid of drawings and circuit diagrams. (ORU)

  1. Phenolic Composition and Color of Single Cultivar Young Red Wines Made with Mencia and Alicante-Bouschet Grapes in AOC Valdeorras (Galicia, NW Spain)

    Directory of Open Access Journals (Sweden)

    Eugenio Revilla

    2016-07-01

    Single cultivar wines made with two different red grape cultivars from AOC Valdeorras (Galicia, NW Spain), Mencia and Alicante Bouschet, were studied with the aim of determining their color and phenolic composition. Two sets of analyses were made on 30 wine samples of the 2014 vintage, after malolactic fermentation took place, to evaluate several physicochemical characteristics of these wines related to color and polyphenols. Several parameters related to color and the general phenolic composition of the wines (total phenols index, color intensity, hue, total anthocyans, total anthocyanins, colored anthocyanins, chemical age index, and total tannins) were determined by UV-VIS spectrophotometry. Those analyses revealed that Alicante Bouschet wines presented, in general, a higher content of polyphenols and a more intense color than Mencia wines. Using HPLC-DAD, five anthocyanin monoglucosides and nine acylated anthocyanins were identified in both types of wine; each type of wine showed a distinctive anthocyanin fingerprint, as Alicante Bouschet wines contained a higher proportion of cyanidin-derived anthocyanins. Multivariate statistical studies were performed on both datasets to explore relationships among variables and among samples, as sketched below. These studies revealed relationships among several of the variables considered, and were able to group the samples into two different classes using principal component analysis (PCA).
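
    A minimal sketch of the multivariate step mentioned above: standardize the measured variables for the 30 samples and project them onto the first two principal components. The file and column names are hypothetical.

    ```python
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    # Hypothetical table: one row per wine, one column per measured variable
    # (total phenols index, color intensity, hue, anthocyanin fractions, tannins, ...).
    wines = pd.read_csv("valdeorras_2014_wines.csv", index_col="sample_id")
    cultivar = wines.pop("cultivar")   # "Mencia" or "Alicante Bouschet"

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(wines))

    # Plotting scores[:, 0] against scores[:, 1], colored by cultivar, shows whether
    # the samples separate into the two classes reported in the study.
    ```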

  2. Ultraviolet/violet dual-color electroluminescence based on n-ZnO single crystal/p-GaN direct-contact light-emitting diode

    International Nuclear Information System (INIS)

    Li, Songzhan; Lin, Wenwen; Fang, Guojia; Huang, Feng; Huang, Huihui; Long, Hao; Mo, Xiaoming; Wang, Haoning; Guan, Wenjie; Zhao, Xingzhong

    2013-01-01

    We have fabricated a fully transparent ultraviolet (UV)/violet dual-color electroluminescence (EL) device based on an n-ZnO single crystal and p-GaN via a simple direct-contact method. The device presents dual-color EL under forward and reverse biases: an intense violet emission centered at 400 nm from ZnO and a sharp UV emission peaked at 365 nm from GaN, respectively. The origin of the dual-color emission is explained in terms of energy band theory and the transmission spectra of the ZnO single crystal and p-GaN. -- Highlights: ► A fully transparent LED based on an n-ZnO SC and p-GaN is fabricated via the direct-contact method. ► The n-ZnO SC/p-GaN device shows UV/violet dual-color emission under electrical pumping. ► The device presents a violet emission at 400 nm and a UV emission at 365 nm under forward and reverse biases. ► The EL of the dual-color device displays good stability and reproducibility.

  3. Online color monitoring

    Science.gov (United States)

    Massen, Robert C.

    1999-09-01

    Monitoring color in the production line requires remotely observing moving and not-aligned objects with, in general, complex surface features: multicolored, textured, non-flat, showing highlights and shadows. We discuss the use of color cameras and associated color image processing technologies for what we call 'imaging colorimetry.' This is a 2-step procedure which first uses color for segmentation and for finding Regions-of-Interest on the moving objects and then uses cluster-based color image processing for computing color deviations relative to previously trained references. This colorimetry is much more a measurement of the aesthetic consistency of the visual appearance of a product than the traditional measurement of a more physically defined mean color vector difference. We show how traditional non-imaging colorimetry loses most of this aesthetic information due to the computation of a mean color vector or mean color vector difference, by averaging over the sensor's field-of-view. A large number of industrial applications are presented where complex inspection tasks have been solved based on this approach. The expansion to higher feature-space dimensions based on the 'multisensorial camera' concept gives an outlook to future developments.
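
    Reduced to its simplest form, the second step described above compares the color of a segmented Region-of-Interest with a previously trained reference. The sketch below uses a plain Euclidean distance on mean RGB values as a stand-in; a production system would more likely work with clusters in a device-independent space such as CIELAB.

    ```python
    import numpy as np

    def color_deviation(roi_pixels, reference_color):
        """Mean color of a Region-of-Interest versus a trained reference color.
        roi_pixels: (N, 3) pixels already segmented from the moving object.
        reference_color: (3,) mean color learned during training."""
        mean_color = np.asarray(roi_pixels, dtype=float).mean(axis=0)
        return float(np.linalg.norm(mean_color - np.asarray(reference_color, dtype=float)))

    # A deviation above a trained tolerance flags the product as off-color.
    ```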

  4. Portable digital lock-in instrument to determine chemical constituents with single-color absorption measurements for Global Health Initiatives

    Science.gov (United States)

    Vacas-Jacques, Paulino; Linnes, Jacqueline; Young, Anna; Gerrard, Victoria; Gomez-Marquez, Jose

    2014-03-01

    Innovations in international health require the use of state-of-the-art technology to enable clinical chemistry for diagnostics of bodily fluids. We propose the implementation of a portable and affordable lock-in amplifier-based instrument that employs digital technology to perform biochemical diagnostics on blood, urine, and other fluids. The digital instrument is composed of a light source and optoelectronic sensor, lock-in detection electronics, a microcontroller unit, and user interface components working from either a power supply or batteries. The instrument performs lock-in detection provided that three conditions are met. First, the optoelectronic signal of interest needs to be encoded in the envelope of an amplitude-modulated waveform. Second, the reference signal required in the demodulation channel has to be frequency and phase locked with respect to the optoelectronic carrier signal. Third, the reference signal should be conditioned appropriately. We present three approaches to condition the signal appropriately: high-pass filtering the reference signal, precise offset tuning of the reference level by low-pass filtering, and using a voltage divider network. We assess the performance of the lock-in instrument by comparing it to a benchmark device and by determining protein concentration with single-color absorption measurements. We validate the concentration values obtained with the proposed instrument using chemical concentration measurements. Finally, we demonstrate that accurate retrieval of phase information can be achieved by using the same instrument.
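
    The three conditions listed above are exactly those needed for synchronous (lock-in) demodulation. A generic digital lock-in sketch is shown below; it is not the instrument's firmware, and the sampling rate, carrier frequency, and filter settings are assumptions for the example.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def lock_in(signal, fs, f_ref, phase=0.0, cutoff_hz=1.0):
        """Dual-phase digital lock-in: multiply the sampled detector output by
        in-phase and quadrature references locked to the carrier, low-pass filter,
        and recover the amplitude and phase of the modulation envelope."""
        t = np.arange(signal.size) / fs
        ref_i = np.cos(2 * np.pi * f_ref * t + phase)
        ref_q = np.sin(2 * np.pi * f_ref * t + phase)
        b, a = butter(4, cutoff_hz / (fs / 2))      # low-pass rejects the 2*f_ref terms
        x = filtfilt(b, a, signal * ref_i)
        y = filtfilt(b, a, signal * ref_q)
        amplitude = 2.0 * np.sqrt(x**2 + y**2)      # envelope carrying the optical signal
        return amplitude, np.arctan2(y, x)
    ```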

  5. Single attosecond pulse generation in an orthogonally polarized two-color laser field combined with a static electric field

    International Nuclear Information System (INIS)

    Xia Changlong; Zhang Gangtai; Wu Jie; Liu Xueshen

    2010-01-01

    We theoretically investigate high-order harmonic generation and single attosecond pulse generation in an orthogonally polarized two-color laser field, which is synthesized by a mid-infrared (IR) pulse (12.5 fs, 2000 nm) in the y component and a much weaker (12 fs, 800 nm) pulse in the x component. We find that the width of the harmonic plateau can be extended when a static electric field is added in the y component. We also investigate the emission time of harmonics in terms of a time-frequency analysis to illustrate the physical mechanism of high-order harmonic generation. We calculate the ionization rate using the Ammosov-Delone-Krainov model and interpret the variation of harmonic intensity for different static electric field strengths. When the ratio of strengths of the static and the y-component laser fields is 0.1, a continuous harmonic spectrum is formed from 220 to 420 eV. By superposing a properly selected range of the harmonic spectrum from 300 to 350 eV, an isolated attosecond pulse with a duration of about 75 as is obtained, which is nearly linearly polarized.

  6. Portable digital lock-in instrument to determine chemical constituents with single-color absorption measurements for Global Health Initiatives

    International Nuclear Information System (INIS)

    Vacas-Jacques, Paulino; Linnes, Jacqueline; Young, Anna; Gomez-Marquez, Jose; Gerrard, Victoria

    2014-01-01

    Innovations in international health require the use of state-of-the-art technology to enable clinical chemistry for diagnostics of bodily fluids. We propose the implementation of a portable and affordable lock-in amplifier-based instrument that employs digital technology to perform biochemical diagnostics on blood, urine, and other fluids. The digital instrument is composed of a light source and optoelectronic sensor, lock-in detection electronics, a microcontroller unit, and user interface components working from either a power supply or batteries. The instrument performs lock-in detection provided that three conditions are met. First, the optoelectronic signal of interest needs to be encoded in the envelope of an amplitude-modulated waveform. Second, the reference signal required in the demodulation channel has to be frequency and phase locked with respect to the optoelectronic carrier signal. Third, the reference signal should be conditioned appropriately. We present three approaches to condition the signal appropriately: high-pass filtering the reference signal, precise offset tuning of the reference level by low-pass filtering, and using a voltage divider network. We assess the performance of the lock-in instrument by comparing it to a benchmark device and by determining protein concentration with single-color absorption measurements. We validate the concentration values obtained with the proposed instrument using chemical concentration measurements. Finally, we demonstrate that accurate retrieval of phase information can be achieved by using the same instrument.

  7. Optimization of Single- and Dual-Color Immunofluorescence Protocols for Formalin-Fixed, Paraffin-Embedded Archival Tissues.

    Science.gov (United States)

    Kajimura, Junko; Ito, Reiko; Manley, Nancy R; Hale, Laura P

    2016-02-01

    Performance of immunofluorescence staining on archival formalin-fixed paraffin-embedded human tissues is generally not considered to be feasible, primarily due to problems with tissue quality and autofluorescence. We report the development and application of procedures that allowed for the study of a unique archive of thymus tissues derived from autopsies of individuals exposed to atomic bomb radiation in Hiroshima, Japan in 1945. Multiple independent treatments were used to minimize autofluorescence and maximize fluorescent antibody signals. Treatments with NH3/EtOH and Sudan Black B were particularly useful in decreasing autofluorescent moieties present in the tissue. Deconvolution microscopy was used to further enhance the signal-to-noise ratios. Together, these techniques provide high-quality single- and dual-color fluorescent images with low background and high contrast from paraffin blocks of thymus tissue that were prepared up to 60 years ago. The resulting high-quality images allow the application of a variety of image analyses to thymus tissues that previously were not accessible. Whereas the procedures presented remain to be tested for other tissue types and archival conditions, the approach described may facilitate greater utilization of older paraffin block archives for modern immunofluorescence studies. © 2016 The Histochemical Society.

  8. Rapid radiotracer washout from the heart: effect on image quality in SPECT performed with a single-headed gamma camera system.

    Science.gov (United States)

    O'Connor, M K; Cho, D S

    1992-06-01

    Technetium-99m-teboroxime demonstrates high extraction and rapid washout from the myocardium. To evaluate the feasibility of performing SPECT with this agent using a single-headed gamma camera system, a series of phantom studies were performed that simulated varying degrees of washout from normal and "ischemic" regions of the myocardium. In the absence of ischemic regions, short axis profiles were relatively unaffected by washout of less than 50% of activity over the duration of a SPECT acquisition. However, significant corruption of the SPECT data was observed when large (greater than a factor of 2) differences existed in the washout of activity from normal and "ischemic" myocardium. This corruption was observed with 30%-40% washout of activity from normal regions of the heart. Based on published washout rates, these results indicate that clinical studies with 99mTc-teboroxime may need to be completed within 2-4 min in order to prevent degradation of image quality due to differential washout effects.

  9. Development of a new Xe-133 single dose multi-step method (SDMM) for muscle blood flow measurement using gamma camera

    International Nuclear Information System (INIS)

    Bunko, Hisashi; Seto, Mikito; Taki, Junichi

    1985-01-01

    In order to measure the muscle blood flow (MBF) during exercise (Ex), a new Xe-133 single dose multi-step method (SDMM) for leg MBF measurement before, during and after Ex using a gamma camera was developed. Theoretically, if the activities of Xe-133 in the muscle immediately before and after Ex are known, then the mean MBF during Ex can be calculated. In SDMM, these activities are corrected through a correction formula using the time delays between the end of data acquisition (DA) at rest (R1) and the beginning of Ex (TAB), and between the end of Ex and the beginning of the DA after Ex (R2) (TDA). The validity of the SDMM and the MBF response to mild and heavy Ex were evaluated in 11 normal volunteers. Ex MBF calculated from 5 and 2.5 min DA (5 sec/frame) both at R1 and R2 were highly correlated (r=.996). Ex MBF by SDMM and direct measurement by fixed-leg exercise were also highly correlated (r=.999). Reproducibility of the R1 and Ex MBF was excellent (r=.999). The highest MBF was seen in GCM on mild walking Ex and in VLM on heavy squatting Ex. After mild Ex, MBF rapidly returned to normal. After heavy Ex, MBF remained high in VLM. In conclusion, SDMM is a simple and accurate method for evaluation of the dynamic MBF response to exercise. SDMM is also applicable to the field of sports medicine. (author)

  10. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and the photocathodes. The phototubes can therefore be positioned as close to the scintillator as possible, which reduces distortion in the field of view and improves spatial resolution compared with conventional planar-photocathode gamma cameras.
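
    The position signals described above are conventionally derived by weighting each phototube output by the tube's location (Anger logic). A minimal sketch under that assumption is given below, with a summed-signal window standing in for pulse-height (energy) discrimination; the window limits are illustrative.

    ```python
    import numpy as np

    def anger_position(pmt_signals, pmt_xy, energy_window=(0.8, 1.2)):
        """Classic Anger-logic estimate of one scintillation event position.
        pmt_signals: (N,) pulse heights from the N phototubes for the event.
        pmt_xy: (N, 2) phototube center coordinates."""
        s = np.asarray(pmt_signals, dtype=float)
        total = s.sum()
        if not (energy_window[0] <= total <= energy_window[1]):
            return None                              # event rejected by the energy window
        return (s[:, None] * np.asarray(pmt_xy, dtype=float)).sum(axis=0) / total
    ```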

  11. Localization of protein-protein interactions among three fluorescent proteins in a single living cell: three-color FRET microscopy

    Science.gov (United States)

    Sun, Yuansheng; Booker, Cynthia F.; Day, Richard N.; Periasamy, Ammasi

    2009-02-01

    Förster resonance energy transfer (FRET) methodology has been used for over 30 years to localize protein-protein interactions in living specimens. The cloning and modification of various visible fluorescent proteins (FPs) has generated a variety of new probes that can be used as FRET pairs to investigate protein associations in living cells. However, the spectral cross-talk between FRET donor and acceptor channels has been a major limitation to FRET microscopy. Many investigators have developed different ways to eliminate the bleedthrough signals in the FRET channel for one donor and one acceptor. We developed a novel FRET microscopy method for studying interactions among three chromophores: three-color FRET microscopy. We generated a genetic construct that directly links the three FPs (monomeric teal FP (mTFP), Venus, and tandem dimer Tomato (tdTomato)) and demonstrated the occurrence of mutually dependent energy transfers among the three FPs. When expressed in cells and excited with the 458 nm laser line, the mTFP-Venus-tdTomato fusion proteins yielded parallel (mTFP to Venus and mTFP to tdTomato) and sequential (mTFP to Venus and then to tdTomato) energy transfer signals. To quantify the FRET signals in the three-FP system in a single living cell, we developed an algorithm to remove all the spectral cross-talk components and also to separate different FRET signals at the same emission channel using the laser scanning spectral imaging and linear unmixing techniques on the Zeiss 510 META system. Our results were confirmed with fluorescence lifetime measurements and using acceptor photobleaching FRET microscopy.
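
    Once the reference emission spectra of the three FPs are known, the linear unmixing step mentioned above reduces to an ordinary least-squares problem per pixel of the spectral image. A generic sketch follows; the spectra arrays are hypothetical and this is not the microscope vendor's implementation.

    ```python
    import numpy as np

    def unmix(pixel_spectrum, reference_spectra):
        """Linear spectral unmixing of one pixel from a lambda stack.
        pixel_spectrum: (n_channels,) measured intensities across emission bins.
        reference_spectra: (n_channels, n_components) columns hold normalized
        reference emission spectra (e.g., mTFP, Venus, tdTomato).
        Returns abundance coefficients clipped to be non-negative."""
        coeffs, *_ = np.linalg.lstsq(reference_spectra, pixel_spectrum, rcond=None)
        return np.clip(coeffs, 0.0, None)
    ```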

  12. An equalised global graphical model-based approach for multi-camera object tracking

    OpenAIRE

    Chen, Weihua; Cao, Lijun; Chen, Xiaotang; Huang, Kaiqi

    2015-01-01

    Non-overlapping multi-camera visual object tracking typically consists of two steps: single-camera object tracking and inter-camera object tracking. Most tracking methods focus on single-camera object tracking, which happens in the same scene, while for real surveillance scenes, inter-camera object tracking is needed and single-camera tracking methods cannot work effectively. In this paper, we try to improve the overall multi-camera object tracking performance by a global graph model with...

  13. Color constancy by characterization of illumination chromaticity

    Science.gov (United States)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly will result in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and low memory requirements make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used in order to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
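
    A minimal sketch of the general idea (estimate the illumination chromaticity, constrain it to a characterized range for the sensor, then correct with a diagonal/von Kries transform) is shown below. The gray-world estimator and the range limits are illustrative stand-ins, not the algorithm proposed in the paper.

    ```python
    import numpy as np

    def correct_white_balance(image, r_range=(0.25, 0.45), b_range=(0.20, 0.45)):
        """image: (H, W, 3) linear RGB. Estimate the illuminant chromaticity with a
        gray-world assumption, clip it to the characterized chromaticity range of
        the sensor, and apply a diagonal (von Kries) correction."""
        img = image.astype(np.float64)
        mean_rgb = img.reshape(-1, 3).mean(axis=0)
        chroma = mean_rgb / mean_rgb.sum()            # (r, g, b) chromaticity estimate
        r = np.clip(chroma[0], *r_range)              # keep the estimate inside the
        b = np.clip(chroma[2], *b_range)              # characterized illuminant range
        g = 1.0 - r - b
        gains = g / np.array([r, g, b])               # green gain normalized to 1
        return np.clip(img * gains, 0.0, None)
    ```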

  14. Color to black-and-white converter

    Science.gov (United States)

    Perry, W. E.

    1977-01-01

    A lanthanum-modified lead zirconate titanate ceramic plate, when sandwiched between a pair of conventional light polarizers, forms an electrically controlled converter for a television camera. The assembly can be used with a camera at a remote site to enable the camera to transmit a color or black-and-white signal on command.

  15. Dynamic simulation of color blindness for studying color vision requirements in practice

    NARCIS (Netherlands)

    Lucassen, M.P.; Alferdinck, J.W.A.M.

    2006-01-01

    We report on a dynamic simulation of defective color vision. Using an RGB video camera connected to a PC or laptop, our software translates the captured and displayed RGB colors into modified RGB values that simulate the color appearance for a person with a color deficiency. Usually, the
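
    The RGB-to-RGB translation described above is typically a per-pixel linear transform. The structural sketch below uses an identity placeholder matrix; for a real simulation it must be replaced with coefficients from a published color-vision-deficiency model (e.g., Brettel et al. or Machado et al.) for the chosen deficiency type, and display gamma is ignored for brevity.

    ```python
    import numpy as np

    # Placeholder: substitute the 3x3 coefficients of a published deficiency
    # simulation model (e.g., protanopia) here.
    SIMULATION_MATRIX = np.eye(3)

    def simulate_deficiency(frame_rgb):
        """Per-pixel linear transform of a captured video frame into RGB values
        approximating the appearance for a color-deficient observer.
        frame_rgb: (H, W, 3) array with values in [0, 255]."""
        rgb = frame_rgb.astype(np.float64) / 255.0
        simulated = rgb @ SIMULATION_MATRIX.T          # apply the model to every pixel
        return (np.clip(simulated, 0.0, 1.0) * 255.0).astype(np.uint8)
    ```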

  16. Color-induced graph colorings

    CERN Document Server

    Zhang, Ping

    2015-01-01

    A comprehensive treatment of color-induced graph colorings is presented in this book, emphasizing vertex colorings induced by edge colorings. The coloring concepts described in this book depend not only on the property required of the initial edge coloring and the kind of objects serving as colors, but also on the property demanded of the vertex coloring produced. For each edge coloring introduced, background for the concept is provided, followed by a presentation of results and open questions dealing with this topic. While the edge colorings discussed can be either proper or unrestricted, the resulting vertex colorings are either proper colorings or rainbow colorings. This gives rise to a discussion of irregular colorings, strong colorings, modular colorings, edge-graceful colorings, twin edge colorings and binomial colorings. Since many of the concepts described in this book are relatively recent, the audience for this book is primarily mathematicians interested in learning some new areas of graph colorings...

  17. Two-color spectroscopy of UV excited ssDNA complex with a single-wall nanotube probe: Fast nucleobase autoionization mechanism

    OpenAIRE

    Ignatova, Tetyana; Balaeff, Alexander; Zheng, Ming; Blades, Michael; Stoeckl, Peter; Rotkin, Slava V.

    2015-01-01

    DNA autoionization is a fundamental process wherein UV-photoexcited nucleobases dissipate energy by charge transfer to the environment without undergoing chemical damage. Here, single-wall carbon nanotubes (SWNT) are explored as a photoluminescent reporter for studying the mechanism and rates of DNA autoionization. Two-color photoluminescence spectroscopy allows separate photoexcitation of the DNA and the SWNTs in the UV and visible range, respectively. A strong SWNT photoluminescence quenchi...

  18. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  19. Natural Colorants: Food Colorants from Natural Sources.

    Science.gov (United States)

    Sigurdson, Gregory T; Tang, Peipei; Giusti, M Mónica

    2017-02-28

    The color of food is often associated with the flavor, safety, and nutritional value of the product. Synthetic food colorants have been used because of their high stability and low cost. However, consumer perception and demand have driven the replacement of synthetic colorants with naturally derived alternatives. Natural pigment applications can be limited by lower stability, weaker tinctorial strength, interactions with food ingredients, and inability to match desired hues. Therefore, no single naturally derived colorant can serve as a universal alternative for a specified synthetic colorant in all applications. This review summarizes major environmental and biological sources for natural colorants as well as nature-identical counterparts. Chemical characteristics of prevalent pigments, including anthocyanins, carotenoids, betalains, and chlorophylls, are described. The possible applications and hues (warm, cool, and achromatic) of currently used natural pigments, such as anthocyanins as red and blue colorants, and possible future alternatives, such as purple violacein and red pyranoanthocyanins, are also discussed.

  20. The PLATO camera

    Science.gov (United States)

    Laubier, D.; Bodin, P.; Pasquier, H.; Fredon, S.; Levacher, P.; Vola, P.; Buey, T.; Bernardi, P.

    2017-11-01

    PLATO (PLAnetary Transits and Oscillation of stars) is a candidate for the M3 Medium-size mission of the ESA Cosmic Vision programme (2015-2025 period). It is aimed at Earth-size and Earth-mass planet detection in the habitable zone of bright stars and their characterisation using the transit method and the asteroseismology of their host star. That means observing more than 100 000 stars brighter than magnitude 11, and more than 1 000 000 brighter than magnitude 13, with a long continuous observing time for 20 % of them (2 to 3 years). This yields a need for unusually long-term signal stability. For the brighter stars, the noise requirement is less than 34 ppm hr^(-1/2), from a frequency of 40 mHz down to 20 μHz, including all sources of noise such as the motion of the star images on the detectors and frequency beating. Those extremely tight requirements result in a payload consisting of 32 synchronised, high-aperture, wide-field-of-view cameras thermally regulated down to -80°C, whose data are combined to increase the signal-to-noise performance. They are split into 4 different subsets pointing in 4 directions to widen the total field of view; stars in the centre of that field of view are observed by all 32 cameras. Two extra cameras are used with color filters and provide pointing measurements to the spacecraft Attitude and Orbit Control System (AOCS) loop. The satellite is orbiting the Sun at the L2 Lagrange point. This paper presents the optical, electronic and electrical, thermal and mechanical designs devised to achieve those requirements, and the results from breadboards developed for the optics, the focal plane, the power supply and video electronics.

  1. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
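
    A sketch of the empirical mapping idea follows, using a simple least-squares color-correction matrix instead of the neural network described in the paper: fit the matrix from color-tile measurements made with the camera of interest against the corresponding reference values, then apply it to all subsequent frames. Function names are illustrative.

    ```python
    import numpy as np

    def fit_color_mapping(measured_tiles, reference_tiles):
        """Fit a 3x4 affine color-correction matrix M so that, for each tile,
        reference ~= M @ [r, g, b, 1]. Both inputs are (N, 3) mean tile colors."""
        X = np.hstack([measured_tiles, np.ones((measured_tiles.shape[0], 1))])
        M, *_ = np.linalg.lstsq(X, reference_tiles, rcond=None)   # shape (4, 3)
        return M.T                                                # shape (3, 4)

    def apply_mapping(M, image):
        """Map every pixel of an (H, W, 3) image into the reference color space."""
        h, w, _ = image.shape
        X = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
        return (X @ M.T).reshape(h, w, 3)
    ```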

  2. Colorism/Neo-Colorism

    Science.gov (United States)

    Snell, Joel

    2017-01-01

    There are numerous aspects of being non-Caucasian that may not be known by Whites. "Persons of color" refers to people who are African, South American, Native American, biracial, Asian, and others. The question is how these individuals feel about their color and facial characteristics. Eugene Robinson suggests that the future favorable color…

  3. Phase delaying the human circadian clock with a single light pulse and moderate delay of the sleep/dark episode: no influence of iris color.

    Science.gov (United States)

    Canton, Jillian L; Smith, Mark R; Choi, Ho-Sun; Eastman, Charmane I

    2009-07-17

    Light exposure in the late evening and nighttime and a delay of the sleep/dark episode can phase delay the circadian clock. This study assessed the size of the phase delay produced by a single light pulse combined with a moderate delay of the sleep/dark episode for one day. Because iris color or race has been reported to influence light-induced melatonin suppression, and we have recently reported racial differences in free-running circadian period and circadian phase shifting in response to light pulses, we also tested for differences in the magnitude of the phase delay in subjects with blue and brown irises. Subjects (blue-eyed n = 7; brown-eyed n = 6) maintained a regular sleep schedule for 1 week before coming to the laboratory for a baseline phase assessment, during which saliva was collected every 30 minutes to determine the time of the dim light melatonin onset (DLMO). Immediately following the baseline phase assessment, which ended 2 hours after baseline bedtime, subjects received a 2-hour bright light pulse (~4,000 lux). An 8-hour sleep episode followed the light pulse (i.e. was delayed 4 hours from baseline). A final phase assessment was conducted the subsequent night to determine the phase shift of the DLMO from the baseline to final phase assessment. Phase delays of the DLMO were compared in subjects with blue and brown irises. Iris color was also quantified from photographs using the three dimensions of red-green-blue color axes, as well as a lightness scale. These variables were correlated with phase shift of the DLMO, with the hypothesis that subjects with lighter irises would have larger phase delays. The average phase delay of the DLMO was -1.3 +/- 0.6 h, with a maximum delay of ~2 hours, and was similar for subjects with blue and brown irises. There were no significant correlations between any of the iris color variables and the magnitude of the phase delay. A single 2-hour bright light pulse combined with a moderate delay of the sleep/dark episode

  4. Phase delaying the human circadian clock with a single light pulse and moderate delay of the sleep/dark episode: no influence of iris color

    Directory of Open Access Journals (Sweden)

    Choi Ho-Sun

    2009-07-01

    Background: Light exposure in the late evening and nighttime and a delay of the sleep/dark episode can phase delay the circadian clock. This study assessed the size of the phase delay produced by a single light pulse combined with a moderate delay of the sleep/dark episode for one day. Because iris color or race has been reported to influence light-induced melatonin suppression, and we have recently reported racial differences in free-running circadian period and circadian phase shifting in response to light pulses, we also tested for differences in the magnitude of the phase delay in subjects with blue and brown irises. Methods: Subjects (blue-eyed n = 7; brown-eyed n = 6) maintained a regular sleep schedule for 1 week before coming to the laboratory for a baseline phase assessment, during which saliva was collected every 30 minutes to determine the time of the dim light melatonin onset (DLMO). Immediately following the baseline phase assessment, which ended 2 hours after baseline bedtime, subjects received a 2-hour bright light pulse (~4,000 lux). An 8-hour sleep episode followed the light pulse (i.e. was delayed 4 hours from baseline). A final phase assessment was conducted the subsequent night to determine the phase shift of the DLMO from the baseline to final phase assessment. Phase delays of the DLMO were compared in subjects with blue and brown irises. Iris color was also quantified from photographs using the three dimensions of red-green-blue color axes, as well as a lightness scale. These variables were correlated with phase shift of the DLMO, with the hypothesis that subjects with lighter irises would have larger phase delays. Results: The average phase delay of the DLMO was -1.3 ± 0.6 h, with a maximum delay of ~2 hours, and was similar for subjects with blue and brown irises. There were no significant correlations between any of the iris color variables and the magnitude of the phase delay. Conclusion: A single 2-hour bright light

  5. Color constancy in Japanese animation

    Science.gov (United States)

    Ichihara, Yasuyo G.

    2006-01-01

    In this study, we measure the colors used in Japanese animations. The results can be plotted in the CIE-xy color space, which clearly shows that the color system is not a natural appearance system but an imagined and artistic appearance system. Color constancy in human vision can distinguish skin and hair colors under moonlight from those under daylight. The human brain generates a match from the memorized color of an object under daylight viewing conditions to the color of the object in different viewing conditions. For example, Japanese people always perceive the color of the Rising Sun in the Japanese flag as red, even in a different viewing condition such as under moonlight. Color images captured by a camera cannot reproduce those human perceptions. However, Japanese animation colorists succeeded in painting the effects of color constancy not only under moonlight but also added memory-matching colors. They aim to create a greater impact on viewers' perceptions by using the effect of the memory-matching colors. In this paper, we propose the Imagined Japanese Animation Color System. This system in art is currently a subject of research in Japan. Its importance is that it could also provide an explanation of how the human brain perceives the same color under different viewing conditions.

  6. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective k...

  7. Mars surface context cameras past, present, and future

    Science.gov (United States)

    Gunn, M. D.; Cousins, C. R.

    2016-04-01

    Mars has been the focus of robotic space exploration since the 1960s, in which time there have been over 40 missions, some successful, some not. Camera systems have been a core component of all instrument payloads sent to the Martian surface, harnessing some combination of monochrome, color, multispectral, and stereo imagery. Together, these data sets provide the geological context to a mission, which over the decades has included the characterization and spatial mapping of geological units and associated stratigraphy, charting active surface processes such as dust devils and water ice sublimation, and imaging the robotic manipulation of samples via scoops (Viking), drills (Mars Science Laboratory (MSL) Curiosity), and grinders (Mars Exploration Rovers). Through the decades, science context imaging has remained an integral part of increasingly advanced analytical payloads, with continual advances in spatial and spectral resolution, radiometric and geometric calibration, and image analysis techniques. Mars context camera design has encompassed major technological shifts, from single photomultiplier tube detectors to megapixel charge-coupled devices, and from multichannel to Bayer-filter color imaging. Here we review the technological capability and evolution of science context imaging instrumentation resulting from successful surface missions to Mars, and those currently in development for planned future missions.

  8. Room temperature synthesis of ultra-small, near-unity single-sized lead halide perovskite quantum dots with wide color emission tunability, high color purity and high brightness

    Science.gov (United States)

    Peng, Lucheng; Geng, Jing; Ai, Lisha; Zhang, Ying; Xie, Renguo; Yang, Wensheng

    2016-08-01

    Phosphors with extremely narrow emission line widths, high brightness, and wide color emission tunability in the visible region are required for display and lighting applications, yet none has been reported in the literature so far. In the present study, single-sized lead halide perovskite (APbX3; A = CH3NH3 and Cs; X = Cl, Br, and I) nanocrystalline (NC) phosphors were achieved for the first time in a one-pot reaction at room temperature (25 °C). The size-dependent samples, which included four families of CsPbBr3 NCs and exhibited sharp excitonic absorption peaks and pure band gap emission, were directly obtained by simply varying the concentration of ligands. The optical spectrum can be continuously tuned over the entire UV-visible spectral region (360-610 nm) by preparing CsPbCl3, CsPbI3, and CsPb(Y/Br)3 (Y = Cl and I) NCs with the use of CsPbBr3 NCs as templates by anion exchange, while maintaining the size of the NCs and high quantum yields of up to 80%. Notably, an emission line width of 10-24 nm, which is completely consistent with that of their single particles, indicates the formation of single-sized NCs. The versatility of the synthetic strategy was validated by extending it to the synthesis of single-sized CH3NH3PbX3 NCs by simply replacing the cesium precursor with the CH3NH3X precursor.

  9. Rolling cycle amplification based single-color quantum dots–ruthenium complex assembling dyads for homogeneous and highly selective detection of DNA

    Energy Technology Data Exchange (ETDEWEB)

    Su, Chen; Liu, Yufei; Ye, Tai; Xiang, Xia; Ji, Xinghu; He, Zhike, E-mail: zhkhe@whu.edu.cn

    2015-01-01

    Graphical abstract: A universal, label-free, homogeneous, highly sensitive, and selective fluorescent biosensor for DNA detection is developed by using rolling-circle amplification (RCA) based single-color quantum dots–ruthenium complex (QDs–Ru) assembling dyads. - Highlights: • The single-color QDs–Ru assembling dyads were applied in a homogeneous DNA assay. • This biosensor exhibited high selectivity against base-mismatched sequences. • This biosensor could serve as a universal platform for the detection of ssDNA. • This sensor could be used to detect the target in human serum samples. • This DNA sensor had good selectivity under the interference of other dsDNA. - Abstract: In this work, a new, label-free, homogeneous, highly sensitive, and selective fluorescent biosensor for DNA detection is developed by using rolling-circle amplification (RCA) based single-color quantum dots–ruthenium complex (QDs–Ru) assembling dyads. This strategy includes three steps: (1) the target DNA initiates the RCA reaction and generates linear RCA products; (2) the complementary DNA hybridizes with the RCA products to form long double-strand DNA (dsDNA); (3) [Ru(phen)₂(dppx)]²⁺ (dppx = 7,8-dimethyldipyrido[3,2-a:2′,3′-c]phenanthroline) intercalates into the long dsDNA with strong fluorescence emission. Due to its strong binding propensity to the long dsDNA, [Ru(phen)₂(dppx)]²⁺ is removed from the surface of the QDs, restoring the fluorescence of the QDs, which had been quenched by [Ru(phen)₂(dppx)]²⁺ through a photoinduced electron transfer process, and this is overlaid with the fluorescence of the dsDNA-bound Ru(II) polypyridyl complex (Ru-dsDNA). Thus, high fluorescence intensity is observed and is related to the concentration of the target. This sensor exhibits not only high sensitivity for hepatitis B virus (HBV) ssDNA with a low detection limit (0.5 pM), but also excellent selectivity in the complex matrix. Moreover

  10. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify that their safety interlock shuts down the camera and pan-and-tilt unit inside the tank vapor space upon loss of purge pressure, and that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  11. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  12. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  13. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services include, for example, two shielded color camera systems for use under water and for close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimetre) or cracks and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  14. Those Nifty Digital Cameras!

    Science.gov (United States)

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  15. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  16. Single-Nucleotide Polymorphism Genotyping of exoS in Pseudomonas aeruginosa Using Dual-Color Fluorescence Hybridization and Magnetic Separation.

    Science.gov (United States)

    Tang, Yongjun; Ali, Zeeshan; Dai, Jianguo; Liu, Xiaolong; Wu, Yanqi; Chen, Zhu; He, Nongyue; Li, Song; Wang, Lijun

    2018-01-01

    The Pseudomonas aeruginosa exoS gene contains important replacement (non-synonymous) single-nucleotide polymorphism (SNP) loci, of which mutations at loci 162 (G162A) and 434 (G434C) greatly affect virulence. The present study aimed to develop an SNP-based classification method for the exoS loci (G162A and G434C), using magnetic enrichment polymerase chain reaction, magnetic separation, and dual-color fluorescence, to provide a technical basis for understanding T3SS genotypic variation. The two SNP loci in 3 P. aeruginosa standard strains, ATCC27853, ATCC9027, and CMCC10104, were analyzed using this method. The two SNP loci of all these strains were found to be of the wild-type subtype. G values were greater than 0.8 and I values were greater than 3; hence, the classification yielded statistically significant results. In addition, the G162A and G434C SNP loci in 21 clinical isolates were analyzed using this method for monitoring clinical mutations. At the G162A and G434C SNP loci, 57.1% and 80.9% of isolates were of the wild-type subtype; 23.8% and 14.3%, the mutation subtype; and 9.5% and 4.8%, the heterozygous subtype, respectively. In summary, SNP genotyping of loci G162A and G434C in exoS was established using magnetic separation and dual-color fluorescence hybridization, and the method was optimized.

  17. Multiple-color optical activation, silencing, and desynchronization of neural activity, with single-spike temporal resolution.

    Directory of Open Access Journals (Sweden)

    Xue Han

    Full Text Available The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons, in a temporally precise and rapidly reversible fashion. Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), for allowing neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomonas pharaonis (hereafter abbreviated Halo) to genetically-specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain, and quantitative bioengineering of excitable cells.

  18. Relational Teaching with Black Boys: Strategies for Learning at a Single-Sex Middle School for Boys of Color

    Science.gov (United States)

    Nelson, Joseph Derrick

    2016-01-01

    Background/Context: Positive teacher-student relationships are critical for Black boys' learning across single-sex and coeducational environments. Limited attention to these relationships by school professionals is rooted in deficit-oriented conceptions of boyhood and Black masculinity. The popular message of deficiency and pathology is clear:…

  19. Influence of restorative materials on color of implant-supported single crowns in esthetic zone: A spectrophotometric evaluation

    DEFF Research Database (Denmark)

    M., Peng; W.-J., Zhao; M., Hosseini

    2017-01-01

    Restorations of 98 implant-supported single crowns in anterior maxillary area were divided into 5 groups: zirconia abutment, titanium abutment, and gold/gold hue abutment with zirconia coping, respectively, and titanium abutment with metal coping as well as gold/gold hue abutment with metal copin...

  20. The Relationship between Correlates of Effective Schools and Social Emotional Learning within Single Gender Schools Serving Boys of Color

    Science.gov (United States)

    Green, Curt R.

    2013-01-01

    Urban school districts throughout the United States are creating single gender classrooms or schools to improve student achievements for their lowest performing subgroups (Noguera, 2009). It is hoped that separating the sexes will improve domains such as discipline, attendance and academic performance, while decreasing the dropout rate. If single…

  1. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  2. Direct measuring of single-cycle mid-IR light bullets path length in LiF by the laser coloration method

    Science.gov (United States)

    Chekalin, Sergey; Kompanets, Victor; Kuznetsov, Andrey; Dormidonov, Alexander; Kandidov, Valerii

    2017-10-01

    A colour-centre structure formed in a LiF crystal under filamentation of a femtosecond mid-IR laser pulse with a power slightly exceeding the critical power for self-focusing has been investigated experimentally and theoretically. A single-cycle light bullet was recorded for the first time by observing strictly periodic oscillations in the density of the color centers induced in an isotropic LiF crystal under filamentation of a laser beam with a wavelength tuned in the range from 2600 to 3900 nm; these oscillations are due to the periodic change of the light field amplitude in the light bullet formed during filamentation as it propagates in the dispersive medium. The light bullet path length was not more than one millimeter.

  3. Direct measuring of single-cycle mid-IR light bullets path length in LiF by the laser coloration method

    Directory of Open Access Journals (Sweden)

    Chekalin Sergey

    2017-01-01

    Full Text Available A colour-centre structure formed in a LiF crystal under filamentation of a femtosecond mid-IR laser pulse with a power slightly exceeding the critical power for self-focusing has been investigated experimentally and theoretically. A single-cycle light bullet was recorded for the first time by observing strictly periodic oscillations in the density of the color centers induced in an isotropic LiF crystal under filamentation of a laser beam with a wavelength tuned in the range from 2600 to 3900 nm; these oscillations are due to the periodic change of the light field amplitude in the light bullet formed during filamentation as it propagates in the dispersive medium. The light bullet path length was not more than one millimeter.

  4. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    Science.gov (United States)

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
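
    As background for the reconstruction problem discussed above, the sketch below (Python/NumPy) shows the most basic per-superpixel estimate of the linear Stokes parameters from a microgrid polarimeter. The 2x2 pixel layout and array names are illustrative assumptions; the paper's actual method additionally exploits correlations between the two color bands, which this minimal version does not.

      import numpy as np

      def stokes_from_microgrid(raw):
          """Estimate S0, S1, S2 from a microgrid polarimeter image.

          Assumes a repeating 2x2 superpixel layout (an illustrative choice):
              [  0 deg   45 deg ]
              [135 deg   90 deg ]
          raw: 2D array with even dimensions.
          """
          i0   = raw[0::2, 0::2].astype(float)
          i45  = raw[0::2, 1::2].astype(float)
          i135 = raw[1::2, 0::2].astype(float)
          i90  = raw[1::2, 1::2].astype(float)
          s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
          s1 = i0 - i90                         # horizontal vs. vertical
          s2 = i45 - i135                       # +45 vs. -45
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
          aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization
          return s0, s1, s2, dolp, aolp

      # Example with synthetic data
      rng = np.random.default_rng(0)
      frame = rng.integers(0, 4096, size=(128, 128))
      s0, s1, s2, dolp, aolp = stokes_from_microgrid(frame)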

  5. Coloration dependence in the thermoluminescence properties of the double doped NaCl single crystals under gamma irradiation

    International Nuclear Information System (INIS)

    Sanchez-Mejorada, G.; Gelover-Santiago, A.L.; Frias, D.

    2006-01-01

    In this work the behaviour of calcium- and manganese-doped NaCl single crystals under gamma irradiation is reported. Various single crystals of NaCl doped with Ca and Mn have been irradiated at different doses with ionising radiation. The production of defects has been correlated with the increase in the intensity of the thermoluminescent glow curve as a function of dose. The glow curve intensity as a function of dose shows the potential use of these materials as dosimeters. Optical properties of such crystals after irradiation with gamma rays have also been studied; the results have shown their potential as good detectors and optical memory storage devices. The creation of colour centres by photons with energy less than the band gap energy has also been detected in ns²-ion doped alkali halides. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  6. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to provide further insight into plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated using scalar diffraction theory, and the depth estimation is redescribed on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analysis methods based on geometric optics and physical optics is also shown in simulations. (paper)

  7. Quantifying the number of color centers in single fluorescent nanodiamonds by photon correlation spectroscopy and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain

    2009-01-01

    The number of negatively charged nitrogen-vacancy centers (N-V)⁻ in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V)⁻ fluorophores and simulating the probability distribution of their effective numbers (Nₑ), we found that the actual number (Nₐ) of the fluorophores is in linear correlation with Nₑ, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined Nₐ = 8±1 for 28 nm FND particles prepared by 3 MeV proton irradiation
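
    The relationship reported above can be illustrated with a short, hedged Python sketch: the standard antibunching estimate g2(0) ~ 1 - 1/N for N equal, independent emitters gives the effective number Nₑ, and the correction factors quoted in the abstract (1.8 for linearly, 1.2 for circularly polarized excitation) convert it to the actual number Nₐ. The g2(0) value in the usage lines is hypothetical, and the g2 relation is the textbook estimate, not necessarily the exact Monte Carlo model used by the authors.

      def effective_emitter_number(g2_zero):
          """Antibunching estimate: for N equal, independent emitters
          g2(0) ~ 1 - 1/N, so N_e = 1 / (1 - g2(0))."""
          if not 0.0 <= g2_zero < 1.0:
              raise ValueError("expected 0 <= g2(0) < 1 for a finite emitter count")
          return 1.0 / (1.0 - g2_zero)

      def actual_emitter_number(n_effective, polarization="linear"):
          """Apply the correction factors reported in the abstract
          (1.8 for linearly, 1.2 for circularly polarized excitation)."""
          factor = {"linear": 1.8, "circular": 1.2}[polarization]
          return factor * n_effective

      n_e = effective_emitter_number(0.85)          # hypothetical measured g2(0)
      n_a = actual_emitter_number(n_e, "circular")
      print(n_e, n_a)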

  8. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm

    NARCIS (Netherlands)

    Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K.

    2011-01-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm
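
    For readers unfamiliar with the center-of-mass method mentioned above, the following Python sketch shows the basic centroid estimate on a TCSPC-style histogram. It is a simplified software illustration of the idea, not the hardware FPGA implementation of the paper, and it omits the finite-window correction that a real implementation needs.

      import numpy as np

      def cmm_lifetime(counts, bin_width, t_start=0.0):
          """Centre-of-mass lifetime estimate from a photon-arrival histogram.

          counts   : photon counts per time bin (1D array)
          bin_width: bin width in the same time units as the returned lifetime
          t_start  : time of the first bin relative to the decay onset

          For an ideal mono-exponential decay observed over a window much longer
          than the lifetime, the centroid of arrival times equals onset + tau;
          a finite window biases the estimate and needs a correction (not applied here).
          """
          counts = np.asarray(counts, dtype=float)
          t = t_start + bin_width * (np.arange(counts.size) + 0.5)  # bin centres
          return np.sum(counts * t) / np.sum(counts) - t_start

      # Synthetic check: 3.2 ns decay sampled over a window much longer than tau
      tau_true, h = 3.2, 0.05
      t = h * (np.arange(2000) + 0.5)
      hist = 1e4 * np.exp(-t / tau_true)
      print(cmm_lifetime(hist, h))   # close to 3.2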

  9. Color Algebras

    Science.gov (United States)

    Mulligan, Jeffrey B.

    2017-01-01

    A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. The difficulty addressed here is the fact that, because of metamerism, we cannot know with certainty the spectrum that produced a particular color solely on the basis of sensory data. Knowledge of the spectrum is not required to compute additive mixture of colors, but is critical for subtractive (multiplicative) mixture. Therefore, we cannot predict with certainty the multiplicative interactions between colors based solely on sensory data. There are two potential applications of a color algebra: first, to aid modeling phenomena of human visual perception, such as color constancy and transparency; and, second, to provide better models of the interactions of lights and surfaces for computer graphics rendering.
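
    The distinction drawn above can be made concrete with a small numerical sketch (Python, with toy Gaussian sensor sensitivities and spectra that are purely illustrative assumptions): additive mixture is computable from sensor responses alone because the responses are linear in the spectrum, whereas the multiplicative (subtractive) interaction with a filter depends on the full spectra and cannot be predicted from the responses alone.

      import numpy as np

      wl = np.linspace(400, 700, 31)                 # wavelength grid, nm

      def band(center, width=40):
          """Toy Gaussian spectral profile (illustrative assumption only)."""
          return np.exp(-0.5 * ((wl - center) / width) ** 2)

      sensors = np.stack([band(600), band(550), band(450)])   # R-, G-, B-like sensitivities

      def response(spectrum):
          """Sensor responses are linear functionals of the spectrum."""
          return sensors @ spectrum

      s1 = band(550, 30)
      s2 = 0.6 * band(520, 20) + 0.4 * band(610, 20)

      # Additive mixture: responses add exactly, so the spectra are not needed.
      print(np.allclose(response(s1 + s2), response(s1) + response(s2)))   # True

      # Multiplicative (subtractive) mixture with a filter: the result depends on
      # the spectra themselves, not just on response(s1) and response(filt).
      filt = band(520, 25)
      print(response(s1 * filt))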

  10. Ocean Color

    Data.gov (United States)

    National Aeronautics and Space Administration — Satellite-derived Ocean Color Data sets from historical and currently operational NASA and International Satellite missions including the NASA Coastal Zone Color...

  11. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  12. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features can also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which are still valid in presence capture cameras and discusses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be used for presence capture cameras.

  13. Color naming

    OpenAIRE

    Şahin, Ebru

    1998-01-01

    Ankara : Bilkent University, Department of Interior Architecture and Environmental Design and Institute of Fine Arts, 1998. Thesis (Ph.D) -- Bilkent University, 1998 Includes bibliographical references. In this study, the visual aspects of color and the neurophysiological processes involved in the phenomenon, the language of color, and color models are explained, in addition to a discussion of different ideas, orientations and previous works on the subject matter. Available color ...

  14. Single component Mn-doped perovskite-related CsPb2ClxBr5-x nanoplatelets with a record white light quantum yield of 49%: a new single layer color conversion material for light-emitting diodes.

    Science.gov (United States)

    Wu, Hao; Xu, Shuhong; Shao, Haibao; Li, Lang; Cui, Yiping; Wang, Chunlei

    2017-11-09

    Single-component nanocrystals (NCs) with white fluorescence are promising single-layer color conversion media for white light-emitting diodes (LEDs) because the undesirable changes in chromaticity coordinates that affect mixtures of blue, green and red emitting NCs can be avoided. However, their practical applications have been hindered by the relatively low photoluminescence (PL) quantum yield (QY) of traditional semiconductor NCs. Though Mn-doped perovskite nanocubes are potential candidates, they have been unable to realize white-light emission to date. In this work, the synthesis of Mn-doped 2D perovskite-related CsPb₂ClₓBr₅₋ₓ nanoplatelets with a pure white emission from a single component is reported. Unlike Mn-doped perovskite nanocubes with insufficient energy transfer efficiency, the Mn-doped 2D perovskite-related CsPb₂ClₓBr₅₋ₓ nanoplatelets reported here show a 10 times higher energy transfer efficiency from the perovskite to Mn impurities at the required emission wavelengths (about 450 nm for the perovskite emission and 580 nm for the Mn emission). As a result, the Mn/perovskite dual emission intensity ratio rises from less than 0.25 in the case of Mn-doped nanocubes to 0.99 in the current Mn-doped CsPb₂ClₓBr₅₋ₓ nanoplatelets, giving rise to a pure white light emission with Commission Internationale de l'Eclairage (CIE) color coordinates of (0.35, 0.32). More importantly, the highest PL QY for the Mn-doped perovskite-related CsPb₂ClₓBr₅₋ₓ nanoplatelets is up to 49%, which is a new record for single-component white-emitting nanocrystals. These highly luminescent nanoplatelets can be blended with polystyrene (PS) without changing the white light emission while dramatically improving perovskite stability. The perovskite-PS composites serve not only as a good solution-processable coating material for assembling LEDs, but also as a superior conversion material for achieving white-light LEDs with a single conversion layer.

  15. In Vitro Assessment of Single-Retainer Tooth-Colored Adhesively Fixed Partial Dentures for Posterior Teeth

    Directory of Open Access Journals (Sweden)

    Tissiana Bortolotto

    2010-01-01

    Full Text Available The purpose of this paper was to investigate, by means of marginal adaptation and fracture strength, three different types of single-retainer posterior fixed partial dentures (FPDs) for the replacement of a missing premolar. Two-unit cantilever FPDs were fabricated from composite resin, feldspathic porcelain, and fiber-reinforced composite resin. After luting procedures and margin polishing, all specimens were subjected to a scanning electron microscopic marginal evaluation both prior to and after thermomechanical loading with a custom-made chewing simulator comprising both thermal and mechanical loads. The results indicated that the highest score of marginal adaptation, that is, the score closest to 100% of continuous margins at the tooth-composite resin interface, was attained by the feldspathic porcelain group (88.1% median), followed by the fiber-reinforced composite resin group (78.9% median). The worst results were observed in the composite resin group (58.05% median). Fracture strength was higher for feldspathic porcelain (196 N median) than for resin composite (114.9 N median). All the fixed prostheses made of fiber-reinforced composite resin detached from the abutment teeth before fracturing, suggesting that the retainer's adhesive surface should be increased.

  16. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition.

    Science.gov (United States)

    Thomas, Jean-Baptiste; Lapray, Pierre-Jean; Gouton, Pierre; Clerc, Cédric

    2016-06-28

    Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was, until recently, no system available for a large span of the spectrum, i.e., visible and near-infrared acquisition. This article presents a prototype camera that captures seven visible and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.

  17. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Thomas

    2016-06-01

    Full Text Available Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was, until recently, no system available for a large span of the spectrum, i.e., visible and near-infrared acquisition. This article presents a prototype camera that captures seven visible and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.

  18. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  19. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
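
    As a point of reference for the tracking step named above, the following OpenCV sketch runs CamShift on a color back-projection for a single camera. The video path and the initial bounding box are hypothetical, and the decentralized mobile-agent handover between cameras described in the paper is only hinted at in a comment, not implemented.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("camera0.avi")          # hypothetical video source
      ok, frame = cap.read()
      x, y, w, h = 300, 200, 80, 160                 # assumed initial bounding box of the person

      # Hue histogram of the initial region serves as the appearance model
      roi = frame[y:y + h, x:x + w]
      hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
      mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
      hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
      cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

      term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
      window = (x, y, w, h)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
          rot_box, window = cv2.CamShift(backproj, window, term)
          # 'window' is the updated search window; in a multi-camera system, a weak
          # histogram response inside it could trigger handover to a neighbouring camera.
          frame = cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_box))], True, (0, 255, 0), 2)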

  20. PEOPLE REIDENTIFCATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. The work applies to an environment monitored by cameras. This application is important for modern security systems, in which identifying the presence of targets in the environment expands the capacity for action by security agents in real time and provides important parameters such as localization for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and synthetic images with noise.
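
    A minimal sketch of the color part of such a reidentification pipeline is given below (Python/NumPy). The joint RGB histogram signature, the Bhattacharyya similarity and the matching threshold are illustrative assumptions, and the interest-point features that the paper also uses are not included.

      import numpy as np

      def color_signature(image_rgb, bins=8):
          """Normalized joint RGB histogram used as a simple appearance signature."""
          hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                                   bins=(bins, bins, bins),
                                   range=((0, 256), (0, 256), (0, 256)))
          hist = hist.ravel()
          return hist / hist.sum()

      def bhattacharyya(h1, h2):
          """Similarity in [0, 1]; 1 means identical distributions."""
          return float(np.sum(np.sqrt(h1 * h2)))

      def reidentify(query_crop, gallery_crops, threshold=0.7):
          """Return the index of the best-matching gallery target, or None."""
          q = color_signature(query_crop)
          scores = [bhattacharyya(q, color_signature(g)) for g in gallery_crops]
          best = int(np.argmax(scores))
          return best if scores[best] >= threshold else None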

  1. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  2. Determining camera parameters for round glassware measurements

    International Nuclear Information System (INIS)

    Baldner, F O; Costa, P B; Leta, F R; Gomes, J F S; Filho, D M E S

    2015-01-01

    Nowadays there are many types of accessible cameras, including digital single-lens reflex ones. Although these cameras are not usually employed in machine vision applications, they can be an interesting choice. However, these cameras have many parameters to be chosen by the user, and it may be difficult to select the best of these in order to acquire images with the needed metrological quality. This paper proposes a methodology to select the set of parameters that will supply a machine vision system with images of the needed quality, considering the measurements required of laboratory glassware

  3. A Plenoptic Multi-Color Imaging Pyrometer

    Science.gov (United States)

    Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.

    2017-01-01

    A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different and adjustable wavelengths selected by the user. Images were obtained of different black- or grey-bodies including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images obtained of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 Kelvins between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.
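
    Although the instrument above relies on calibration against a furnace, the underlying principle of multi-color pyrometry can be illustrated with the standard two-color ratio formula in the Wien approximation (equal emissivity at both wavelengths assumed); this is a generic textbook relation, not the paper's calibration procedure, and the wavelengths and intensity ratio in the usage line are synthetic values chosen only to show the computation.

      import numpy as np

      C2 = 1.4388e-2   # second radiation constant, m*K

      def ratio_temperature(i1, i2, lam1, lam2):
          """Two-color (ratio) pyrometry in the Wien approximation, assuming the
          emissivity is the same at both wavelengths (greybody).

          i1, i2     : measured spectral intensities at lam1 and lam2
          lam1, lam2 : wavelengths in metres
          Returns the temperature in kelvin.
          """
          num = C2 * (1.0 / lam1 - 1.0 / lam2)
          den = 5.0 * np.log(lam2 / lam1) - np.log(i1 / i2)
          return num / den

      # Synthetic intensity ratio at 650 nm and 750 nm corresponding to roughly 1200 K
      print(ratio_temperature(0.175, 1.0, 650e-9, 750e-9))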

  4. Color Categories and Color Appearance

    Science.gov (United States)

    Webster, Michael A.; Kay, Paul

    2012-01-01

    We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue-green boundary, to test whether chromatic differences across the boundary…

  5. Bending strength measurements at different materials used for IR-cut filters in mobile camera devices

    Science.gov (United States)

    Dietrich, Volker; Hartmann, Peter; Kerz, Franca

    2015-03-01

    Digital cameras are present everywhere in our daily life. Science, business and private life cannot be imagined without digital images. The quality of an image is often rated by its color rendering. In order to obtain correct color recognition, a near-infrared cut (IRC) filter must be used to alter the sensitivity of the imaging sensor. Increasing requirements related to color balance and larger angles of incidence (AOI) have driven the use of new materials such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, devices have to withstand numerous environmental conditions during use and manufacturing, e.g. temperature change, humidity, and mechanical shock, as well as mechanical stress. The new materials behave differently with respect to all these aspects; they are usually more sensitive to these conditions, to a greater or lesser extent. Mechanical strength is an especially notable difference. Reliable strength data are of major interest for mobile phone camera applications. Since the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single number for the strength might be misleading if the test conditions and the samples are not described precisely. Therefore, Schott started investigations of the bending strength data of various IRC-filter materials. Different test methods were used to obtain statistically relevant data.

  6. Structural colors of the SiO2/polyethyleneimine thin films on poly(ethylene terephthalate) substrates

    International Nuclear Information System (INIS)

    Jia, Yanrong; Zhang, Yun; Zhou, Qiubao; Fan, Qinguo; Shao, Jianzhong

    2014-01-01

    The SiO₂/polyethyleneimine (PEI) films with structural colors on poly(ethylene terephthalate) (PET) substrates were fabricated by an electrostatic self-assembly method. The morphology of the films was characterized by scanning electron microscopy. The results showed that no distinguishable multilayered structure was found in the SiO₂/PEI films. The optical behavior of the films was investigated through color photos captured by a digital camera and color measurements with a multi-angle spectrophotometer. Different hue and brightness were observed at various viewing angles. The structural colors depended on the SiO₂ particle size and the number of assembly cycles. The mechanism of the structural colors generated by the assembled films is elucidated. The morphological structures and the optical properties proved that the SiO₂/PEI film fabricated on the PET substrate formed a homogeneous inorganic/organic SiO₂/PEI composite layer, and that the structural colors originated from single thin-film interference. - Highlights: • SiO₂/PEI thin films were electrostatically self-assembled on PET substrates. • The surface morphology and optical behavior of the films were investigated. • The structural colors varied with SiO₂ particle size and the number of assembly cycles. • Different hue and lightness of the SiO₂/PEI film were observed at various viewing angles. • The structural color of the SiO₂/PEI film originated from single thin-film interference
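
    The single thin-film interference mechanism invoked above can be sketched numerically with the standard single-layer Fresnel/Airy formula. The effective film index, thickness, substrate index and s-polarization choice below are illustrative assumptions, not values fitted to the films in the paper; the shift of the reflectance peak with angle mirrors the hue change observed at different viewing angles.

      import numpy as np

      def thin_film_reflectance(wavelength_nm, thickness_nm, n_film,
                                n_in=1.0, n_sub=1.57, angle_deg=0.0):
          """Reflectance of a single homogeneous film on a substrate (s-polarization),
          using the standard single-layer Airy/Fresnel formula.

          Assumed values: air incidence (n_in = 1.0), a PET-like substrate
          (n_sub ~ 1.57); n_film stands in for an effective index of the SiO2/PEI layer.
          """
          th0 = np.deg2rad(angle_deg)
          th1 = np.arcsin(n_in * np.sin(th0) / n_film)   # Snell's law inside the film
          th2 = np.arcsin(n_in * np.sin(th0) / n_sub)    # and in the substrate
          r01 = (n_in * np.cos(th0) - n_film * np.cos(th1)) / (n_in * np.cos(th0) + n_film * np.cos(th1))
          r12 = (n_film * np.cos(th1) - n_sub * np.cos(th2)) / (n_film * np.cos(th1) + n_sub * np.cos(th2))
          beta = 2 * np.pi * n_film * thickness_nm * np.cos(th1) / wavelength_nm
          r = (r01 + r12 * np.exp(2j * beta)) / (1 + r01 * r12 * np.exp(2j * beta))
          return np.abs(r) ** 2

      wl = np.linspace(400, 700, 301)
      for angle in (0, 30, 60):
          R = thin_film_reflectance(wl, thickness_nm=300.0, n_film=1.45, angle_deg=angle)
          print(angle, wl[np.argmax(R)])   # wavelength of the visible reflectance peak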

  7. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  8. Human preference for individual colors

    Science.gov (United States)

    Palmer, Stephen E.; Schloss, Karen B.

    2010-02-01

    Color preference is an important aspect of human behavior, but little is known about why people like some colors more than others. Recent results from the Berkeley Color Project (BCP) provide detailed measurements of preferences among 32 chromatic colors as well as other relevant aspects of color perception. We describe the fit of several color preference models, including ones based on cone outputs, color-emotion associations, and Palmer and Schloss's ecological valence theory. The ecological valence theory postulates that color serves an adaptive "steering" function, analogous to taste preferences, biasing organisms to approach advantageous objects and avoid disadvantageous ones. It predicts that people will tend to like colors to the extent that they like the objects that are characteristically that color, averaged over all such objects. The ecological valence theory predicts 80% of the variance in average color preference ratings from the Weighted Affective Valence Estimates (WAVEs) of correspondingly colored objects, much more variance than any of the other models. We also describe how hue preferences for single colors differ as a function of gender, expertise, culture, social institutions, and perceptual experience.

  9. Color Analysis

    Science.gov (United States)

    Wrolstad, Ronald E.; Smith, Daniel E.

    Color, flavor, and texture are the three principal quality attributes that determine food acceptance, and color has a far greater influence on our judgment than most of us appreciate. We use color to determine if a banana is at our preferred ripeness level, and a discolored meat product can warn us that the product may be spoiled. The marketing departments of our food corporations know that, for their customers, the color must be "right." The University of California Davis scorecard for wine quality designates four points out of 20, or 20% of the total score, for color and appearance (1). Food scientists who establish quality control specifications for their product are very aware of the importance of color and appearance. While subjective visual assessment and use of visual color standards are still used in the food industry, instrumental color measurements are extensively employed. Objective measurement of color is desirable for both research and industrial applications, and the ruggedness, stability, and ease of use of today's color measurement instruments have resulted in their widespread adoption.

  10. Multiplexed interfacial transduction of nucleic acid hybridization using a single color of immobilized quantum dot donor and two acceptors in fluorescence resonance energy transfer.

    Science.gov (United States)

    Algar, W Russ; Krull, Ulrich J

    2010-01-01

    A multiplexed solid-phase assay for the detection of nucleic acid hybridization was developed on the basis of a single color of immobilized CdSe/ZnS quantum dot (QD) as a donor in fluorescence resonance energy transfer (FRET). This work demonstrated that two channels of detection did not necessitate two different QD donors. Two probe oligonucleotides were coimmobilized on optical fibers modified with QDs, and a sandwich assay was used to associate the acceptor dyes with interfacial hybridization events without target labeling. FRET-sensitized acceptor emission provided an analytical signal that was concentration dependent down to 10 nM. Changes in the ratio of coimmobilized probe oligonucleotides were found to yield linear changes in the relative amounts of acceptor emission. These changes were compared to previous studies that used mixed films of two QD donors for two detection channels. The analysis indicated that probe dilution effects were primarily driven by changes in acceptor number density and that QD dilution effects or changes in mean donor-acceptor distance were secondary. Hybridization kinetics were found to be consistent between different ratios of coimmobilized probes, suggesting that hybridization in this type of system occurred via the accepted model for solid-phase hybridization, where adsorption and then diffusion at the solid interface drove hybridization.

  11. Processing of Color Words Activates Color Representations

    Science.gov (United States)

    Richter, Tobias; Zwaan, Rolf A.

    2009-01-01

    Two experiments were conducted to investigate whether color representations are routinely activated when color words are processed. Congruency effects of colors and color words were observed in both directions. Lexical decisions on color words were faster when preceding colors matched the color named by the word. Color-discrimination responses…

  12. Superresolution with the focused plenoptic camera

    Science.gov (United States)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  13. Colored operads

    CERN Document Server

    Yau, Donald

    2016-01-01

    The subject of this book is the theory of operads and colored operads, sometimes called symmetric multicategories. A (colored) operad is an abstract object which encodes operations with multiple inputs and one output and relations between such operations. The theory originated in the early 1970s in homotopy theory and quickly became very important in algebraic topology, algebra, algebraic geometry, and even theoretical physics (string theory). Topics covered include basic graph theory, basic category theory, colored operads, and algebras over colored operads. Free colored operads are discussed in complete detail and in full generality. The intended audience of this book includes students and researchers in mathematics and other sciences where operads and colored operads are used. The prerequisite for this book is minimal. Every major concept is thoroughly motivated. There are many graphical illustrations and about 150 exercises. This book can be used in a graduate course and for independent study.

  14. Color metallography

    International Nuclear Information System (INIS)

    Hasson, Raymond.

    1976-06-01

    After a short introduction explaining the reasons why color metallography was adopted, the various operations involved in this technique are described in turn and illustrated by colored photomicrographs. The sample preparation (cutting, covering) and surface preparation (trimming, polishing, finishing) are described briefly. The operations specific to color metallography are then detailed: revelation of the structure of polished surfaces, dye impregnation techniques, optical systems used in macrography, in micrography, different light sources used in microscopy, photographic methods [fr

  15. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as individually pre-oriented data. This enables a better post-calibration in order to detect variations in the individual camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna–IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first
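
    The oblique-sensor calibration described above is carried out through a dedicated calibration flight and aerial triangulation. Purely as a simpler illustration of what single-camera calibration estimates (focal length, principal point, distortion), the following OpenCV sketch calibrates one camera from chessboard images; the pattern size, square size and image folder are assumed values, and this is not the workflow of the paper.

      import glob
      import cv2
      import numpy as np

      # Planar chessboard with 9x6 inner corners and 25 mm squares (assumed values)
      pattern = (9, 6)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = 25.0 * np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points = [], []
      for path in glob.glob("calib/*.jpg"):          # hypothetical image folder
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              corners = cv2.cornerSubPix(
                  gray, corners, (11, 11), (-1, -1),
                  (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 1e-3))
              obj_points.append(objp)
              img_points.append(corners)

      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_points, img_points, gray.shape[::-1], None, None)
      print("RMS reprojection error:", rms)
      print("focal lengths (px):", K[0, 0], K[1, 1])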

  16. Multi-Frame Demosaicing and Super-Resolution of Color Images

    National Research Council Canada - National Science Library

    Farsiu, Sina; Elad, Michael; Milanfar, Peyman

    2006-01-01

    ...: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and as conventional color digital cameras suffer from both low-spatial resolution and color-filtering, it is reasonable to address...

  17. Educational Applications for Digital Cameras.

    Science.gov (United States)

    Cavanaugh, Terence; Cavanaugh, Catherine

    1997-01-01

    Discusses uses of digital cameras in education. Highlights include advantages and disadvantages, digital photography assignments and activities, camera features and operation, applications for digital images, accessory equipment, and comparisons between digital cameras and other digitizers. (AEF)

  18. Efficient SVM classifier based on color and texture region features for wound tissue images

    Science.gov (United States)

    Wannous, Hazem; Lucas, Yves; Treuillet, Sylvie

    2008-03-01

    This work is part of the ESCALE project dedicated to the design of a complete 3D and color wound assessment tool using a simple hand-held digital camera. The first part was concerned with the computation of a 3D model for wound measurements using uncalibrated vision techniques. This article presents the second part, which deals with color classification of wound tissues, a prior step before combining shape and color analysis in a single tool for real tissue surface measurements. We have adopted an original approach based on unsupervised segmentation prior to classification, to improve the robustness of the labelling stage. A database of different tissue types is first built; a simple but efficient color correction method is applied to reduce color shifts due to uncontrolled lighting conditions. A ground truth is provided by the fusion of several clinicians' manual labellings. Then, color and texture tissue descriptors are extracted from tissue regions of the image database for the learning stage of an SVM region classifier, with the aid of the ground truth. The output of this classifier provides a prediction model, later used to label the segmented regions of the database. Finally, we apply unsupervised color region segmentation to wound images and classify the tissue regions. Compared to the ground truth, automatic segmentation-driven classification provides an overlap score for tissue regions (66% to 88%) higher than that obtained by clinicians.
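
    A minimal sketch of the region-classification stage could look as follows with scikit-learn, assuming placeholder color/texture descriptors and tissue labels rather than the paper's actual feature set; it illustrates only the SVM learning and prediction steps, not the unsupervised segmentation or color correction.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # X: one row per segmented region, columns = color + texture descriptors
      # y: tissue label per region (e.g. 0 = granulation, 1 = slough, 2 = necrosis)
      # Hypothetical placeholder data; in practice these come from the labelled database.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))
      y = rng.integers(0, 3, size=300)

      clf = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced"))
      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

      clf.fit(X, y)                                  # prediction model used to label new regions
      new_region_features = rng.normal(size=(1, 20))
      print(clf.predict(new_region_features))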

  19. Computationally Efficient Locally Adaptive Demosaicing of Color Filter Array Images Using the Dual-Tree Complex Wavelet Packet Transform

    Science.gov (United States)

    Aelterman, Jan; Goossens, Bart; De Vylder, Jonas; Pižurica, Aleksandra; Philips, Wilfried

    2013-01-01

    Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. PMID:23671575
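
    For contrast with the locally adaptive wavelet method proposed in the paper, the sketch below implements the naive bilinear baseline it conceptually improves upon, for an assumed RGGB Bayer layout; this is a generic textbook interpolation, not the authors' algorithm, and it is prone to exactly the artifacts such methods aim to avoid.

      import numpy as np
      from scipy.ndimage import convolve

      def demosaic_bilinear(raw):
          """Bilinear demosaicing of an RGGB Bayer mosaic (raw: 2D float array).

          Returns an (H, W, 3) RGB image by normalized convolution: each missing
          color sample is the weighted average of its nearest same-color neighbours.
          """
          h, w = raw.shape
          r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
          b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
          g_mask = 1 - r_mask - b_mask

          k_rb = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5 ],
                           [0.25, 0.5, 0.25]])
          k_g  = np.array([[0.0,  0.25, 0.0 ],
                           [0.25, 1.0,  0.25],
                           [0.0,  0.25, 0.0 ]])

          def interp(mask, kernel):
              num = convolve(raw * mask, kernel, mode="mirror")
              den = convolve(mask, kernel, mode="mirror")
              return num / den

          return np.dstack([interp(r_mask, k_rb),
                            interp(g_mask, k_g),
                            interp(b_mask, k_rb)])

      raw = np.random.rand(8, 8)          # stand-in for a Bayer mosaic
      rgb = demosaic_bilinear(raw)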

  20. The laser scanning camera

    International Nuclear Information System (INIS)

    Jagger, M.

    The prototype development of a novel lensless camera is reported which utilises a laser beam scanned in a raster by means of orthogonal vibrating mirrors to illuminate the field of view. Laser light reflected from the scene is picked up by a conveniently sited photosensitive device and used to modulate the brightness of a T.V. display scanned in synchronism with the moving laser beam, hence producing a T.V. image of the scene. The camera, which needs no external lighting system, can act in a wide-angle mode or, by varying the size and position of the raster, can be made to zoom in to view in detail any object within a 40° overall viewing angle. The resolution and performance of the camera are described and a comparison of these aspects is made with conventional T.V. cameras. (author)

  1. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  2. Myocardial blood flow rate and capillary permeability for 99mTc-DTPA in patients with angiographically normal coronary arteries. Evaluation of the single-injection, residue detection method with intracoronary indicator bolus injection and the use of a mobile gamma camera

    DEFF Research Database (Denmark)

    Svendsen, Jesper Hastrup; Kelbaek, H; Efsen, F

    1994-01-01

    The aims of the present study were to quantitate myocardial perfusion and capillary permeability in the human heart by means of the single-injection, residue detection method using a mobile gamma camera. With this method, the intravascular mean transit time and the capillary extraction fraction (...

  3. Color metasurfaces in industrial perspective

    DEFF Research Database (Denmark)

    Højlund-Nielsen, Emil; Kristensen, Anders

    This doctoral thesis describes the utilization of color metasurfaces in an industrial perspective, where nano-scale textures and contingent post processing replace inks, dyes and pigments in plastic production. The concept of colors by structure arguably reduces the number of raw materials...... and production environments is developed. Second, the fundamental optical surface properties of dielectric materials are investigated within the framework of mass production applicability. Different colors can be realized using a single-step etching process by altering the nano-texture in high-index materials......, exemplified in silicon. However, only corresponding faint colors appear in polymeric materials. The concept of all-polymer pigment-free coloration seems somewhat restricted in relation to widespread industrial employment. Finally, a novel plasmon color technology for structural coloration in plastics...

  4. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  5. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used re...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  6. The PANOPTES project: discovering exoplanets with low-cost digital cameras

    Science.gov (United States)

    Guyon, Olivier; Walawender, Josh; Jovanovic, Nemanja; Butterfield, Mike; Gee, Wilfred T.; Mery, Rawad

    2014-07-01

    The Panoptic Astronomical Networked OPtical observatory for Transiting Exoplanets Survey (PANOPTES, www.projectpanoptes.org) project is aimed at identifying transiting exoplanets using a wide network of low-cost imaging units. Each unit consists of two commercial digital single lens reflex (DSLR) cameras equipped with 85mm F1.4 lenses, mounted on a small equatorial mount. At a few thousand dollars per unit, the system offers a uniquely advantageous survey efficiency for the cost, and can easily be assembled by amateur astronomers or students. Three generations of prototype units have so far been tested, and the baseline unit design, which optimizes robustness, simplicity and cost, is now ready to be duplicated. We describe the hardware and software for the PANOPTES project, focusing on key challenging aspects of the project. We show that obtaining high precision photometric measurements with commercial DSLR color cameras is possible, using a PSF-matching algorithm we developed for this project. On-sky tests show that percent-level photometric precision is achieved in 1 min with a single camera. We also discuss hardware choices aimed at optimizing system robustness while maintaining adequate cost. PANOPTES is both an outreach project and a scientifically compelling survey for transiting exoplanets. In its current phase, experienced PANOPTES members are deploying a limited number of units, acquiring the experience necessary to run the network. A much wider community will then be able to participate in the project, with schools and citizen scientists integrating their units into the network.
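    The record does not spell out the PSF-matching photometry itself, but the kind of measurement involved can be illustrated with a minimal differential-photometry sketch in Python. Everything below is an assumption for illustration: the RGGB Bayer layout, the star coordinates, and the aperture radii are hypothetical, and a real pipeline would also include dark/flat calibration and the PSF matching described in the record.

      import numpy as np

      def green_channel(bayer):
          """Average the two green sites of an assumed RGGB Bayer mosaic."""
          return 0.5 * (bayer[0::2, 1::2] + bayer[1::2, 0::2]).astype(float)

      def aperture_flux(img, x, y, r_ap=5, r_sky=(8, 12)):
          """Background-subtracted aperture flux around pixel (x, y)."""
          yy, xx = np.indices(img.shape)
          d = np.hypot(xx - x, yy - y)
          sky = np.median(img[(d >= r_sky[0]) & (d < r_sky[1])])
          ap = d < r_ap
          return float(img[ap].sum() - sky * ap.sum())

      # Hypothetical usage: flux of a target star relative to a comparison star.
      # bayer = load_raw_frame(...)   # raw loading depends on the camera format
      # g = green_channel(bayer)
      # rel_flux = aperture_flux(g, 512.3, 401.7) / aperture_flux(g, 620.1, 388.4)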

  7. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16 N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16 N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  8. Colored leptons

    International Nuclear Information System (INIS)

    Harari, H.

    1985-01-01

    If leptons are composite and if they contain colored preons, one expects the existence of heavy color-octet fermions with quantum numbers similar to those of ordinary leptons. Such a ''colored lepton'' should decay into a gluon and a lepton, yielding a unique experimental signature. Charged ''colored leptons'' probably have masses of the order of the compositeness scale Λ ≳ 1 TeV. They may be copiously produced at future multi-TeV e⁺e⁻, ep and hadron colliders. ''Colored neutrinos'' may have both Dirac and Majorana masses. They could be much lighter than Λ, possibly as light as 100 GeV or less. In such a case they should be readily produced at the CERN anti-pp collider, yielding spectacular monojet and dijet events. They may also be produced at LEP and HERA. (orig.)

  9. The WEBERSAT camera - An inexpensive earth imaging system

    Science.gov (United States)

    Jackson, Stephen; Raetzke, Jeffrey

    WEBERSAT is a 27 pound LEO satellite launched in 1990 into a 500 mile polar orbit. One of its payloads is a low cost CCD color camera system developed by engineering students at Weber State University. The camera is a modified Canon CI-10 with a 25 mm lens, automatic iris, and 780 x 490 pixel resolution. The iris range control potentiometer was made programmable; a 10.7 MHz digitization clock, fixed focus support, and solid tantalum capacitors were added. Camera output signals, composite video, red, green, blue, and the digitization clock are fed to a flash digitizer, where they are processed for storage in RAM. Camera control commands are stored and executed via the onboard computer. The CCD camera has successfully imaged meteorological features of the earth, land masses, and a number of astronomical objects.

  10. Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density

    Science.gov (United States)

    B. H. Bond; D. Earl Kline; Philip A. Araman

    2002-01-01

    Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak, (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...

  11. Improving the Quality of Color Colonoscopy Videos

    Directory of Open Access Journals (Sweden)

    Dahyot Rozenn

    2008-01-01

    Full Text Available Abstract Colonoscopy is currently one of the best methods to detect colorectal cancer. Nowadays, one of the widely used colonoscopes has a monochrome chipset recording the color components successively at 60 Hz, which are then merged into one color video stream. Misalignments of the channels occur each time the camera moves, and this artefact impedes both online visual inspection by doctors and offline computer analysis of the image data. We propose to correct this artefact by first equalizing the color channels and then performing a robust camera motion estimation and compensation.
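    The abstract only names the two steps (channel equalization, then robust motion estimation and compensation). As a rough illustration of the second step, and not the authors' algorithm, the following Python/OpenCV sketch aligns the blue and red channels of a frame to the green channel with a translation-only ECC estimate; a real implementation would use a more robust, possibly higher-order motion model.

      import cv2
      import numpy as np

      def align_channels(frame_bgr):
          """Align B and R to the G channel with a translation-only ECC estimate
          (a simplified stand-in for the paper's motion compensation)."""
          b, g, r = cv2.split(frame_bgr)
          criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)
          aligned = [g]
          for ch in (b, r):
              warp = np.eye(2, 3, dtype=np.float32)
              try:
                  _, warp = cv2.findTransformECC(g, ch, warp,
                                                 cv2.MOTION_TRANSLATION, criteria)
              except cv2.error:
                  pass  # keep the identity warp if ECC fails to converge
              aligned.append(cv2.warpAffine(
                  ch, warp, (ch.shape[1], ch.shape[0]),
                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
          g_a, b_a, r_a = aligned
          return cv2.merge([b_a, g_a, r_a])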

  12. The color lexicon of the Somali language.

    Science.gov (United States)

    Brown, Angela M; Isse, Abdirizak; Lindsey, Delwin T

    2016-01-01

    This empirical study had three goals: (a) to describe Somali color naming and its motifs, (b) to relate color naming by Somali informants to their color vision, and (c) to search for historical and demographic clues about the diversity of Somali color naming. Somali-speaking informants from Columbus, Ohio provided monolexemic color terms for 83 or 145 World Color Survey (WCS) color samples. Proximity analysis reduced the 103 color terms to the eight chromatic color meanings from the WCS plus black, white, and gray. Informants' data sets were grouped by spectral clustering analysis into four WCS color naming motifs named after the terms for the cool colors: (a) Green-Blue, (b) Grue (a single term meaning "green or blue"), (c) Gray, and (d) Dark. The results show that, first, the Somali language has about four motifs among its speakers. Second, individuals' color vision test results and their motifs were not correlated, suggesting that multiple motifs do not arise from individual variation in color vision. Last, the Somali color lexicon has changed over the past century. New color terms often came from the names of familiar colored objects, and informants' motifs were closely related to their ages and genders, suggesting that the diversity of color naming across speakers of Somali probably results from ongoing language change.

  13. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  14. THE DARK ENERGY CAMERA

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Honscheid, K. [Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Abbott, T. M. C.; Bonati, M. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Antonik, M.; Brooks, D. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Ballester, O.; Cardiel-Sas, L. [Institut de Física d’Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Barcelona (Spain); Beaufore, L. [Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Bernstein, R. A. [Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101 (United States); Bigelow, B.; Boprie, D. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Campa, J. [Centro de Investigaciones Energèticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Castander, F. J., E-mail: diehl@fnal.gov [Institut de Ciències de l’Espai, IEEC-CSIC, Campus UAB, Facultat de Ciències, Torre C5 par-2, E-08193 Bellaterra, Barcelona (Spain); Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  15. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  16. High-speed CCD camera at NAOC

    Science.gov (United States)

    Zhao, Zhaowang; Wang, Wei; Liu, Yangbin

    2006-06-01

    A high speed CCD camera has been completed at the National Astronomical Observatories of China (NAOC). A Kodak CCD was used in the camera. Two output ports are used to read out the CCD data, achieving a total speed of 60 Mpixels per second. The Kodak KAI-4021 image sensor is a high-performance 2K x 2K-pixel interline transfer device. The 7.4 μm square pixels with micro lenses provide high sensitivity, and the large full well capacity results in high dynamic range. The interline transfer structure provides high quality images and enables electronic shuttering for precise exposure control. The electronic shutter provides a method of precisely controlling the image exposure time without any mechanical components. The camera is controlled by a NIOS II embedded processor, Altera's second-generation soft-core embedded processor for FPGAs. The embedded processor gives the camera the flexibility to satisfy new observational requirements as they appear, and new special functions are easy to implement. Since the FPGA and other peripheral logic signals are triggered by a single master clock, the whole system is perfectly synchronized. This technique reduces the camera's noise dramatically.

  17. Color tejido

    OpenAIRE

    Rius Tormo, Palmira

    2010-01-01

    Poster presented at the IX Congreso Nacional del Color, Alicante, 29-30 June and 1-2 July 2010. The proposed exhibition has color as its central theme and shows the expressive possibilities that color brings to different materials. The 7 works presented seek aesthetic harmony and symbolic force.

  18. A Color-Opponency Based Biological Model for Color Constancy

    Directory of Open Access Journals (Sweden)

    Yongjie Li

    2011-05-01

    Full Text Available Color constancy is the ability of the human visual system to adaptively correct color-biased scenes under different illuminants. Most of the existing color constancy models are not physiologically plausible. Among the limited biological models, the great majority are Retinex and its variations, and only two or three models directly simulate the feature of color-opponency, and then only for the very earliest stages of the visual pathway, i.e., the single-opponent mechanisms involved at the levels of retinal ganglion cells and lateral geniculate nucleus (LGN) neurons. Considering the extensive physiological evidence supporting that both the single-opponent cells in retina and LGN and the double-opponent neurons in primary visual cortex (V1) are the building blocks for color constancy, in this study we construct a color-opponency based color constancy model by simulating the opponent fashions of both the single-opponent and double-opponent cells in a forward manner. As for the spatial structure of the receptive fields (RF), both the classical RF (CRF) center and the nonclassical RF (nCRF) surround are taken into account for all the cells. The proposed model was tested on several typical image databases commonly used for performance evaluation of color constancy methods, and exciting results were achieved.
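    The record gives only a verbal description of the model. The toy Python sketch below is not the published model; it merely illustrates the vocabulary: single-opponent channels, a center-surround difference-of-Gaussians as a crude stand-in for double-opponent receptive fields, and a von Kries style correction from the pooled residual.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def toy_opponent_constancy(img, sigma_center=1.0, sigma_surround=3.0):
          """Toy sketch, not the published model. img: float RGB image in [0, 1].
          Single-opponent channels are filtered with a difference-of-Gaussians
          (a rough double-opponent stand-in); the unbalanced residual is treated
          as an illuminant bias and divided out, von Kries style."""
          r, g, b = img[..., 0], img[..., 1], img[..., 2]
          rg = r - g                      # single-opponent R-G channel
          by = b - 0.5 * (r + g)          # single-opponent B-Y channel
          dog = lambda c: (gaussian_filter(c, sigma_center)
                           - gaussian_filter(c, sigma_surround))
          # residual left after removing the edge-like double-opponent response
          bias_rg = float(np.mean(rg - dog(rg)))
          bias_by = float(np.mean(by - dog(by)))
          gain = np.array([1.0 + bias_rg, 1.0, 1.0 + bias_by])
          return np.clip(img / np.maximum(gain, 1e-6), 0.0, 1.0)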

  19. [Constructing 3-dimensional colorized digital dental model assisted by digital photography].

    Science.gov (United States)

    Ye, Hong-qiang; Liu, Yu-shu; Liu, Yun-song; Ning, Jing; Zhao, Yi-jiao; Zhou, Yong-sheng

    2016-02-18

    To explore a method of constructing a universal 3-dimensional (3D) colorized digital dental model which can be displayed and edited in common 3D software (such as the Geomagic series), in order to improve the visual effect of the digital dental model in 3D software. The morphological data of teeth and gingivae were obtained by an intra-oral scanning system (3Shape TRIOS), constructing 3D digital dental models. The 3D digital dental models were exported as STL files. Meanwhile, referring to the accredited photography guide of the American Academy of Cosmetic Dentistry (AACD), five selected digital photographs of patients' teeth and gingivae were taken with a digital single-lens reflex camera (DSLR) with the same exposure parameters (except occlusal views) to capture the color data. In Geomagic Studio 2013, after the STL file of the 3D digital dental model was imported, digital photographs were projected on the 3D digital dental model with corresponding position and angle. The junctions of different photos were carefully trimmed to get continuous and natural color transitions. Then the 3D colorized digital dental model was constructed, which was exported as an OBJ file or a WRP file, a format specific to the Geomagic series software. For the purpose of evaluating the visual effect of the 3D colorized digital model, a rating scale on color simulation effect from the patients' point of view was used. Sixteen patients were recruited and their scores on colored and non-colored digital dental models were recorded. The data were analyzed using the McNemar-Bowker test in SPSS 20. A universal 3D colorized digital dental model with better color simulation was constructed based on intra-oral scanning and digital photography. For clinical application, the 3D colorized digital dental models, combined with 3D face images, were introduced into the 3D smile design of aesthetic rehabilitation, which could improve the patients' cognition of the esthetic digital design and virtual prosthetic effect. Universal 3D colorized

  20. Multispectral imaging using a stereo camera: concept, design and assessment

    Directory of Open Access Journals (Sweden)

    Mansouri Alamin

    2011-01-01

    Full Text Available Abstract This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The best pair of filters is selected from among readily available filters such that they modify the sensitivities of the two cameras to produce optimal estimation of spectral reflectance and/or color; these two filters are placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are then registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene is estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system overcomes the slow and complex acquisition process and the costliness of state-of-the-art multispectral imaging systems, opening the way to widespread applications.
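    The record does not state which estimator recovers reflectance from the six registered channel values. A common baseline, shown here only as an assumption and not as the paper's method, is a linear least-squares mapping trained on patches with known reflectances.

      import numpy as np

      def train_linear_estimator(C_train, R_train):
          """Least-squares matrix W such that R ≈ C @ W.
          C_train: (num_patches, 6) six-channel camera responses.
          R_train: (num_patches, n_bands) measured reflectances of the patches."""
          W, *_ = np.linalg.lstsq(C_train, R_train, rcond=None)
          return W

      def estimate_reflectance(camera_pixels, W):
          """camera_pixels: (..., 6) registered responses from the two filtered
          views; returns (..., n_bands) estimated spectral reflectances."""
          return camera_pixels @ W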

  1. Environmental Effects on Measurement Uncertainties of Time-of-Flight Cameras

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Aanæs, Henrik; Larsen, Rasmus

    2007-01-01

    In this paper the effect the environment has on the SwissRanger SR3000 Time-Of-Flight camera is investigated. The accuracy of this camera is highly affected by the scene it is pointed at, such as its reflective properties, color and gloss. The complexity of the scene also has considerable effects...... description of how a surface color intensity influences the depth measurement, and illustrate how multiple reflections influence the resulting depth measurement....

  2. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Fraser, Wesley C.; Brown, Michael E.; Glass, Florian

    2015-01-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in the optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a sufficiently broad difference in color at the two epochs that its colors span the full range of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  3. A combined GIS and stereo vision approach to identify building pixels in images and determine appropriate color terms

    Directory of Open Access Journals (Sweden)

    Philip James Bartie

    2011-05-01

    Full Text Available Color information is a useful attribute to include in a building's description to assist the listener in identifying the intended target. Often this information is only available as image data, and not readily accessible for use in constructing referring expressions for verbal communication. The method presented uses a GIS building polygon layer in conjunction with street-level captured imagery to provide a method to automatically filter foreground objects and select pixels which correspond to building facades. These selected pixels are then used to define the most appropriate color term for the building, and corresponding fuzzy color term histogram. The technique uses a single camera capturing images at a high frame rate, with the baseline distance between frames calculated from a GPS speed log. The expected distance from the camera to the building is measured from the polygon layer and refined from the calculated depth map, after which building pixels are selected. In addition significant foreground planar surfaces between the known road edge and building facade are identified as possible boundary walls and hedges. The output is a dataset of the most appropriate color terms for both the building and boundary walls. Initial trials demonstrate the usefulness of the technique in automatically capturing color terms for buildings in urban regions.
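    As an illustration of the final step only, the sketch below builds a fuzzy color-term histogram from already-selected facade pixels. The hue-based membership functions and the term set are hypothetical stand-ins, not the ones used in the paper.

      import colorsys

      # Hypothetical hue centres (degrees) for a few basic color terms.
      TERM_HUES = {"red": 0, "yellow": 60, "green": 120, "blue": 240}

      def fuzzy_color_histogram(facade_rgb):
          """facade_rgb: iterable of (r, g, b) facade pixels in [0, 1].
          Returns normalized fuzzy memberships per color term, using toy
          triangular membership functions on hue; achromatic pixels are skipped."""
          hist = {t: 0.0 for t in TERM_HUES}
          for r, g, b in facade_rgb:
              h, s, v = colorsys.rgb_to_hsv(r, g, b)
              if s < 0.2 or v < 0.2:        # ignore gray/dark pixels in this sketch
                  continue
              for term, centre in TERM_HUES.items():
                  d = min(abs(h * 360 - centre), 360 - abs(h * 360 - centre))
                  hist[term] += max(0.0, 1.0 - d / 60.0)
          total = sum(hist.values()) or 1.0
          return {t: v / total for t, v in hist.items()}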

  4. A combined GIS and stereo vision approach to identify building pixels in images and determine appropriate color terms

    Directory of Open Access Journals (Sweden)

    Philip James Bartie

    2011-01-01

    Full Text Available Color information is a useful attribute to include in a building's description to assist the listener in identifying the intended target. Often this information is only available as image data, and not readily accessible for use in constructing referring expressions for verbal communication. The method presented uses a GIS building polygon layer in conjunction with street-level captured imagery to provide a method to automatically filter foreground objects and select pixels which correspond to building facades. These selected pixels are then used to define the most appropriate color term for the building, and corresponding fuzzy color term histogram. The technique uses a single camera capturing images at a high frame rate, with the baseline distance between frames calculated from a GPS speed log. The expected distance from the camera to the building is measured from the polygon layer and refined from the calculated depth map, after which building pixels are selected. In addition significant foreground planar surfaces between the known road edge and building facade are identified as possible boundary walls and hedges. The output is a dataset of the most appropriate color terms for both the building and boundary walls. Initial trials demonstrate the usefulness of the technique in automatically capturing color terms for buildings in urban regions.

  5. Spectrally resolved measurements of the terahertz beam profile generated from a two-color air plasma

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Zalkovskij, Maksim; Strikwerda, Andrew

    2014-01-01

    Using a THz camera and THz bandpass filters, we measure the frequency-resolved beam profile emitted from a two-color air plasma. We observe a frequency-independent emission angle from the plasma.

  6. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
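    The record describes combining quality and speed metrics into a single benchmarking score but does not give the weighting. A minimal sketch of one plausible combination follows; all metric names, ranges, and weights below are hypothetical.

      # Hypothetical raw metric values as (value, worst, best); "best" may be
      # smaller than "worst" for metrics where lower is better.
      metrics = {
          "sharpness_mtf50": (0.42, 0.2, 0.6),
          "visual_noise":    (2.1, 6.0, 0.5),
          "shot_to_shot_s":  (1.8, 4.0, 0.3),
          "autofocus_s":     (0.6, 2.0, 0.1),
      }
      weights = {"sharpness_mtf50": 0.4, "visual_noise": 0.3,
                 "shot_to_shot_s": 0.2, "autofocus_s": 0.1}

      def normalize(value, worst, best):
          """Map a raw metric onto [0, 1]; handles 'lower is better' automatically."""
          score = (value - worst) / (best - worst)
          return min(max(score, 0.0), 1.0)

      benchmark = sum(weights[k] * normalize(*metrics[k]) for k in metrics)
      print(f"combined benchmarking score: {benchmark:.3f}")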

  7. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  8. Mars Observer camera

    Science.gov (United States)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.

  9. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  10. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  11. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  12. CHAMP - Camera, Handlens, and Microscope Probe

    Science.gov (United States)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. The current design uses a filter wheel with 4 different filters, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to include 8 different filters. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potential fluorescent species can be identified so the most astrobiologically interesting samples can be identified.
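    Z-stacking is described only by name in the record. A minimal focus-stacking sketch follows; it is a generic approach, not necessarily CHAMP's implementation: for each pixel, keep the frame with the strongest local Laplacian response among aligned frames focused at different depths.

      import cv2
      import numpy as np

      def z_stack(images):
          """Focus-stack a list of aligned frames focused at different depths.
          A blurred absolute Laplacian serves as a simple per-pixel sharpness proxy."""
          sharpness = []
          for img in images:
              gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
              lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
              sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
          best = np.argmax(np.stack(sharpness), axis=0)   # index of sharpest frame
          out = np.zeros_like(images[0])
          for i, img in enumerate(images):
              out[best == i] = img[best == i]
          return out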

  13. Modeling human color categorization: Color discrimination and color memory

    NARCIS (Netherlands)

    Heskes, T.; van den Broek, Egon; Lucas, P.; Hendriks, Maria A.; Vuurpijl, L.G.; Puts, M.J.H.; Wiegerinck, W.

    2003-01-01

    Color matching in Content-Based Image Retrieval is done using a color space and measuring distances between colors. Such an approach yields non-intuitive results for the user. We introduce color categories (or focal colors), determine that they are valid, and use them in two experiments. The

  14. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 2k x 4k CCDs are needed. The pixels are square, 15 μm in size. The optical characteristics of the prime focus corrector deliver a field of view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, each hosting at most 16 such filters. These are located inside the cryostat, a few millimeters in front of the CCDs, when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  15. 7 CFR 29.3012 - Color symbols.

    Science.gov (United States)

    2010-01-01

    7 CFR 29.3012, Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing...): Color symbols. As applied to Burley, single color symbols are as follows: L—buff, F—tan, R—red, D—dark...

  16. Color superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Wilczek, F. [Institute for Advanced Study, Princeton, NJ (United States)

    1997-09-22

    The asymptotic freedom of QCD suggests that at high density - where one forms a Fermi surface at very high momenta - weak coupling methods apply. These methods suggest that chiral symmetry is restored and that an instability toward color triplet condensation (color superconductivity) sets in. Here I attempt, using variational methods, to estimate these effects more precisely. Highlights include demonstration of a negative pressure in the uniform density chiral broken phase for any non-zero condensation, which we take as evidence for the philosophy of the MIT bag model; and demonstration that the color gap is substantial - several tens of MeV - even at modest densities. Since the superconductivity is in a pseudoscalar channel, parity is spontaneously broken.

  17. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  18. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are more models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Work suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to the 98.5 % contamination of mismatches with comparable effort as simple RANSAC does for the contamination by 84 %. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In we have introduced a technique for measuring the size of camera translation relatively to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric, e.g. the ground plane, constraint such as does for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an
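    The core idea described here is soft (kernel) voting over the motion directions of many recovered epipolar geometries, instead of trusting a single maximal-support RANSAC model. A schematic Python sketch of that voting step follows; the recovery of the candidate epipolar geometries is not shown, and the bin counts and kernel width are arbitrary choices.

      import numpy as np

      def kernel_vote_direction(directions, bins=90, sigma=2.0):
          """Soft voting over candidate camera motion directions.
          directions: list of (azimuth, elevation) in degrees, one per epipolar
          geometry recovered by repeated RANSAC runs.
          Returns the (azimuth, elevation) cell with the highest accumulated vote."""
          acc = np.zeros((bins, bins))
          az_edges = np.linspace(-180, 180, bins)
          el_edges = np.linspace(-90, 90, bins)
          for az, el in directions:
              ai = int(np.argmin(np.abs(az_edges - az)))
              ei = int(np.argmin(np.abs(el_edges - el)))
              for da in range(-3, 4):          # spread a small Gaussian kernel
                  for de in range(-3, 4):
                      a, e = (ai + da) % bins, ei + de
                      if 0 <= e < bins:
                          acc[a, e] += np.exp(-(da**2 + de**2) / (2 * sigma**2))
          a_best, e_best = np.unravel_index(np.argmax(acc), acc.shape)
          return az_edges[a_best], el_edges[e_best]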

  19. Multi-color pyrometry imaging system and method of operating the same

    Science.gov (United States)

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.

  20. A color display device recording X ray spectra, especially intended for medical radiography

    International Nuclear Information System (INIS)

    Boulch, J.-M.

    1975-01-01

    Said invention relates to a color display recording device for X ray spectra intended for medical radiography. The video signal of the X ray camera receiving the radiation that has passed through the patient is amplified and transformed into a color coding according to the energy spectrum received by the camera. In a first version, the energy spectrum from the camera directly produces an image on the color tube. In a second version the energy spectrum, after having been transformed into digital signals, is first sent into a memory, then into a computer used as a spectrum analyzer, and finally into the color display device [fr

  1. DNATagger, colors for codons.

    Science.gov (United States)

    Scherer, N M; Basso, D M

    2008-09-16

    DNATagger is a web-based tool for coloring and editing DNA, RNA and protein sequences and alignments. It is dedicated to the visualization of protein coding sequences and also protein sequence alignments to facilitate the comprehension of evolutionary processes in sequence analysis. The distinctive feature of DNATagger is the use of codons as informative units for coloring DNA and RNA sequences. The codons are colored according to their corresponding amino acids. It is the first program that colors codons in DNA sequences without being affected by "out-of-frame" gaps of alignments. It can handle single gaps and gaps inside the triplets. The program also provides the possibility to edit the alignments and change color patterns and translation tables. DNATagger is a JavaScript application, following the W3C guidelines, designed to work on standards-compliant web browsers. It therefore requires no installation and is platform independent. The web-based DNATagger is available as free and open source software at http://www.inf.ufrgs.br/~dmbasso/dnatagger/.
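    The distinctive behaviour, coloring codons while ignoring alignment gaps that fall inside a triplet, can be sketched in a few lines of Python. The codon table and color map below are deliberately tiny, illustrative stand-ins for the full tables a real tool would ship.

      # Toy amino-acid -> color map; a real tool would cover all amino acids.
      AA_COLORS = {"M": "green", "W": "orange", "*": "red"}
      CODON_TABLE = {"ATG": "M", "TGG": "W", "TAA": "*", "TAG": "*", "TGA": "*"}

      def tag_codons(aligned_seq):
          """Group non-gap bases of an aligned coding sequence into codons,
          remembering the alignment columns each codon occupies, so that gaps
          inside a triplet do not break the reading frame (the DNATagger idea)."""
          codon, cols, out = "", [], []
          for i, base in enumerate(aligned_seq.upper()):
              if base == "-":
                  continue                  # gaps are skipped, frame is preserved
              codon += base
              cols.append(i)
              if len(codon) == 3:
                  aa = CODON_TABLE.get(codon, "?")
                  out.append((codon, cols, AA_COLORS.get(aa, "gray")))
                  codon, cols = "", []
          return out

      print(tag_codons("AT-GTG--GTAA"))  # ATG, TGG, TAA despite in-triplet gaps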

  2. Gate Simulation of a Gamma Camera

    International Nuclear Information System (INIS)

    Abidi, Sana; Mlaouhi, Zohra

    2008-01-01

    Medical imaging is a very important diagnostic tool because it allows exploration of the inside of the human body. Nuclear imaging is an imaging technique used in nuclear medicine. It consists of determining the distribution of a radiotracer in the body by detecting the radiation it emits using a detection device. Two methods are commonly used: Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In this work we are interested in the modelling of a gamma camera. The simulation is based on Monte Carlo methods, in particular the GATE simulator (Geant4 Application for Tomographic Emission). We have simulated a clinical gamma camera called GAEDE (GKS-1) and then validated these simulations against experiments. The purpose of this work is to assess the performance of this gamma camera, optimize the detector performance, and improve image quality. (Author)

  3. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized, from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against spoofing face attacks, like printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% accuracy, or up to 99.36% accuracy, under different types of spoofing attacks.

  4. Monitoring environmental change with color slides

    Science.gov (United States)

    Arthur W. Magill

    1989-01-01

    Monitoring human impact on outdoor recreation sites and view landscapes is necessary to evaluate influences which may require corrective action and to determine if management is achieving desired goals. An inexpensive method to monitor environmental change is to establish camera points and use repeat color slides. Successful monitoring from slides requires the observer...

  5. Fast natural color mapping for night-time imagery

    NARCIS (Netherlands)

    Hogervorst, M.A.; Toet, A.

    2010-01-01

    We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the

  6. Spatial characterization of nanotextured surfaces by visual color imaging

    DEFF Research Database (Denmark)

    Feidenhans'l, Nikolaj Agentoft; Murthy, Swathi; Madsen, Morten H.

    2016-01-01

    We present a method using an ordinary color camera to characterize nanostructures from the visual color of the structures. The method provides a macroscale overview image from which micrometer-sized regions can be analyzed independently, hereby revealing long-range spatial variations...

  7. A single-source solid-precursor method for making eco-friendly doped semiconductor nanoparticles emitting multi-color luminescence.

    Science.gov (United States)

    Manzoor, K; Aditya, V; Vadera, S R; Kumar, N; Kutty, T R N

    2007-02-01

    A novel synthesis method is presented for the preparation of eco-friendly, doped semiconductor nanocrystals encapsulated within oxide shells, both formed sequentially from a single-source solid precursor. Highly luminescent ZnS nanoparticles, in situ doped with Cu(+)-Al3+ pairs and encapsulated with ZnO shells, are prepared by the thermal decomposition of a solid precursor compound, zinc sulfato-thiourea-oxyhydroxide, showing a layered crystal structure. The precursor compound is prepared by an aqueous wet-chemical reaction involving the necessary chemical reagents required for the precipitation, doping and inorganic surface capping of the nanoparticles. The elemental analysis (C, H, N, S, O, Zn), quantitative estimation of different chemical groups (SO4(2-) and NH4(-)) and infrared studies suggested that the precursor compound is formed by the intercalation of thiourea, and/or its derivatives thiocarbamate (CSNH2(-)), dithiocarbamate (CS2NH2(-)), etc., and ammonia into the gallery space of the zinc-sulfato-oxyhydroxide corbel, where the Zn(II) ions are in both octahedral and tetrahedral coordination in the ratio 3 : 2 and the dopant ions are incorporated within octahedral voids. The powder X-ray diffraction of the precursor compound shows a high-intensity basal reflection corresponding to the large lattice-plane spacing of d = 11.23 angstroms, and the Rietveld analysis suggested an orthorhombic structure with a = 9.71 angstroms, b = 12.48 angstroms, c = 26.43 angstroms, and beta = 90 degrees. Transmission electron microscopy studies show the presence of micrometer-sized acicular monocrystallites with prismatic platy morphology. Controlled thermolysis of the solid precursor at 70-110 degrees C leads to the collapse of the layered structure due to the hydrolysis of interlayer thiourea molecules or their derivatives, and the S2- ions liberated thereby react with the tetrahedral Zn(II) atoms, leading to the precipitation of ZnS nanoparticles at the gallery space. During this process

  8. Color transparency

    International Nuclear Information System (INIS)

    Miller, G.A.

    1993-01-01

    Imagine shooting a beam of protons of high momentum P through an atomic nucleus. Usually the nuclear interactions prevent the particles from emerging with momentum ∼P. Further, the angular distribution of elastically scattered protons is close to the optical diffraction pattern produced by a black disk. Thus the nucleus acts as a black disk and is not transparent. However, certain high momentum transfer reactions in which a proton is knocked out of the nucleus may be completely different. Suppose that the high momentum transfer process leads to the formation of a small-size color singlet wavepacket that is ejected from the nucleus. The effects of gluons emitted by color singlet systems of closely separated quarks and gluons tend to cancel. Thus the wavepacket-nuclear interactions are suppressed, the nucleus becomes transparent and one says that color transparency CT occurs. The observation of CT also requires that the wavepacket not expand very much while it moves through the nucleus. Simple quantum mechanical formulations can assess this expansion. The creation of a small-sized wavepacket is expected in asymptotic perturbative effects. The author reviews the few experimental attempts to observe color transparency in nuclear (e,e'p) and (p,pp) reactions and interprets the data and their implications

  9. Color Sense

    Science.gov (United States)

    Johnson, Heidi S. S.; Maki, Jennifer A.

    2009-01-01

    This article reports a study conducted by members of the WellU Academic Integration Subcommittee of The College of St. Scholastica's College's Healthy Campus Initiative plan whose purpose was to determine whether changing color in the classroom could have a measurable effect on students. One simple improvement a school can make in a classroom is…

  10. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format with 1080p resolution at 30 frames per second. Another important aspect when designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio for the video, combining audio and video and saving them in .mp4 format, a battery size sufficient for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
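    A minimal recording sketch for such a prototype follows, assuming the legacy picamera library on the Raspberry Pi and ffmpeg for packaging into .mp4; the file names, the 60 s segment length, and the absence of audio muxing are simplifications of what the record describes.

      import subprocess
      import picamera

      # Minimal sketch, assuming the legacy `picamera` stack; the prototype in
      # the record may use different tooling.
      with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
          camera.start_recording("segment.h264")   # H.264 elementary stream
          camera.wait_recording(60)                # record one 60 s segment
          camera.stop_recording()

      # Wrap the raw H.264 stream into an .mp4 container (audio muxing not shown).
      subprocess.run(["ffmpeg", "-y", "-framerate", "30", "-i", "segment.h264",
                      "-c", "copy", "segment.mp4"], check=True)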

  11. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance

  12. The NEAT Camera Project

    Science.gov (United States)

    Jr., Ray L. Newburn

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chretian type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  13. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  14. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  15. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super resolution (SR) for single-image and video reconstruction, a super resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low resolution (LR) images can be obtained and stored instantaneously, which reflects the randomness of the displacements and the real-time performance of the storage. The low resolution image sequences contain different redundant information and some particular prior information, so it is possible to restore a super resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super resolution, indicating the possible improvement in resolution in theory. A learning-based super resolution algorithm is used to reconstruct a single image, and a variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements; this models the unknown high resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super resolution image of the scene can be reconstructed. The results of reconstructing 16 images show that this camera model can increase image resolution by a factor of 2, obtaining higher resolution images at currently available hardware levels.
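    The record cites learning-based and variational Bayesian reconstructions, which are beyond a short example. The classic shift-and-add baseline below (not the paper's method) illustrates why a set of randomly displaced low-resolution frames contains enough information to populate a denser grid; the sub-pixel shifts are assumed to come from a separate registration step.

      import numpy as np

      def shift_and_add(frames, shifts, scale=2):
          """Classic shift-and-add super-resolution baseline.
          frames: list of (h, w) low-resolution images.
          shifts: list of (dy, dx) sub-pixel displacements per frame, in LR pixels,
                  e.g. from phase-correlation registration (not shown here).
          Returns an (h*scale, w*scale) estimate built by accumulating LR pixels
          at their shifted positions on the HR grid and normalizing by hit counts."""
          h, w = frames[0].shape
          acc = np.zeros((h * scale, w * scale))
          hits = np.zeros_like(acc)
          for frame, (dy, dx) in zip(frames, shifts):
              ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                           0, h * scale - 1)
              xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                           0, w * scale - 1)
              np.add.at(acc, np.ix_(ys, xs), frame)
              np.add.at(hits, np.ix_(ys, xs), 1.0)
          return acc / np.maximum(hits, 1e-9)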

  16. Color naming across languages reflects color use.

    Science.gov (United States)

    Gibson, Edward; Futrell, Richard; Jara-Ettinger, Julian; Mahowald, Kyle; Bergen, Leon; Ratnasingam, Sivalogeswaran; Gibson, Mitchell; Piantadosi, Steven T; Conway, Bevil R

    2017-10-03

    What determines how languages categorize colors? We analyzed results of the World Color Survey (WCS) of 110 languages to show that despite gross differences across languages, communication of chromatic chips is always better for warm colors (yellows/reds) than cool colors (blues/greens). We present an analysis of color statistics in a large databank of natural images curated by human observers for salient objects and show that objects tend to have warm rather than cool colors. These results suggest that the cross-linguistic similarity in color-naming efficiency reflects colors of universal usefulness and provide an account of a principle (color use) that governs how color categories come about. We show that potential methodological issues with the WCS do not corrupt information-theoretic analyses, by collecting original data using two extreme versions of the color-naming task, in three groups: the Tsimane', a remote Amazonian hunter-gatherer isolate; Bolivian-Spanish speakers; and English speakers. These data also enabled us to test another prediction of the color-usefulness hypothesis: that differences in color categorization between languages are caused by differences in overall usefulness of color to a culture. In support, we found that color naming among Tsimane' had relatively low communicative efficiency, and the Tsimane' were less likely to use color terms when describing familiar objects. Color-naming among Tsimane' was boosted when naming artificially colored objects compared with natural objects, suggesting that industrialization promotes color usefulness.

  17. Use of cameras for monitoring visibility impairment

    Science.gov (United States)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
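
    A minimal sketch of the kind of contrast-type index and temporal averaging the abstract refers to, assuming grayscale frames of a fixed scene element and a hand-picked region of interest; the function names and the simple standard-deviation-over-mean index are illustrative, not the paper's exact definition:

        import numpy as np

        def contrast_index(gray_image, roi):
            """Simple contrast index (standard deviation over mean) of a region
            of interest given as a (rows, cols) tuple of slices."""
            patch = gray_image[roi].astype(float)
            return patch.std() / patch.mean()

        def averaged_contrast_index(images, roi):
            """Temporal average of the contrast index over many images of the same
            scene element (e.g., restricted to hours around solar noon) to suppress
            cloud-cover and lighting variability."""
            return float(np.mean([contrast_index(img, roi) for img in images]))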

  18. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  19. Color film spectral properties test experiment for target simulation

    Science.gov (United States)

    Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji

    2017-04-01

    In hardware-in-the-loop testing of an aviation spectral camera, the liquid crystal light valve and the digital micro-mirror device cannot simulate the spectral characteristics of the landmark. A test system framework based on color film is proposed for testing the spectral camera, and the spectral characteristics of the color film are tested in this paper. The experimental results show that differences exist between the spectral curves of the landmark and of the film. However, the peak of the spectral curve shifts with color, and the curves are similar to those of standard color targets. Therefore, if the error between the landmark and the film is calibrated and compensated, the film can be used in hardware-in-the-loop tests of the aviation spectral camera.

  20. Colorful solar selective absorber integrated with different colored units.

    Science.gov (United States)

    Chen, Feiliang; Wang, Shao-Wei; Liu, Xingxing; Ji, Ruonan; Li, Zhifeng; Chen, Xiaoshuang; Chen, Yuwei; Lu, Wei

    2016-01-25

    Solar selective absorbers are the core part of solar thermal technologies such as solar water heaters, concentrated solar power, solar thermoelectric generators, and solar thermophotovoltaics. Colorful solar selective absorbers can provide new freedom and flexibility beyond energy performance, which will lead to wider utilization of solar technologies. In this work, we present a monolithic integration of a colored solar absorber array with different colors on a single substrate, based on a multilayered structure of Cu/TiN(x)O(y)/TiO(2)/Si(3)N(4)/SiO(2). A colored solar absorber array with 16 color units is demonstrated experimentally by using a combinatorial deposition technique and varying the thickness of the SiO(2) layer. The solar absorptivity and thermal emissivity of all the color units are higher than 92% and lower than 5.5%, respectively. The colored solar selective absorber array can have a colorful appearance and designable patterns while keeping high energy performance at the same time. It is a new candidate for a number of solar applications, especially architecture integration and military camouflage.

  1. Color filter array pattern identification using variance of color difference image

    Science.gov (United States)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
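
    As a hedged illustration of the general idea (not the authors' exact algorithm), the sketch below compares the variance of a simple green color-difference image on the two candidate green lattices of a 2x2 Bayer cell; all names and the crude mean-removal step are assumptions for the example:

        import numpy as np

        def green_site_variances(rgb):
            """Variance of a green color-difference image sampled on the two possible
            green lattices of a 2x2 Bayer cell.  The lattice with the higher variance
            is the more likely set of original (non-interpolated) green pixels; it
            separates {RGGB, BGGR} from {GRBG, GBRG}.  Distinguishing within each
            pair needs a similar analysis on the red/blue channels."""
            r = rgb[..., 0].astype(float)
            g = rgb[..., 1].astype(float)
            b = rgb[..., 2].astype(float)
            diff = g - 0.5 * (r + b)          # simple color-difference image
            diff -= diff.mean()               # crude background (mean) removal
            diagonal = np.concatenate([diff[0::2, 0::2].ravel(), diff[1::2, 1::2].ravel()])
            anti_diagonal = np.concatenate([diff[0::2, 1::2].ravel(), diff[1::2, 0::2].ravel()])
            return {"G on diagonal": float(np.var(diagonal)),
                    "G on anti-diagonal": float(np.var(anti_diagonal))}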

  2. Common aperture multispectral spotter camera: Spectro XR

    Science.gov (United States)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XR™ is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer / illuminator, and a laser spot tracker. A rigid structure, vibration damping, and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's feature list includes a multi-target video tracker, precise boresight, strap-on IMU, embedded moving map, geodetic calculation suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR, and MWIR band channels. Both the EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common-aperture design facilitates superior DRI performance in EO and SWIR in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as a see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full-HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  3. THE ADVANCED CAMERA FOR SURVEYS GENERAL CATALOG: STRUCTURAL PARAMETERS FOR APPROXIMATELY HALF A MILLION GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, Roger L.; Kirkpatrick, J. Davy [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Cooper, Michael C. [Center for Galaxy Evolution, Department of Physics and Astronomy, University of California, Irvine, 4129 Frederick Reines Hall, Irvine, CA 92697 (United States); Newman, Jeffrey A. [Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Moustakas, Leonidas A.; Stern, Daniel [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109 (United States); Comerford, Julia M. [Astronomy Department, University of Texas at Austin, Austin, TX 78712 (United States); Davis, Marc [Department of Astronomy, University of California, Berkeley, Hearst Field Annex B, Berkeley, CA 94720 (United States); Lotz, Jennifer M.; Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218 (United States); Barden, Marco [Institute of Astro- and Particle Physics, University of Innsbruck, Technikerstr. 25, 6020 Innsbruck (Austria); Conselice, Christopher J. [School of Physics and Astronomy, University of Nottingham, Nottingham (United Kingdom); Capak, Peter L.; Scoville, Nick; Sheth, Kartik; Shopbell, Patrick [Spitzer Science Centre, 314-6 California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Faber, S. M.; Koo, David C. [UCO/Lick Observatory, University of California, CA (United States); Noeske, Kai G. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States); Willmer, Christopher N. A. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); and others

    2012-05-01

    We present the Advanced Camera for Surveys General Catalog (ACS-GC), a photometric and morphological database using publicly available data obtained with the Advanced Camera for Surveys (ACS) instrument on the Hubble Space Telescope. The goal of the ACS-GC database is to provide a large statistical sample of galaxies with reliable structural and distance measurements to probe the evolution of galaxies over a wide range of look-back times. The ACS-GC includes approximately 470,000 astronomical sources (stars + galaxies) derived from the AEGIS, COSMOS, GEMS, and GOODS surveys. GALAPAGOS was used to construct photometric (SEXTRACTOR) and morphological (GALFIT) catalogs. The analysis assumes a single Sersic model for each object to derive quantitative structural parameters. We include publicly available redshifts from the DEEP2, COMBO-17, TKRS, PEARS, ACES, CFHTLS, and zCOSMOS surveys to supply redshifts (spectroscopic and photometric) for a considerable fraction ({approx}74%) of the imaging sample. The ACS-GC includes color postage stamps, GALFIT residual images, and photometry, structural parameters, and redshifts combined into a single catalog.

  4. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... use of colored contact lenses, from the U.S. Food and Drug Administration (FDA). Are the colored lenses ...

  5. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    and darker objects will have higher degrees of polarization. Quantitative data have been collected on the Moon that display the Umov effect for...different phases and regions of the Moon (Zubko 2011). The effect relates the wavelength, color, and texture of an object to its polarization. C. NON-OPTICAL...and registering the photos after saving them. If the trigger mode was unsuccessful, the resulting images cause errors in registration. An attempt at

  6. Forward looking anomaly detection via fusion of infrared and color imagery

    Science.gov (United States)

    Stone, K.; Keller, J. M.; Popescu, M.; Havens, T. C.; Ho, K. C.

    2010-04-01

    This paper develops algorithms for the detection of interesting and abnormal objects in color and infrared imagery taken from cameras mounted on a moving vehicle, observing a fixed scene. The primary purpose of detection is to cue a human-in-the-loop detection system. Algorithms for direct detection and change detection are investigated, as well as fusion of the two. Both methods use temporal information to reduce the number of false alarms. The direct detection algorithm uses image self-similarity computed between local neighborhoods to determine interesting, or unique, parts of an image. Neighborhood similarity is computed using Euclidean distance in CIELAB color space for the color imagery, and Euclidean distance between grey levels in the infrared imagery. The change detection algorithm uses the affine scale-invariant feature transform (ASIFT) to transform multiple background frames into the current image space. Each transformed image is then compared to the current image, and the multiple outputs are fused to produce a single difference image. Changes in lighting and contrast between the background run and the current run are adjusted for in both color and infrared imagery. Frame-to-frame motion is modeled using a perspective transformation, the parameters of which are computed using scale-invariant feature transform (SIFT) keypoint correspondences. This information is used to perform temporal accumulation of single frame detections for both the direct detection and change detection algorithms. Performance of the proposed algorithms is evaluated on multiple lanes from a data collection at a US Army test site.
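
    The frame-to-frame motion model described above (a perspective transform estimated from SIFT correspondences) can be sketched with OpenCV roughly as follows; this assumes OpenCV >= 4.4, and the ratio-test threshold and RANSAC tolerance are illustrative defaults, not values taken from the paper:

        import cv2
        import numpy as np

        def frame_to_frame_homography(prev_gray, curr_gray):
            """Estimate the perspective transform mapping prev_gray into curr_gray
            from SIFT keypoint correspondences."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(prev_gray, None)
            kp2, des2 = sift.detectAndCompute(curr_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des1, des2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
            src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            return H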

  7. AIP GHz modulation detection using a streak camera: Suitability of streak cameras in the AWAKE experiment

    CERN Document Server

    Rieger, K; Reimann, O; Muggli, P

    2017-01-01

    Using frequency mixing, a modulated light pulse of ns duration is created. We show that, with a ps-resolution streak camera that is usually used for single short pulse measurements, we can detect via an FFT detection approach up to 450 GHz modulation in a pulse in a single measurement. This work is performed in the context of the AWAKE plasma wakefield experiment where modulation frequencies in the range of 80–280 GHz are expected.
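
    A minimal sketch of an FFT-based detection of the modulation frequency from a sampled streak-camera time profile, assuming a uniformly sampled 1-D intensity trace; this is a generic illustration, not the AWAKE analysis chain:

        import numpy as np

        def dominant_modulation_frequency(trace, dt):
            """Return the strongest non-DC frequency component of a streak-camera
            time profile.  trace: 1-D intensity samples; dt: sample spacing in
            seconds (e.g., streak window length divided by the number of time pixels)."""
            spectrum = np.abs(np.fft.rfft(trace - np.mean(trace)))
            freqs = np.fft.rfftfreq(len(trace), d=dt)
            return freqs[np.argmax(spectrum)]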

  8. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films, etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps, etc. Such scenery would have wide ranges of depth to accommodate and would also need to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  9. Three-band, 1.9-μm axial resolution full-field optical coherence microscopy over a 530-1700 nm wavelength range using a single camera

    OpenAIRE

    Federici, Antoine; Dubois, Arnaud

    2014-01-01

    International audience; Full-field optical coherence microscopy is an established optical technology based on low-coherence interference microscopy for high-resolution imaging of semitransparent samples. In this Letter, we demonstrate an extension of the technique using a visible to short-wavelength infrared camera and a halogen lamp to image in three distinct bands centered at 635, 870, and 1170 nm. Reflective microscope objectives are employed to minimize chromatic aberrations of the imagin...

  10. Do focal colors look particularly "colorful"?

    Science.gov (United States)

    Witzel, Christoph; Franklin, Anna

    2014-04-01

    If the most typical red, yellow, green, and blue were particularly colorful (i.e., saturated), they would "jump out to the eye." This would explain why even fundamentally different languages have distinct color terms for these focal colors, and why unique hues play a prominent role in subjective color appearance. In this study, the subjective saturation of 10 colors around each of these focal colors was measured through a pairwise matching task. Results show that subjective saturation changes systematically across hues in a way that is strongly correlated to the visual gamut, and exponentially related to sensitivity but not to focal colors.

  11. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating the requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space using polynomials and combinations of color coordinates of different orders was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The study of the influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was performed. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
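
    A hedged sketch of polynomial color correction fitted by regularized least squares, in the spirit of the methods compared above; the choice of second-order terms, the ridge parameter, and the function names are assumptions for illustration only:

        import numpy as np

        def polynomial_terms(rgb):
            """Expand an Nx3 array of RGB values with 2nd-order cross terms
            (a reduced example; the paper compares terms up to 3rd order)."""
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            ones = np.ones_like(r)
            return np.column_stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

        def fit_color_correction(measured_rgb, reference_rgb, ridge=1e-3):
            """Regularized ('conditioned') least-squares fit of the correction matrix
            mapping expanded camera RGB of the target patches to reference values."""
            X = polynomial_terms(measured_rgb)
            I = np.eye(X.shape[1])
            return np.linalg.solve(X.T @ X + ridge * I, X.T @ reference_rgb)

        def apply_color_correction(rgb, correction_matrix):
            return polynomial_terms(rgb) @ correction_matrix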

  12. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows, for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other.

  13. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and it prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  14. Color and motion-based particle filter target tracking in a network of overlapping cameras with multi-threading and GPGPU

    Directory of Open Access Journals (Sweden)

    Jorge Francisco Madrigal Díaz

    2013-03-01

    Full Text Available This paper describes an efficient implementation of multiple-target, multiple-view tracking in video-surveillance sequences. It takes advantage of the capabilities of multi-core Central Processing Units (CPUs) and of graphics processing units under the Compute Unified Device Architecture (CUDA) framework. The principle of our algorithm is (1) in each video sequence, to perform tracking of all persons of interest with independent particle filters and (2) to fuse the tracking results of all sequences. Particle filters belong to the category of recursive Bayesian filters. They update a Monte Carlo representation of the posterior distribution over the target position and velocity. For this purpose, they combine a probabilistic motion model, i.e., prior knowledge about how targets move (e.g., constant velocity), and a likelihood model associated with the observations on targets. At this first level of single video sequences, the multi-threading library Threading Building Blocks (TBB) has been used to parallelize the processing of the per-target independent particle filters. Afterwards, at the higher level, we rely on General Purpose Programming on Graphical Processing Units (generally termed GPGPU) through CUDA in order to fuse target-tracking data collected on multiple video sequences, by solving the data association problem. Tracking results are presented on various challenging tracking datasets.
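
    A minimal single-target bootstrap particle filter step in Python/NumPy, to illustrate the predict/update/resample cycle that is parallelized per target in the work above; the state layout, the constant-velocity motion model, and the externally supplied color-likelihood function are assumptions of this sketch, not the authors' implementation:

        import numpy as np

        def particle_filter_step(particles, weights, observation_likelihood, motion_std=5.0):
            """One predict/update/resample cycle of a bootstrap particle filter.

            particles  : (N, 4) array of [x, y, vx, vy] hypotheses for one target
            weights    : (N,) normalized weights
            observation_likelihood : function mapping an (N, 4) array to (N,)
                likelihoods, e.g. a color-histogram similarity between each
                hypothesis window and the target model (assumed to be provided)
            """
            n = len(particles)
            # Predict: constant-velocity motion model plus Gaussian diffusion
            particles[:, :2] += particles[:, 2:]
            particles += np.random.normal(0.0, motion_std, particles.shape)
            # Update: weight by the likelihood of the current image observation
            weights = weights * observation_likelihood(particles)
            weights /= weights.sum()
            # Resample: systematic resampling keeps the particle count constant
            positions = (np.arange(n) + np.random.uniform()) / n
            indices = np.searchsorted(np.cumsum(weights), positions)
            return particles[indices], np.full(n, 1.0 / n)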

  15. Color constancy in dermatoscopy with smartphone

    Science.gov (United States)

    Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan

    2017-12-01

    The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired, and a model between the unknown device-dependent RGB and a device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These results are in the range of human eye detection capability (color error ≈ 4) and of video and printing industry standards (color errors between 5 and 6 are typically expected). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to patients.

  16. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection item or illness degree. The color difference between a standard threshold and the color of the urine test paper can be used to judge the illness degree, enabling further analysis and diagnosis of the urine. Color is a three-dimensional psychophysical variable, while reflectance is a one-dimensional variable; therefore, a color difference estimation method for urine testing can offer better precision and convenience than the conventional test method based on one-dimensional reflectance, allowing a more accurate diagnosis. A digital camera can easily capture an image of the urine test paper and is used to carry out the urine biochemical analysis conveniently. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer running simple color space conversion (RGB → XYZ → L*a*b*) and calculation software. Test samples are graded according to intelligent detection of quantitative color. The images taken at each time point were saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses that are related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and at home, so its application prospects are extensive.
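
    A sketch of the color space conversion chain and color difference mentioned above, assuming sRGB-like camera data and the CIE76 difference formula; a real test-paper system would use the camera's own characterization rather than the sRGB matrix assumed here:

        import numpy as np

        def srgb_to_lab(rgb):
            """Convert an sRGB triplet (0-255) to CIE L*a*b* under a D65 white point.
            The exact matrix depends on the camera characterization; sRGB is assumed
            here only for illustration."""
            c = np.asarray(rgb, dtype=float) / 255.0
            c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)   # linearize
            M = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
            xyz = M @ c
            xyz /= np.array([0.95047, 1.0, 1.08883])                             # D65 white
            f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
            L = 116.0 * f[1] - 16.0
            a = 500.0 * (f[0] - f[1])
            b = 200.0 * (f[1] - f[2])
            return np.array([L, a, b])

        def delta_e76(lab1, lab2):
            """CIE76 color difference between a test-paper patch and a standard threshold."""
            return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))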

  17. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.

  18. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
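
    The MTF determination can be illustrated, under simplifying assumptions, by differentiating an edge profile into a line spread function and taking its Fourier magnitude; this generic sketch omits the slanted-edge oversampling a rigorous measurement would use:

        import numpy as np

        def mtf_from_edge(edge_profile, pixel_pitch_mm):
            """Estimate the MTF from a 1-D edge-spread profile (e.g., a row of pixel
            values across a sharp edge imaged on the plate).

            Returns (frequencies in lp/mm, normalized MTF)."""
            lsf = np.gradient(np.asarray(edge_profile, dtype=float))   # line spread function
            lsf *= np.hanning(len(lsf))                                # reduce truncation ripple
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]                                              # normalize to DC
            freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
            return freqs, mtf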

  19. Payload topography camera of Chang'e-3

    International Nuclear Information System (INIS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-01-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and finished taking pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application. (paper)

  20. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 microns/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image-filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
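
    The z-stacking (focus stacking) process mentioned above can be sketched, in a simplified form, as keeping the sharpest frame at each pixel of a focus stack; the Laplacian-energy sharpness measure and window size are assumptions of this example, not CHAMP's actual filter:

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def z_stack(gray_frames):
            """Merge a focus stack into one all-in-focus image by keeping, at every
            pixel, the frame with the highest local sharpness (Laplacian energy)."""
            stack = np.stack([f.astype(float) for f in gray_frames])
            sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=9) for f in stack])
            best = np.argmax(sharpness, axis=0)
            return np.take_along_axis(stack, best[None, ...], axis=0)[0]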

  1. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  2. Luminescence properties of single-phase color-tunable CeZn1-x(B5O10):xMn2+ phosphor

    Science.gov (United States)

    Lu, Yixuan; Li, Chenxia; Deng, Degang; Hua, Yu; Wang, Le; Jin, Leisheng; Xu, Shiqing

    2017-12-01

    In this work, a transition-metal-doped Ce3+-based borate, CeZn1-x(B5O10):xMn2+, was synthesized by the conventional solid-state reaction method. The lattice structure and, in particular, the luminescence properties of the synthesized phosphors were carefully studied. The results reveal that Mn2+ occupies only Zn sites and contributes only the emission peaked at 615 nm under 371 nm excitation, while the emission peak at 437 nm under the same excitation is attributed to the 5d-4f (2F5/2 and 2F7/2) transitions of Ce3+. Furthermore, a comprehensive investigation of the overlap between the emission and excitation spectra and of the variation of the fluorescence decay lifetimes makes it clear that the energy transfer from Ce3+ to Mn2+ proceeds essentially via a resonance-type mechanism. Finally, color-tunable light emission can be realized by adjusting the Mn2+ concentration in the proposed phosphor under ultraviolet (UV) excitation. In addition, near-white light can be obtained when the Mn2+ ions are doped to a certain extent.

  3. Three-color Förster resonance energy transfer within single F₀F₁-ATP synthases: monitoring elastic deformations of the rotary double motor in real time.

    Science.gov (United States)

    Ernst, Stefan; Düser, Monika G; Zarrabi, Nawid; Börsch, Michael

    2012-01-01

    Catalytic activities of enzymes are associated with elastic conformational changes of the protein backbone. Förster-type resonance energy transfer, commonly referred to as FRET, is required in order to observe the dynamics of relative movements within the protein. Förster-type resonance energy transfer between two specifically attached fluorophores provides a ruler with subnanometer resolution between 3 and 8 nm, submillisecond time resolution for time trajectories of conformational changes, and single-molecule sensitivity to overcome the need for synchronization of various conformations. F(O)F(1)-ATP synthase is a rotary molecular machine which catalyzes the formation of adenosine triphosphate (ATP). The Escherichia coli enzyme comprises a proton-driven, 10-stepped rotary F(O) motor connected to a 3-stepped F(1) motor, where ATP is synthesized. This mismatch of step sizes will result in elastic deformations within the rotor parts. We present a new single-molecule FRET approach to observe both rotary motors simultaneously in a single F(O)F(1)-ATP synthase at work. We labeled this enzyme with three fluorophores, specifically at the stator part and at the two rotors. Duty-cycle-optimized alternating laser excitation, referred to as DCO-ALEX, allowed us to control enzyme activity and to unravel the associated transient twisting within the rotors of a single enzyme during ATP hydrolysis and ATP synthesis. Monte Carlo simulations revealed that the rotor twisting is larger than 36 deg.
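
    For reference, the distance dependence behind the 3-8 nm FRET ruler mentioned above follows the standard Förster relation; the sketch below uses a placeholder R0, since the actual value depends on the specific dye pair used in the study:

        def fret_efficiency(r_nm, r0_nm=6.0):
            """Foerster transfer efficiency for a donor-acceptor distance r (nm).
            R0 (the 50%-efficiency distance) depends on the dye pair; 6 nm is only
            a placeholder within the 3-8 nm range quoted above."""
            return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

        def fret_distance(efficiency, r0_nm=6.0):
            """Invert the efficiency to a distance estimate (the 'spectroscopic ruler')."""
            return r0_nm * ((1.0 / efficiency) - 1.0) ** (1.0 / 6.0)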

  4. Modeling human color categorization

    NARCIS (Netherlands)

    van den Broek, Egon; Schouten, Th.E.; Kisters, P.M.F.

    A unique color space segmentation method is introduced. It is founded on features of human cognition, where 11 color categories are used in processing color. In two experiments, human subjects were asked to categorize color stimuli into these 11 color categories, which resulted in markers for a

  5. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  6. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  7. Color symmetrical superconductivity in a schematic nuclear quark model

    DEFF Research Database (Denmark)

    Bohr, Henrik; Providencia, C.; da Providencia, J.

    2010-01-01

    In this letter, a novel BCS-type formalism is constructed in the framework of a schematic QCD inspired quark model, having in mind the description of color symmetrical superconducting states. In the usual approach to color superconductivity, the pairing correlations affect only the quasi-particle states of two colors, the single-particle states of the third color remaining unaffected by the pairing correlations. In the theory of color symmetrical superconductivity here proposed, the pairing correlations affect symmetrically the quasi-particle states of the three colors and vanishing net color

  8. 'Clovis' in Color

    Science.gov (United States)

    2004-01-01

    [Figure 1 removed for brevity; see original site] This approximate true-color image taken by the Mars Exploration Rover Spirit shows the rock outcrop dubbed 'Clovis.' The rock was discovered to be softer than other rocks studied so far at Gusev Crater after the rover easily ground a hole into it with its rock abrasion tool. This image was taken by the 750-, 530- and 480-nanometer filters of the rover's panoramic camera on sol 217 (August 13, 2004). Elemental Trio Found in 'Clovis': Figure 1 shows that the interior of the rock dubbed 'Clovis' contains higher concentrations of sulfur, bromine and chlorine than basaltic, or volcanic, rocks studied so far at Gusev Crater. The data were taken by the Mars Exploration Rover Spirit's alpha particle X-ray spectrometer after the rover dug into Clovis with its rock abrasion tool. The findings might indicate that this rock was chemically altered, and that fluids once flowed through the rock depositing these elements.

  9. Embedding Color Watermarks in Color Images

    Directory of Open Access Journals (Sweden)

    Wu Tung-Lin

    2003-01-01

    Full Text Available Robust watermarking with oblivious detection is essential to practical copyright protection of digital images. Effective exploitation of the characteristics of human visual perception to color stimuli helps to develop the watermarking scheme that fills the requirement. In this paper, an oblivious watermarking scheme that embeds color watermarks in color images is proposed. Through color gamut analysis and quantizer design, color watermarks are embedded by modifying quantization indices of color pixels without resulting in perceivable distortion. Only a small amount of information including the specification of color gamut, quantizer stepsize, and color tables is required to extract the watermark. Experimental results show that the proposed watermarking scheme is computationally simple and quite robust in face of various attacks such as cropping, low-pass filtering, white-noise addition, scaling, and JPEG compression with high compression ratios.
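
    A minimal sketch of quantization-index-modulation embedding and blind extraction on a single channel, to illustrate the kind of quantizer-based embedding described above; the step size and the simple scalar quantizer are assumptions of this example, not the paper's gamut-aware design:

        import numpy as np

        def qim_embed(values, bits, step=8.0):
            """Scalar quantization-index-modulation embedding: each value is quantized
            to the even (bit 0) or odd (bit 1) lattice, i.e. multiples of step, shifted
            by step/2 for bit 1."""
            values = np.asarray(values, dtype=float)
            offsets = np.asarray(bits) * (step / 2.0)
            return np.round((values - offsets) / step) * step + offsets

        def qim_extract(values, step=8.0):
            """Blind (oblivious) extraction: decide each bit from the nearer lattice."""
            values = np.asarray(values, dtype=float)
            d0 = np.abs(values - np.round(values / step) * step)
            shifted = values - step / 2.0
            d1 = np.abs(shifted - np.round(shifted / step) * step)
            return (d1 < d0).astype(int)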

  10. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... In fact, it is illegal to sell colored ...

  11. Tooth - abnormal colors

    Science.gov (United States)

    ... medlineplus.gov/ency/article/003065.htm. Abnormal tooth color is any color other than white to yellowish- ...

  12. Urine - abnormal color

    Science.gov (United States)

    ... medlineplus.gov/ency/article/003139.htm. The usual color of urine is straw-yellow. Abnormally colored urine ...

  13. Skin color - patchy

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/article/003224.htm. Patchy skin color is areas where the skin color is irregular. ...

  14. Visible camera imaging of plasmas in Proto-MPEX

    Science.gov (United States)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine will study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  15. Doubled Color Codes

    Science.gov (United States)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in the fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need of state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for the universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It has been known before that a quantum code can have only a finite number of TLGs which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π / 2 phase shift. The second code that we call a doubled color code provides a transversal T-gate, where T is the π / 4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on a joint work with Andrew Cross.

  16. Digital color imaging

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    This collective work identifies the latest developments in the field of the automatic processing and analysis of digital color images.For researchers and students, it represents a critical state of the art on the scientific issues raised by the various steps constituting the chain of color image processing.It covers a wide range of topics related to computational color imaging, including color filtering and segmentation, color texture characterization, color invariant for object recognition, color and motion analysis, as well as color image and video indexing and retrieval.

  17. CCD TV camera, TM1300

    International Nuclear Information System (INIS)

    Takano, Mitsuo; Endou, Yukio; Nakayama, Hideo

    1982-01-01

    Development has been made of a black-and-white TV camera, TM 1300, using an interline-transfer CCD, which outperforms the frame-transfer CCDs marketed since 1980: it has a greater number of horizontal picture elements and far smaller input power (less than 2 W at 9 V), uses hybrid ICs for the CCD driver unit to reduce the size of the camera, and exhibits no picture distortion and no burn-in; in addition, its peripheral equipment, such as the camera housing and the pan-and-tilt head, has been miniaturized as well. It is also expected to find wider application in industrial TV. (author)

  18. High Quality Camera Surveillance System

    OpenAIRE

    Helaakoski, Ari

    2015-01-01

    Oulu University of Applied Sciences, Information Technology. Author: Ari Helaakoski. Title of the master's thesis: High Quality Camera Surveillance System. Supervisor: Kari Jyrkkä. Term and year of completion: Spring 2015. Number of pages: 31. This master's thesis was commissioned by iProtoXi Oy and was done for one iProtoXi customer. The aim of the thesis was to make a camera surveillance system which uses a High Quality camera with pan and tilt capability. It should b...

  19. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integration intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, the use of the time derivative of incoming pulse or signal energy information to initially enable the control system provides a low-level information evaluation that serves to enhance the signal processing efficiency of the camera.

  20. Performance Considerations for the SIMPL Single Photon, Polarimetric, Two-Color Laser Altimeter as Applied to Measurements of Forest Canopy Structure and Composition

    Science.gov (United States)

    Dabney, Philip W.; Harding, David J.; Valett, Susan R.; Vasilyev, Aleksey A.; Yu, Anthony W.

    2012-01-01

    The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is a multi-beam, micropulse airborne laser altimeter that acquires active and passive polarimetric optical remote sensing measurements at visible and near-infrared wavelengths. SIMPL was developed to demonstrate advanced measurement approaches of potential benefit for improved, more efficient spaceflight laser altimeter missions. SIMPL data have been acquired for a wide diversity of forest types in the summers of 2010 and 2011 in order to assess the potential of its novel capabilities for characterization of vegetation structure and composition. On each of its four beams, SIMPL provides highly resolved measurements of forest canopy structure by detecting single photons with 15 cm ranging precision using a narrow-beam system operating at a laser repetition rate of 11 kHz. Associated with that ranging data, SIMPL provides eight amplitude parameters per beam, unlike the single amplitude provided by typical laser altimeters. Those eight parameters are received energy that is parallel and perpendicular to that of the plane-polarized transmit pulse at 532 nm (green) and 1064 nm (near IR), for both the active laser backscatter retro-reflectance and the passive solar bi-directional reflectance. This poster presentation will cover the instrument architecture and highlight the performance of the SIMPL instrument with examples taken from measurements for several sites with distinct canopy structures and compositions. Specific performance areas such as probability of detection, after-pulsing, and dead time will be highlighted and addressed, along with examples of their impact on the measurements and how they limit the ability to accurately model and recover the canopy properties. To assess the sensitivity of SIMPL's measurements to canopy properties, an instrument model has been implemented in the FLIGHT radiative transfer code, based on Monte Carlo simulation of photon transport. SIMPL data collected in 2010 over
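
    As a generic illustration of one of the performance areas named above, a textbook non-paralyzable dead-time correction for a photon-counting channel is sketched below; it is not the calibration actually applied to SIMPL data:

        def dead_time_corrected_rate(measured_rate_hz, dead_time_s):
            """Non-paralyzable dead-time correction for a photon-counting channel:
            n_true = n_measured / (1 - n_measured * tau).  A generic textbook formula,
            shown only to illustrate how dead time suppresses the measured rate."""
            return measured_rate_hz / (1.0 - measured_rate_hz * dead_time_s)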

  1. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files on CompactFlash cards. A second-order transformation was used to align the color and NIR images to achieve subpixel alignment in the four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft) and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
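
    The band-to-band alignment step can be sketched as a least-squares fit of a second-order polynomial warp to matched control points; the function names and the assumption of at least six point pairs are illustrative, and the paper's own implementation may differ:

        import numpy as np

        def fit_second_order_transform(src_pts, dst_pts):
            """Least-squares fit of a 2nd-order polynomial warp mapping NIR control
            points (src) onto the corresponding color-image points (dst).
            src_pts, dst_pts: (N, 2) arrays of matched (x, y) coordinates, N >= 6."""
            x, y = src_pts[:, 0], src_pts[:, 1]
            A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
            coeffs, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)   # (6, 2) coefficients
            return coeffs

        def apply_second_order_transform(pts, coeffs):
            x, y = pts[:, 0], pts[:, 1]
            A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
            return A @ coeffs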

  2. Color superfluidity of neutral ultracold fermions in the presence of color-flip and color-orbit fields

    Science.gov (United States)

    Kurkcuoglu, Doga Murat; Sá de Melo, C. A. R.

    2018-02-01

    We describe how color superfluidity is modified in the presence of color-flip and color-orbit fields in the context of ultracold atoms and discuss connections between this problem and that of color superconductivity in quantum chromodynamics. We study the case of s -wave contact interactions between different colors and we identify several superfluid phases, with five being nodal and one being fully gapped. When our system is described in a mixed-color basis, the superfluid order parameter tensor is characterized by six independent components with explicit momentum dependence induced by color-orbit coupling. The nodal superfluid phases are topological in nature and the low-temperature phase diagram of the color-flip field versus the interaction parameter exhibits a pentacritical point, where all five nodal color superfluid phases converge. These results are in sharp contrast to the case of zero color-flip and color-orbit fields, where the system has perfect U(3) symmetry and possesses a superfluid phase that is characterized by fully gapped quasiparticle excitations with a single complex order parameter with no momentum dependence and by inert unpaired fermions representing a nonsuperfluid component. In the latter case, just a crossover between a Bardeen-Cooper-Schrieffer and a Bose-Einstein-condensation superfluid occurs. Furthermore, we analyze the order parameter tensor in a total pseudospin basis, investigate its momentum dependence in the singlet, triplet, and quintet sectors, and compare the results with the simpler case of spin-1/2 fermions in the presence of spin-flip and spin-orbit fields, where only singlet and triplet channels arise. Finally, we analyze in detail spectroscopic properties of color superfluids in the presence of color-flip and color-orbit fields, such as the quasiparticle excitation spectrum, momentum distribution, and density of states to help characterize all the encountered topological quantum phases, which can be realized in fermionic

  3. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  4. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This work was carried out to develop an analyzer for gamma camera diagnostics. It consists of an electronic system, with both hardware and software components, that operates on the four position signals acquired from the head of a gamma camera detector. The result is the spectrum of the energy deposited by the nuclear radiation reaching the detector head. The system performs analog processing of the camera position signals, digitization and subsequent processing of the energy signal in a multichannel analyzer, transmission of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits comprise an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)
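
    The abstract does not give the signal arithmetic, but in a conventional Anger-type reduction the four position signals are summed into the energy pulse fed to the multichannel analyzer and differenced for position. A minimal sketch under that assumption, with hypothetical signal names a, b, c, d:

```python
import numpy as np

def anger_logic(a, b, c, d):
    """Hypothetical Anger-logic reduction of four head position signals.

    a, b, c, d: arrays of sampled X+, X-, Y+, Y- signals (assumed naming).
    Returns the energy signal and normalized position estimates.
    """
    energy = a + b + c + d                  # pulse height sent to the multichannel analyzer
    x = (a - b) / np.maximum(energy, 1e-9)  # normalized x position
    y = (c - d) / np.maximum(energy, 1e-9)  # normalized y position
    return energy, x, y
```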

  5. New generation of meteorology cameras

    Science.gov (United States)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    The new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of this new generation of weather-monitoring cameras responds to the demand for detecting sudden weather changes. The new WILLIAM cameras can process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric-pressure sensors. In this paper, we present the architecture and image-processing algorithms of this monitoring camera, together with a spatially variant model of the imaging-system aberrations based on Zernike polynomials.
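
    As background for the aberration model mentioned above, the sketch below evaluates a few low-order Zernike terms (defocus, astigmatism, coma) on the unit pupil using the common Noll normalization; the actual WILLIAM model is spatially variant and more general, so this only illustrates the basis functions involved.

```python
import numpy as np

def zernike_wavefront(rho, theta, c_defocus=0.0, c_astig=0.0, c_coma=0.0):
    """Sum of a few Noll-normalized Zernike terms on the unit disk (rho <= 1)."""
    defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)                      # Z4
    astig = np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta)                # Z6
    coma = np.sqrt(8.0) * (3.0 * rho**3 - 2.0 * rho) * np.cos(theta)   # Z8
    return c_defocus * defocus + c_astig * astig + c_coma * coma
```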

  6. Memory for color reactivates color processing region.

    Science.gov (United States)

    Slotnick, Scott D

    2009-11-25

    Memory is thought to be constructive in nature, where features processed in different cortical regions are synthesized during retrieval. In an effort to support this constructive memory framework, the present functional magnetic resonance imaging study assessed whether memory for color reactivated color processing regions. During encoding, participants were presented with colored and gray abstract shapes. During retrieval, old and new shapes were presented in gray and participants responded 'old-colored', 'old-gray', or 'new'. Within color perception regions, color memory related activity was observed in the left fusiform gyrus, adjacent to the collateral sulcus. A retinotopic mapping analysis indicated this activity occurred within color processing region V8. The present feature specific evidence provides compelling support for a constructive view of memory.

  7. Compact optical technique for streak camera calibration

    International Nuclear Information System (INIS)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-01-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface

  8. Compact optical technique for streak camera calibration

    Science.gov (United States)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-10-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.

  9. A color image processing pipeline for digital microscope

    Science.gov (United States)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample directly through an eyepiece, because the optical image is projected directly onto the CCD/CMOS camera. However, because the human eye and the sensor image differently, a color image processing pipeline is needed for the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW data captured by the sensor into a true-color image, largely determines the quality of the microscopic image. This pipeline differs from those of digital still and video cameras because of the specific requirements of microscopic images: high dynamic range, color fidelity to the observed objects, and a variety of image post-processing options. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the pipeline is designed and optimized with the purpose of producing high-quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various analysis requirements in the medical and biological fields very well. The major steps of the proposed color imaging pipeline are: black-level adjustment, defective-pixel removal, noise reduction, linearization, white balance, RGB color correction, tone-scale correction, and gamma correction.
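
    A toy version of part of such a pipeline is sketched below; the parameter values (black level, white-balance gains, color-correction matrix) are placeholders, and the defect-removal, noise-reduction, linearization, and tone-scale steps listed above are omitted for brevity.

```python
import numpy as np

def microscope_color_pipeline(raw_rgb, black_level, wb_gains, ccm, gamma=2.2):
    """Simplified color pipeline: black level -> white balance -> CCM -> gamma.

    raw_rgb:     demosaiced linear RGB image as a float array in [0, 1]
    black_level: per-channel offsets to subtract
    wb_gains:    per-channel white-balance gains
    ccm:         3x3 RGB color-correction matrix
    """
    img = np.clip(raw_rgb - black_level, 0.0, None)   # black-level adjustment
    img = img * wb_gains                              # white balance
    img = np.clip(img @ ccm.T, 0.0, 1.0)              # RGB color correction
    return img ** (1.0 / gamma)                       # gamma correction
```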

  10. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    For example, humans 'see' more white-to-black (luminance) detail than red, green, or blue color detail. Also, the eye is most sensitive to green colors. Taking advantage of this, both composite and component video allocate more bandwidth to the luma (Y') signal than to the chroma signals. Y'601 luma is composed of 59% green', 30% red', and 11% blue' (the prime symbol denotes gamma-corrected colors). This luma signal also maintains compatibility with black-and-white television receivers. Component digital video converts R'G'B' signals (either from a camera or a computer) to a monochromatic brightness signal Y' (referred to here as luma to distinguish it from the CIE luminance, a linear-light quantity) and two color-difference signals Cb and Cr. These last two are the blue and red signals with the luma component subtracted out. Computer graphic images are composed of red, green, and blue elements defined in a linear color space. Color monitors do not display RGB linearly: a linear RGB image must be gamma corrected to be displayed properly on a CRT. Gamma correction, which is approximately a 0.45 power function, must also be employed before converting an RGB image to a video color space. Gamma correction for video is defined in the international standard ITU-R Rec. BT.709-4, and the gamma-correction transform is the same for red, green, and blue. The color coding standard for component digital video and high-definition video symbolizes gamma-corrected luma by Y', the blue difference signal by Cb (Cb = B' - Y'), and the red color-difference signal by Cr (Cr = R' - Y'). Component analog HDTV uses Y'PbPr. To reduce conversion errors, clip in R'G'B' space, not in Y'CbCr space. View video on a video monitor; computer monitor phosphors are wrong. Use a large word size (double precision) to avoid wrap-around, then round the results to values between 0 and 255. And finally, recall that multiplying two 8-bit numbers results in a 16-bit number, so values need to be clipped to 8 bits.
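
    A small numeric sketch of the conversions described above (Rec. 601 luma weights combined with the roughly 0.45-power Rec. 709 transfer function; exact constants are as given in the published standards):

```python
import numpy as np

def bt709_oetf(rgb_linear):
    """Rec. 709 gamma correction: linear light -> gamma-corrected values."""
    return np.where(rgb_linear < 0.018,
                    4.5 * rgb_linear,
                    1.099 * np.power(rgb_linear, 0.45) - 0.099)

def rgb_to_ycbcr(rgb_linear):
    """Linear RGB image in [0, 1], shape (H, W, 3) -> Y'CbCr with Rec. 601 luma weights."""
    r, g, b = bt709_oetf(rgb_linear).transpose(2, 0, 1)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # the 30/59/11% split quoted above
    cb = (b - y) / 1.772                    # scaled so Cb stays within [-0.5, 0.5]
    cr = (r - y) / 1.402                    # scaled so Cr stays within [-0.5, 0.5]
    return np.stack([y, cb, cr], axis=-1)
```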

  11. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  12. Luminescence properties of a single-component Na0.34Ca0.66Al1.66Si2.34O8:Ce3+, Sm3+ phosphor with tunable color tone for UV-pumped LEDs

    Science.gov (United States)

    Wang, Lei; Dong, Jie; Cui, Cai'e.; Tian, Yue; Huang, Ping

    2015-08-01

    A series of single-phase Na0.34Ca0.66Al1.66Si2.34O8:Ce3+, Sm3+ (NCASO) phosphors was synthesized via a high-temperature solid-state reaction method. The samples were characterized by photoluminescence (PL) and photoluminescence excitation (PLE) spectra and by fluorescence decay curves. The PLE spectra exhibited a strong excitation band in the UV region between 250 and 380 nm. Under 340 nm excitation, the NCASO:Ce3+, Sm3+ phosphor showed a broad emission band of Ce3+ at 414 nm and four emission bands of Sm3+ from 550 nm to 725 nm. The spectra demonstrate that nonradiative energy transfer (ET) occurs from Ce3+ to Sm3+. Analysis based on the Inokuti-Hirayama model indicates that the ET is governed by an electric dipole-dipole interaction. Moreover, the emission color can be adjusted from blue to white by properly tuning the relative Ce3+/Sm3+ composition. These results show that NCASO:Ce3+, Sm3+ phosphors are a potential single-phase white-emitting candidate for UV-pumped WLEDs.
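
    For reference, the Inokuti-Hirayama donor-decay expression typically used in such an analysis (with s = 6 for dipole-dipole transfer) can be written as a small fitting function; the parameter names here are generic, not values from the paper.

```python
import numpy as np

def inokuti_hirayama(t, i0, tau0, q, s=6):
    """Inokuti-Hirayama donor decay: I(t) = I0 exp[-t/tau0 - Q (t/tau0)^(3/s)].

    t:    time axis
    i0:   initial intensity
    tau0: intrinsic donor (Ce3+) lifetime in the absence of acceptors
    q:    transfer parameter proportional to the acceptor (Sm3+) concentration
    s:    6, 8, or 10 for dipole-dipole, dipole-quadrupole, quadrupole-quadrupole
    """
    x = t / tau0
    return i0 * np.exp(-x - q * np.power(x, 3.0 / s))
```

    In practice the measured Ce3+ decay curves would be fitted to this function (for example with scipy.optimize.curve_fit), and the best-fitting value of s identifies the interaction type.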

  13. Robust Pedestrian Detection by Combining Visible and Thermal Infrared Cameras

    Directory of Open Access Journals (Sweden)

    Ji Hoon Lee

    2015-05-01

    Full Text Available With the development of intelligent surveillance systems, the need for accurate detection of pedestrians by cameras has increased. However, most previous studies use a single-camera system, either a visible-light or a thermal camera, and their performance is affected by various factors such as shadows, illumination changes, occlusion, and higher background temperatures. To overcome these problems, we propose a new method of detecting pedestrians using a dual-camera system that combines visible-light and thermal cameras and is robust in various outdoor environments such as mornings, afternoons, nights, and rainy days. Our research is novel, compared to previous works, in the following four ways. First, we implement a dual-camera system in which the axes of the visible-light and thermal cameras are parallel in the horizontal direction, and we obtain a geometric transform matrix that represents the relationship between the two camera axes. Second, the background images for the visible-light and thermal cameras are adaptively updated based on the pixel difference between the input thermal image and the pre-stored thermal background image. Third, by background subtraction of the thermal image, considering the temperature characteristics of the background, and size filtering with morphological operations, the candidates from the whole image (CWI) in the thermal image are obtained. The positions of the CWI (obtained by background subtraction) and the results of shadow removal, morphological operation, size filtering, and height-to-width-ratio filtering in the visible-light image are projected onto the thermal image using the geometric transform matrix, and searching regions for pedestrians are defined in the thermal image. Fourth, within these searching regions, the candidates from the searching image region (CSI) of pedestrians in the thermal image are detected. The final areas of pedestrians are located by combining the detected positions of the CWI and CSI of
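
    The background-subtraction and size-filtering stage for the thermal channel can be sketched roughly as follows; the threshold, kernel size, and minimum area are placeholders rather than the authors' settings.

```python
import cv2

def thermal_candidates(thermal_frame, background, thresh=25, min_area=200):
    """Rough CWI-style candidate extraction from a thermal frame.

    thermal_frame, background: single-channel uint8 images of the same size.
    """
    diff = cv2.absdiff(thermal_frame, background)                # background subtraction
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Size filtering: keep only blobs large enough to be pedestrian candidates.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```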

  14. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  15. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  16. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

    Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence, which is vital in stereo matching. The SIFT descriptor has been proven to be more distinctive and robust than other local descriptors. However, the SIFT descriptor does not incorporate the color information of a feature point, which provides a powerful distinguishing cue in matching tasks. Furthermore, in a real scene, image colors are affected by various geometric and radiometric factors, such as gamma correction and exposure. These situations are very common in stereo images. For this reason, the color recorded by a camera is not a reliable cue, and the color-consistency assumption is no longer valid between stereo images in real scenes. Hence the performance of other SIFT-based stereo matching algorithms can be severely degraded under such radiometric variations. In this paper, we present a new improved SIFT stereo matching algorithm that is invariant to various radiometric variations between left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ a color formation model with the parameters of lighting geometry, illuminant color, and camera gamma in the SIFT descriptor. First, we transform the input color images to a log-chromaticity color space, so that a linear relationship can be established. Then, we use a log-polar histogram to build three color-invariance components for the SIFT descriptor, so that our improved SIFT descriptor is invariant to changes in lighting geometry, illuminant color, and camera gamma between the left and right images. We then match feature points between the two images and use the Euclidean distance between SIFT descriptors as a matching measure on our data sets to further improve accuracy and robustness. Experimental results show that our method is superior to other SIFT-based algorithms, including conventional stereo matching algorithms, under various
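
    A minimal sketch of one common log-chromaticity mapping (normalizing by the geometric mean of the channels before taking logs, which cancels a global multiplicative intensity factor); the exact space and invariants used in the paper may differ.

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Map an RGB image (H, W, 3) to log-chromaticity coordinates."""
    rgb = rgb.astype(np.float64) + eps
    geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
    # Per-pixel division removes any common scale factor; the log turns the
    # remaining multiplicative radiometric factors into additive offsets.
    return np.log(rgb / geo_mean[..., None])
```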

  17. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects in a crowded scene is of utmost importance. Our technique assumes a partially overlapped multi-camera setup, where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (target) in new frames. A color-pattern update scheme is used to further improve the efficiency of object tracking when the object's appearance changes due to its motion in the cameras' fields of view. An evaluation of our approach is presented with results on the PETS2007 dataset.
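
    The epipolar step described above can be sketched with OpenCV: given the fundamental matrix relating two overlapped views, the returned lines are the loci along which the block search for the target would be restricted in the second camera. The fundamental matrix is assumed to be known from a prior calibration, a detail the abstract does not spell out.

```python
import numpy as np
import cv2

def epipolar_search_lines(points_cam1, fundamental_matrix):
    """Epipolar lines in camera 2 for target points detected in camera 1.

    points_cam1:        (N, 2) pixel coordinates in the first view
    fundamental_matrix: 3x3 matrix F relating the two views
    Returns an (N, 3) array of line coefficients (a, b, c) with ax + by + c = 0.
    """
    pts = points_cam1.reshape(-1, 1, 2).astype(np.float64)
    lines = cv2.computeCorrespondEpilines(pts, 1, fundamental_matrix)
    return lines.reshape(-1, 3)
```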

  18. Effect of a single injection of gonadotropin-releasing hormone (GnRH) and human chorionic gonadotropin (hCG) on testicular blood flow measured by color doppler ultrasonography in male Shiba goats.

    Science.gov (United States)

    Samir, Haney; Sasaki, Kazuaki; Ahmed, Eman; Karen, Aly; Nagaoka, Kentaro; El Sayed, Mohamed; Taya, Kazuyoshi; Watanabe, Gen

    2015-05-01

    Although color Doppler ultrasonography has been used to evaluate testicular blood flow in many species, very little has been done in goats. Eight male Shiba goats were exposed to a single intramuscular injection of either gonadotropin-releasing hormone (GnRH group; 1 µg/kg BW) or human chorionic gonadotropin (hCG group; 25 IU/kg BW). Plasma testosterone (T), estradiol (E2) and inhibin (INH) were measured just before (0 hr) and at different intervals post injection by radioimmunoassay. Testis volume (TV) and Doppler indices, such as resistive index (RI) and pulsatility index (PI) of the supratesticular artery, were measured by B-mode and color Doppler ultrasonography, respectively. The results indicated an increase in testicular blood flow in both groups, as RI and PI decreased significantly (P<0.05), but this increase was significantly greater and occurred earlier in the hCG group (1 hr) than in the GnRH group (2 hr). A high correlation was found for RI and PI with both T (RI, r= -0.862; PI, r= -0.707) and INH in the GnRH group (RI, r=0.661; PI, r=0.701). However, a significant (P<0.05) correlation was found between E2 and both RI (r= -0.610) and PI (r= -0.763) in the hCG group. In addition, TV significantly increased and was highly correlated with RI in both groups (GnRH, r= -0.718; hCG, r= -0.779). In conclusion, hCG and GnRH may improve testicular blood flow and TV in Shiba goats.

  19. Texture affects color emotion

    NARCIS (Netherlands)

    Lucassen, M.P.; Gevers, T.; Gijsenij, A.

    2011-01-01

    Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm-cool,

  20. What is Color Blindness?

    Science.gov (United States)


  1. Derivation of Color Confusion Lines for Pseudo-Dichromat Observers from Color Discrimination Thresholds

    Directory of Open Access Journals (Sweden)

    Kahiro Matsudaira

    2011-05-01

    Full Text Available The objective is to develop a method of defining color confusion lines in the display RGB color space through color discrimination tasks. In the experiment, reference and test square patches were presented side by side on a CRT display. The subject's task was to set the test color so that the color difference from the reference was just noticeable to him/her. In a single trial, the test color was adjustable only along one of 26 directions around the reference. Thus 26 colors at a just noticeable difference (JND) were obtained, making up a tube-like or ellipsoidal shape around each reference. For color-anomalous subjects, the major axes of these shapes should be parallel to color confusion lines that share a common orientation vector corresponding to one of the cone excitation axes L, M, or S. In our method, the orientation vector was determined by minimizing the sum of the squares of the distances from the JND colors to each confusion line. To assess the performance of the method, the orientation vectors obtained by pseudo-dichromats (color-normal observers with a dichromat simulator) were compared to those theoretically calculated from the color vision model used in the simulator.
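
    For a single reference, minimizing the summed squared distances from the JND colors to a line through the reference is a principal-axis fit, which can be sketched with an SVD; this is a plausible reading of the fitting step, not code from the paper.

```python
import numpy as np

def confusion_line_direction(jnd_colors, reference_color):
    """Orientation of the best-fit line through a reference color.

    jnd_colors:      (N, 3) JND colors in display RGB
    reference_color: (3,) reference RGB value
    The dominant right singular vector of the reference-relative coordinates
    minimizes the sum of squared perpendicular distances to the line.
    """
    d = np.asarray(jnd_colors, dtype=np.float64) - np.asarray(reference_color, dtype=np.float64)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return vt[0]  # unit orientation vector of the fitted confusion line
```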

  2. Full-color structured illumination optical sectioning microscopy

    Science.gov (United States)

    Qian, Jia; Lei, Ming; Dan, Dan; Yao, Baoli; Zhou, Xing; Yang, Yanlong; Yan, Shaohui; Min, Junwei; Yu, Xianghua

    2015-09-01

    Owing to its super-resolved resolution and the fast speed of its three-dimensional (3D) optical sectioning capability, structured illumination microscopy (SIM) has found a variety of applications in biomedical imaging. So far, most SIM systems use monochrome CCD or CMOS cameras to acquire images and discard the natural color information of the specimens. Although multicolor integration schemes have been employed, they require multiple excitation sources and detectors, and the spectral information is limited to a few wavelengths. Here, we report a new method for full-color SIM with a color digital camera. A data-processing algorithm based on the HSV (Hue, Saturation, Value) color space is proposed, in which the recorded raw color images are processed in the Hue, Saturation, and Value channels and then reconstructed into a 3D image with full color. We demonstrate 3D optical sectioning results on samples such as mixed pollen grains, insects, micro-chips, and the surface of coins. The presented technique is applicable to circumstances where color information plays a crucial role, such as in materials science and surface morphology.
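
    One plausible reading of the HSV-based processing is sketched below: hue and saturation are taken from the average wide-field image, while the optical sectioning is computed on the Value channel only and the three channels are then recombined. The SIM reconstruction itself is left as a user-supplied callable, since the abstract does not specify it.

```python
import numpy as np
from skimage import color

def sim_reconstruct_color(raw_color_frames, reconstruct_value_channel):
    """Sketch of full-color SIM processing in HSV space.

    raw_color_frames:          sequence of RGB raw frames (phase-shifted patterns)
    reconstruct_value_channel: callable that performs the SIM sectioning on a
                               stack of single-channel (Value) images
    """
    hsv_stack = np.array([color.rgb2hsv(f) for f in raw_color_frames])  # (N, H, W, 3)
    sectioned_v = reconstruct_value_channel(hsv_stack[..., 2])          # SIM on V only
    mean_hsv = hsv_stack.mean(axis=0)                                   # wide-field H and S
    out_hsv = np.dstack([mean_hsv[..., 0], mean_hsv[..., 1], sectioned_v])
    return color.hsv2rgb(np.clip(out_hsv, 0.0, 1.0))
```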

  3. Color enhancement in multispectral image of human skin

    Science.gov (United States)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by multispectral image capture. Since color enhancement in medical color images is effective for distinguishing lesions from normal tissue, we apply a new multispectral color-enhancement technique that emphasizes the features contained in a certain spectral band without changing the average color distribution of the original image. In this method, to preserve the average color distribution, the KL transform is applied to the spectral data, and only high-order KL coefficients are amplified in the enhancement. Multispectral images of the human skin of a bruised arm are captured with a 16-band multispectral camera, and the proposed color enhancement is applied. The resulting images are compared with color images reproduced assuming a CIE D65 illuminant (obtained by a natural color reproduction technique). The proposed technique successfully visualizes faint bruised lesions that are almost invisible in the natural color images. It can provide a support tool for diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsores, and so on.
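
    The KL-transform step is equivalent to a principal component analysis of the spectral data; a simplified sketch of amplifying only the high-order coefficients is shown below (the number of preserved components and the gain are placeholders):

```python
import numpy as np

def kl_enhance(spectral_image, keep=3, gain=4.0):
    """Amplify high-order KL (PCA) components of a multispectral image.

    spectral_image: (H, W, B) array with B spectral bands.
    The first `keep` components, which carry the average color appearance,
    are left unchanged; the remaining components are multiplied by `gain`.
    """
    h, w, b = spectral_image.shape
    x = spectral_image.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    eigvals, eigvecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # sort by descending variance
    coeffs = xc @ eigvecs                             # KL coefficients
    coeffs[:, keep:] *= gain                          # boost high-order terms only
    return (coeffs @ eigvecs.T + mean).reshape(h, w, b)
```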

  4. Sensory Drive, Color, and Color Vision.

    Science.gov (United States)

    Price, Trevor D

    2017-08-01

    Colors often appear to differ in arbitrary ways among related species. However, a fraction of color diversity may be explained because some signals are more easily perceived in one environment rather than another. Models show that not only signals but also the perception of signals should regularly evolve in response to different environments, whether these primarily involve detection of conspecifics or detection of predators and prey. Thus, a deeper understanding of how perception of color correlates with environmental attributes should help generate more predictive models of color divergence. Here, I briefly review our understanding of color vision in vertebrates. Then I focus on opsin spectral tuning and opsin expression, two traits involved in color perception that have become amenable to study. I ask how opsin tuning is correlated with ecological differences, notably the light environment, and how this potentially affects perception of conspecific colors. Although opsin tuning appears to evolve slowly, opsin expression levels are more evolutionarily labile but have been difficult to connect to color perception. The challenge going forward will be to identify how physiological differences involved in color vision, such as opsin expression levels, translate into perceptual differences, the selection pressures that have driven those differences, and ultimately how this may drive evolution of conspecific colors.

  5. Modeling color preference using color space metrics.

    Science.gov (United States)

    Schloss, Karen B; Lessard, Laurent; Racey, Chris; Hurlbert, Anya C

    2017-07-27

    Studying color preferences provides a means to discover how perceptual experiences map onto cognitive and affective judgments. A challenge is finding a parsimonious way to describe and predict patterns of color preferences, which are complex with rich individual differences. One approach has been to model color preferences using factors from metric color spaces to establish direct correspondences between dimensions of color and preference. Prior work established that substantial, but not all, variance in color preferences could be captured by weights on color space dimensions using multiple linear regression. The question we address here is whether model fits may be improved by using different color metric specifications. We therefore conducted a large-scale analysis of color space models, and focused in-depth analysis on models that differed in color space (cone-contrast vs. CIELAB), coordinate system within the color space (Cartesian vs. cylindrical), and factor degrees (1st degree only, or 1st and 2nd degree). We used k-fold cross validation to avoid over-fitting the data and to ensure fair comparisons across models. The best model was the 2nd-harmonic Lch model ("LabC Cyl2"). Specified in CIELAB space, it included 1st and 2nd harmonics of hue (capturing opponency in hue preferences and simultaneous liking/disliking of both hues on an opponent axis, respectively), lightness, and chroma. These modeling approaches can be used to characterize and compare patterns for group averages and individuals in future datasets on color preference, or other measures in which correspondences between color appearance and cognitive or affective judgments may exist. Copyright © 2017 Elsevier Ltd. All rights reserved.
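
    The kind of design matrix and cross-validated fit described above can be sketched as follows; the exact predictors and weighting in the published "LabC Cyl2" model may differ from this guess.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def lch_harmonic_features(lightness, chroma, hue_deg):
    """Lightness, chroma, and 1st/2nd hue harmonics as regression predictors."""
    h = np.deg2rad(hue_deg)
    return np.column_stack([
        lightness, chroma,
        np.cos(h), np.sin(h),          # 1st hue harmonic (hue opponency)
        np.cos(2 * h), np.sin(2 * h),  # 2nd hue harmonic
    ])

def fit_preference_model(lightness, chroma, hue_deg, ratings, folds=5):
    """k-fold cross-validated linear fit of preference ratings."""
    X = lch_harmonic_features(lightness, chroma, hue_deg)
    model = LinearRegression()
    cv_r2 = cross_val_score(model, X, ratings, cv=folds, scoring="r2").mean()
    model.fit(X, ratings)
    return model, cv_r2
```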

  6. Color: Physics and Perception

    Science.gov (United States)

    Gilbert, Pupa

    Unless we are colorblind, as soon as we look at something, we know what color it is. Simple, isn't it? No, not really. The color we see is rarely just determined by the physical color, that is, the wavelength of visible light associated with that color. Other factors, such as the illuminating light, or the brightness surrounding a certain color, affect our perception of that color. Most striking, and useful, is understanding how the retina and the brain work together to interpret the color we see, and how they can be fooled by additive color mixing, which makes it possible to have color screens and displays. I will show the physical origin of all these phenomena and give live demos as I explain how they work. Bring your own eyes! For more information: (1) watch TED talk: ``Color: Physics and Perception'' and (2) read book: PUPA Gilbert and W Haeberli ``Physics in the Arts'', ISBN 9780123918789.

  7. Industrial Color Physics

    CERN Document Server

    Klein, Georg A

    2010-01-01

    This unique book starts with a short historical overview of the development of the theories of color vision and applications of industrial color physics. The three dominant factors producing color - light source, color sample, and observer - are described in detail. The standardized color spaces are shown and related color values are applied to characteristic color qualities of absorption as well as of effect colorants. The fundamentals of spectrometric and colorimetric measuring techniques together with specific applications are described. Theoretical models for radiative transfer in transparent, translucent, and opaque layers are detailed; the two, three, and multi-flux approximations are presented for the first time in a coherent formalism. These methods constitute the fundamentals not only for the important classical methods, but also modern methods of recipe prediction applicable to all known colorants. The text is supplied with 52 tables, more than 200 partially colored illustrations, an appendix, and a...

  8. Light field driven streak-camera for single-shot measurements of the temporal profile of XUV-pulses from a free-electron laser; Lichtfeld getriebene Streak-Kamera zur Einzelschuss Zeitstrukturmessung der XUV-Pulse eines Freie-Elektronen Lasers

    Energy Technology Data Exchange (ETDEWEB)

    Fruehling, Ulrike

    2009-10-15

    The Free Electron Laser in Hamburg (FLASH) is a source for highly intense ultra short extreme ultraviolet (XUV) light pulses with pulse durations of a few femtoseconds. Due to the stochastic nature of the light generation scheme based on self amplified spontaneous emission (SASE), the duration and temporal profile of the XUV pulses fluctuate from shot to shot. In this thesis, a THz-field driven streak-camera capable of single pulse measurements of the XUV pulse-profile has been realized. In a first XUV-THz pump-probe experiment at FLASH, the XUV-pulses are overlapped in a gas target with synchronized THz-pulses generated by a new THz-undulator. The electromagnetic field of the THz light accelerates photoelectrons produced by the XUV-pulses with the resulting change of the photoelectron momenta depending on the phase of the THz field at the time of ionisation. This technique is intensively used in attosecond metrology where near infrared streaking fields are employed for the temporal characterisation of attosecond XUV-Pulses. Here, it is adapted for the analysis of pulse durations in the few femtosecond range by choosing a hundred times longer far infrared streaking wavelengths. Thus, the gap between conventional streak cameras with typical resolutions of hundreds of femtoseconds and techniques with attosecond resolution is filled. Using the THz-streak camera, the time dependent electric field of the THz-pulses was sampled in great detail while on the other hand the duration and even details of the time structure of the XUV-pulses were characterized. (orig.)

  9. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High-dynamic-range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Current methods that synthesize high-dynamic-range images from differently exposed image sequences cannot adapt to dynamic scenes: they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new high-dynamic-range image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences are captured with the camera array, and a derivative optical-flow method based on color gradients is used to estimate the deviation between images and align them. Then, a high-dynamic-range fusion weighting function is established by combining the inverse camera response function with the inter-image deviation, and is applied to generate a high-dynamic-range image. Experiments show that the proposed method effectively obtains high-dynamic-range images in dynamic scenes and achieves good results.
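
    A much-simplified version of the weighted fusion step is sketched below; it approximates the inverse camera response by a gamma curve and uses a hat-shaped weight, whereas the paper's weighting additionally folds in the inter-image deviation estimated by optical flow.

```python
import numpy as np

def merge_hdr(aligned_frames, exposure_times, gamma=2.2, eps=1e-6):
    """Weighted merge of aligned, differently exposed frames into radiance.

    aligned_frames: list of float images in [0, 1], already registered
    exposure_times: matching list of exposure times (seconds)
    """
    num, den = 0.0, 0.0
    for img, t in zip(aligned_frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)     # hat weight, favors well-exposed pixels
        radiance = np.power(img, gamma) / t   # crude inverse camera response
        num = num + w * radiance
        den = den + w
    return num / np.maximum(den, eps)
```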

  10. COLORS OF ELLIPTICALS FROM GALEX TO SPITZER

    International Nuclear Information System (INIS)

    Schombert, James M.

    2016-01-01

    Multi-color photometry is presented for a large sample of local ellipticals selected by morphology and isolation. The sample uses data from the Galaxy Evolution Explorer ( GALEX ), Sloan Digital Sky Survey (SDSS), Two Micron All-Sky Survey (2MASS), and Spitzer to cover the filters NUV , ugri , JHK and 3.6 μ m. Various two-color diagrams, using the half-light aperture defined in the 2MASS J filter, are very coherent from color to color, meaning that galaxies defined to be red in one color are always red in other colors. Comparison to globular cluster colors demonstrates that ellipticals are not composed of a single age, single metallicity (e.g., [Fe/H]) stellar population, but require a multi-metallicity model using a chemical enrichment scenario. Such a model is sufficient to explain two-color diagrams and the color–magnitude relations for all colors using only metallicity as a variable on a solely 12 Gyr stellar population with no evidence of stars younger than 10 Gyr. The [Fe/H] values that match galaxy colors range from −0.5 to +0.4, much higher (and older) than population characteristics deduced from Lick/IDS line-strength system studies, indicating an inconsistency between galaxy colors and line indices values for reasons unknown. The NUV colors have unusual behavior, signaling the rise and fall of the UV upturn with elliptical luminosity. Models with blue horizontal branch tracks can reproduce this behavior, indicating the UV upturn is strictly a metallicity effect.

  11. COLORS OF ELLIPTICALS FROM GALEX TO SPITZER

    Energy Technology Data Exchange (ETDEWEB)

    Schombert, James M., E-mail: jschombe@uoregon.edu [Department of Physics, University of Oregon, Eugene, OR 97403 (United States)

    2016-12-01

    Multi-color photometry is presented for a large sample of local ellipticals selected by morphology and isolation. The sample uses data from the Galaxy Evolution Explorer ( GALEX ), Sloan Digital Sky Survey (SDSS), Two Micron All-Sky Survey (2MASS), and Spitzer to cover the filters NUV , ugri , JHK and 3.6 μ m. Various two-color diagrams, using the half-light aperture defined in the 2MASS J filter, are very coherent from color to color, meaning that galaxies defined to be red in one color are always red in other colors. Comparison to globular cluster colors demonstrates that ellipticals are not composed of a single age, single metallicity (e.g., [Fe/H]) stellar population, but require a multi-metallicity model using a chemical enrichment scenario. Such a model is sufficient to explain two-color diagrams and the color–magnitude relations for all colors using only metallicity as a variable on a solely 12 Gyr stellar population with no evidence of stars younger than 10 Gyr. The [Fe/H] values that match galaxy colors range from −0.5 to +0.4, much higher (and older) than population characteristics deduced from Lick/IDS line-strength system studies, indicating an inconsistency between galaxy colors and line indices values for reasons unknown. The NUV colors have unusual behavior, signaling the rise and fall of the UV upturn with elliptical luminosity. Models with blue horizontal branch tracks can reproduce this behavior, indicating the UV upturn is strictly a metallicity effect.

  12. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model and using real-time color and depth data, robots with shared fields of view estimate their relative poses pairwise. The system does not require a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes the workload evenly in the system, so it is scalable and the computing power of the participating robots is used efficiently. The performance and robustness were analyzed on both synthetic and experimental data in different environments, over a range of system configurations with varying numbers of robots and poses.
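
    One simple way to use the edge-weighted graph model described above is to chain the pairwise relative poses along lowest-weight paths from an anchor robot; the sketch below illustrates that idea under stated assumptions and is not necessarily the authors' exact scheme.

```python
import networkx as nx
import numpy as np

def global_poses(pairwise, anchor=0):
    """Compose pairwise relative poses over an edge-weighted robot graph.

    pairwise: dict mapping (i, j) -> (4x4 homogeneous transform of robot j
              expressed in robot i's frame, edge weight), for robot pairs
              with overlapping fields of view.
    Returns a 4x4 pose for every robot reachable from the anchor.
    """
    g = nx.Graph()
    for (i, j), (T, w) in pairwise.items():
        g.add_edge(i, j, T=T, w=w, forward=(i, j))
    poses = {anchor: np.eye(4)}
    for node in g.nodes:
        if node == anchor:
            continue
        path = nx.shortest_path(g, anchor, node, weight="w")
        T = np.eye(4)
        for a, b in zip(path[:-1], path[1:]):
            e = g.edges[a, b]
            step = e["T"] if e["forward"] == (a, b) else np.linalg.inv(e["T"])
            T = T @ step   # chain the relative transforms along the path
        poses[node] = T
    return poses
```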

  13. Animal coloration research: why it matters.

    Science.gov (United States)

    Caro, Tim; Stoddard, Mary Caswell; Stuart-Fox, Devi

    2017-07-05

    While basic research on animal coloration is the theme of this special edition, here we highlight its applied significance for industry, innovation and society. Both the nanophotonic structures producing stunning optical effects and the colour perception mechanisms in animals are extremely diverse, having been honed over millions of years of evolution for many different purposes. Consequently, there is a wealth of opportunity for biomimetic and bioinspired applications of animal coloration research, spanning colour production, perception and function. Fundamental research on the production and perception of animal coloration is contributing to breakthroughs in the design of new materials (cosmetics, textiles, paints, optical coatings, security labels) and new technologies (cameras, sensors, optical devices, robots, biomedical implants). In addition, discoveries about the function of animal colour are influencing sport, fashion, the military and conservation. Understanding and applying knowledge of animal coloration is now a multidisciplinary exercise. Our goal here is to provide a catalyst for new ideas and collaborations between biologists studying animal coloration and researchers in other disciplines.This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).

  14. Coloring local feature extraction

    OpenAIRE

    Van De Weijer, Joost; Schmid, Cordelia

    2006-01-01

    Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the-art local feature-based representations are mostly based on shape description and ignore color information. The description of color is hampered by the large amount of variation, which causes the measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To accomplish a wide applic...

  15. Color models of hadrons

    International Nuclear Information System (INIS)

    Greenberg, O.W.; Nelson, C.A.

    1977-01-01

    The evidence for a three-valued 'color' degree of freedom in hadron physics is reviewed. The structure of color models is discussed. Consequences of color models for elementary particle physics are discussed, including saturation properties of hadronic states, π0 → 2γ and related decays, leptoproduction, and lepton pair annihilation. Signatures are given which distinguish theories with isolated colored particles from those in which color is permanently bound. (Auth.)

  16. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  17. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Full Text Available Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors

  18. The TolTEC Camera for the LMT Telescope

    Science.gov (United States)

    Bryan, Sean

    2018-01-01

    TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.

  19. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are the application areas, software, and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor delivers the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper goes on to show that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.

  20. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black-and-white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board, and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 µs it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green, and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed, and stored in digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
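
    If the channel mixing is modeled as a single 3x3 matrix measured in the one-time calibration, the crosstalk correction reduces to a linear unmixing of the decoded R, G, B channels; a sketch under that assumption (the paper's actual correction equations are not reproduced here):

```python
import numpy as np

def unmix_frames(rgb_field, mixing_matrix):
    """Undo color crosstalk between the three flash-separated exposures.

    rgb_field:     (H, W, 3) decoded R, G, B channels of one captured video field
    mixing_matrix: 3x3 matrix whose entry (i, j) gives how much of flash j
                   leaks into color channel i (from the calibration step)
    """
    h, w, _ = rgb_field.shape
    unmix = np.linalg.inv(mixing_matrix)
    frames = rgb_field.reshape(-1, 3) @ unmix.T   # per-pixel linear unmixing
    return np.clip(frames, 0.0, None).reshape(h, w, 3)
```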

  1. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended for time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. The individual cameras of the device stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself from observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics were designed in-house at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  2. Color-avoiding percolation

    Science.gov (United States)

    Krause, Sebastian M.; Danziger, Michael M.; Zlatić, Vinko

    2017-08-01

    Many real world networks have groups of similar nodes which are vulnerable to the same failure or adversary. Nodes can be colored in such a way that colors encode the shared vulnerabilities. Using multiple paths to avoid these vulnerabilities can greatly improve network robustness, if such paths exist. Color-avoiding percolation provides a theoretical framework for analyzing this scenario, focusing on the maximal set of nodes which can be connected via multiple color-avoiding paths. In this paper we extend the basic theory of color-avoiding percolation that was published in S. M. Krause et al. [Phys. Rev. X 6, 041022 (2016)], 10.1103/PhysRevX.6.041022. We explicitly account for the fact that the same particular link can be part of different paths avoiding different colors. This fact was previously accounted for with a heuristic approximation. Here we propose a better method for solving this problem which is substantially more accurate for many avoided colors. Further, we formulate our method with differentiated node functions, either as senders and receivers, or as transmitters. In both functions, nodes can be explicitly trusted or avoided. With only one avoided color we obtain standard percolation. Avoiding additional colors one by one, we can understand the critical behavior of color-avoiding percolation. For unequal color frequencies, we find that the colors with the largest frequencies control the critical threshold and exponent. Colors of small frequencies have only a minor influence on color-avoiding connectivity, thus allowing for approximations.
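
    As a concrete illustration of the central notion, the sketch below checks whether two nodes are color-avoiding connected in a colored graph: for every color, there must be a connecting path that avoids all nodes of that color except the (trusted) endpoints. This follows the general definition described above and is not code from the paper.

```python
import networkx as nx

def color_avoiding_connected(graph, source, target, colors):
    """True if source and target remain connected when each color is avoided in turn.

    graph:  networkx Graph whose nodes carry a 'color' attribute
    colors: iterable of all colors that must be individually avoidable
    The endpoints themselves are kept in every check, matching the picture of
    trusted sender and receiver nodes.
    """
    for c in colors:
        keep = [n for n, d in graph.nodes(data=True)
                if d.get("color") != c or n in (source, target)]
        sub = graph.subgraph(keep)
        if source not in sub or target not in sub or not nx.has_path(sub, source, target):
            return False
    return True
```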

  3. NEW METHOD USING IMAGE ANALYSIS TO MEASURE GINGIVAL COLOR

    OpenAIRE

    Takayoshi Tsubai; Mansjur Nasir; Mardiana A. Adam; Rungnapa Warotayanont; J. E. Scott

    2015-01-01

    For many years, observation of gingival color has been a popular area of dental research. However, these methods are hard to analyze for anything other than the different base conditions and colors. Thus we introduced an alternative method using image analysis to measure gingival color. For the research we performed a dental examination on 30 female students. The system is set up by aligning the camera area and facial area. The subject's chin is placed in a fixed chin cup mounted 30 cm from the came...

  4. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  5. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
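
    The paper derives its color-invariant values from surface reflectance via a virtual narrow-band camera; as a rough intuition for why narrow bands help, the sketch below (Python/NumPy) shows a generic invariant under the diagonal (von Kries) illuminant model, where normalising each band by its spatial mean cancels a per-band illuminant scaling. This is an illustrative construction, not the paper's formulation.

      # Under the narrow-band (diagonal) model, an illuminant change only scales
      # each band by a constant, so dividing each band by its spatial mean yields
      # values that are unchanged by that scaling.
      import numpy as np

      def band_normalized(img, eps=1e-12):
          """img: (H, W, 3) virtual narrow-band values -> illuminant-scaled-out values."""
          return img / (img.mean(axis=(0, 1), keepdims=True) + eps)

      scene = np.random.rand(8, 8, 3)
      illum_change = scene * np.array([1.4, 0.9, 0.7])   # per-band scaling
      assert np.allclose(band_normalized(scene), band_normalized(illum_change))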

  6. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  7. A practical one-shot multispectral imaging system using a single image sensor.

    Science.gov (United States)

    Monno, Yusuke; Kikuchi, Sunao; Tanaka, Masayuki; Okutomi, Masatoshi

    2015-10-01

    Single-sensor imaging using the Bayer color filter array (CFA) and demosaicking is well established for current compact and low-cost color digital cameras. An extension from the CFA to a multispectral filter array (MSFA) enables us to acquire a multispectral image in one shot without increased size or cost. However, multispectral demosaicking for the MSFA has been a challenging problem because of very sparse sampling of each spectral band in the MSFA. In this paper, we propose a high-performance multispectral demosaicking algorithm, and at the same time, a novel MSFA pattern that is suitable for our proposed algorithm. Our key idea is the use of the guided filter to interpolate each spectral band. To generate an effective guide image, in our proposed MSFA pattern, we maintain the sampling density of the G-band as high as the Bayer CFA, and we array each spectral band so that an adaptive kernel can be estimated directly from raw MSFA data. Given these two advantages, we effectively generate the guide image from the most densely sampled G-band using the adaptive kernel. In the experiments, we demonstrate that our proposed algorithm with our proposed MSFA pattern outperforms existing algorithms and provides better color fidelity compared with a conventional color imaging system with the Bayer CFA. We also show some real applications using a multispectral camera prototype we built.
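
    A minimal guided-filter upsampling sketch in the same spirit is given below (Python with NumPy/SciPy): a sparsely sampled band is crudely initialised and then refined using the densely sampled G-band as the guide. It uses the generic guided filter rather than the authors' adaptive-kernel variant, and the 4x4 sampling pattern is an arbitrary toy MSFA.

      # Generic guided filter (box-filter form) used to refine a sparse band
      # with the dense G-band as guide; synthetic data stands in for raw MSFA input.
      import numpy as np
      from scipy.ndimage import uniform_filter, distance_transform_edt

      def guided_filter(guide, src, radius=4, eps=1e-3):
          size = 2 * radius + 1
          mean_I  = uniform_filter(guide, size)
          mean_p  = uniform_filter(src, size)
          corr_Ip = uniform_filter(guide * src, size)
          corr_II = uniform_filter(guide * guide, size)
          a = (corr_Ip - mean_I * mean_p) / (corr_II - mean_I ** 2 + eps)
          b = mean_p - a * mean_I
          return uniform_filter(a, size) * guide + uniform_filter(b, size)

      def nearest_fill(band, mask):
          """Fill unsampled pixels with the nearest sampled value (crude init)."""
          idx = distance_transform_edt(~mask, return_distances=False,
                                       return_indices=True)
          return band[tuple(idx)]

      h, w = 64, 64
      guide = np.random.rand(h, w)                 # stand-in for the dense G-band
      truth = uniform_filter(guide, 5)             # a band correlated with the guide
      mask = np.zeros((h, w), bool); mask[::4, ::4] = True   # sparse MSFA samples
      sparse = np.where(mask, truth, 0.0)
      estimate = guided_filter(guide, nearest_fill(sparse, mask))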

  8. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and pico-second range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high speed electronic cinematography will be illustrated by a few typical applications [fr

  9. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  10. Preferred skin color enhancement for photographic color reproduction

    Science.gov (United States)

    Zeng, Huanzhao; Luo, Ronnier

    2011-01-01

    Skin tones are the most important colors among the memory color category. Reproducing skin colors pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves the color preference of skin color reproduction. Several methods to morph skin colors toward a smaller preferred skin color region have been reported in the past. In this paper, a new approach is proposed to further improve the result of skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through psychophysical experiments were applied for color adjustment. Preferred skin color centers for dark, medium, and light skin colors are applied to adjust skin colors differently. Skin colors are morphed toward their preferred color centers. A special processing step is applied to avoid contrast loss in highlights. A 3-D interpolation method is applied to fix a potential contouring problem and to improve color processing efficiency. A psychophysical experiment validates that the method of preferred skin color enhancement effectively identifies skin colors, improves the skin color preference, and does not objectionably affect preferred skin colors in original images.
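
    The sketch below (Python/NumPy) illustrates the general idea of ellipsoid-weighted skin-color morphing: a Gaussian/ellipsoidal model in the chroma plane supplies a per-pixel skin probability that weights a shift toward a preferred center. The model parameters and preferred center are invented placeholders, not the experimentally determined values used in the paper, and the highlight-protection and 3-D interpolation steps are omitted.

      # Ellipsoid-weighted pull of chroma toward a preferred skin-color centre.
      import numpy as np

      SKIN_MEAN = np.array([18.0, 14.0])          # hypothetical (a*, b*) centre
      SKIN_COV_INV = np.linalg.inv(np.array([[60.0, 20.0],
                                             [20.0, 40.0]]))
      PREFERRED_AB = np.array([16.0, 17.0])       # hypothetical preferred centre

      def enhance_skin(lab, strength=0.5):
          """lab: (..., 3) CIELAB image; chroma is nudged toward the preferred
          centre, weighted by an ellipsoidal skin probability."""
          ab = lab[..., 1:]
          d = ab - SKIN_MEAN
          maha2 = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)
          w = np.exp(-0.5 * maha2)                 # ~1 inside the ellipsoid, ->0 outside
          shifted = ab + strength * w[..., None] * (PREFERRED_AB - ab)
          return np.concatenate([lab[..., :1], shifted], axis=-1)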

  11. Effect of endodontic sealers on tooth color.

    Science.gov (United States)

    Meincke, Débora Könzgen; Prado, Maíra; Gomes, Brenda Paula Figueiredo; Bona, Alvaro Della; Sousa, Ezilmara Leonor Rolim

    2013-08-01

    One of the goals of endodontic treatment is the adequate filling of the root canal, which is often done using gutta-percha and sealer. It has been reported that sealer remnants in the coronal pulp chamber cause tooth color changes. Therefore, this study was designed to examine the effect of endodontic sealer remnants on tooth color, testing the hypothesis that sealers cause coronal color changes. Forty single-rooted human teeth were endodontically treated leaving excess sealer material in the coronal pulp chamber. The specimens were divided into four groups (n = 10) according to the endodontic sealer used (AH, AH Plus; EF, Endofill; EN, Endomethasone N; and S26, Sealer 26). Teeth were stored at 37 °C in a moist environment. Color coordinates (L*a*b*) were measured with a spectrophotometer before endodontic treatment (baseline control) and at 24 h and 6 months after treatment. L*a*b* values were used to calculate color changes (ΔE). Data were statistically analyzed using Kruskal–Wallis and Mann–Whitney U tests. Color changes were observed for all groups, with S26 and EN producing the greatest mean ΔE values after 6 months. Endodontic sealer remnants affect tooth color, confirming the experimental hypothesis. This study examined the effect of endodontic sealer remnants on tooth color and observed that after 6 months the sealers produced unacceptable color changes. Copyright © 2012 Elsevier Ltd. All rights reserved.
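
    The color change ΔE referred to above is the CIELAB color difference ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²) between two measurements; a minimal helper is sketched below, with made-up coordinates for a tooth at baseline and at 6 months.

      import math

      def delta_e_ab(lab1, lab2):
          """CIELAB Delta E*ab between two (L*, a*, b*) triples."""
          return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

      baseline = (72.4, 1.8, 16.5)       # hypothetical L*, a*, b*
      six_months = (69.1, 2.9, 19.8)
      print(round(delta_e_ab(baseline, six_months), 2))   # ~4.8 -> visible change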

  12. Laser-evoked coloration in polymers

    International Nuclear Information System (INIS)

    Zheng, H.Y.; Rosseinsky, David; Lim, G.C.

    2005-01-01

    Laser-evoked coloration in polymers has long been a major aim of polymer technology for potential applications in product surface decoration and the marking of personalised images and logos. However, the coloration results reported so far were mostly attributed to laser-induced thermal-chemical reactions, and the laser-irradiated areas are characterized by grooves due to material removal. Furthermore, only a single color was laser-induced in any given polymer matrix. Inducing multiple colors in a given polymer matrix with no apparent surface material removal is most desirable and challenging, and may be achieved through laser-induced photo-chemical reactions; however, little public information is available at present. We report that two colors, red and green, have been produced on initially transparent CPV/PVA samples through UV laser-induced photo-chemical reactions. This is believed to be the first observation of laser-induced multiple colors in a given polymer matrix. It is believed that the colorants underwent photo-effected electron transfer with suitable electron donors from the polymers to change from the colorless bipyridilium Bipm2+ to the colored Bipm+ species. The discovery may lead to new approaches to the development of laser-evoked multiple coloration in polymers

  13. Detecting method of subjects' 3D positions and experimental advanced camera control system

    Science.gov (United States)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
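
    Once the colored subject has been located in both sensor-camera images, its 3D coordinates follow from triangulating the two viewing rays. The sketch below (Python/NumPy) shows a standard linear (DLT) triangulation with arbitrary example projection matrices; it is a generic illustration, not the system's actual algorithm.

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """Linear (DLT) triangulation of one point seen at pixel uv1 in camera P1
          and uv2 in camera P2; P1, P2 are 3x4 projection matrices."""
          A = np.vstack([uv1[0] * P1[2] - P1[0],
                         uv1[1] * P1[2] - P1[1],
                         uv2[0] * P2[2] - P2[0],
                         uv2[1] * P2[2] - P2[1]])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
      P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # ~1 m baseline
      point = np.array([0.3, 0.1, 4.0, 1.0])                          # ground truth
      uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
      uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
      print(triangulate(P1, P2, uv1, uv2))   # ~ [0.3, 0.1, 4.0]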

  14. A Novel Mechanism for Color Vision: Pupil Shape and Chromatic Aberration Can Provide Spectral Discrimination for Color Blind Organisms.

    OpenAIRE

    Stubbs, Christopher; Stubbs, Alexander

    2015-01-01

    We present a mechanism by which organisms with only a single photoreceptor, that have a monochromatic view of the world, can achieve color discrimination. The combination of an off axis pupil and the principle of chromatic aberration (where light of different colors focus at different distances behind a lens) can combine to provide color-blind animals with a way to distinguish colors. As a specific example we constructed a computer model of the visual system of cephalopods, (octopus, squid, a...

  15. A Novel Mechanism for Color Vision: Pupil Shape and Chromatic Aberration Can Provide Spectral Discrimination for Color Blind Organisms.

    OpenAIRE

    Stubbs, Alexander L; Stubbs, Christopher William

    2016-01-01

    We present a mechanism by which organisms with only a single photoreceptor, that have a monochromatic view of the world, can achieve color discrimination. The combination of an off axis pupil and the principle of chromatic aberration (where light of different colors focus at different distances behind a lens) can combine to provide color-blind animals with a way to distinguish colors. As a specific example we constructed a computer model of the visual system of cephalopods, (octopus, squid, a...

  16. NEW METHOD USING IMAGE ANALYSIS TO MEASURE GINGIVAL COLOR

    Directory of Open Access Journals (Sweden)

    Takayoshi Tsubai

    2015-07-01

    Full Text Available For many years, observation of gingival color has been a popular area of dental research. However, these methods are hard to analyze for anything other than the different base conditions and colors. Thus we introduced an alternative method using image analysis to measure gingival color. For the research we performed a dental examination on 30 female students. The system is set up by aligning the camera area and facial area. The subject's chin is placed in a fixed chin cup mounted 30 cm from the camera lens. Each image is acquired such that comparison may be made with the original bite holder as well as a standard color scale. After transfer to a computer, we used a curves dialog box for color adjustment; the curves dialog box allows adjustment of the entire tonal range of an image. Analysis showed that the attached gingiva was a more vivid red and yellow than the free gingiva. In conclusion, the system described herein of digital capture and comparison of color images, analysis and separation into three channels of free and attached gingival surface images, and matching with colorimetric scales may be useful for demonstrating the diversity of gingival color as well as for the analysis of gingival health.

  17. REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES

    Directory of Open Access Journals (Sweden)

    T. Yamakawa

    2016-06-01

    Full Text Available Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to the point-clouds. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or the failure of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.

  18. Characteristics of Color Produced by Awa Natural Indigo and Synthetic Indigo

    OpenAIRE

    Kawahito, Miyoko; Yasukawa, Ryoko

    2009-01-01

    Color of cloth dyed with Awa natural indigo is quantitatively compared with color of the cloth dyed with synthetic indigo. Results showed that: 1) color produced by Awa natural indigo is bluer and brighter than color produced by synthetic indigo; 2) a single Gaussian function fits the profile of the running of color produced by Awa natural indigo and the running of color produced by synthetic indigo prepared with sodium hydrosulfite approximates a linear sum of two Gaussian functions; 3) befo...

  19. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
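
    As a small illustration of one stage in such a camera DSP pipeline, the sketch below (Python/NumPy) implements grey-world automatic white balance, a generic textbook method rather than the hardware-oriented algorithm of the paper.

      # Grey-world white balance: scale each channel so the channel means match.
      import numpy as np

      def grey_world_awb(rgb):
          """rgb: (H, W, 3) float image in [0, 1]; returns white-balanced image."""
          means = rgb.reshape(-1, 3).mean(axis=0)
          gains = means.mean() / np.maximum(means, 1e-6)
          return np.clip(rgb * gains, 0.0, 1.0)

      frame = np.random.rand(480, 640, 3) * np.array([1.0, 0.8, 0.6])  # simulated colour cast
      balanced = grey_world_awb(frame)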

  20. Towards a miniaturized photon counting laser altimeter and stereoscopic camera instrument suite for microsatellites

    NARCIS (Netherlands)

    Moon, S.G.; Hannemann, S.; Collon, M.; Wielinga, K.; Kroesbergen, E.; Harris, J.; Gill, E.K.A.; Maessen, D.C.

    2009-01-01

    In the following we review the optimization for microsatellite deployment of a highly integrated payload suite comprising a high resolution camera, an additional camera for stereoscopic imaging, and a single photon counting laser altimeter. This payload suite, the `Stereo Imaging Laser Altimeter'

  1. Teaching of color in predoctoral and postdoctoral dental education in 2009.

    Science.gov (United States)

    Paravina, Rade D; O'Neill, Paula N; Swift, Edward J; Nathanson, Dan; Goodacre, Charles J

    2010-01-01

    The goal of the study was to determine the current status of the teaching of color in dental education at both the predoctoral (Pre-D) and postdoctoral (Post-D) levels. A cross-sectional web-based survey containing 27 multiple-choice, multiple-best-answer and single-best-answer questions was created. Upon receiving administrative approval, dental faculty involved in the teaching of color to Pre-D or Post-D dental students from around the world (N=205) were administered the survey. Statistical analysis of differences between Pre-D and Post-D was performed using the chi-square test (α=0.05). A total of 130 responses were received (response rate 63.4%); there were 70 responses from North America, 40 from Europe, 10 from South America, nine from Asia and one from Africa. A course on "color" or "color in dentistry" was included in the dental curriculum of 80% of Pre-D programs and 82% of Post-D programs. The number of hours dedicated to color-related topics was 4.0±2.4 for Pre-D and 5.5±2.9 for Post-D, respectively (p<0.05). Topics covered included the 3D-Master shade guide, digital camera and lens selection, composite resins, and maxillofacial prosthetic materials. Except for the restorative courses and composite resins, significantly higher results were recorded for Post-D programs. Vitapan Classical and 3D-Master were the most frequently taught shade guides. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Development Of A Multicolor Sub/millimeter Camera Using Microwave Kinetic Inductance Detectors

    Science.gov (United States)

    Schlaerth, James A.; Czakon, N. G.; Day, P. K.; Downes, T. P.; Duan, R.; Glenn, J.; Golwala, S. R.; Hollister, M. I.; LeDuc, H. G.; Maloney, P. R.; Mazin, B. A.; Noroozian, O.; Sayers, J.; Siegel, S.; Vayonakis, A.; Zmuidzinas, J.

    2011-01-01

    Microwave Kinetic Inductance Detectors (MKIDs) are superconducting resonators useful for detecting light from the millimeter-wave to the X-ray. These detectors are easily multiplexed, as the resonances can be tuned to slightly different frequencies, allowing hundreds of detectors to be read out simultaneously using a single feedline. The Multicolor Submillimeter Inductance Camera, MUSIC, will use 2304 antenna-coupled MKIDs in multicolor operation, with bands centered at wavelengths of 0.85, 1.1, 1.3 and 2.0 mm, beginning in 2011. Here we present the results of our demonstration instrument, DemoCam, containing a single 3-color array with 72 detectors and optics similar to MUSIC. We present sensitivities achieved at the telescope, and compare to those expected based upon laboratory tests. We explore the factors that limit the sensitivity, in particular electronics noise, antenna efficiency, and excess loading. We discuss mitigation of these factors, and how we plan to improve sensitivity to the level of background-limited performance for the scientific operation of MUSIC. Finally, we note the expected mapping speed and contributions of MUSIC to astrophysics, and in particular to the study of submillimeter galaxies. This research has been funded by grants from the National Science Foundation, the Gordon and Betty Moore Foundation, and the NASA Graduate Student Researchers Program.

  3. SFR test fixture for hemispherical and hyperhemispherical camera systems

    Science.gov (United States)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  4. Image processing for the ESA Faint Object Camera

    Science.gov (United States)

    Norris, P.

    1980-10-01

    The paper describes image processing for the ESA Faint Object Camera (FOC), which complements the NASA Space Telescope scheduled for the 1983 Shuttle launch. The data processing for removing instrument signature effects from the FOC images is discussed, along with subtle errors in the data. Data processing will be accomplished by a minicomputer driving a high-quality color display with large backing disk storage; interactive techniques for selective enhancement of image features will be combined with standard scientific transformation, filtering, and analysis methods. Astronomical techniques, including star finding, will be used, and spectral-type searches will be obtained from astronomical data analysis institutes.

  5. Evaluation of Operator Performance Using True Color and Artificial Color in Natural Scene Perception

    National Research Council Canada - National Science Library

    Vargo, John

    1999-01-01

    .... Recent advances in technology have permitted the fusion of the output of these two devices into a single color display that potentially combines the capabilities of both sensors while overcoming their limitations...

  6. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... new application of artificial intelligence shows whether a patient’s eyes point to high blood pressure or risk ...

  7. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... use of colored contact lenses , from the U.S. Food and Drug Administration (FDA). Are the colored lenses ... 2018 By Dan T. Gudgel Do you know what the difference is between ophthalmologists and optometrists? A ...

  8. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... with Colored Contact Lenses Julian: Teenager Blinded In One Eye By Non-Prescription Contact Lens Laura: Vision ... Robyn: Blurry Vision and Daily Eye Drops After One Use Facts About Colored Contacts and Halloween Safety ...

  9. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... Academy Jobs at the Academy Financial Relationships with Industry Medical Disclaimer Privacy Policy Terms of Service For ...

  10. Antibacterial Structural Color Hydrogels.

    Science.gov (United States)

    Chen, Zhuoyue; Mo, Min; Fu, Fanfan; Shang, Luoran; Wang, Huan; Liu, Cihui; Zhao, Yuanjin

    2017-11-08

    Structural color hydrogels with lasting survivability are important for many applications, but they still lack anti-biodegradation capability. Thus, we herein present novel antibacterial structural color hydrogels by simply integrating silver nanoparticles (AgNPs) in situ into the hydrogel materials. Because the integrated AgNPs possessed wide and excellent antibacterial abilities, the structural color hydrogels could prevent bacterial adhesion, avoid hydrogel damage, and maintain their vivid structural colors during their application and storage. It was demonstrated that the AgNP-tagged poly(N-isopropylacrylamide) structural color hydrogels could retain their original thermal-responsive color transition even when the AgNP-free hydrogels were degraded by bacteria and that the AgNP-integrated self-healing structural color protein hydrogels could save their self-repairing property instead of being degraded by bacteria. These features indicated that the antibacterial structural color hydrogels could be amenable to a variety of practical biomedical applications.

  11. Fingers that change color

    Science.gov (United States)

    Blanching of the fingers; Fingers - pale; Toes that change color; Toes - pale ... These conditions can cause fingers or toes to change color: Buerger disease. Chilblains. Painful inflammation of small ...

  12. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... an ophthalmologist — an eye medical doctor — who will measure each eye and talk to you about proper ...

  13. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  14. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as the information storage medium. It allows 10 holograms of an object to be taken with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr

  15. The LSST camera system overview

    Science.gov (United States)

    Gilmore, Kirk; Kahn, Steven; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe

    2006-06-01

    The LSST camera is a wide-field optical (0.35-1 µm) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100°C to achieve desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  16. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  17. Robotic Arm Camera on Mars, with Lights Off

    Science.gov (United States)

    2008-01-01

    This approximate color image is a view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) as seen by the lander's Surface Stereo Imager (SSI). This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. This yields proper coloring when imaging Phoenix's surrounding Martian environment. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Color and experiment

    International Nuclear Information System (INIS)

    Chanowitz, M.S.

    After a brief review of the color hypothesis and the motivations for its introduction, the experimental tests are discussed. Colored states are assumed not to have been produced at present energies, and the only experimental tests discussed apply below the color threshold, when color is a 'hidden symmetry'. Some of these tests offer the possibility of distinguishing between quark models with fractional and integral quark charges

  19. Color and experimental physics

    International Nuclear Information System (INIS)

    Chanowitz, M.S.

    1975-01-01

    After a brief review of the color hypothesis and the motivations for its introduction, the experimental tests are discussed. It is assumed that colored states have not been produced at present energies, and only experimental tests which apply below the color threshold, when color is a "hidden symmetry," are discussed. Some of these tests offer the possibility of distinguishing between quark models with fractional and integral quark charges. (auth)

  20. Reimagining the Color Wheel

    Science.gov (United States)

    Snyder, Jennifer

    2011-01-01

    Color wheels are a traditional project for many teachers. The author has used them in art appreciation classes for many years, but one problem she found when her pre-service art education students created color wheels was that they were boring: simple circles with pie-shaped pieces, which students either painted or colored in. This article…

  1. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... in Cleveland. "This is far from the truth." Real People, Real Problems with Colored Contact Lenses Julian: Teenager Blinded ... use of colored contact lenses , from the U.S. Food and Drug Administration (FDA). Are the colored lenses ...

  2. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    Science.gov (United States)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a 450 nm long-pass filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
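
    Once the two cameras are co-registered and intensity-calibrated, the line-ratio computation reduces to per-pixel divisions of the aligned frames; a minimal Python sketch is shown below, with placeholder array names and calibration factors rather than the actual analysis scripts used for Proto-MPEX.

      import numpy as np

      def balmer_ratios(red_ch, blue_ch, mono_frame, cal=(1.0, 1.0, 1.0), eps=1e-9):
          """red_ch ~ D-alpha, blue_ch ~ D-beta (colour camera), mono_frame ~ D-gamma
          (filtered monochrome camera); cal holds absolute-intensity factors."""
          d_alpha = cal[0] * red_ch.astype(float)
          d_beta  = cal[1] * blue_ch.astype(float)
          d_gamma = cal[2] * mono_frame.astype(float)
          return {"beta/alpha":  d_beta  / (d_alpha + eps),
                  "gamma/alpha": d_gamma / (d_alpha + eps),
                  "gamma/beta":  d_gamma / (d_beta  + eps)}

      frames = np.random.rand(3, 480, 640)   # stand-in for the three aligned channels
      ratios = balmer_ratios(frames[0], frames[1], frames[2])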

  3. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera...... navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera.......033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher...

  4. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    Science.gov (United States)

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data.
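
    For background, the sketch below (Python/NumPy) shows the textbook four-bucket AMCW demodulation that such a simulator ultimately reproduces: the phase of the modulated signal, and hence the range, follows from an arctangent of four correlation samples, and an additional multipath return biases the estimate. This is a generic illustration, not the paper's simulation pipeline.

      import numpy as np

      C = 299_792_458.0            # speed of light, m/s

      def amcw_range(a0, a90, a180, a270, f_mod=20e6):
          """Range from four correlation samples at 0, 90, 180, 270 degrees."""
          phase = np.arctan2(a90 - a270, a0 - a180) % (2 * np.pi)
          return C * phase / (4 * np.pi * f_mod)

      def bucket(d, amp, f_mod=20e6, offsets=np.array([0, 0.5, 1.0, 1.5]) * np.pi):
          """Ideal correlation samples for a single return at distance d."""
          phi = 4 * np.pi * f_mod * d / C
          return amp * np.cos(phi - offsets)

      # Example: a direct return at 3.2 m plus a weaker multipath return at 5.0 m.
      direct, bounce = bucket(3.2, 1.0), bucket(5.0, 0.3)
      print(amcw_range(*direct))            # ~3.2 m
      print(amcw_range(*(direct + bounce))) # biased by multipath interference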

  5. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity of O(10⁻³) to O(10⁻²) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
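
    The sketch below (Python/NumPy) illustrates how the overlap curve yields the timing parameters: with a short light pulse swept in delay, the normalised summed-pixel response is high while the pulse falls inside the exposure window, so the trigger delay and exposure time can be read off the half-maximum crossings. The numbers are synthetic, not measurements from the paper.

      import numpy as np

      delays_us = np.linspace(0, 200, 2001)               # programmed pulse delay
      true_trigger_delay, true_exposure = 25.0, 100.0     # hypothetical camera
      response = ((delays_us >= true_trigger_delay) &
                  (delays_us <= true_trigger_delay + true_exposure)).astype(float)
      response += 0.01 * np.random.randn(delays_us.size)  # measurement noise

      def edges_from_overlap(delays, resp, level=0.5):
          above = resp > level
          start = delays[np.argmax(above)]                        # first crossing
          stop = delays[len(resp) - 1 - np.argmax(above[::-1])]   # last crossing
          return start, stop - start                              # delay, exposure

      print(edges_from_overlap(delays_us, response))   # ~ (25.0, 100.0)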

  6. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  7. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  8. Scanning Color Laser Microscope

    Science.gov (United States)

    Awamura, D.; Ode, T.; Yonezawa, M.

    1988-01-01

    A confocal color laser microscope which utilizes a three color laser light source (Red: He-Ne, Green: Ar, Blue: Ar) has been developed and is finding useful applications in the semiconductor field. The color laser microscope, when compared to a conventional microscope, offers superior color separation, higher resolution, and sharper contrast. Recently some new functions including a Focus Scan Memory, a Surface Profile Measurement System, a Critical Dimension Measurement system (CD) and an Optical Beam Induced Current Function (OBIC) have been developed for the color laser microscope. This paper will discuss these new features.

  9. Can camera traps monitor Komodo dragons a large ectothermic predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single-season site occupancy modelling approach. The most parsimonious model [ψ(site), p(site × survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.

  10. Subwavelength Plasmonic Color Printing Protected for Ambient Use

    DEFF Research Database (Denmark)

    Roberts, Alexander Sylvester; Pors, Anders Lambertus; Albrektsen, Ole

    2014-01-01

    We demonstrate plasmonic color printing with subwavelength resolution using circular gap-plasmon resonators (GPRs) arranged in 340 nm period arrays of square unit cells and fabricated with single-step electron-beam lithography. We develop a printing procedure resulting in correct single-pixel color...... reproduction, high color uniformity of colored areas, and high reproduction fidelity. Furthermore, we demonstrate that, due to inherent stability of GPRs with respect to surfactants, the fabricated color print can be protected with a transparent dielectric overlay for ambient use without destroying its...... coloring. Using finite-element simulations, we uncover the physical mechanisms responsible for color printing with GPR arrays and suggest the appropriate design procedure minimizing the influence of the protection layer....

  11. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Directory of Open Access Journals (Sweden)

    Kelly M O'Connor

    Full Text Available Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data, or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori

  12. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Science.gov (United States)

    O'Connor, Kelly M; Nathan, Lucas R; Liberati, Marjorie R; Tingley, Morgan W; Vokoun, Jason C; Rittenhouse, Tracy A G

    2017-01-01

    Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data, or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e, the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori identify
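
    A back-of-the-envelope illustration of why arrays boost detection: if each camera detects a species independently with per-survey probability p, an array of k cameras detects it with probability 1 - (1 - p)^k. The short Python sketch below uses arbitrary p values and ignores the occupancy modelling applied in the study.

      # Hypothetical per-camera detection probabilities (not the study's estimates).
      per_camera_p = {"white-tailed deer": 0.60, "bobcat": 0.10,
                      "raccoon": 0.35, "opossum": 0.20}

      def array_detection(p, k):
          """Probability that at least one of k independent cameras detects the species."""
          return 1.0 - (1.0 - p) ** k

      for species, p in per_camera_p.items():
          gain = array_detection(p, 2) / p - 1.0
          print(f"{species:18s} 1 cam: {p:.2f}  2 cams: {array_detection(p, 2):.2f} "
                f"(+{gain:.0%})")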

  13. FIR Detectors/Cameras Based on GaN and Si Field-Effect Devices Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SETI proposes to develop GaN and Si based multicolor FIR/THz cameras with detector elements and readout, signal processing electronics integrated on a single chip....

  14. Relating color working memory and color perception.

    Science.gov (United States)

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. CMOS IMAGING SENSOR TECHNOLOGY FOR AERIAL MAPPING CAMERAS

    Directory of Open Access Journals (Sweden)

    K. Neumann

    2016-06-01

    Full Text Available In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits. The dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights will be presented and compared with other CCD-based aerial sensors.

  16. Frequency division multiplexed multi-color fluorescence microscope system

    Science.gov (United States)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only record gray-scale images of an object, whereas multicolor imaging captures the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current multicolor imaging methods have drawbacks, such as reduced imaging efficiency and a lower effective CCD sampling rate. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency-division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. Periodic functions of different frequencies modulate the amplitude of each excitation light, and the beams are then combined for illumination in a fluorescence microscopy imaging system; the resulting multicolor fluorescence image is detected by a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse-transformed. Applying this process to the signals from all pixels yields a monochrome image for each color on the image plane, and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. By using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence dynamic video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can generally be observed with this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame
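
    The sketch below (Python/NumPy) illustrates the frequency-division idea on synthetic data: each excitation is amplitude-modulated at its own frequency, the camera records a per-pixel time series, and a per-pixel Fourier transform recovers one image per color at its modulation frequency. Frame rate, frequencies and image size are arbitrary choices for the sketch.

      import numpy as np

      fps, n_frames = 1000.0, 1000            # camera frame rate and stack length
      f_mod = {"488nm": 50.0, "639nm": 80.0}  # modulation frequencies (Hz)
      t = np.arange(n_frames) / fps

      # Synthetic data: two spatially distinct fluorophores, each flickering at the
      # frequency of the laser that excites it.
      h, w = 32, 32
      mask_a = np.zeros((h, w)); mask_a[:, :16] = 1.0
      mask_b = 1.0 - mask_a
      stack = (mask_a[None] * (1 + np.cos(2 * np.pi * f_mod["488nm"] * t))[:, None, None]
               + mask_b[None] * (1 + np.cos(2 * np.pi * f_mod["639nm"] * t))[:, None, None])

      # Per-pixel demodulation: pick the FFT amplitude at each modulation frequency.
      spectrum = np.fft.rfft(stack, axis=0)
      freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
      images = {name: 2 * np.abs(spectrum[np.argmin(np.abs(freqs - f))]) / n_frames
                for name, f in f_mod.items()}   # one demodulated image per colour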

  17. UltraColor: a new gamut-mapping strategy

    Science.gov (United States)

    Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.

    1995-04-01

    Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes but may give inferior results for business graphics because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics may distort the color reproduction of skin tones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in different regions of color space while providing a mechanism for smooth transitions between the regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function at those points; (2) the color mapping for the remaining color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut-mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers, including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.
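
    As an illustration of the low-saturation/high-saturation blending idea (not Kodak's actual UltraColor implementation, whose interpolation is based on a morphing technique), the sketch below blends a placeholder colorimetric mapping and a placeholder saturation-boosting mapping with a smooth weight driven by saturation. The HSV working space, the stand-in transforms, and the threshold values are assumptions chosen for demonstration.

        import colorsys

        def smoothstep(x: float, lo: float, hi: float) -> float:
            """C1-continuous ramp from 0 (x <= lo) to 1 (x >= hi)."""
            t = min(1.0, max(0.0, (x - lo) / (hi - lo)))
            return t * t * (3.0 - 2.0 * t)

        def colorimetric(rgb):
            """Stand-in for a device-colorimetric mapping (identity here)."""
            return rgb

        def enhance(rgb, boost=1.3):
            """Stand-in for a saturation-boosting enhancement mapping."""
            h, s, v = colorsys.rgb_to_hsv(*rgb)
            return colorsys.hsv_to_rgb(h, min(1.0, s * boost), v)

        def blended_mapping(rgb, s_lo=0.25, s_hi=0.65):
            """Blend the two mappings with a smooth weight driven by saturation."""
            _, s, _ = colorsys.rgb_to_hsv(*rgb)
            w = smoothstep(s, s_lo, s_hi)        # 0 = colorimetric, 1 = enhanced
            a, b = colorimetric(rgb), enhance(rgb)
            return tuple((1.0 - w) * ai + w * bi for ai, bi in zip(a, b))

        print(blended_mapping((0.8, 0.75, 0.7)))   # near-neutral: essentially colorimetric
        print(blended_mapping((0.9, 0.2, 0.1)))    # saturated: enhancement dominates

    The smoothstep weight is what provides the continuity and smoothness that step (2) of the described method requires at the boundary between the two regions of color space.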

  18. Gain attenuation of gated framing camera

    International Nuclear Information System (INIS)

    Xiao Shali; Liu Shenye; Cao Zhurong; Li Hang; Zhang Haiying; Yuan Zheng; Wang Liwei

    2009-01-01

    The theoretical model of the framing camera's gain attenuation is analyzed, and the exponential attenuation curve of the gain along the pulse propagation time is simulated. An experiment to measure the gain attenuation coefficient, based on the gain attenuation theory, is designed. The experimental results show that the gain follows an exponential attenuation rule with a coefficient of 0.0249 nm⁻¹, and the attenuation coefficient of the pulse is 0.00356 mm⁻¹. The loss of the pulse as it propagates along the MCP stripline is the main cause of gain attenuation. However, for a single stripline the gain does not follow the exponential attenuation rule completely; instead, there is a gain increase at the bottom of the stripline, caused by reflection of the pulse, with a reflectance of about 24.2%. By combining experiment and theory, the design of the stripline MCP can be improved to reduce gain attenuation. (authors)
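
    Taking the reported exponential attenuation rule at face value, the pulse attenuation coefficient of 0.00356 mm⁻¹ translates into a relative loss over a given stripline length as in the sketch below, assuming the form exp(−αx); the 40 mm length and unit normalization are assumptions for illustration, not values from the paper.

        import math

        def exponential_attenuation(alpha: float, x: float) -> float:
            """Relative value remaining after propagating a distance x,
            assuming the exponential model exp(-alpha * x)."""
            return math.exp(-alpha * x)

        alpha_pulse = 0.00356   # mm^-1, coefficient reported in the abstract
        length_mm = 40.0        # assumed stripline length for illustration only

        remaining = exponential_attenuation(alpha_pulse, length_mm)
        print(f"Pulse amplitude remaining after {length_mm:.0f} mm: {remaining:.3f} "
              f"({(1 - remaining) * 100:.1f}% lost)")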

  19. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  20. Colors, colored overlays, and reading skills

    Directory of Open Access Journals (Sweden)

    Arcangelo eUccula

    2014-07-01

    Full Text Available In this article, we are concerned with the role of colors in reading written texts. It has been argued that colored overlays placed over written texts positively influence both reading fluency and reading speed. These effects are said to be particularly evident for individuals affected by the so-called Meares-Irlen syndrome, i.e. those who experience eyestrain and/or visual distortions (e.g. color, shape or movement illusions) while reading. This condition is estimated to affect 12-14% of the general population and up to 46% of the dyslexic population. Thus, colored overlays have been widely employed as a remedy for some aspects of the reading difficulties experienced by dyslexic individuals, such as fluency and speed. Despite the wide use of colored overlays, how they exert their effects has not yet been made clear. Moreover, according to some researchers, the results supporting the efficacy of colored overlays as a tool for helping readers are at least controversial. Furthermore, the very nature of the Meares-Irlen syndrome has been questioned. Here we provide a concise, critical review of the literature.