WorldWideScience

Sample records for vi-cmos image sensors

  1. Image processing occupancy sensor

    Science.gov (United States)

    Brackney, Larry J.

    2016-09-27

    A system and method for detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the positions of the occupants, the system can finely control these elements to optimize conditions for the occupants and to optimize energy usage, among other advantages.

  2. Photon-counting image sensors

    CERN Document Server

    Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo

    2017-01-01

    The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.

  3. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that processes an image signal to generate a motion detection signal, a people detection module that processes the same image signal to generate a people detection signal, and a face detection module that processes it to generate a face detection signal. A sensor integration module receives the motion, people, and face detection signals and generates an occupancy signal from them. The occupancy signal indicates vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
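
    The integration step described above can be sketched as a simple fusion rule. The OR-style combination and the function name below are illustrative assumptions; the patent abstract does not specify how the three detection signals are weighted.

```python
# Hypothetical sketch of the sensor integration module: fuse the
# motion, people, and face detection signals into one occupancy signal.
# The OR rule is an assumption for illustration only.

def integrate(motion: bool, people: bool, face: bool) -> str:
    """Return 'occupied' if any detector fires, else 'vacant'."""
    return "occupied" if (motion or people or face) else "vacant"

print(integrate(motion=False, people=True, face=False))   # occupied
print(integrate(motion=False, people=False, face=False))  # vacant
```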

  4. Imaging Sensors: Artificial and Natural

    Indian Academy of Sciences (India)

    Vikram Dhar. General Article, Resonance – Journal of Science Education, Volume 4, Issue 2, February 1999, pp. 27-36. Permanent link: http://www.ias.ac.in/article/fulltext/reso/004/02/0027-0036

  5. Vertical Silicon Nanowires for Image Sensor Applications

    OpenAIRE

    Park, Hyunsung

    2014-01-01

    Conventional image sensors achieve color imaging using absorptive organic dye filters. However, these face considerable challenges in the trend toward ever higher pixel densities and advanced imaging methods such as multispectral imaging and polarization-resolved imaging. In this dissertation, we investigate the optical properties of vertical silicon nanowires with the goal of image sensor applications. First, we demonstrate a multispectral imaging system that uses a novel filter that consists...

  6. CMOS sensors for atmospheric imaging

    Science.gov (United States)

    Pratlong, Jérôme; Burt, David; Jerram, Paul; Mayer, Frédéric; Walker, Andrew; Simpson, Robert; Johnson, Steven; Hubbard, Wendy

    2017-09-01

    Recent European atmospheric imaging missions have seen a move towards the use of CMOS sensors for the visible and NIR parts of the spectrum. These applications have particular challenges that are completely different from those that have driven the development of commercial sensors for applications such as cell-phone or SLR cameras. This paper will cover the design and performance of general-purpose image sensors that are to be used in the MTG (Meteosat Third Generation) and MetImage satellites, and the technology challenges that they have presented. We will discuss how CMOS imagers have been designed with 4T pixels up to 250 μm square that achieve good charge transfer efficiency (low lag) at signal levels up to 2M electrons and at high line rates. In both devices a low-noise analogue read-out chain is used with correlated double sampling to suppress the readout noise and give a maximum dynamic range that is significantly larger than in standard commercial devices. Radiation hardness is a particular challenge for CMOS detectors, and both of these sensors have been designed to be fully radiation hard with high latch-up and single-event-upset tolerances, which is now silicon-proven on MTG. We will also cover the impact of ionising radiation on these devices. Because with such large pixels the photodiodes have a large open area, front illumination technology is sufficient to meet the detection efficiency requirements, but with thicker-than-standard epitaxial silicon to give improved IR response (note that this makes latch-up protection even more important). However, with narrow-band illumination, reflections from the front and back of the dielectric stack on top of the sensor produce Fabry-Pérot étalon effects, which have been minimised with process modifications. We will also cover the addition of precision narrow-band filters inside the MTG package to provide a complete imaging subsystem. Control of reflected light is also critical in obtaining the
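
    The correlated double sampling (CDS) readout mentioned above can be illustrated numerically: the reset (kTC) offset appears in both samples of a pixel and cancels on subtraction, leaving only the smaller, uncorrelated read noise. The noise magnitudes below are arbitrary illustrative values, not figures from the MTG or MetImage designs.

```python
import numpy as np

rng = np.random.default_rng(0)
RESET_SIGMA, READ_SIGMA = 5.0, 1.0   # arbitrary illustrative noise levels

def cds_read(signal_level):
    """One CDS read: sample the pixel after reset and again after
    exposure; the common kTC reset offset cancels in the difference."""
    reset_offset = rng.normal(0.0, RESET_SIGMA)   # appears in both samples
    sample_reset = reset_offset + rng.normal(0.0, READ_SIGMA)
    sample_signal = reset_offset + signal_level + rng.normal(0.0, READ_SIGMA)
    return sample_signal - sample_reset

reads = [cds_read(1000.0) for _ in range(10_000)]
print(round(float(np.mean(reads))))          # recovers the signal level: 1000
print(float(np.std(reads)) < RESET_SIGMA)    # residual noise below reset noise: True
```

The residual noise after subtraction is roughly sqrt(2) times the read noise, well below the reset noise that CDS removes.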

  7. Temperature Sensors Integrated into a CMOS Image Sensor

    NARCIS (Netherlands)

    Abarca Prouza, A.N.; Xie, S.; Markenhof, Jules; Theuwissen, A.J.P.

    2017-01-01

    In this work, a novel approach is presented for measuring relative temperature variations inside the pixel array of a CMOS image sensor itself. This approach can give important information when compensation for dark (current) fixed pattern noise (FPN) is needed. The test image sensor consists of

  8. Visual Image Sensor Organ Replacement

    Science.gov (United States)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensor data (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
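
    A minimal sketch of the image-to-sound idea: the image is scanned column by column over time, and each row drives a sine partial whose frequency encodes vertical position and whose amplitude encodes brightness. The scan rate and frequency range are assumptions for illustration (in the spirit of vOICe-style sensory substitution), not the VISOR device's actual parameters.

```python
import numpy as np

def image_to_audio(img, sample_rate=8000, col_duration=0.05,
                   f_lo=200.0, f_hi=4000.0):
    """Render a 2D brightness map as audio: columns map to time slices,
    rows map to sine frequencies (top row highest), brightness to amplitude."""
    rows, cols = img.shape
    freqs = np.linspace(f_hi, f_lo, rows)               # top row -> high pitch
    t = np.arange(int(sample_rate * col_duration)) / sample_rate
    audio = np.concatenate([
        (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(cols)
    ])
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio          # normalize to [-1, 1]

img = np.zeros((8, 4))
img[2, 1] = 1.0                       # a single bright pixel
wave = image_to_audio(img)
print(wave.shape)                     # (1600,): 4 columns x 400 samples each
```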

  9. Current-mode CMOS hybrid image sensor

    Science.gov (United States)

    Benyhesan, Mohammad Kassim

    Digital imaging is growing rapidly, making Complementary Metal-Oxide-Semiconductor (CMOS) image sensor-based cameras indispensable in many modern devices such as cell phones, surveillance systems, personal computers, and tablets. Wireless portable image systems are widely deployed for various purposes in many indoor and outdoor settings such as hospitals, urban areas, streets, highways, forests, mountains, and towers. However, the increasing demand for high-resolution image sensors and improved processing features is expected to increase the power consumption of CMOS sensor-based camera systems. Increased power consumption translates into a reduced battery lifetime. This might not be a problem if there is access to a nearby charging station, but it becomes one when the image sensor is located in widely spread areas that are unfavorable to human intervention and difficult to reach. Given the limited energy sources available to a wireless CMOS image sensor, an energy harvesting technique presents a viable solution to extend the sensor lifetime. Energy can be harvested from sunlight or from the artificial light surrounding the sensor itself. In this thesis, we propose a current-mode CMOS hybrid image sensor capable of energy harvesting and image capture. The proposed sensor is based on a hybrid pixel that can be programmed to perform the task of an image sensor and the task of a solar cell to harvest energy. The basic idea is to design a pixel that can be configured to exploit its internal photodiode to perform two functions: image sensing and energy harvesting. As a proof of concept, a 40 x 40 array of hybrid pixels has been designed and fabricated in a standard 0.5 μm CMOS process. Measurement results show that up to 39 μW of power can be harvested from the array under 130 klux illumination, with an energy efficiency of 220 nJ/pixel/frame. The proposed image sensor is a current-mode image sensor which has several

  10. Thermal luminescence spectroscopy chemical imaging sensor.

    Science.gov (United States)

    Carrieri, Arthur H; Buican, Tudor N; Roese, Erik S; Sutter, James; Samuels, Alan C

    2012-10-01

    The authors present a pseudo-active chemical imaging sensor model embodying irradiative transient heating, temperature nonequilibrium thermal luminescence spectroscopy, differential hyperspectral imaging, and artificial neural network technologies integrated together. We elaborate on various optimizations, simulations, and animations of the integrated sensor design and apply it to the terrestrial chemical contamination problem, where the interstitial contaminant compounds of detection interest (analytes) comprise liquid chemical warfare agents, their various derivative condensed phase compounds, and other material of a life-threatening nature. The sensor must measure and process a dynamic pattern of absorptive-emissive middle infrared molecular signature spectra of subject analytes to perform its chemical imaging and standoff detection functions successfully.

  11. Panoramic imaging perimeter sensor design and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, D.A.

    1993-12-31

    This paper describes the conceptual design and preliminary performance modeling of a 360-degree imaging sensor. This sensor combines automatic perimeter intrusion detection with immediate visual assessment and is intended to be used for fast deployment around fixed or temporary high-value assets. The sensor requirements, compiled from various government agencies, are summarized. The conceptual design includes longwave infrared and visible linear array technology. An auxiliary millimeter-wave sensing technology is also considered for use during periods of infrared and visible obscuration. The infrared detectors proposed for the sensor design are similar to the Standard Advanced Dewar Assembly Types Three A and B (SADA-IIIA/B). An overview of the sensor and processor is highlighted. The infrared performance of this sensor design has been predicted using existing thermal imaging system models and is described in the paper. Future plans for developing a prototype are also presented.

  12. Onboard Image Processing System for Hyperspectral Sensor.

    Science.gov (United States)

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency under large-volume, high-speed data downlink capacity constraints. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
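
    The adaptive Golomb-Rice stage can be sketched as follows. The zigzag mapping of signed residuals and the brute-force choice of the Rice parameter k are generic textbook choices, not necessarily the adaptive rule used in the flight hardware.

```python
def zigzag(v: int) -> int:
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(v: int, k: int) -> str:
    """Golomb-Rice code: quotient in unary, remainder in k bits."""
    u = zigzag(v)
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def best_k(residuals, kmax=8):
    """Pick the k that minimizes total code length for a residual block."""
    return min(range(kmax + 1),
               key=lambda k: sum(len(rice_encode(v, k)) for v in residuals))

residuals = [0, -1, 2, 1, -3]        # small residuals from a good predictor
k = best_k(residuals)
bits = "".join(rice_encode(v, k) for v in residuals)
print(k, bits)                       # 1 000111001001101
```

Small residuals produced by a good interpolation predictor yield short unary quotients, which is what makes the combination effective.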

  13. Smart CMOS image sensor for lightning detection and imaging

    OpenAIRE

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-01-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed as part of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel fra...

  14. Millimeter-wave sensor image enhancement

    Science.gov (United States)

    Wilson, William J.; Suess, Helmut

    1989-01-01

    Images from an airborne, scanning radiometer operating at a frequency of 98 GHz have been analyzed. The millimeter-wave images were obtained in 1985-1986 using the JPL millimeter-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier. A visual-interpretative approach was used for information extraction from the images. This included application of nonlinear transform techniques for noise reduction and for color, contrast, and edge enhancement. Results of using the techniques on selected millimeter-wave images are discussed.

  15. Compressive Sensing Image Sensors-Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Shahram Shirani

    2013-04-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high-resolution imaging using low-resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, CS coding for video capture is discussed.
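
    The CS acquisition model takes m random measurements y = Phi x of an n-sample signal with m much smaller than n, and recovers x by exploiting sparsity. The sketch below uses a Gaussian measurement matrix and orthogonal matching pursuit (OMP) as a stand-in reconstruction; real CS imagers implement the measurement optically or electrically, as the paper surveys.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 64, 24, 3                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(0.0, 1.0, s)  # s-sparse signal
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random measurement matrix
y = Phi @ x                              # m << n compressed measurements

# Orthogonal matching pursuit: greedily add the column most correlated
# with the residual, then least-squares fit on the selected support.
support, resid = [], y.copy()
for _ in range(s):
    support.append(int(np.argmax(np.abs(Phi.T @ resid))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.allclose(x_hat, x, atol=1e-6))
```

With this much oversampling of the sparsity (m = 8s), OMP is expected to recover the signal exactly, illustrating the "high resolution from few measurements" claim.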

  16. Imaging Sensors: Artificial and Natural

    Indian Academy of Sciences (India)

    nature, is that of 'smart' skins. These consist of semiconductor sensors embedded in the skin of the robot, vehicle, or aircraft which will measure pressure, temperature, pH, etc., monitoring its 'health' and enhancing its survivability. Obviously, all these sensory data have to be fused in the robot's brain (without overloading it!)

  17. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud

    2013-01-01

    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable

  18. Uncooled thermal imaging sensor for UAV applications

    Science.gov (United States)

    Cochrane, Derick M.; Manning, Paul A.; Wyllie, Tim A.

    2001-10-01

    Research by DERA aimed at unmanned air vehicle (UAV) size reduction and control automation has led to a unique solution for a short-range reconnaissance UAV system. Known as OBSERVER, the UAV conventionally carries a lightweight visible-band sensor payload producing imagery with a large 40°x90° field of regard (FOR) to maximize spatial awareness and target detection ranges. Images taken from three CCD camera units, set at elevations from plan view up to the near horizon, are 'stitched' together to produce the large contiguous sensor footprint. This paper describes the design of a thermal imaging (TI) sensor which has been developed to be compatible with the OBSERVER UAV system. The sensor is based on UK uncooled thermal imaging technology research and offers a compact and lightweight solution operating in the 8-12 μm waveband without the need for cryogenic cooling. Infra-red radiation is gathered using two lead scandium tantalate (PST) hybrid thermal detectors, each with a 384 x 288 pixel resolution, known as the Very Large Array (VLA). The TI system is designed to maintain the imaging format of the visible-band sensor. In order to achieve this in practice with adequate resolution performance, a dual field of view (FOV) optical system is used within a pitchable gimbal. This combines the advantages of a wide-angle 40°x30° FOV for target detection and a narrow-angle 13°x10° FOV 'foveal patch' to improve target recognition ranges. The gimbal system can be steered in elevation to give the full 90° coverage of the visible-band sensor footprint. The concept of operation is that targets are detected over the large FOV, and the air vehicle is then maneuvered so as to bring the target into the foveal patch view for recognition at an acceptable stand-off range.

  19. Cell phones as imaging sensors

    Science.gov (United States)

    Bhatti, Nina; Baker, Harlyn; Marguier, Joanna; Berclaz, Jérôme; Süsstrunk, Sabine

    2010-04-01

    Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: Suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment - selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions - image-based information services exploiting the ubiquity of camera phones.

  20. Quality metrics for sensor images

    Science.gov (United States)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery
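
    A toy stand-in for such a discrimination metric: pool pixel differences with a Minkowski norm into a detectability index d', then map d' through a psychometric function to the probability of reporting A different from B. The pooling exponent, sensitivity constant, and psychometric shape are arbitrary illustrative assumptions; real models filter the images by spatial frequency and orientation first, a stage omitted here.

```python
import numpy as np

def p_discriminate(a, b, sensitivity=0.1, beta=4):
    """Toy discrimination metric: pool |A - B| with a Minkowski norm
    into d', then map through 1 - exp(-d'^2) to a report probability."""
    d = sensitivity * np.sum(np.abs(a - b) ** beta) ** (1.0 / beta)
    return 1.0 - np.exp(-d ** 2)

a = np.zeros((8, 8))
b = a.copy()
b[4, 4] = 10.0                          # a one-pixel "intrusion"
print(round(p_discriminate(a, a), 3))   # 0.0: identical images
print(round(p_discriminate(a, b), 3))   # 0.632: intrusion likely reported
```

Note that this formulation shares the weakness the abstract points out: it implicitly assumes the observer knows where the intrusion is, since all pixel differences are pooled with equal weight.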

  1. Lightning Imaging Sensor (LIS) on TRMM Science Data V4

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lightning Imaging Sensor (LIS) Science Data was collected by the Lightning Imaging Sensor (LIS), which was an instrument on the Tropical Rainfall Measurement...

  2. Lightning Imaging Sensor (LIS) on TRMM Backgrounds V4

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lightning Imaging Sensor (LIS) Backgrounds was collected by the Lightning Imaging Sensor (LIS), which was an instrument on the Tropical Rainfall Measurement...

  3. Smart CMOS image sensor for lightning detection and imaging.

    Science.gov (United States)

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed as part of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
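
    The frame-to-frame difference test can be sketched off-chip as plain array arithmetic; the threshold and frame contents below are illustrative, and the real sensor performs the comparison inside each pixel at the 1 kHz frame rate.

```python
import numpy as np

def detect_pulses(prev_frame, frame, threshold):
    """Flag pixels whose brightness jumps by more than `threshold`
    between consecutive frames, mimicking the in-pixel comparison."""
    diff = frame.astype(np.int32) - prev_frame.astype(np.int32)
    return np.argwhere(diff > threshold)    # (row, col) of candidate pulses

prev = np.full((256, 256), 100, dtype=np.uint16)   # steady background
cur = prev.copy()
cur[10, 20] += 50                                  # faint transient pulse
print(detect_pulses(prev, cur, threshold=30))      # [[10 20]]
```

An adjustable threshold trades sensitivity to faint pulses against false alarms from background fluctuations.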

  4. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Chen Qu

    2017-09-01

    The CMOS (Complementary Metal-Oxide-Semiconductor) is a new type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a local consistent Markov random field (MRF) framework. The neighboring clique in traditional MRF is extended to the non-neighboring clique, which is defined on local consistent blocks based on two clues, where both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the restriction over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively resolving inadequate detail recovery and alleviating color distortion. Moreover, the local consistent MRF framework can recover details while maintaining better dehazing results, which effectively improves the image quality captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.

  5. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.

    Science.gov (United States)

    Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei

    2017-09-22

    The CMOS (Complementary Metal-Oxide-Semiconductor) is a new type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a local consistent Markov random field (MRF) framework. The neighboring clique in traditional MRF is extended to the non-neighboring clique, which is defined on local consistent blocks based on two clues, where both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the restriction over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively resolving inadequate detail recovery and alleviating color distortion. Moreover, the local consistent MRF framework can recover details while maintaining better dehazing results, which effectively improves the image quality captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.
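
    Dehazing methods of this family ultimately invert the standard atmospheric scattering model I = J*t + A*(1 - t), where J is scene radiance, t the transmission map, and A the atmospheric light. The sketch below performs that inversion with t and A simply given; in the proposed method they would instead be estimated by the local consistent MRF.

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Recover scene radiance J from hazy image I via
    I = J * t + A * (1 - t)  =>  J = (I - A * (1 - t)) / t."""
    t = np.maximum(t, t_min)        # clamp to avoid amplifying noise as t -> 0
    return (I - A * (1.0 - t)) / t

J_true = np.array([[0.2, 0.8]])     # ground-truth radiance
t = np.array([[0.5, 0.5]])          # assumed known transmission map
A = 1.0                             # assumed known atmospheric light
I_hazy = J_true * t + A * (1.0 - t) # synthesize the hazy observation
print(np.allclose(dehaze(I_hazy, t, A), J_true))   # True
```

The quality of the estimates of t and A dominates the result, which is why the paper invests its modeling effort there.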

  6. CMOS Image Sensors for High Speed Applications.

    Science.gov (United States)

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout, and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  7. CMOS Image Sensors for High Speed Applications

    Directory of Open Access Journals (Sweden)

    M. Jamal Deen

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4~5 μm) due to limitations in the optics, CMOS technology scaling can allow an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout, and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  8. IR sensors and imagers in networked operations

    Science.gov (United States)

    Breiter, Rainer; Cabanski, Wolfgang

    2005-05-01

    "Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information, and weapons to gain command and control superiority. For IR sensors, integration and fusion of different channels, such as day/night or SAR images, and the ability to spread image data among various users are typical requirements. A concrete implementation is the German Army future infantryman program IdZ, in which a group of ten soldiers forms a unit, with every soldier equipped with a personal digital assistant (PDA) for information display and a day photo camera, and each unit carrying a high-performance thermal imager. The challenge in networked operation of such a unit is bringing information together and distributing it over a capable network. AIM's thermal reconnaissance and targeting sight HuntIR, which was selected for the IdZ program, provides these capabilities through an optional wireless interface. Beyond the global approach of network-centric warfare, network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA, replacing analog video networks or specific point-to-point interfaces. The resulting architecture can provide data fusion capabilities for, e.g., IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study, Ethernet technology was chosen for the network implementation, and a set of electronics was developed for capturing video data from IR and day imagers and distributing it via Gigabit Ethernet. The demonstrator setup follows the requirements of current and future vehicles, which have a set of day and night imager cameras and a crew station with several members. Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with

  9. Vertically integrated thin film color sensor arrays for imaging applications.

    Science.gov (United States)

    Knipp, Dietmar; Street, Robert A; Stiebig, Helmut; Krause, Mathias; Lu, Jeng-Ping; Ready, Steve; Ho, Jackson

    2006-04-17

    Large area color sensor arrays based on vertically integrated thin-film sensors were realized. The complete color information of each color pixel is detected at the same position of the sensor array without using optical filters. The sensor arrays consist of amorphous silicon thin-film color sensors integrated on top of amorphous silicon readout transistors. The spectral sensitivity of the sensors is controlled by the applied bias voltage. The operating principle of the color sensor arrays is described. Furthermore, the image quality and the pixel cross talk of the sensor arrays are analyzed by measurements of the line spread function and the modulation transfer function.

  10. A Biologically Inspired CMOS Image Sensor

    CERN Document Server

    Sarkar, Mukul

    2013-01-01

    Biological systems are a source of inspiration in the development of small autonomous sensor nodes. The two major types of optical vision systems found in nature are the single-aperture human eye and the compound eye of insects. The latter are among the most compact and smallest vision sensors. The compound eye consists of individual lenses, each with its own photoreceptor array. The visual system of insects allows them to fly with limited intelligence and brain processing power. A CMOS image sensor replicating the perception of vision in insects is discussed and designed in this book for industrial (machine vision) and medical applications. The CMOS metal layer is used to create an embedded micro-polarizer able to sense polarization information. This polarization information is shown to be useful in applications like real-time material classification and autonomous agent navigation. Further, the sensor is equipped with in-pixel analog and digital memories which allow variation of the dynamic range and in-pixel b...
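    The polarization sensing described above can be illustrated with the standard four-angle Stokes estimate. The sketch below is illustrative only and assumes ideal linear micro-polarizers at 0°, 45°, 90°, and 135°; it is not the book's actual pixel design.

    ```python
    import numpy as np

    def measured_intensity(alpha, s0, dolp, aolp):
        # Ideal linear polarizer at angle alpha (radians), Malus-type response
        # for partially linearly polarized light.
        return 0.5 * s0 * (1.0 + dolp * np.cos(2.0 * (alpha - aolp)))

    def estimate_polarization(i0, i45, i90, i135):
        # Stokes parameters from four polarizer orientations.
        s0 = i0 + i90                      # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.hypot(s1, s2) / s0       # degree of linear polarization
        aolp = 0.5 * np.arctan2(s2, s1)    # angle of linear polarization
        return dolp, aolp

    # Simulate a pixel quartet viewing light with a known polarization state.
    true_dolp, true_aolp = 0.6, np.deg2rad(30.0)
    angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    i0, i45, i90, i135 = (measured_intensity(a, 1.0, true_dolp, true_aolp) for a in angles)
    dolp, aolp = estimate_polarization(i0, i45, i90, i135)
    ```

    With noiseless ideal filters the estimate is exact; real metal-grid micro-polarizers add extinction-ratio and crosstalk terms that would enter as a calibration matrix.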

  11. Hyperspectral Foveated Imaging Sensor for Objects Identification and Tracking Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optical tracking and identification sensors have numerous NASA and non-NASA applications. For example, airborne or spaceborne imaging sensors are used to visualize...

  12. X-ray examination apparatus with an imaging arrangement having a plurality of image sensors

    NARCIS (Netherlands)

    Slump, Cornelis H.

    1995-01-01

    An imaging arrangement including a multi-sensor for use in an x-ray examination apparatus is described that combines a plurality of partially overlapping sub-images, resulting in an increased effective sensor area when compared to a single sensor-image. Thus an imaging arrangement is provided

  13. X-ray examination apparatus with an imaging arrangement having a plurality of image sensors

    NARCIS (Netherlands)

    Slump, Cornelis H.; Harms, M.O.

    1999-01-01

    An imaging arrangement including a multi-sensor for use in an x-ray examination apparatus is described that combines a plurality of partially overlapping sub-images, resulting in an increased effective sensor area when compared to a single sensor-image. Thus an imaging arrangement is provided

  14. Introduction to sensors for ranging and imaging

    CERN Document Server

    Brooker, Graham

    2009-01-01

    "This comprehensive text-reference provides a solid background in active sensing technology. It is concerned with active sensing, starting with the basics of time-of-flight sensors (operational principles, components), and going through the derivation of the radar range equation and the detection of echo signals, both fundamental to the understanding of radar, sonar and lidar imaging. Several chapters cover signal propagation of both electromagnetic and acoustic energy, target characteristics, stealth, and clutter. The remainder of the book introduces the range measurement process, active ima
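    The radar range equation the book derives relates received echo power to range. A minimal numeric sketch of the monostatic free-space form (no losses; the parameter values are arbitrary examples, not from the book):

    ```python
    import math

    def radar_received_power(pt, gain, wavelength, rcs, r):
        """Monostatic radar range equation:
        Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
        return pt * gain**2 * wavelength**2 * rcs / ((4.0 * math.pi) ** 3 * r**4)

    # Example: 1 kW transmitter, 30 dB (1000x) antenna gain, X-band (3 cm
    # wavelength), 1 m^2 radar cross-section target.
    pr_10km = radar_received_power(1e3, 1e3, 0.03, 1.0, 10e3)
    pr_20km = radar_received_power(1e3, 1e3, 0.03, 1.0, 20e3)
    ```

    The R^-4 dependence is the key takeaway: doubling the range cuts the received echo power by a factor of 16.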

  15. Modeling and simulation of TDI CMOS image sensors

    Science.gov (United States)

    Nie, Kai-ming; Yao, Su-ying; Xu, Jiang-tao; Gao, Jing

    2013-09-01

    In this paper, a mathematical model of TDI CMOS image sensors was established at the behavioral level in MATLAB, based on the principle of a TDI CMOS image sensor using a temporal oversampling rolling shutter in the along-track direction. The geometric perspective and light energy transmission relationships between the scene and the image on the sensor are included in the proposed model. A graphical user interface (GUI) for the model was also established. A high-resolution satellite image was used to model the virtual scene being photographed, and the effectiveness of the proposed model was verified by computer simulations based on it. In order to guide the design of TDI CMOS image sensors, the impacts of several sensor parameters, including pixel pitch, pixel photosensitive size, and integration time, on sensor performance were investigated through the proposed model. These impacts were quantified by the sensor's along-track modulation transfer function (MTF), calculated by the slanted-edge method. The simulation results indicated that the TDI CMOS image sensor achieves better performance with smaller pixel photosensitive size and shorter integration time. The proposed model is useful in the process of researching and developing a TDI CMOS image sensor.
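    The core TDI idea behind the model, charge summed over several stages as the image moves across the array, can be sketched in a few lines. This toy simulation (my own, not the authors' MATLAB model) shows the roughly sqrt(N) SNR gain from N-stage integration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.sin(np.linspace(0, 4 * np.pi, 256)) + 2.0  # 1-D along-track radiance
    n_stages, sigma = 16, 0.5

    # Each TDI stage samples the same scene line (motion-compensated by the
    # along-track shift) with independent noise; the charges are summed on-chip.
    stage_samples = scene[None, :] + rng.normal(0.0, sigma, (n_stages, scene.size))
    tdi_line = stage_samples.sum(axis=0) / n_stages           # normalized TDI output
    single_line = scene + rng.normal(0.0, sigma, scene.size)  # single-stage exposure

    err_tdi = np.std(tdi_line - scene)
    err_single = np.std(single_line - scene)
    gain = err_single / err_tdi   # close to sqrt(n_stages) = 4
    ```

    The paper's model adds what this sketch omits: the geometric projection, the optical energy transfer, and the MTF penalty that longer integration and larger photosites incur.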

  16. UV-sensitive scientific CCD image sensors

    Science.gov (United States)

    Vishnevsky, Grigory I.; Kossov, Vladimir G.; Iblyaminova, A. F.; Lazovsky, Leonid Y.; Vydrevitch, Michail G.

    1997-06-01

    The investigation of probe laser irradiation interacting with substances contained in the environment has long been a recognized technique for contamination detection and identification. For this purpose, near- and mid-range IR laser irradiation is traditionally used. However, as many works presented at recent ecological monitoring conferences show, systems using laser irradiation from the near-UV range (250 - 500 nm) are growing rapidly alongside traditional systems. The use of CCD imagers is one of the prerequisites for this, allowing the development of multi-channel computer-based spectral research systems. To identify and analyze contaminating impurities in the environment, methods such as laser fluorescence analysis, UV absorption and differential spectroscopy, and Raman scattering are commonly used. These methods are used to identify a large number of impurities (petrol, toluene, xylene isomers, SO2, acetone, methanol), to detect and identify food pathogens in real time, to measure concentrations of NH3, SO2 and NO in combustion outbursts, to detect oil products in water, to analyze contamination in ground waters, to define the ozone distribution in the atmospheric profile, and to monitor various chemical processes including radioactive materials manufacturing, heterogeneous catalytic reactions, polymer production etc. A multi-element image sensor with enhanced UV sensitivity, low optical non-uniformity, low intrinsic noise and high dynamic range is a key element of all the above systems. Thus, so-called Virtual Phase (VP) CCDs, which possess all these features, seem promising for ecological monitoring spectral measuring systems. Presently, a family of VP CCDs with different architectures and numbers of pixels has been developed and is being manufactured. All CCDs from this family are supported by a precise slow-scan digital image acquisition system that can be used in various image processing systems in astronomy, biology, medicine, ecology etc. An image is displayed directly on a PC

  17. Virtual View Image over Wireless Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    Full Text Available In general, visual sensors are applied to build virtual view images. As the number of visual sensors increases, the quantity and quality of the information improve. However, virtual view image generation is a challenging task in a Wireless Visual Sensor Network environment due to energy restrictions, computational complexity, and bandwidth limitations. Hence, this paper presents a new method of generating virtual view images from selected cameras on a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experimental results showed that this method could minimize the number of transmitted images while retaining sufficient information.

  18. Special Sensor Microwave Imager/Sounder (SSMIS) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  19. Image sensors for radiometric measurements in the ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Desa, E.S.; Desa, B.A.E.

    image sensors for use in obtaining high resolution spectra of the upward and downward irradiance light fields in the ocean. Image sensors studied here have practical dynamic ranges of the order of 10^4. Given this, it is possible to work...

  20. FASTICA based denoising for single sensor Digital Cameras images

    OpenAIRE

    Shawetangi kala; Raj Kumar Sahu

    2012-01-01

    Digital color cameras use a single sensor equipped with a color filter array (CFA) to capture scenes in color. Since each sensor cell can record only one color value, the other two missing components at each position need to be interpolated. The color interpolation process is usually called color demosaicking (CDM). The quality of demosaicked images is degraded due to the sensor noise introduced during the image acquisition process. Many advanced denoising algorithms, which are designed for ...
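    The CFA interpolation step that the paper's denoising sits on top of can be illustrated with plain bilinear demosaicking of an RGGB Bayer mosaic. This is a generic textbook baseline, not the FASTICA method of the paper:

    ```python
    import numpy as np

    def conv2_same(img, k):
        # 'Same'-size 2-D convolution with edge padding (k is 3x3 here).
        p = k.shape[0] // 2
        pad = np.pad(img, p, mode='edge')
        out = np.zeros(img.shape, dtype=float)
        h, w = img.shape
        for i in range(k.shape[0]):
            for j in range(k.shape[1]):
                out += k[i, j] * pad[i:i + h, j:j + w]
        return out

    def demosaic_bilinear(cfa):
        """Bilinear demosaicking for an RGGB Bayer pattern."""
        h, w = cfa.shape
        masks = {c: np.zeros((h, w)) for c in 'rgb'}
        masks['r'][0::2, 0::2] = 1
        masks['g'][0::2, 1::2] = 1
        masks['g'][1::2, 0::2] = 1
        masks['b'][1::2, 1::2] = 1
        k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
        k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
        out = np.zeros((h, w, 3))
        for ch, (c, k) in enumerate([('r', k_rb), ('g', k_g), ('b', k_rb)]):
            # Normalizing by the interpolated mask handles image borders cleanly.
            out[:, :, ch] = conv2_same(cfa * masks[c], k) / conv2_same(masks[c], k)
        return out

    # Sanity check: a flat gray scene must survive demosaicking unchanged.
    rgb = demosaic_bilinear(np.full((8, 8), 0.5))
    ```

    Sensor noise injected before this interpolation becomes spatially and chromatically correlated in the output, which is why CFA-aware denoising (the paper's subject) outperforms denoising the demosaicked image naively.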

  1. Collaborative Image Coding and Transmission over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Min Wu

    2007-01-01

    Full Text Available Imaging sensors are able to provide intuitive visual information for quick recognition and decision-making. However, imaging sensors usually generate vast amounts of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy-efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape matching method to coarsely register images and find the maximal overlap, exploiting the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the target regions and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
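    The background-subtraction-and-region-transmission idea can be made concrete. This toy sketch (function names and threshold are my own) transmits the background once, then only a changed region's crop and location, and reconstructs the frame at the monitoring center:

    ```python
    import numpy as np

    def encode_frame(frame, background, thresh=0.1):
        """Return only the changed region and its location (the 'transmitted' data)."""
        changed = np.argwhere(np.abs(frame - background) > thresh)
        if changed.size == 0:
            return None                       # nothing to transmit this frame
        (r0, c0), (r1, c1) = changed.min(axis=0), changed.max(axis=0) + 1
        return (r0, c0), frame[r0:r1, c0:c1]

    def decode_frame(background, packet):
        """Reconstruct the full frame by pasting the target onto the background."""
        out = background.copy()
        if packet is not None:
            (r0, c0), crop = packet
            out[r0:r0 + crop.shape[0], c0:c0 + crop.shape[1]] = crop
        return out

    background = np.zeros((64, 64))
    frame = background.copy()
    frame[20:30, 40:52] = 1.0                 # a target enters the scene
    packet = encode_frame(frame, background)
    reconstructed = decode_frame(background, packet)
    ratio = packet[1].size / frame.size       # fraction of pixels transmitted
    ```

    For this frame only about 3% of the pixels cross the network, which is the source of the energy savings the paper measures.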

  2. Interferometric fiber optic sensors for biomedical applications of optoacoustic imaging.

    Science.gov (United States)

    Lamela, Horacio; Gallego, Daniel; Gutierrez, Rebeca; Oraevsky, Alexander

    2011-03-01

    We present a non-metallic interferometric silica optical fiber ultrasonic wideband sensor for optoacoustic imaging applications. The ultrasonic sensitivity of this sensor has been characterized over the frequency range from 1 to 10 MHz. A comparative analysis has been carried out between this sensor and an array of piezoelectric transducers using optoacoustic signals generated from an optical absorbent embedded in a tissue-mimicking phantom. Also, a two-dimensional reconstructed image of the phantom using the fiber interferometric sensor is presented and compared to the image obtained using the Laser Optoacoustic Imaging System, LOIS-64B. The feasibility of our fiber-optic-based sensor for wideband ultrasonic detection is demonstrated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Photoacoustic imaging with planoconcave optical microresonator sensors: feasibility studies based on phantom imaging

    Science.gov (United States)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2017-03-01

    The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide nearly identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging to depths previously unattainable using the planar FP sensor.

  4. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies.After a review of state-of-the-art image fusion techniques,

  5. Sensor Correction of a 6-Band Multispectral Imaging Sensor for UAV Remote Sensing

    Directory of Open Access Journals (Sweden)

    Arko Lucieer

    2012-05-01

    Full Text Available Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields. Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis. UAV development is constrained by a need to balance technological accessibility, flexibility in application and quality in image data. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were focused upon: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. The sensor-induced modifications of incoming radiance (filter transmission rates, the relative monochromatic efficiency of the sensor, and vignetting) were removed through a combination of spatially/spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. Data post-processing serves dual roles: improving data quality and identifying platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis.
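    Two of the corrections described, dark-offset subtraction and removal of spatial sensitivity/vignetting effects via a flat-field gain map, amount to a per-pixel linear model. A generic sketch of that model, not the authors' exact pipeline:

    ```python
    import numpy as np

    def correct(raw, dark, flat):
        """raw = scene * flat + dark  =>  scene = (raw - dark) / flat.
        'flat' is a normalized per-pixel gain map combining vignetting,
        filter transmission, and monochromatic efficiency effects."""
        return (raw - dark) / flat

    h, w = 32, 32
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * w)
    flat = 1.0 - 0.4 * r2            # simple radial vignetting model
    dark = np.full((h, w), 12.0)     # dark offset, in digital numbers
    scene = np.linspace(0.0, 1.0, h * w).reshape(h, w) * 100.0

    raw = scene * flat + dark        # what the sensor records
    recovered = correct(raw, dark, flat)
    ```

    In practice `dark` comes from averaged dark-offset frames and `flat` from imaging a uniform target; the lens-distortion (Brown–Conrady) step is a separate geometric resampling not shown here.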

  6. Color digital holography using a single monochromatic imaging sensor.

    Science.gov (United States)

    Kiire, Tomohiro; Barada, Daisuke; Sugisaka, Jun-ichiro; Hayasaki, Yoshio; Yatagai, Toyohiko

    2012-08-01

    Color digital holography utilizing the Doppler effect is proposed. The time variation of holograms produced by superposing images at three wavelengths is recorded using a high-speed monochromatic imaging sensor. The complex amplitude at each wavelength can be extracted from frequency information contained in the Fourier transforms of the recorded holograms. An image of the object is reconstructed by the angular spectrum method. Reconstructed monochromatic images at the three wavelengths are combined to produce a color image for display.
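    The extraction of per-wavelength components "from frequency information contained in the Fourier transforms" can be illustrated with a frequency-multiplexed time series: several carriers recorded by one monochrome pixel and separated by FFT. The carrier frequencies and amplitudes below are arbitrary stand-ins, not the paper's values:

    ```python
    import numpy as np

    fs, n = 1000.0, 1000                  # 1 s record => 1 Hz bin spacing
    t = np.arange(n) / fs
    carriers = [50, 120, 210]             # distinct beat frequencies, one per wavelength (Hz)
    amps = [1.0, 0.5, 0.25]
    phases = [0.3, 1.1, -0.7]

    # Superposed signal as recorded by the high-speed monochrome sensor.
    signal = sum(a * np.cos(2 * np.pi * f * t + p)
                 for a, f, p in zip(amps, carriers, phases))

    spectrum = np.fft.rfft(signal)
    # Each wavelength's contribution sits in its own frequency bin; the complex
    # bin value carries its amplitude and phase.
    recovered = [2.0 * abs(spectrum[f]) / n for f in carriers]
    ```

    Because each carrier falls exactly on an FFT bin here, the separation is leakage-free; real Doppler-shifted holograms need windowing or bin interpolation.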

  7. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand-new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device containing an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal such as visible, infrared or ultraviolet light, an increase in data rate is possible similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By making the optical rate of the LED array equal to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and image sensor.
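    Once the Sync data has located and rectified the transmitter area, decoding one snapshot reduces to sampling the image at the known LED cell positions and thresholding. A minimal sketch assuming an already-rectified, grayscale array region (grid size and noise level are my own choices):

    ```python
    import numpy as np

    def render_led_array(bits, cell=8):
        """Make a synthetic rectified snapshot: bright cell = bit 1."""
        rows, cols = bits.shape
        img = np.zeros((rows * cell, cols * cell))
        for r in range(rows):
            for c in range(cols):
                if bits[r, c]:
                    img[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = 1.0
        return img

    def decode_led_array(img, rows, cols):
        """Average each cell and threshold to recover the bit matrix."""
        cell_h, cell_w = img.shape[0] // rows, img.shape[1] // cols
        bits = np.zeros((rows, cols), dtype=int)
        for r in range(rows):
            for c in range(cols):
                patch = img[r * cell_h:(r + 1) * cell_h,
                            c * cell_w:(c + 1) * cell_w]
                bits[r, c] = int(patch.mean() > 0.5)
        return bits

    rng = np.random.default_rng(1)
    tx_bits = rng.integers(0, 2, (4, 8))                      # one 32-bit frame
    snapshot = render_led_array(tx_bits) + rng.normal(0, 0.05, (32, 64))
    rx_bits = decode_led_array(snapshot, 4, 8)
    ```

    Averaging over each cell is what makes the scheme robust to pixel-level sensor noise; multi-spectral LEDs would add a per-channel copy of the same decoding step.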

  8. LIGHTNING IMAGING SENSOR (LIS) SCIENCE DATA V4

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lightning Imaging Sensor (LIS) is an instrument on the Tropical Rainfall Measurement Mission satellite (TRMM) used to detect the distribution and variability of...

  9. Low-Mass Planar Photonic Imaging Sensor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a revolutionary electro-optical (EO) imaging sensor concept that provides a low-mass, low-volume alternative to the traditional bulky optical telescope...

  10. A Biologically Inspired CMOS Image Sensor

    NARCIS (Netherlands)

    Sarkar, M.

    2011-01-01

    Biological systems are a source of inspiration in the development of small autonomous sensor nodes. The two major types of optical vision systems found in nature are the single aperture human eye and the compound eye of insects. The latter are among the most compact and smallest vision sensors. The

  11. Fusion: ultra-high-speed and IR image sensors

    Science.gov (United States)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an airbag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after the image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side 3). The BSI structure has the further advantage that additional layers, such as scintillators, can more easily be attached to the backside. This paper proposes the development of an ultra-high-speed IR image sensor combining advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.
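    The in-situ storage operation, each pixel holding multiple memory elements that are continuously overwritten until capture ends, behaves like a per-pixel ring buffer. A software analogue (illustrative only, not the CCD circuit):

    ```python
    import numpy as np

    class InSituStorageSensor:
        """Each 'pixel' keeps the last n_mem samples in on-chip storage;
        readout happens only after the capture (trigger) ends."""

        def __init__(self, shape, n_mem):
            self.buf = np.zeros((n_mem,) + shape)
            self.n_mem = n_mem
            self.count = 0

        def expose(self, frame):
            self.buf[self.count % self.n_mem] = frame   # overwrite the oldest slot
            self.count += 1

        def read_out(self):
            # Reorder the ring so frames come out oldest-to-newest.
            idx = [(self.count + i) % self.n_mem for i in range(self.n_mem)]
            return self.buf[idx]

    sensor = InSituStorageSensor((2, 2), n_mem=4)
    for k in range(10):                       # 10 exposures; only the last 4 survive
        sensor.expose(np.full((2, 2), float(k)))
    burst = sensor.read_out()                 # frames 6, 7, 8, 9
    ```

    This is why such sensors record a short burst around an event rather than a continuous stream: the memory depth, not the link bandwidth, bounds the sequence length.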

  12. Extended Special Sensor Microwave Imager (SSM/I) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  13. Fuzzy image processing in sun sensor

    Science.gov (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.
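    One common way fuzzy image processing improves sun-sensor accuracy is by replacing a hard threshold with graded pixel memberships when computing the sun-spot centroid. The ramp membership function below is a generic choice of my own, not necessarily the instrument's:

    ```python
    import numpy as np

    def fuzzy_centroid(img, low, high):
        """Centroid weighted by a ramp membership: 0 below 'low', 1 above
        'high', linear in between. Dim edge pixels contribute partially
        instead of being cut off by a hard threshold."""
        mu = np.clip((img - low) / (high - low), 0.0, 1.0)
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        return (mu * yy).sum() / mu.sum(), (mu * xx).sum() / mu.sum()

    # Synthetic defocused sun image: Gaussian spot at a sub-pixel position.
    yy, xx = np.mgrid[0:64, 0:64]
    cy, cx = 30.25, 33.75
    img = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

    ey, ex = fuzzy_centroid(img, low=0.05, high=0.5)
    ```

    Graded memberships keep sub-pixel information from the spot's penumbra, which a binary threshold discards.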

  14. Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors

    Directory of Open Access Journals (Sweden)

    Stanley H. Chan

    2016-11-01

    Full Text Available A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by a QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras.
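    For a one-bit QIS, the basic nonlinear inversion has a simple closed form: with T oversampled binary frames, the maximum-likelihood photon rate is -ln(1 - K/T), where K counts the 1-bits. A per-pixel sketch of that inversion (the paper's full algorithm pairs such transformations with a denoising stage):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam, T = 0.5, 20000                  # photons per frame, number of binary frames

    # Photon arrivals are Poisson; a jot outputs 1 if at least one photon arrived.
    photons = rng.poisson(lam, size=T)
    bits = (photons >= 1).astype(float)  # the massive stream of random binary bits

    p_hat = bits.mean()                  # fraction of '1' frames
    lam_hat = -np.log(1.0 - p_hat)       # MLE inversion of p = 1 - exp(-lam)
    ```

    The log transform undoes the saturation of the binary response: at high flux nearly every frame reads 1, so small changes in p correspond to large changes in the underlying rate.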

  15. Detection of sudden death syndrome using a multispectral imaging sensor

    Science.gov (United States)

    Sudden death syndrome (SDS), caused by the fungus Fusarium solani f. sp. glycines, is a widespread mid- to late-season disease with distinctive foliar symptoms. This paper reported the development of an image analysis based method to detect SDS using a multispectral image sensor. A hue, saturation a...

  16. Compact hyperspectral image sensor based on a novel hyperspectral encoder

    Science.gov (United States)

    Hegyi, Alex N.; Martini, Joerg

    2015-06-01

    A novel hyperspectral imaging sensor is demonstrated that can enable breakthrough applications of hyperspectral imaging in domains not previously accessible. Our technology consists of a planar hyperspectral encoder combined with a traditional monochrome image sensor. The encoder adds negligibly to the sensor's overall size, weight, power requirement, and cost (SWaP-C); therefore, the new imager can be incorporated wherever image sensors are currently used, such as in cell phones and other consumer electronics. In analogy to Fourier spectroscopy, the technique maintains a high optical throughput because narrow-band spectral filters are unnecessary. Unlike conventional Fourier techniques that rely on Michelson interferometry, our hyperspectral encoder is robust to vibration and amenable to planar integration. The device can be viewed within a computational optics paradigm: the hardware is uncomplicated and serves to increase the information content of the acquired data, and the complexity of the system, that is, the decoding of the spectral information, is shifted to computation. Consequently, system tradeoffs, for example, between spectral resolution and imaging speed or spatial resolution, are selectable in software. Our prototype demonstration of the hyperspectral imager is based on a commercially available silicon CCD. The prototype encoder was inserted within the camera's ~1 cu. in. housing. The prototype can image about 49 independent spectral bands distributed from 350 nm to 1250 nm, but the technology may be extendable over a wavelength range from ~300 nm to ~10 microns, with suitable choice of detector.
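    The computational-optics decoding described above, spectral information recovered from encoded monochrome measurements, can be sketched as a linear inverse problem. The encoder matrix here is random, standing in only as a placeholder for the real encoder's calibrated response:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_bands, n_meas = 49, 64             # ~49 spectral bands, 64 encoded samples

    A = rng.normal(size=(n_meas, n_bands))        # encoder response, assumed known
                                                  # from calibration
    spectrum = np.abs(rng.normal(size=n_bands))   # true per-pixel spectrum
    y = A @ spectrum                              # encoded monochrome measurements

    # The decoding complexity is shifted to computation: least-squares inversion.
    recovered, *_ = np.linalg.lstsq(A, y, rcond=None)
    ```

    With more measurements than bands and a well-conditioned encoder, the inversion is exact in the noiseless case; in software one can trade the number of measurements against spectral resolution, as the abstract notes.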

  17. Toroidal sensor arrays for real-time photoacoustic imaging

    Science.gov (United States)

    Bychkov, Anton S.; Cherepetskaya, Elena B.; Karabutov, Alexander A.; Makarov, Vladimir A.

    2017-07-01

    This article addresses theoretical and numerical investigation of image formation in photoacoustic (PA) imaging with complex-shaped concave sensor arrays. The spatial resolution and the size of sensitivity region of PA and laser ultrasonic (LU) imaging systems are assessed using sensitivity maps and spatial resolution maps in the image plane. This paper also discusses the relationship between the size of high-sensitivity regions and the spatial resolution of real-time imaging systems utilizing toroidal arrays. It is shown that the use of arrays with toroidal geometry significantly improves the diagnostic capabilities of PA and LU imaging to investigate biological objects, rocks, and composite materials.

  19. Optical Tomography System: Charge-coupled Device Linear Image Sensors

    Directory of Open Access Journals (Sweden)

    M. Idroas

    2010-09-01

    Full Text Available This paper discusses an optical tomography system based on charge-coupled device (CCD) linear image sensors. The developed system consists of a lighting system, a measurement section and a data acquisition system. Four CCD linear image sensors are configured around a flow pipe with an octagonal-shaped measurement section, for a four-projection system. Each CCD linear image sensor consists of 2048 pixels with a pixel size of 14 micron by 14 micron, producing a high-resolution system. A simple optical model is mapped into the system's sensitivity matrix to relate the optical attenuation to variations of optical density within the measurement section. A reconstructed tomographic image is produced based on the model using MATLAB software. The designed instrumentation system is calibrated and tested through different particle size measurements from different projections.
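    The sensitivity-matrix formulation can be sketched in miniature. The toy forward model below (row, column, and diagonal ray sums on a small grid, solved by Tikhonov-regularized least squares) only illustrates the matrix idea; the paper's actual optical model and MATLAB implementation differ:

    ```python
    import numpy as np

    n = 4                                 # n x n measurement cross-section
    pix = lambda r, c: r * n + c          # flatten (row, col) to a pixel index

    rays = []
    for r in range(n):                    # horizontal rays (projection 1)
        rays.append([pix(r, c) for c in range(n)])
    for c in range(n):                    # vertical rays (projection 2)
        rays.append([pix(r, c) for r in range(n)])
    for d in range(-n + 1, n):            # two diagonal projections (3 and 4)
        rays.append([pix(r, r - d) for r in range(n) if 0 <= r - d < n])
        rays.append([pix(r, n - 1 - r + d) for r in range(n)
                     if 0 <= n - 1 - r + d < n])

    J = np.zeros((len(rays), n * n))      # sensitivity matrix: ray i vs pixel j
    for i, ray in enumerate(rays):
        J[i, ray] = 1.0

    truth = np.zeros(n * n)
    truth[pix(1, 2)] = 0.8                # one attenuating particle
    y = J @ truth                         # measured attenuations per ray

    alpha = 1e-3                          # Tikhonov regularization weight
    x_hat = np.linalg.solve(J.T @ J + alpha * np.eye(n * n), J.T @ y)
    ```

    Regularization is needed because four projection directions leave the system rank-deficient; the reconstruction reproduces the measurements while distributing any ambiguity smoothly.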

  20. Ultra-High-Speed Image Signal Accumulation Sensor

    Directory of Open Access Journals (Sweden)

    Takeharu Goji Etoh

    2010-04-01

    Full Text Available Averaging of accumulated data is a standard technique for processing data with low signal-to-noise ratios (SNR), such as image signals captured in ultra-high-speed imaging. The authors propose an architecture layout of an ultra-high-speed image sensor capable of on-chip signal accumulation. The very high frame rate is enabled by employing an image sensor structure with a multi-folded CCD in each pixel, which serves as an in situ image signal storage. The signal accumulation function is achieved by direct connection of the first and the last storage elements of the in situ storage CCD. It has been thought that multi-folding is achievable only by driving electrodes with complicated and impractical layouts. Simple configurations of the driving electrodes that overcome this difficulty are presented for two-phase and four-phase transfer CCD systems. The in situ storage image sensor with the signal accumulation function is named the Image Signal Accumulation Sensor (ISAS).
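    The benefit of on-chip accumulation is the standard averaging gain: summing N registrations of the same weak image grows the signal by N but the noise only by sqrt(N). A quick numerical check (illustrative only, not the ISAS circuit):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    signal = np.tile([0.0, 1.0], 500)     # weak repetitive test pattern
    sigma, n_acc = 2.0, 64                # noise dominates any single frame

    frames = signal[None, :] + rng.normal(0.0, sigma, (n_acc, signal.size))
    accumulated = frames.sum(axis=0) / n_acc    # accumulate, then rescale

    snr_single = signal.std() / np.std(frames[0] - signal)
    snr_acc = signal.std() / np.std(accumulated - signal)
    gain = snr_acc / snr_single                 # approx sqrt(n_acc) = 8
    ```

    Performing this summation in the charge domain, before readout, is what distinguishes the proposed sensor from averaging digitized frames off-chip: read noise is paid once per accumulated result rather than once per frame.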

  1. Research-grade CMOS image sensors for demanding space applications

    Science.gov (United States)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements of optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, and thanks to their steadily improving performance, CIS started to be successfully used for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optic performance with CIS. The developments of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  2. Image and Sensor Data Processing for Target Acquisition and Recognition.

    Science.gov (United States)

    1980-11-01

    [Abstract garbled in source; recoverable fragments, translated from French:] ...representative training images for which the ground truth is known. For each of the targets in these images, the computer calculates the n parameters... the object, with slippage limited to its width. From the results obtained so far, we have not observed significant slippage... ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT (NORTH ATLANTIC TREATY ORGANISATION), AGARD Conference Proceedings No. 290, Image and Sensor Data Processing for Target Acquisition and Recognition.

  3. Whisk Broom Imaging Sensor Landsat-7

    OpenAIRE

    2007-01-01

    sim present anim Simulation Presentation Animation Interactive Media Element This presentation uses animations to demonstrate how a whisk-broom imaging sensor operates. It shows: the optical path through the primary and secondary mirrors to the Scan Line Corrector (SLC) assembly; how the satellite captures images of the ground using the Scan Mirror assembly; and the change in the scanned image when the SLC is turned off. SS3020 Introduction to Measurement and Signatur...

  4. A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.

    Science.gov (United States)

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array setup. The magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. A signal processing application is developed for 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens such as squares, circles and triangles are determined through magnetic images based on the voltage signal and the magnetic flux distribution of the specimen. In addition, magnetic images of actual ferromagnetic objects are also shown to prove the functionality of the mobile Hall sensor array system for real shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials.
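The scan-and-stack scheme described above can be sketched in a few lines; the voltages, threshold and array sizes below are invented for illustration and are not taken from the paper:

```python
# Sketch (not the authors' code): stack successive 1-D Hall array
# readings into a 2-D matrix, then threshold the flux-leakage signal
# to recover a binary shape image.

def build_magnetic_image(scans, threshold):
    """Each scan is one 1-D Hall array reading (volts) taken as the
    sensor head moves; the rows of the result form the magnetic image."""
    return [[1 if v >= threshold else 0 for v in scan] for scan in scans]

# Simulated scans over a square ferromagnetic specimen: flux leakage
# raises the Hall voltage above the ~2.0 V background near the metal.
scans = [
    [2.0, 2.0, 2.0, 2.0, 2.0],
    [2.0, 2.6, 2.7, 2.6, 2.0],
    [2.0, 2.7, 2.8, 2.7, 2.0],
    [2.0, 2.6, 2.7, 2.6, 2.0],
    [2.0, 2.0, 2.0, 2.0, 2.0],
]

image = build_magnetic_image(scans, threshold=2.5)
# The inner 3x3 block of ones traces the square specimen's outline.
```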

  5. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix

    2014-10-17

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically motivated model for transient images with drastically improved sparsity.

  6. Analysis of imaging for laser triangulation sensors under Scheimpflug rule.

    Science.gov (United States)

    Miks, Antonin; Novak, Jiri; Novak, Pavel

    2013-07-29

    In this work, a detailed analysis of the problem of imaging objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed, because the dependence of the aberrations of real optical systems on the object distance causes the image to become blurred. The f-number of a given optical system also varies with the object distance. The influence of the above-mentioned effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
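The Scheimpflug rule the abstract refers to can be checked numerically with the thin-lens equation; the focal length, distances and plane tilt below are arbitrary assumed values, not parameters from the paper:

```python
# Numerical check (illustrative, not from the paper): under the ideal
# thin-lens equation, images of points on a tilted object plane are
# collinear, and the object plane, lens plane and image plane meet in
# one common line -- the Scheimpflug condition. 2-D section, lens plane
# at z = 0, optical axis along z.

f = 50.0    # focal length in mm (assumed)
u0 = 200.0  # axial object distance in mm (assumed)
c = 0.5     # object-plane slope dz/dy (assumed tilt)

def image_of(y):
    """Thin-lens image of the object point (y, z = -(u0 - c * y))."""
    u = u0 - c * y           # object distance of this point
    v = f * u / (u - f)      # from 1/u + 1/v = 1/f
    return (-y * v / u, v)   # (image height, image distance)

pts = [image_of(y) for y in (-20.0, 0.0, 20.0)]

# Equal slopes between consecutive image points => images are collinear.
s01 = (pts[1][1] - pts[0][1]) / (pts[1][0] - pts[0][0])
s12 = (pts[2][1] - pts[1][1]) / (pts[2][0] - pts[1][0])

# Both the object plane and the image line cross the lens plane (z = 0)
# at the same height: the shared Scheimpflug line.
y_obj = u0 / c
y_img = pts[1][0] - pts[1][1] / s01
```

For these numbers both planes cross the lens plane at y = 400 mm, so the three planes do share one line; the paper's point is that aberrations of real (non-ideal) systems still blur the image.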

  7. The Theoretical Highest Frame Rate of Silicon Image Sensors

    Directory of Open Access Journals (Sweden)

    Takeharu Goji Etoh

    2017-02-01

    Full Text Available The frame rate of digital high-speed video cameras was 2,000 frames per second (fps) in 1989 and has been increasing exponentially. A simulation study showed that a silicon image sensor made with a 130 nm process technology can achieve about 10^10 fps. The frame rate thus seems to be approaching its upper bound. Rayleigh proposed an expression for the theoretical spatial resolution limit as the resolution of lenses approached that limit. In this paper, the temporal resolution limit of silicon image sensors is theoretically analyzed. It is revealed that the limit is mainly governed by the mixing of charges with different travel times, caused by the distribution of the penetration depth of light. The derived expression for the limit is extremely simple, yet accurate. For example, the limit for green light of 550 nm incident on silicon image sensors at 300 K is 11.1 picoseconds. Therefore, the theoretical highest frame rate is 90.1 Gfps (about 10^11 fps).
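The quoted frame-rate bound is simply the reciprocal of the temporal resolution limit, which can be checked directly:

```python
# Illustrative arithmetic only: the theoretical highest frame rate is
# the reciprocal of the 11.1 ps temporal resolution limit quoted above.

t_limit = 11.1e-12                  # temporal resolution limit, seconds
frame_rate = 1.0 / t_limit          # frames per second
frame_rate_gfps = frame_rate / 1e9  # about 90.1 Gfps, i.e. ~1e11 fps
```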

  8. Advanced pixel architectures for scientific image sensors

    CERN Document Server

    Coath, R; Godbeer, A; Wilson, M; Turchetta, R

    2009-01-01

    We present recent developments from two projects targeting advanced pixel architectures for scientific applications. Results are reported from FORTIS, a sensor demonstrating variants on a 4T pixel architecture. The variants include differences in pixel and diode size, the in-pixel source follower transistor size and the capacitance of the readout node to optimise for low noise and sensitivity to small amounts of charge. Results are also reported from TPAC, a complex pixel architecture with ~160 transistors per pixel. Both sensors were manufactured in the 0.18μm INMAPS process, which includes a special deep p-well layer and fabrication on a high resistivity epitaxial layer for improved charge collection efficiency.

  9. Oil exploration oriented multi-sensor image fusion algorithm

    Directory of Open Access Journals (Sweden)

    Xiaobing Zhang

    2017-04-01

    Full Text Available In order to accurately forecast the fracture and fracture dominance direction in oil exploration, in this paper we propose a novel multi-sensor image fusion algorithm. The main innovations of this paper are that we introduce the dual-tree complex wavelet transform (DTCWT) into data fusion and divide an image into several regions before image fusion. DTCWT is a new type of wavelet transform, designed to solve the problem of signal decomposition and reconstruction using two parallel real-wavelet transforms. We utilize DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of a region to design the priority function. To test the effectiveness of our proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for images of oil exploration.

  10. Oil exploration oriented multi-sensor image fusion algorithm

    Science.gov (United States)

    Xiaobing, Zhang; Wei, Zhou; Mengfei, Song

    2017-04-01

    In order to accurately forecast the fracture and fracture dominance direction in oil exploration, in this paper we propose a novel multi-sensor image fusion algorithm. The main innovations of this paper are that we introduce the dual-tree complex wavelet transform (DTCWT) into data fusion and divide an image into several regions before image fusion. DTCWT is a new type of wavelet transform, designed to solve the problem of signal decomposition and reconstruction using two parallel real-wavelet transforms. We utilize DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of a region to design the priority function. To test the effectiveness of our proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for images of oil exploration.
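A minimal sketch of the entropy-based priority described above; the histogram binning, region sizes and winner-takes-all fusion rule are illustrative assumptions, not the authors' implementation:

```python
import math

# Sketch (assumed details): use the normalized Shannon entropy of a
# region's values as the priority that decides which source image
# contributes that region to the fused result.

def normalized_entropy(region, bins=8):
    """Shannon entropy of the region's value histogram, normalized
    to [0, 1] by the maximum possible entropy log2(bins)."""
    lo, hi = min(region), max(region)
    if hi == lo:
        return 0.0
    hist = [0] * bins
    for v in region:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        hist[idx] += 1
    n = len(region)
    h = -sum(c / n * math.log2(c / n) for c in hist if c)
    return h / math.log2(bins)

def fuse_regions(region_a, region_b):
    """Keep whichever region carries more information (higher entropy)."""
    if normalized_entropy(region_a) >= normalized_entropy(region_b):
        return region_a
    return region_b

flat = [10.0] * 16                        # featureless region
busy = [float(v % 7) for v in range(16)]  # textured region

fused = fuse_regions(flat, busy)
# The textured region wins because its entropy is higher.
```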

  11. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    Science.gov (United States)

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses our experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general-purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high-rate sensing data, its scalability, and its flexible scripting language, which enables mote-side image compression and ease of deployment. Our first deployment, a pitfall-trap monitoring application at the James San Jacinto Mountains Reserve, provided us with insights and lessons learned on the deployment of, and compression schemes for, these embedded wireless imaging systems. Our three-month-long deployment of a bird-nest monitoring application resulted in over 100,000 images collected from a 19-camera-node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the online, near-real-time access to images useful for obtaining data to answer their biological questions.

  12. Self-Similarity Superresolution for Resource-Constrained Image Sensor Node in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuehai Wang

    2014-01-01

    Full Text Available Wireless sensor networks, in combination with image sensors, open up a broad field of sensing applications. It is a challenging problem to recover a high-resolution (HR) image from its low-resolution (LR) counterpart, especially for low-cost, resource-constrained image sensors with limited resolution. Sparse representation-based techniques have been developed recently and are increasingly used to solve this ill-posed inverse problem. Most of these solutions are based on an external dictionary learned from a huge image gallery, and consequently need tremendous iteration and a long time to match. In this paper, we explore the self-similarity inside the image itself and propose a new combined self-similarity super-resolution (SR) solution with low computation cost and high recovery performance. In the self-similarity image super-resolution model (SSIR), a small sparse dictionary is learned from the image itself by methods such as K-SVD. The most similar patch is searched for and specially combined during the sparse regulation iteration. Detailed information, such as edge sharpness, is preserved more faithfully and clearly. Experimental results confirm the effectiveness and efficiency of this double self-learning method for image super-resolution.
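The self-similarity search at the heart of such methods can be sketched as a nearest-patch lookup within the image itself; the tiny image and SSD matching below are illustrative assumptions, not the paper's code:

```python
# Sketch of the self-similarity search step (assumed form): for a query
# patch, find the most similar patch elsewhere in the same image by
# sum of squared differences (SSD).

def patches(img, size):
    """All size x size patches of a 2-D image (list of lists),
    as (position, flattened values) pairs."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            out.append(((r, c), [img[r + i][c + j]
                                 for i in range(size) for j in range(size)]))
    return out

def most_similar(img, query_pos, size):
    """Best-matching patch at a position other than the query's."""
    all_p = dict(patches(img, size))
    query = all_p[query_pos]
    best_pos, best_ssd = None, float("inf")
    for pos, p in all_p.items():
        if pos == query_pos:
            continue
        ssd = sum((a - b) ** 2 for a, b in zip(query, p))
        if ssd < best_ssd:
            best_pos, best_ssd = pos, ssd
    return best_pos, best_ssd

# A tiny image whose vertical edge pattern repeats: natural images show
# similar recurrence, which is what SSIR exploits.
img = [[0, 9, 0, 9],
       [0, 9, 0, 9],
       [0, 9, 0, 9]]

pos, ssd = most_similar(img, (0, 0), 2)
# The 2x2 patch at (0, 0) recurs exactly at (0, 2), so ssd == 0.
```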

  13. CMOS Image Sensors for High Speed Applications

    OpenAIRE

    Jamal Deen, M.; Qiyin Fang; Louis Liu; Frances Tse; David Armstrong; Munir El-Desouki

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4~5 μm) due to ...

  14. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, system-on-chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), have created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking have been proven, and its applications are countless, from security and chemical analysis to industrial monitoring, asset tracking and visual recognition. Vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous, specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic sensors, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  15. Target detection and recognition techniques of line imaging ladar sensor

    Science.gov (United States)

    Sun, Zhi-hui; Deng, Jia-hao; Yan, Xiao-wei

    2009-07-01

    A line imaging ladar sensor using a linear diode laser array and a linear avalanche photodiode (APD) array is developed for precise terminal guidance and intelligent proximity fuzing applications. The detection principle of line imaging ladar is discussed in detail, and the design method of the line imaging ladar sensor system is given. Taking a military tank target as an example, simulated tank height and intensity images are obtained with the line imaging ladar simulation system. The subsystems of the line imaging ladar sensor, including transmitter and receiver, are designed. A multi-pulse coherent algorithm and a correlation detection method are adopted to improve the SNR of the echo and to estimate time-of-flight, respectively. Experimental results show that the power SNR can be improved by a factor of N (the number of coherent averages) and that the maximum range error is 0.25 m. Several joint transform correlation (JTC) techniques are discussed to improve non-cooperative target recognition capability in height images with complex backgrounds. Simulation results show that binary JTC, non-zero-order modified fringe-adjusted JTC and non-zero-order amplitude-modulated JTC can improve the target recognition performance effectively.
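The claimed SNR gain of coherent averaging (noise power reduced by a factor of N for uncorrelated noise) can be checked with a toy simulation; the pulse model and noise level below are assumed for illustration, not the sensor's real data:

```python
import random
import statistics

# Toy check: coherently averaging N echo pulses with independent
# Gaussian noise cuts the noise power (variance) by about N, i.e.
# improves the power SNR by a factor of N.

random.seed(42)

def coherent_average(pulses):
    """Average the pulses sample by sample."""
    n = len(pulses)
    return [sum(samples) / n for samples in zip(*pulses)]

N = 64             # number of pulses averaged
num_samples = 2000
# Each pulse: constant echo amplitude 1.0 plus Gaussian noise (sd 0.5).
pulses = [[1.0 + random.gauss(0.0, 0.5) for _ in range(num_samples)]
          for _ in range(N)]

avg = coherent_average(pulses)

noise_var_single = statistics.pvariance(pulses[0])  # about 0.25
noise_var_avg = statistics.pvariance(avg)           # about 0.25 / N
improvement = noise_var_single / noise_var_avg      # about N
```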

  16. Blue fluorescent cGMP sensor for multiparameter fluorescence imaging.

    Directory of Open Access Journals (Sweden)

    Yusuke Niino

    Full Text Available Cyclic GMP (cGMP) regulates many physiological processes by cooperating with other signaling molecules such as cyclic AMP (cAMP) and Ca2+. Genetically encoded sensors for cGMP have been developed based on fluorescence resonance energy transfer (FRET) between fluorescent proteins. However, to analyze the dynamic relationship among these second messengers, combined use of existing sensors in a single cell is inadequate because of significant spectral overlaps. A single-wavelength indicator is an effective alternative to avoid this problem, but color variants of single fluorescent protein-based biosensors are limited. In this study, to construct a new color fluorescent sensor, we converted a FRET-based sensor into a single-wavelength indicator using a dark FRET acceptor. We developed a blue fluorescent cGMP biosensor, which is spectrally compatible with a FRET-based cAMP sensor using cyan and yellow fluorescent proteins (CFP/YFP). We cotransfected them and loaded a red fluorescent probe for Ca2+ into cells, and accomplished triple-parameter fluorescence imaging of these cyclic nucleotides and Ca2+, confirming the applicability of this combination for individually monitoring their dynamics in a single cell. This blue fluorescent sensor and the approach using this FRET pair should be useful for multiparameter fluorescence imaging to understand complex signal transduction networks.

  17. Retina-like sensor image coordinates transformation and display

    Science.gov (United States)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.

  18. Biologically based sensor fusion for medical imaging

    Science.gov (United States)

    Aguilar, Mario; Garrett, Aaron L.

    2001-03-01

    We present an architecture for the fusion of multiple medical image modalities that enhances the original imagery and combines the complementary information of the various modalities. The design principles follow the organization of the color vision system in humans and primates. Mainly, the design of within-modality enhancement and between-modality combination for fusion is based on the neural connectivity of the retina and visual cortex. The architecture is based on a system developed for night vision applications while the first author was at MIT Lincoln Laboratory. Results of fusing various modalities are presented, including: a) fusion of T1-weighted and T2-weighted MRI images; b) fusion of PD, T1-weighted and T2-weighted MRI images; and c) fusion of SPECT and MRI/CT. The results demonstrate the ability to fuse such disparate imaging modalities with regard to information content and complementarity. They show how both brightness and color contrast are used in the resulting color-fused images to convey information to the user. In addition, we demonstrate the ability to preserve the high spatial resolution of modalities such as MRI even when combined with poor-resolution images such as those from SPECT scans. We conclude by motivating the use of the fusion method to derive more powerful image features to be used in segmentation and pattern recognition.

  19. Low data rate architecture for smart image sensor

    Science.gov (United States)

    Darwish, Amani; Sicard, Gilles; Fesquet, Laurent

    2014-03-01

    An innovative smart image sensor architecture based on event-driven asynchronous functioning is presented in this paper. The proposed architecture has been designed to control the sensor data flow by extracting only the relevant information from the image sensor and suppressing spatial and temporal redundancies in video streaming. We believe that this data flow reduction leads to a reduction in system power consumption, which is essential in mobile devices. In this first proposition, we present our new pixel behaviour as well as our new asynchronous read-out architecture. Simulations using both Matlab and VHDL were performed in order to validate the proposed pixel behaviour and the reading protocol. These simulation results have met our expectations and confirmed the suggested ideas.

  20. A GRAPH READER USING A CCD IMAGE SENSOR

    African Journals Online (AJOL)

    2008-01-18

    Jan 18, 2008 ... 3. Data Processing. The microcontroller, the CCD sensor, the stepper motor and the rest of the system are interfaced to the PC, where data processing and overall control are done. A software program in QUICKBASIC is used to process the pixels. First, the 1024 pixels of an image line are received from the...

  1. Active resonant subwavelength grating for scannerless range imaging sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Kemme, Shanalyn A.; Nellums, Robert O.; Boye, Robert R.; Peters, David William

    2006-11-01

    In this late-start LDRD, we present a design for a wavelength-agile, high-speed modulator that enables a long-term vision for the THz Scannerless Range Imaging (SRI) sensor. It takes the place of the currently utilized SRI micro-channel plate, which is limited to photocathode-sensitive wavelengths (primarily in the visible and near-IR regimes). Two of Sandia's successful technologies, subwavelength diffractive optics and THz sources and detectors, are poised to extend the capabilities of the SRI sensor. The goal is to drastically broaden the SRI's sensing waveband, all the way to the THz regime, so the sensor can see through image-obscuring, scattering environments like smoke and dust. Surface properties, such as reflectivity, emissivity, and scattering roughness, vary greatly with the illuminating wavelength. Thus, objects that are difficult to image at the SRI sensor's present near-IR wavelengths may be imaged more easily at the considerably longer THz wavelengths (0.1 to 1 mm). The proposed component is an active Resonant Subwavelength Grating (RSG). Sandia invested considerable effort on a passive RSG two years ago, which resulted in a highly efficient (reflectivity greater than gold), wavelength-specific reflector. For this late-start LDRD proposal, we will transform the passive RSG design into an active laser-line reflector.

  2. DNA as Sensors and Imaging Agents for Metal Ions

    Science.gov (United States)

    Xiang, Yu

    2014-01-01

    Increasing interest in detecting metal ions in many chemical and biomedical fields has created demand for sensors and imaging agents for metal ions with high sensitivity and selectivity. This review covers recent progress in DNA-based sensors and imaging agents for metal ions. Through both combinatorial selection and rational design, a number of metal ion-dependent DNAzymes and metal ion-binding DNA structures that can selectively recognize specific metal ions have been obtained. By attaching these DNA molecules to signal reporters such as fluorophores, chromophores, electrochemical tags, and Raman tags, a number of DNA-based sensors for both diamagnetic and paramagnetic metal ions have been developed for fluorescent, colorimetric, electrochemical, and surface Raman detection. These sensors are highly sensitive (with detection limits down to 11 ppt) and selective (with selectivity up to million-fold) toward specific metal ions. In addition, through further development to simplify operation, such as the use of “dipstick tests”, portable fluorometers, computer-readable discs, and widely available glucose meters, these sensors have been applied to on-site and real-time environmental monitoring and point-of-care medical diagnostics. The use of these sensors for in situ cellular imaging has also been reported. The generality of combinatorial selection, which can yield DNAzymes for almost any metal ion in any oxidation state, and the ease of modifying DNA with different signal reporters make DNA an emerging and promising class of molecules for metal ion sensing and imaging in many fields of application. PMID:24359450

  3. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang

    2010-01-01

    Full Text Available Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve the overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing and image-compression processing units are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, referred to as compressive acquisition, in which image compression is performed during the image-capture phase prior to storage. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using the different design paradigms are presented at the end.

  4. Image upconversion - a low noise infrared sensor?

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Tidemand-Lichtenberg, Peter; Pedersen, Christian

    Low noise upconversion of IR images by three-wave mixing can be performed with high efficiency when mixing the object field with a powerful laser field inside a highly non-linear crystal such as periodically poled Lithium Niobate. This feature effectively allows the use of silicon-based cameras for detection of infrared images. Silicon cameras have much smaller intrinsic noise than their IR counterparts; some models even offer near-single-photon detection capability. We demonstrate that an ordinary CCD camera combined with low noise upconversion has superior noise characteristics when compared to even state-of-the-art IR cameras.

  5. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    Directory of Open Access Journals (Sweden)

    Wei Wen

    2017-03-01

    Full Text Available Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, in which the fill factor is assumed to be known. However, it is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the fill factors estimated from the individual images of each camera.

  6. Minimal form factor digital-image sensor for endoscopic applications

    Science.gov (United States)

    Wäny, Martin; Voltz, Stephan; Gaspar, Fabio; Chen, Lei

    2009-02-01

    This paper presents a digital image sensor SOC featuring a total chip area (including dicing tolerances) of 0.34 mm2 for endoscopic applications. Due to this extremely small form factor, the sensor enables integration in endoscopes, guide wires and locator devices of less than 1 mm outer diameter. The sensor embeds a pixel matrix of 10,000 pixels with a pitch of 3 μm x 3 μm, covered with RGB filters in a Bayer pattern. The sensor operates fully autonomously, controlled by an on-chip ring oscillator and a readout state machine which controls integration, AD conversion and data transmission; thus the sensor only requires four pins for power supply and data communication. The sensor provides a frame rate of 40 frames per second over an LVDS serial data link. The endoscopic application requires that the sensor work without any local power decoupling capacitances at the end of up to 2 m of cabling, and that it sustain data communication over the same wire length without deteriorating image quality. This has been achieved by implementation of a current-mode successive approximation ADC and current-steering LVDS data transmission. A bandgap circuit with -40 dB PSRR at the data frequency was implemented as an on-chip reference to improve robustness against power supply ringing due to the high series inductance of the long cables. The B&W version of the sensor provides a conversion gain of 30 DN/nJ/cm2 at 550 nm, with a read noise in the dark of 1.2 DN when operated with a 2 m cable. Using the photon transfer method according to the EMVA1288 standard, the full-well capacity was determined to be 18 ke-. To our knowledge, the presented work is currently the world's smallest fully digital image sensor. The chip was designed along with an aspheric single-surface lens to be assembled on the chip without increasing the form factor. The extremely small form factor of the resulting camera permits visualization with much higher than state-of-the-art spatial resolution in sub-1 mm endoscopic

  7. Image upconversion, a low noise infrared sensor?

    DEFF Research Database (Denmark)

    ...for detection of infrared images. Silicon cameras have much smaller intrinsic noise than their IR counterparts; some models even offer near-single-photon detection capability. We demonstrate that an ordinary CCD camera combined with low noise upconversion has superior noise characteristics when compared to even state-of-the-art IR cameras.

  8. 77 FR 74513 - Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations...

    Science.gov (United States)

    2012-12-14

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations... importation, and the sale within the United States after importation of certain CMOS image sensors and...

  9. Imaging Extracellular Protein Concentration with Nanoplasmonic Sensors

    OpenAIRE

    Byers, Jeff M.; Christodoulides, Joseph A.; Delehanty, James B.; Raghu, Deepa; Raphael, Marc P.

    2015-01-01

    Extracellular protein concentrations and gradients cue a wide range of cellular responses, such as cell motility and division. Spatio-temporal quantification of these concentrations as produced by cells has proven challenging. As a result, artificial gradients must be introduced to the cell culture to correlate signal and response. Here we demonstrate a label-free nanoplasmonic imaging technique that can directly map protein concentrations as secreted by single cells in real time and which ...

  10. A video precipitation sensor for imaging and velocimetry of hydrometeors

    Science.gov (United States)

    Liu, X. C.; Gao, T. C.; Liu, L.

    2014-07-01

    A new method to determine the shape and fall velocity of hydrometeors by using a single CCD camera is proposed in this paper, and a prototype of a video precipitation sensor (VPS) is developed. The instrument consists of an optical unit (collimated light source with multi-mode fibre cluster), an imaging unit (planar array CCD sensor), an acquisition and control unit, and a data processing unit. The cylindrical space between the optical unit and the imaging unit is the sampling volume (300 mm × 40 mm × 30 mm). As precipitation particles fall through the sampling volume, the CCD camera exposes twice in a single frame, which allows double-exposure images of the particles to be obtained. The size and shape can be obtained from the particle images; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; the drop size distribution and velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. The innovation of the VPS is that the shape, size, and velocity of precipitation particles can be measured by only one planar array CCD sensor, which addresses the disadvantages of a linear scan CCD disdrometer and an impact disdrometer. Field measurements of rainfall demonstrate the VPS's capability to measure microphysical properties of single particles and integral parameters of precipitation.
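The velocity computation described above reduces to displacement over time once the image is spatially calibrated; a minimal sketch with assumed calibration values (the numbers are illustrative, not from the instrument):

```python
# Fall velocity of a hydrometeor from its centroid shift between the two
# exposures of a single double-exposed frame (the VPS principle).
mm_per_pixel = 0.1           # assumed spatial calibration of the CCD
exposure_interval_s = 2e-3   # assumed time between the two exposures
displacement_px = 80         # measured centroid shift of the particle image

# displacement [px] -> [mm] -> [m], divided by the exposure interval
fall_velocity = displacement_px * mm_per_pixel / 1000 / exposure_interval_s
print(fall_velocity)  # 4.0 m/s for these assumed values
```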

  11. Broadband image sensor array based on graphene-CMOS integration

    Science.gov (United States)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

    Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  12. Image sensor for testing refractive error of eyes

    Science.gov (United States)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of eyes does not need the cooperation of children and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in digital form, which can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula considering the effect of an extended light source and the size of the lens aperture has been deduced, which is more reliable in practice. Computer simulation of the image sensing is made to verify the accuracy of the results.

  13. Noise Reduction for CFA Image Sensors Exploiting HVS Behaviour

    Directory of Open Access Journals (Sweden)

    Angelo Bosco

    2009-03-01

    This paper presents a spatial noise reduction technique designed to work on CFA (Color Filter Array) data acquired by CCD/CMOS image sensors. The overall processing preserves image details using some heuristics related to the HVS (Human Visual System); estimates of the local texture degree and noise levels are computed to regulate the filter smoothing capability. Experimental results confirm the effectiveness of the proposed technique. The method is also suitable for implementation in low-power mobile devices with imaging capabilities such as camera phones and PDAs.
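A Lee-type adaptive filter is one simple way to realize the idea of regulating smoothing by local texture and noise estimates; the sketch below illustrates the principle only and is not the authors' algorithm:

```python
import numpy as np

def adaptive_denoise(channel: np.ndarray, noise_var: float) -> np.ndarray:
    """Texture-adaptive smoothing: flat regions are averaged strongly,
    textured/edge regions are preserved (Lee-filter-style weighting)."""
    pad = np.pad(channel, 1, mode="edge")
    out = np.empty_like(channel, dtype=float)
    for i in range(channel.shape[0]):
        for j in range(channel.shape[1]):
            win = pad[i:i + 3, j:j + 3]          # 3x3 neighbourhood
            local_var = win.var()
            # weight -> 0 in flat regions (full smoothing),
            # weight -> 1 where local variance exceeds the noise level
            w = max(0.0, 1.0 - noise_var / local_var) if local_var > 0 else 0.0
            out[i, j] = w * channel[i, j] + (1 - w) * win.mean()
    return out
```

On a perfectly flat patch the local variance is zero, so the filter returns the window mean; on a strong edge the weight approaches one and the pixel passes through unchanged.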

  14. BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry

    Science.gov (United States)

    Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)

    1994-01-01

    Until recent times, optical remote sensing of ecosystem properties from space has been limited to broad band multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurements by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high performance deep well digital sensor arrays coupled with a patented design for a two-beam interferometer has produced an entirely new design for acquiring imaging spectroscopic data at the signal to noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors being proposed. The instrument is extremely promising because of its low cost, lower power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).

  15. Low-power high-accuracy micro-digital sun sensor by means of a CMOS image sensor

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.

    2013-01-01

    A micro-digital sun sensor (µDSS) is a sun detector which senses a satellite's instant attitude angle with respect to the sun. The core of this sensor is a system-on-chip imaging chip which is referred to as APS+. The APS+ integrates a CMOS active pixel sensor (APS) array of 368 × 368 pixels, a

  16. SENSOR CORRECTION AND RADIOMETRIC CALIBRATION OF A 6-BAND MULTISPECTRAL IMAGING SENSOR FOR UAV REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    J. Kelcey

    2012-07-01

    The increased availability of unmanned aerial vehicles (UAVs) has resulted in their frequent adoption for a growing range of remote sensing tasks, which include precision agriculture, vegetation surveying and fine-scale topographic mapping. The development and utilisation of UAV platforms requires broad technical skills covering the three major facets of remote sensing: data acquisition, data post-processing, and image analysis. In this study, UAV image data acquired by a miniature 6-band multispectral imaging sensor was corrected and calibrated using practical image-based data post-processing techniques. Data correction techniques included dark offset subtraction to reduce sensor noise, flat-field derived per-pixel look-up-tables to correct vignetting, and implementation of the Brown-Conrady model to correct lens distortion. Radiometric calibration was conducted with an image-based empirical line model using pseudo-invariant features (PIFs). Sensor corrections and radiometric calibration improve the quality of the data, aiding quantitative analysis and generating consistency with other calibrated datasets.
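The correction chain described (dark offset subtraction, per-pixel flat-field correction, empirical line calibration) can be sketched as follows; all arrays and PIF values are synthetic stand-ins, and the Brown-Conrady distortion step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.uniform(100, 200, size=(4, 4))   # stand-in raw band image [DN]
dark = np.full((4, 4), 10.0)               # dark-offset frame
flat = rng.uniform(0.7, 1.0, size=(4, 4))  # flat-field vignetting map (1.0 = centre)

# 1) dark-offset subtraction, 2) per-pixel flat-field (vignetting) correction
corrected = (raw - dark) / flat

# 3) empirical line calibration: DN -> reflectance, with gain/offset fitted
#    from two pseudo-invariant features (PIFs) of known reflectance
pif_dn = np.array([50.0, 180.0])           # measured DN of dark/bright PIFs
pif_reflectance = np.array([0.05, 0.60])   # their known reflectances
gain, offset = np.polyfit(pif_dn, pif_reflectance, 1)
reflectance = gain * corrected + offset
```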

  17. 77 FR 26787 - Certain CMOS Image Sensors and Products Containing Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-05-07

    ... COMMISSION Certain CMOS Image Sensors and Products Containing Same; Notice of Receipt of Complaint... complaint entitled Certain CMOS Image Sensors and Products Containing Same, DN 2895; the Commission is... importation of certain CMOS image sensors and products containing same. The complaint names as respondents...

  18. 77 FR 33488 - Certain CMOS Image Sensors and Products Containing Same; Institution of Investigation Pursuant to...

    Science.gov (United States)

    2012-06-06

    ... COMMISSION Certain CMOS Image Sensors and Products Containing Same; Institution of Investigation Pursuant to... States after importation of certain CMOS image sensors and products containing same by reason of... image sensors and products containing same that infringe one or more of claims 1 and 2 of the `126...

  19. A Multilayer Improved RBM Network Based Image Compression Method in Wireless Sensor Networks

    National Research Council Canada - National Science Library

    Cheng, Chunling; Wang, Shu; Chen, Xingguo; Yang, Yanying

    2016-01-01

    The processing capacity and power of nodes in a Wireless Sensor Network (WSN) are limited. Moreover, most image compression algorithms in WSNs are sensitive to random changes in image content or produce low image quality after the images are decoded...

  20. Development of integrated semiconductor optical sensors for functional brain imaging

    Science.gov (United States)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  1. Imaging sensor constellation for tomographic chemical cloud mapping.

    Science.gov (United States)

    Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J

    2009-04-01

    A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
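The computed tomography inversion from line densities to a concentration field can be illustrated with a toy Kaczmarz/ART reconstruction (a sketch of the general technique on a 2x2 grid, not the authors' algorithm):

```python
import numpy as np

# Toy CT inversion: recover a 2x2 "concentration" grid from line-integral
# (column density) measurements using the Kaczmarz / ART iteration.
# Rays: two horizontal and two vertical sums through the grid cells.
A = np.array([[1, 1, 0, 0],   # top row
              [0, 0, 1, 1],   # bottom row
              [1, 0, 1, 0],   # left column
              [0, 1, 0, 1]],  # right column
             dtype=float)
true_cloud = np.array([0.0, 2.0, 1.0, 3.0])  # made-up cell concentrations
b = A @ true_cloud                           # simulated column densities

x = np.zeros(4)
for _ in range(200):                # ART sweeps over all rays
    for ai, bi in zip(A, b):
        # project the current estimate onto the hyperplane of ray i
        x += (bi - ai @ x) / (ai @ ai) * ai
```

The iterate converges to a grid consistent with every measured column density; with only row/column sums the system is rank-deficient, which mirrors the reconstruction ambiguity of a limited sensor network.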

  2. Optical Imaging Sensors and Systems for Homeland Security Applications

    CERN Document Server

    Javidi, Bahram

    2006-01-01

    Optical and photonic systems and devices have significant potential for homeland security. Optical Imaging Sensors and Systems for Homeland Security Applications presents original and significant technical contributions from leaders of industry, government, and academia in the field of optical and photonic sensors, systems and devices for detection, identification, prevention, sensing, security, verification and anti-counterfeiting. The chapters have recent and technically significant results, ample illustrations, figures, and key references. This book is intended for engineers and scientists in the relevant fields, graduate students, industry managers, university professors, government managers, and policy makers. Advanced Sciences and Technologies for Security Applications focuses on research monographs in the areas of -Recognition and identification (including optical imaging, biometrics, authentication, verification, and smart surveillance systems) -Biological and chemical threat detection (including bios...

  3. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    OpenAIRE

    Jason Deglint; Farnoud Kazemzadeh; Daniel Cho; Clausi, David A.; Alexander Wong

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral imaging devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements ...

  4. Digital image processing of earth observation sensor data

    Science.gov (United States)

    Bernstein, R.

    1976-01-01

    This paper describes digital image processing techniques that were developed to precisely correct Landsat multispectral earth observation data and gives illustrations of the results achieved, e.g., geometric corrections with an error of less than one picture element, a relative error of one-fourth picture element, and no radiometric error effect. Techniques for enhancing the sensor data, digitally mosaicking multiple scenes, and extracting information are also illustrated.

  5. Polymer Optical Fibre Sensors for Endoscopic Opto-Acoustic Imaging

    DEFF Research Database (Denmark)

    Broadway, Christian; Gallego, Daniel; Woyessa, Getinet

    2015-01-01

    Opto-acoustic imaging (OAI) shows particular promise for in-vivo biomedical diagnostics. Its applications include cardiovascular, gastrointestinal and urogenital systems imaging. Opto-acoustic endoscopy (OAE) allows the imaging of body parts through cavities permitting entry. The critical parameter...... is the physical size of the device, allowing compatibility with current technology, while governing flexibility of the distal end of the endoscope based on the needs of the sensor. Polymer optical fibre (POF) presents a novel approach for endoscopic applications and has been positively discussed and compared...... in existing publications. A great advantage can be obtained for endoscopy due to a small size and array potential to provide discrete imaging speed improvements. Optical fibre exhibits numerous advantages over conventional piezo-electric transducers, such as immunity from electromagnetic interference...

  6. MIST Final Report: Multi-sensor Imaging Science and Technology

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Michael A.; Medvick, Patricia A.; Foley, Michael G.; Foote, Harlan P.; Heasler, Patrick G.; Thompson, Sandra E.; Nuffer, Lisa L.; Mackey, Patrick S.; Barr, Jonathan L.; Renholds, Andrea S.

    2008-03-15

    The Multi-sensor Imaging Science and Technology (MIST) program was undertaken to advance exploitation tools for long-wavelength infrared (LWIR) hyperspectral imaging (HSI) analysis as applied to the discovery and quantification of nuclear proliferation signatures. The program focused on mitigating LWIR image background clutter to ease the analyst burden and enable (a) faster, more accurate analysis of large volumes of high-clutter data, (b) greater detection sensitivity to nuclear proliferation signatures (primarily released gases), and (c) quantified confidence estimates of the signature materials detected. To this end the program investigated fundamental limits and logical modifications of the more traditional statistical discovery and analysis tools applied to hyperspectral imaging and other disciplines, developed and tested new software incorporating advanced mathematical tools and physics-based analysis, and demonstrated the strengths and weaknesses of the new codes on relevant hyperspectral data sets from various campaigns. This final report describes the content of the program and outlines the significant results.

  7. Smart image sensor with adaptive correction of brightness

    Science.gov (United States)

    Paindavoine, Michel; Ngoua, Auguste; Brousse, Olivier; Clerc, Cédric

    2012-03-01

    Today, intelligent image sensors require the integration in the focal plane (or near the focal plane) of complex algorithms for image processing. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, analog pre-processing is essential: on the one hand, to improve the quality of the acquired images, making them usable whatever the lighting conditions; and on the other hand, to detect regions of interest (ROIs) to limit the amount of pixels to be transmitted to a digital processor performing the high-level processing such as feature extraction for pattern recognition. To show that it is possible to implement analog pre-processing in the focal plane, we have designed and implemented, in 130 nm CMOS technology, a test circuit with groups of 4, 16 and 144 pixels, each incorporating analog average calculations.
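The in-pixel analog averaging over groups of pixels can be emulated digitally by block averaging (a sketch of the principle; square groups are assumed, e.g. 2x2 = 4 pixels):

```python
import numpy as np

def block_average(img: np.ndarray, k: int) -> np.ndarray:
    """Average non-overlapping k x k pixel groups, emulating the analog
    macro-pixel averaging of the focal-plane test circuit."""
    h, w = img.shape
    # crop to a multiple of k, then fold each k x k block onto its own axes
    cropped = img[:h - h % k, :w - w % k]
    return cropped.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

For a 4x4 ramp image, averaging 2x2 groups yields one value per group of 4 pixels, mirroring the smallest pixel grouping reported for the test circuit.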

  8. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
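A common way to turn such per-pixel Doppler signals into a flow estimate (a convention assumed here, not stated in the abstract) is the first moment of the AC power spectrum normalized by the squared DC level, which the on-chip normalization and AC amplification support; a synthetic-signal sketch:

```python
import numpy as np

fs = 40_000.0                      # assumed sampling rate [Hz]
t = np.arange(4096) / fs
rng = np.random.default_rng(2)

# synthetic detector signal: DC level plus a Doppler beat plus noise
signal = (1.0
          + 0.05 * np.sin(2 * np.pi * 3000 * t)   # assumed Doppler component
          + 0.01 * rng.standard_normal(t.size))

# AC power spectrum of the photocurrent
spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# first-moment flow index, normalized by DC^2 (signal normalization)
perfusion = np.sum(freqs * spectrum) / signal.mean() ** 2
```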

  9. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  10. Miniature infrared hyperspectral imaging sensor for airborne applications

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and will be explained in this paper. The micro-optics are made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper will present our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The spatial resolution is determined by the size of the focal plane array and the diameter of the lenslet array. A 2 x 2 lenslet array images four different spectral images of the scene each frame; when coupled with a 512 x 512 focal plane array, this gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024-pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each.
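The per-band resolution arithmetic in the two examples above follows directly from tiling the focal plane with the lenslet array:

```python
# Per-band spatial resolution of the lenslet-array spectrometer: the focal
# plane is tiled by N x N lenslets, each forming one spectral image.
def per_band_resolution(fpa_pixels: int, lenslets_per_side: int) -> int:
    return fpa_pixels // lenslets_per_side

print(per_band_resolution(512, 2))    # 256 (2x2 array on 512x512 FPA)
print(per_band_resolution(1024, 4))   # 256 (4x4 array on 1024x1024 FPA)
```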

  11. New Endoscopic Imaging Technology Based on MEMS Sensors and Actuators

    Directory of Open Access Journals (Sweden)

    Zhen Qiu

    2017-07-01

    Over the last decade, optical fiber-based forms of microscopy and endoscopy have extended the realm of applicability for many imaging modalities. Optical fiber-based imaging modalities permit the use of remote illumination sources and enable flexible forms, supporting the creation of portable and hand-held imaging instrumentation to interrogate within hollow tissue cavities. A common challenge in the development of such devices is the design and integration of miniaturized optical and mechanical components. Until recently, microelectromechanical systems (MEMS) sensors and actuators have been playing a key role in shaping the miniaturization of these components. This is due to the precision mechanics of MEMS, microfabrication techniques, and optical functionality enabling a wide variety of movable and tunable mirrors, lenses, filters, and other optical structures. Many promising results from MEMS-based optical fiber endoscopy have demonstrated great potential for clinical translation. In this article, reviews of MEMS sensors and actuators for various fiber-optical endoscopy modalities such as fluorescence, optical coherence tomography, confocal, photo-acoustic, and two-photon imaging will be discussed. This advanced MEMS-based optical fiber endoscopy can provide cellular and molecular features with deep tissue penetration, enabling guided resections and early cancer assessment, leading to better treatment outcomes.

  12. A 14-megapixel 36 x 24-mm2 image sensor

    Science.gov (United States)

    Meynants, Guy; Scheffer, Danny; Dierickx, Bart; Alaerts, Andre

    2004-06-01

    We will present a 3044 x 4556-pixel CMOS image sensor with a pixel array of 36 x 24 mm2, equal to the size of 35 mm film. Though primarily developed for digital photography, the compatibility of the device with standard optics for film cameras makes the device also attractive for machine vision as well as many scientific and high-resolution applications. The sensor makes use of a standard rolling shutter 3-transistor active pixel in standard 0.35 μm CMOS technology. On-chip double sampling is used to reduce fixed pattern noise. The pixel pitch is 8 μm, with a full well charge of 60,000 electrons and a conversion gain of 18.5 μV/electron. The product of quantum efficiency and fill factor of the monochrome device is 40%. Temporal noise is 35 electrons, offering a dynamic range of 65.4 dB. Dark current is 4.2 mV/s at 30 degrees C. Fixed pattern noise is less than 1.5 mV RMS over the entire focal plane and less than 1 mV RMS in local windows of 32 x 32 pixels. The sensor is read out over 4 parallel outputs at 15 MHz each, offering 3.2 images/second. The device runs at 3.3 V and consumes 200 mW.

  13. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers eliminate the motion artifacts of rolling shutter imagers and thereby expand use to miscellaneous applications such as machine vision, 3D imaging, medical imaging and space. A low-noise global shutter pixel requires more than one non-light-sensitive memory element to reduce the read noise, but a larger memory area reduces the fill factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination; non-light-sensitive memory elements make fabrication with BSI possible. Machine vision applications such as fast inspection systems, as well as 3D medical and other scientific applications, demand high-frame-rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. Dual correlated double sampling (CDS) with on-chip ADC and a high-rate digital data interface reduces the read noise and enables more on-chip operation control. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performance after fabrication.

  14. EUROCMOSHF: demonstration of a fully European supply chain for space image sensors

    Science.gov (United States)

    De Moor, P.; De Munck, K.; Haspeslagh, L.; Guerrieri, S.; Van Olmen, J.; Meynants, G.; Beeckman, G.; Vanwichelen, K.; Van Esbroeck, K.; Ghiglione, Alexandre; Gilbert, Teva; Demiguel, Stéphane

    2017-09-01

    Europe currently has no full supply chain of CMOS image sensors (CIS) for space use, certainly not in terms of image sensor manufacturing. Although a few commercial foundries in Europe manufacture CMOS image sensors for consumer and automotive applications, they are typically not interested in adapting their process flow to meet high-end performance specifications, mainly because the expected manufacturing volume for space imagers is extremely low.

  15. Lead salt TE-cooled imaging sensor development

    Science.gov (United States)

    Green, Kenton; Yoo, Sung-Shik; Kauffman, Christopher

    2014-06-01

    Progress on development of lead-salt thermoelectrically-cooled (TE-cooled) imaging sensors will be presented. The imaging sensor architecture has been integrated into field-ruggedized hardware, and supports the use of lead-salt based detector material, including lead selenide and lead sulfide. Images and video are from a lead selenide focal plane array on silicon ROIC at temperatures approaching room temperature, and at high frame rates. Lead-salt imagers uniquely possess three traits: (1) Sensitive operation at high temperatures above the typical `cooled' sensor maximum (2) Photonic response which enables high frame rates faster than the bolometric, thermal response time (3) Capability to reliably fabricate 2D arrays from solution-deposition directly, i. e. monolithically, on silicon. These lead-salt imagers are less expensive to produce and operate compared to other IR imagers based on II-VI HgCdTe and III-V InGaAsSb, because they do not require UHV epitaxial growth nor hybrid assembly, and no cryo-engine is needed to maintain low thermal noise. Historically, there have been challenges with lead-salt detector-to-detector non-uniformities and detector noise. Staring arrays of lead-salt imagers are promising today because of advances in ROIC technology and fabrication improvements. Non-uniformities have been addressed by on-FPA non-uniformity correction and 1/f noise has been mitigated with adjustable noise filtering without mechanical chopping. Finally, improved deposition process and measurement controls have enabled reliable fabrication of high-performance, lead-salt, large format staring arrays on the surface of large silicon ROIC wafers. The imaging array performance has achieved a Noise Equivalent Temperature Difference (NETD) of 30 mK at 2.5 millisecond integration time with an f/1 lens in the 3-5 μm wavelength band using a two-stage TE cooler to operate the FPA at 230 K. Operability of 99.6% is reproducible on 240 × 320 format arrays.

  16. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency substantially undermines any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
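The dynamic-range gap between single-event and multiple-event pixels can be made concrete with a back-of-envelope sketch (the counter depth below is an illustrative assumption, not a figure from the record):

```python
import math

# A single-event pixel saturates at 1 count per frame, while a multiple-event
# pixel with an n-bit in-pixel counter can register up to 2**n - 1 counts.
def max_counts_per_frame(counter_bits):
    """Maximum photon counts one pixel can register in a frame."""
    return 2 ** counter_bits - 1

def dynamic_range_db(max_counts, min_counts=1):
    """Dynamic range in dB between the largest and smallest countable signal."""
    return 20 * math.log10(max_counts / min_counts)

single_event = max_counts_per_frame(1)    # 1 count/frame: on/off only
multi_event = max_counts_per_frame(20)    # hypothetical 20-bit counter

print(single_event, multi_event)          # 1 1048575
print(round(dynamic_range_db(multi_event), 1))   # ~120 dB
```

A 20-bit counter is just one way to reach the "one million or more" events per frame the abstract describes; the point is that counter depth, not the reset interval, then sets the dynamic range.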

  17. Detection in urban scenario using combined airborne imaging sensors

    Science.gov (United States)

    Renhorn, Ingmar; Axelsson, Maria; Benoist, Koen; Bourghys, Dirk; Boucher, Yannick; Briottet, Xavier; De Ceglie, Sergio; Dekker, Rob; Dimmeler, Alwin; Dost, Remco; Friman, Ola; Kåsen, Ingebjørg; Maerker, Jochen; van Persie, Mark; Resta, Salvatore; Schwering, Piet; Shimoni, Michal; Haavardsholm, Trym Vegard

    2012-06-01

    The EDA project "Detection in Urban scenario using Combined Airborne imaging Sensors" (DUCAS) is in progress. The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne imagery for several defense applications in the urban area. The project is taking advantage of the combined resources from 7 contributing nations within the EDA framework. An extensive field trial has been carried out in the city of Zeebrugge at the Belgian coast in June 2011. The Belgian armed forces contributed with platforms, weapons, personnel (soldiers) and logistics for the trial. Ground truth measurements with respect to geometrical characteristics, optical material properties and weather conditions were obtained in addition to hyperspectral, multispectral and high resolution spatial imagery. High spectral/spatial resolution sensor data are used for detection, classification, identification and tracking.

  18. Software defined multi-spectral imaging for Arctic sensor networks

    Science.gov (United States)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop

  19. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    Science.gov (United States)

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-06-27

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.

  20. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    Science.gov (United States)

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-06-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.
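The learn-an-inverse idea these two records describe (characterize the forward model, then train a demultiplexer on it) can be sketched in miniature. The snippet below substitutes a regularized linear least-squares demultiplexer for the paper's non-linear random forest, and the sensitivity curves, smoothness prior, and dimensions are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 16                                  # discrete wavelengths to recover

# Hypothetical forward model: three smooth, RGB-like spectral sensitivities.
wl = np.linspace(0.0, 1.0, n_bands)
C = np.stack([np.exp(-((wl - c) / 0.15) ** 2) for c in (0.2, 0.5, 0.8)])  # 3 x n_bands

# Training set: random smooth spectra pushed through the forward model.
spectra = np.cumsum(rng.random((5000, n_bands)), axis=1) / n_bands
rgb = spectra @ C.T                           # simulated sensor measurements

# "Demultiplexer" learned by ridge regression (stand-in for the random forest).
lam = 1e-3
W = np.linalg.solve(rgb.T @ rgb + lam * np.eye(3), rgb.T @ spectra)

# Demultiplex held-out measurements into per-wavelength reflectances.
test = np.cumsum(rng.random((100, n_bands)), axis=1) / n_bands
recon = (test @ C.T) @ W
print(recon.shape)   # (100, 16)
```

Mapping 3 measurements to 16 bands is underdetermined, so a linear inverse can only approximate the spectra; that gap is exactly why the paper pairs a careful spectral characterization with a non-linear learned model.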

  1. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    Directory of Open Access Journals (Sweden)

    Neale A. W. Dutton

    2016-07-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN), permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.

  2. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.

    Science.gov (United States)

    Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K

    2016-07-20

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
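The connection between peak width in the photon counting histogram and read noise can be illustrated with a toy simulation. In a DSERN pixel the PCH shows peaks at integer electron counts whose width is set by the read noise, so when that noise is well below 0.5 e- the spread of samples about the nearest integer recovers it (simulated data and a simplified estimator, not the paper's full PSW method):

```python
import numpy as np

rng = np.random.default_rng(1)
true_read_noise = 0.15      # electrons RMS: deep sub-electron regime (assumed)

# Simulated pixel outputs: Poisson photon arrivals plus Gaussian read noise.
photons = rng.poisson(2.0, size=200_000)
samples = photons + rng.normal(0.0, true_read_noise, photons.shape)

# Each sample clusters around an integer electron count; the residual about
# the nearest integer directly measures the single-photon peak width.
residuals = samples - np.round(samples)
est_read_noise = residuals.std()
print(round(float(est_read_noise), 3))   # close to 0.15
```

The estimator breaks down as the noise approaches 0.5 e- and neighbouring peaks merge, which is why the paper evaluates peak separation as well as width.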

  3. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    Directory of Open Access Journals (Sweden)

    Hejin Cheong

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing.

  4. Displacement Damage Effects in Pinned Photodiode CMOS Image Sensors

    OpenAIRE

    Virmontois, Cédric; Goiffon, Vincent; Corbière, Franck; Magnan, Pierre; Girard, Sylvain; Bardoux, Alain

    2012-01-01

    This paper investigates the effects of displacement damage in Pinned Photodiode (PPD) CMOS Image Sensors (CIS) using proton and neutron irradiations. The DDD ranges from 12 TeV/g to 1.2 × 10⁶ TeV/g. Particle fluence up to 5 × 10¹⁴ n·cm⁻² is investigated to observe electro-optic degradation in harsh environments. The dark current is also investigated and it would appear that it is possible to use dark current spectroscopy in PPD CIS. The dark current random telegr...

  5. Laser Doppler perfusion imaging with a complementary metal oxide semiconductor image sensor

    NARCIS (Netherlands)

    Serov, Alexander; Steenbergen, Wiendelt; de Mul, F.F.M.

    2002-01-01

    We utilized a complementary metal oxide semiconductor video camera for fast flow imaging with the laser Doppler technique. A single sensor is used for both observation of the area of interest and measurements of the interference signal caused by dynamic light scattering from moving particles inside

  6. A Wafer scale active pixel CMOS image sensor for generic x-ray radiology

    Science.gov (United States)

    Scheffer, Danny

    2007-03-01

    This paper describes a CMOS Active Pixel Image Sensor developed for generic X-ray imaging systems, using standard CMOS technology and an active pixel architecture featuring low noise and high sensitivity. The image sensor has been manufactured in a standard 0.35 μm technology using 8" wafers. The resolution of the sensor is 3360 × 3348 pixels of 40 × 40 μm² each. The diagonal of the sensor measures a little over 190 mm. The paper discusses the floor planning, stitching diagram, and the electro-optical performance of the sensor that has been developed.

  7. Quantum dots in imaging, drug delivery and sensor applications.

    Science.gov (United States)

    Matea, Cristian T; Mocan, Teodora; Tabaran, Flaviu; Pop, Teodora; Mosteanu, Ofelia; Puia, Cosmin; Iancu, Cornel; Mocan, Lucian

    2017-01-01

    Quantum dots (QDs), also known as nanoscale semiconductor crystals, are nanoparticles with unique optical and electronic properties such as bright and intensive fluorescence. Since most conventional organic label dyes do not offer the near-infrared (>650 nm) emission possibility, QDs, with their tunable optical properties, have gained a lot of interest. They possess characteristics such as good chemical and photo-stability, high quantum yield and size-tunable light emission. Different types of QDs can be excited with the same light wavelength, and their narrow emission bands can be detected simultaneously for multiple assays. There is an increasing interest in the development of nano-theranostics platforms for simultaneous sensing, imaging and therapy. QDs have great potential for such applications, with notable results already published in the fields of sensors, drug delivery and biomedical imaging. This review summarizes the latest developments available in literature regarding the use of QDs for medical applications.

  8. Covariance Estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    Science.gov (United States)

    2016-12-15

    Volz, Ryan; Knapp, Mary; Lind, Frank D.; Robey, Frank C. (MIT Lincoln Laboratory, Lexington, MA). Vector sensor imaging presents a challenging problem in covariance estimation when allowing arbitrarily... The task is to estimate the magnitude, polarization, and direction of plane wave sources from a sample covariance matrix of vector measurements.

  9. Novel near-to-mid IR imaging sensors without cooling Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Boston Applied Technologies, Inc (BATi), together with Kent State University (KSU), proposes to develop a high sensitivity infrared (IR) imaging sensor without...

  10. High Time Resolution Photon Counting 3D Imaging Sensors

    Science.gov (United States)

    Siegmund, O.; Ertley, C.; Vallerga, J.

    2016-09-01

    Novel sealed tube microchannel plate (MCP) detectors using next generation cross strip (XS) anode readouts and high performance electronics have been developed to provide photon counting imaging sensors for astronomy and high time resolution 3D remote sensing. 18 mm aperture sealed tubes with MCPs and high efficiency Super-GenII or GaAs photocathodes have been implemented to access the visible/NIR regimes for ground based research, astronomical and space sensing applications. The cross strip anode readouts in combination with PXS-II high speed event processing electronics can process high single photon counting event rates at >5 MHz (80 ns dead-time per event), and time stamp events to better than 25 ps. Furthermore, we are developing a high speed ASIC version of the electronics for low power/low mass spaceflight applications. For a GaAs tube the peak quantum efficiency has degraded from 30% (at 560 - 850 nm) to 25% over 4 years, but for Super-GenII tubes the peak quantum efficiency of 17% (peak at 550 nm) has remained unchanged for over 7 years. The Super-GenII tubes have a uniform spatial resolution, and low MCP gain photon counting operation also permits longer overall sensor lifetimes and high local counting rates. Using the high timing resolution, we have demonstrated 3D object imaging with laser pulse (630 nm, 45 ps jitter, Pilas laser) reflections in single photon counting mode with spatial and depth sensitivity of the order of a few millimeters. A 50 mm Planacon sealed tube was also constructed, using atomic layer deposited microchannel plates which potentially offer better overall sealed tube lifetime, quantum efficiency and gain stability. This tube achieves standard bialkali quantum efficiency levels, is stable, and has been coupled to the PXS-II electronics and used to detect and image fast laser pulse signals.

  11. Evaluation of the AN/SAY-1 Thermal Imaging Sensor System

    National Research Council Canada - National Science Library

    Smith, John G; Middlebrook, Christopher T

    2002-01-01

    The AN/SAY-1 Thermal Imaging Sensor System "TISS" was developed to provide surface ships with a day/night imaging capability to detect low radar reflective, small cross-sectional area targets such as floating mines...

  12. Special Sensor Microwave Imager/Sounder (SSMIS) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  13. Human Posture Recognition Based on Images Captured by the Kinect Sensor

    National Research Council Canada - National Science Library

    Wang, Wen-June; Chang, Jun-Wei; Haung, Shih-Fu; Wang, Rong-Jyue

    2016-01-01

    In this paper we combine several image processing techniques with the depth images captured by a Kinect sensor to successfully recognize the five distinct human postures of sitting, standing, stooping...

  14. Snapshot Spectral and Color Imaging Using a Regular Digital Camera with a Monochromatic Image Sensor

    Science.gov (United States)

    Hauser, J.; Zheludev, V. A.; Golub, M. A.; Averbuch, A.; Nathan, M.; Inbar, O.; Neittaanmäki, P.; Pölönen, I.

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  15. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  16. Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types

    Science.gov (United States)

    Gehrke, S.; Beshah, B. T.

    2016-06-01

    Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins, compensating for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been
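A minimal sketch of the location-dependent correction model this record describes, with gain (contrast) and offset (brightness) defined at a coarse grid of radiometric fix points and bilinearly interpolated in between, might look as follows (an illustrative implementation, not the HxMap one; grid layout and values are invented):

```python
import numpy as np

def apply_radiometric_correction(img, gain_grid, offset_grid):
    """Apply per-pixel contrast (gain) and brightness (offset) corrections
    defined at a coarse grid of radiometric fix points, bilinearly
    interpolated between them."""
    h, w = img.shape
    gy, gx = gain_grid.shape
    # Map each pixel to fractional coordinates in grid units.
    yy = np.linspace(0, gy - 1, h)
    xx = np.linspace(0, gx - 1, w)
    y0 = np.clip(np.floor(yy).astype(int), 0, gy - 2)
    x0 = np.clip(np.floor(xx).astype(int), 0, gx - 2)
    fy = (yy - y0)[:, None]
    fx = (xx - x0)[None, :]

    def bilinear(grid):
        # Gather the four surrounding fix points and blend them.
        g00 = grid[y0][:, x0]
        g01 = grid[y0][:, x0 + 1]
        g10 = grid[y0 + 1][:, x0]
        g11 = grid[y0 + 1][:, x0 + 1]
        return (g00 * (1 - fy) * (1 - fx) + g01 * (1 - fy) * fx
                + g10 * fy * (1 - fx) + g11 * fy * fx)

    return bilinear(gain_grid) * img + bilinear(offset_grid)

img = np.full((100, 120), 50.0)
gain = np.ones((3, 4)); gain[0, 0] = 1.2      # brighten one corner fix point
offset = np.zeros((3, 4))
out = apply_radiometric_correction(img, gain, offset)
print(out[0, 0], out[-1, -1])   # 60.0 at the corrected corner, 50.0 at the far corner
```

In the actual adjustment the gain/offset values at the fix points would be the unknowns solved globally from radiometric tie and control points; here they are simply given.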

  17. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    OpenAIRE

    Chulhee Park; Moon Gi Kang

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB co...

  18. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization.

    Science.gov (United States)

    Hutcheson, Joshua A; Majid, Aneeka A; Powless, Amy J; Muldoon, Timothy J

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.
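The speed-matching step this record describes reduces to a one-line relation: to preserve a 1:1 aspect ratio, the stage must advance exactly one object-plane pixel per line exposure period. A small sketch follows; the pixel pitch and magnification are assumptions for illustration (the abstract only quotes the 150 μs line period):

```python
def stage_speed_um_per_s(pixel_pitch_um, magnification, line_period_s):
    """Translation speed that preserves a 1:1 image aspect ratio:
    one object-plane pixel of travel per line exposure period."""
    object_plane_pixel_um = pixel_pitch_um / magnification
    return object_plane_pixel_um / line_period_s

def aspect_ratio(actual_speed, matched_speed):
    """> 1 means the object travels too far per line (image compressed
    along the scan axis); < 1 means it is stretched."""
    return actual_speed / matched_speed

# Hypothetical optics: 6.5 um sensor pixels behind 20x magnification,
# with the 150 us line period quoted in the abstract.
v = stage_speed_um_per_s(pixel_pitch_um=6.5, magnification=20, line_period_s=150e-6)
print(round(v, 2))   # ~2166.67 um/s for these assumed optics
```

Computing the aspect ratio after acquisition, as the authors do for quality control, amounts to evaluating `aspect_ratio` with the measured translation speed.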

  19. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J., E-mail: tmuldoon@uark.edu [Department of Biomedical Engineering, University of Arkansas, 120 Engineering Hall, Fayetteville, Arkansas 72701 (United States)

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.

  20. Low-light color image enhancement via iterative noise reduction using RGB/NIR sensor

    Science.gov (United States)

    Yamashita, Hiroki; Sugimura, Daisuke; Hamamoto, Takayuki

    2017-07-01

    We propose a method to enhance the color image of a low-light scene using a single sensor that simultaneously captures red, green, blue (RGB), and near-infrared (NIR) information. Typical image enhancement methods require two sensors to simultaneously capture color and NIR images. In contrast, our proposed system utilizes a single sensor but achieves accurate color image restoration. We divide the captured multispectral data into RGB and NIR information based on the spectral sensitivity of our imaging system. Using the NIR information for guidance, we reconstruct the corresponding color image based on a joint demosaicking and denoising technique. Subsequently, we restore the estimated color image iteratively using the constructed guidance image. Our experiments demonstrate the effectiveness of our method using synthetic data, and real raw data captured by our imaging system.
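The division of the captured multispectral data into RGB and NIR components can be sketched as a per-pixel linear unmixing given the sensor's spectral sensitivities. The sensitivity matrix below is invented for illustration and is far simpler than a real spectral characterization:

```python
import numpy as np

# Assumed per-channel model: each raw visible channel measures its own band
# plus a known fraction of NIR leakage; the NIR pixel sees NIR only.
# raw = A @ [R, G, B, NIR]
A = np.array([
    [1.0, 0.0, 0.0, 0.6],   # R pixel: red + NIR leakage (fractions assumed)
    [0.0, 1.0, 0.0, 0.5],   # G pixel
    [0.0, 0.0, 1.0, 0.7],   # B pixel
    [0.0, 0.0, 0.0, 1.0],   # NIR pixel
])

def split_rgb_nir(raw):
    """Recover [R, G, B, NIR] from the four raw channel values of one pixel."""
    return np.linalg.solve(A, raw)

true = np.array([0.30, 0.40, 0.20, 0.50])
raw = A @ true
print(split_rgb_nir(raw))   # recovers approximately [0.3, 0.4, 0.2, 0.5]
```

In the paper this separation is only the first stage; the recovered NIR component then guides the joint demosaicking, denoising, and iterative restoration of the color image.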

  1. A new type of remote sensors which allow directly forming certain statistical estimates of images

    Science.gov (United States)

    Podlaskin, Boris; Guk, Elena; Karpenko, Andrey

    2010-10-01

    A new approach to the problems of statistical and structural pattern recognition, signal processing and image analysis techniques has been considered. These problems are extremely important for tasks being solved by airborne and spaceborne remote sensing systems. The development of new remote sensors for image and signal processing is inherently connected with the possibility of statistical processing of images. Fundamentally new optoelectronic sensors, "Multiscan", have been suggested in the present paper. Such sensors make it possible to directly form certain statistical estimates which describe the different types of images completely enough. The sensors under discussion perform Lebesgue-Stieltjes signal integration rather than Cauchy-Riemann integration. This permits the creation of integral functionals for determining statistical features of images. The use of the integral functionals for image processing provides good agreement of the obtained statistical estimates with the required image information features. The Multiscan remote sensors make it possible to create a set of integral moments of an input image right up to high-order integral moments, to form a quantile representation of an input image, which provides a count-number-limited texture, and to form a median, which provides localisation of a low-contrast horizon line in fog, localisation of a water flow boundary, etc. This work presents both the description of the design concept of the new remote sensor and the mathematical apparatus providing the possibility to create input image statistical features and integral functionals.

  2. Development of Thermal Infrared Sensor to Supplement Operational Land Imager

    Science.gov (United States)

    Shu, Peter; Waczynski, Augustyn; Kan, Emily; Wen, Yiting; Rosenberry, Robert

    2012-01-01

    The thermal infrared sensor (TIRS) is a quantum well infrared photodetector (QWIP)-based instrument intended to supplement the Operational Land Imager (OLI) for the Landsat Data Continuity Mission (LDCM). The TIRS instrument is a far-infrared imager operating in the pushbroom mode with two IR channels: 10.8 and 12 μm. The focal plane will contain three 640 × 512 QWIP arrays mounted onto a silicon substrate. The readout integrated circuit (ROIC) addresses each pixel on the QWIP arrays and reads out the pixel value (signal). The ROIC is controlled by the focal plane electronics (FPE) by means of clock signals and bias voltage values. How the FPE is designed to control and interact with the TIRS focal plane assembly (FPA) is the basis for this work; the technology developed here is the FPE for that assembly. The FPE must interact with the FPA to command and control it, extract analog signals from the FPA, and then convert the analog signals to digital format and send them via a serial link (USB) to a computer. The FPE accomplishes the described functions by converting electrical power from generic power supplies to the required bias power that is needed by the FPA. The FPE also generates digital clocking signals and shifts the typical transistor-transistor logic (TTL) levels to the ±5 V required by the FPA. The FPE also uses an application-specific integrated circuit (ASIC) named System Image, Digitizing, Enhancing, Controlling, And Retrieving (SIDECAR) from Teledyne Corp. to generate the clocking patterns commanded by the user. The uniqueness of the FPE for TIRS lies in the fact that the TIRS FPA has three QWIP detector arrays, and all three detector arrays must be in synchronization while in operation. This is to avoid data skewing while observing Earth from space. The observing scenario may be customized by uploading new control software to the SIDECAR.

  3. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    Science.gov (United States)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

    Security applications of sensors in a networked environment place strong demands on sensor authentication and secure data transmission, owing to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication and non-repudiation. This paper presents the unique sensor development by AIM, the so-called SecVGA, which is a high-performance, monochrome (B/W) CMOS active pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800 × 600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development in comparison to standard imaging sensors is the on-chip cryptographic engine which provides sensor authentication, based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the subsequent video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate and various operating modes including the authentication procedure.
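The challenge/response handshake and per-frame checksum described above can be sketched in a few lines. The key derivation, message layout, and the choice of HMAC-SHA256 below are illustrative assumptions, not the SecVGA's actual on-chip algorithm.

```python
import hmac, hashlib, os

SENSOR_SECRET = b"per-device-key-from-eeprom"  # stands in for the unique on-chip identifier

def sensor_response(challenge: bytes) -> tuple[bytes, bytes]:
    """Sensor side: answer the host's challenge and derive a session key."""
    response = hmac.new(SENSOR_SECRET, challenge, hashlib.sha256).digest()
    session_key = hmac.new(SENSOR_SECRET, b"session" + challenge,
                           hashlib.sha256).digest()
    return response, session_key

def authenticate_frame(session_key: bytes, frame: bytes) -> bytes:
    """Append a keyed checksum computed over the complete image frame."""
    return hmac.new(session_key, frame, hashlib.sha256).digest()

# Host side: issue a challenge, verify the response, then check each frame.
challenge = os.urandom(16)
response, session_key = sensor_response(challenge)
expected = hmac.new(SENSOR_SECRET, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)

frame = bytes(800 * 600)              # dummy 800 x 600 monochrome frame
tag = authenticate_frame(session_key, frame)
```

A receiver that shares the session key recomputes the checksum and rejects any frame whose tag does not match, which is the property the protocol relies on.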

  4. Low-cost compact thermal imaging sensors for body temperature measurement

    Science.gov (United States)

    Han, Myung-Soo; Han, Seok Man; Kim, Hyo Jin; Shin, Jae Chul; Ahn, Mi Sook; Kim, Hyung Won; Han, Yong Hee

    2013-06-01

    This paper presents a 32 × 32 microbolometer thermal imaging sensor for human body temperature measurement. Wafer-level vacuum packaging technology yields a low-cost, compact imaging sensor chip. The microbolometer uses a V-W-O film as the sensing material, and the ROIC was designed in a 0.35-µm UMC CMOS process. Thermal images of a human face and a hand taken with an f/1 lens demonstrate the sensor's potential for commercial body temperature measurement.

  5. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    Directory of Open Access Journals (Sweden)

    Octavi Fors

    2010-03-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor or to increasing the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.

  6. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    Science.gov (United States)

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor or to increasing the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
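As a rough illustration of why deconvolution sharpens faint point sources, the sketch below runs Richardson-Lucy iteration (a standard deconvolution algorithm, used here as a stand-in for the authors' method) on a synthetic 1-D star profile blurred by a Gaussian PSF; all values are invented.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=30):
    """Iterative Richardson-Lucy deconvolution, 1-D sketch."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Simulate a faint star: a point source blurred by a Gaussian PSF.
truth = np.zeros(64); truth[32] = 100.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy_1d(observed, psf, iterations=50)
```

The restored profile concentrates the star's flux back toward a single pixel, raising its peak above the noise floor, which is the mechanism behind the detection gain the paper reports.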

  7. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    Science.gov (United States)

    Marijan, Malisa; Demirkol, Ilker; Maricic, Danijel; Sharma, Gaurav; Ignjatovic, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic to the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based on a pixel-level sigma-delta (ΣΔ) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and the spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under a power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.
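The allocation problem above can be illustrated with a toy model. The distortion function below (distortion inversely proportional to the power given to each module, with made-up coefficients) is an assumption for demonstration, not the paper's measured P-R-D characteristics; a simple grid search then finds the split that beats an equal division of the budget.

```python
import itertools
import numpy as np

# Hypothetical per-module distortion coefficients (sensing, compression, transmission).
A_SENSE, A_COMP, A_TX = 1.0, 4.0, 9.0

def distortion(p_sense, p_comp, p_tx):
    """Assumed model: each module's distortion contribution falls with its power."""
    return A_SENSE / p_sense + A_COMP / p_comp + A_TX / p_tx

def allocate(budget, step=0.01):
    """Grid-search the power split that minimizes reconstructed-video distortion."""
    best_alloc, best_d = None, float("inf")
    grid = np.arange(step, budget, step)
    for ps, pc in itertools.product(grid, grid):
        pt = budget - ps - pc           # remainder goes to transmission
        if pt <= 0:
            continue
        d = distortion(ps, pc, pt)
        if d < best_d:
            best_alloc, best_d = (ps, pc, pt), d
    return best_alloc, best_d

alloc, d_opt = allocate(1.0)
```

Under this model the optimum gives more power to the costlier transmission module, mirroring the paper's point that a joint optimization beats treating the modules independently.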

  8. A wireless sensor network for vineyard monitoring that uses image processing.

    Science.gov (United States)

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. Placing a sensor on each leaf of every vineyard is obviously not feasible in terms of cost and deployment, so new methods are needed to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images of the field and internally uses image processing techniques to detect any unusual status in the leaves. Such a symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the farmer of the problem. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how each node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptom image database and a neural network could be added in order to learn from experience and provide an accurate diagnosis.
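A minimal sketch of the on-node screening idea, assuming an RGB frame and a simple "too many non-green leaf pixels" rule; the paper's actual processing pipeline is more elaborate, and the threshold here is invented.

```python
import numpy as np

def leaf_alert(rgb, bad_fraction=0.2):
    """Return True if too many pixels are no longer dominantly green."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    non_green = (g < r) | (g < b)        # green is not the strongest channel
    return float(non_green.mean()) > bad_fraction

# Synthetic frames: a uniformly green canopy vs. one with a brownish patch.
healthy = np.zeros((32, 32, 3), dtype=np.uint8)
healthy[..., 1] = 120
diseased = healthy.copy()
diseased[:16, :, 0] = 200                # red-dominant upper half
```

Only when the rule fires would the node send its alert message toward the sink, keeping radio traffic low, which matches the network-traffic argument in the paper.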

  9. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    Directory of Open Access Journals (Sweden)

    Kiyotaka Sasagawa

    2010-12-01

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential, or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of their electric measurement and imaging functionalities.

  10. The influence of sensor and flight parameters on texture in radar images

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.

    1984-01-01

    Texture is known to be important in the analysis of radar images for geologic applications. It has previously been shown that texture features derived from the grey level co-occurrence matrix (GLCM) can be used to separate large scale texture in radar images. Here the influence of sensor parameters, specifically the spatial and radiometric resolution and flight parameters, i.e., the orientation of the surface structure relative to the sensor, on the ability to classify texture based on the GLCM features is investigated. It was found that changing these sensor and flight parameters greatly affects the usefulness of the GLCM for classifying texture on radar images.
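The GLCM features the study classifies with can be illustrated compactly. The sketch below builds a normalized grey level co-occurrence matrix for one pixel offset and derives the classic contrast feature; it is a simplified illustration, not the study's full feature set.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized co-occurrence counts of grey-level pairs at offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """GLCM contrast: co-occurrences weighted by squared grey-level difference."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

flat = np.full((16, 16), 3, dtype=int)            # textureless region
stripes = np.tile(np.array([0, 7]), (16, 8))      # strong horizontal texture
```

A flat region yields zero contrast while the striped one scores high, which is exactly the kind of separation that degrades when spatial or radiometric resolution changes, as the abstract notes.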

  11. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    Science.gov (United States)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. 
It is concluded that the possibilities offered by custom smart image sensors will influence the design

  12. Multi-wavelength laser sensor surface for high frame rate imaging refractometry (Conference Presentation)

    Science.gov (United States)

    Kristensen, Anders; Vannahme, Christoph; Sørensen, Kristian T.; Dufva, Martin

    2016-09-01

    A highly sensitive distributed feedback (DFB) dye laser sensor for high frame rate imaging refractometry without moving parts is presented. The laser sensor surface comprises areas of different grating periods. Imaging in two dimensions of space is enabled by analyzing laser light from all areas in parallel with an imaging spectrometer. Refractive index imaging of a 2 mm by 2 mm surface is demonstrated with a spatial resolution of 10 μm, a detection limit of 8 × 10^-6 RIU, and a frame rate of 12 Hz, limited by the CCD camera. Label-free imaging of dissolution dynamics is demonstrated.

  13. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    Science.gov (United States)

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.

  14. Optimal Broadband Noise Matching to Inductive Sensors: Application to Magnetic Particle Imaging.

    Science.gov (United States)

    Zheng, Bo; Goodwill, Patrick W; Dixit, Neerav; Xiao, Di; Zhang, Wencong; Gunel, Beliz; Lu, Kuan; Scott, Greig C; Conolly, Steven M

    2017-10-01

    Inductive sensor-based measurement techniques are useful for a wide range of biomedical applications. However, optimizing the noise performance of these sensors is challenging at broadband frequencies, owing to the frequency-dependent reactance of the sensor. In this work, we describe the fundamental limits of noise performance and bandwidth for these sensors in combination with a low-noise amplifier. We also present three equivalent methods of noise matching to inductive sensors using transformer-like network topologies. Finally, we apply these techniques to improve the noise performance in magnetic particle imaging, a new molecular imaging modality with excellent detection sensitivity. Using a custom noise-matched amplifier, we experimentally demonstrate an 11-fold improvement in noise performance in a small animal magnetic particle imaging scanner.
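The matching idea can be made concrete with the classical textbook relation: an amplifier with voltage noise e_n and current noise i_n has an optimum source impedance R_opt = e_n / i_n, and a 1:n transformer scales the sensor impedance by n². The values below are illustrative, not those of the paper's MPI receive chain, and this back-of-envelope form ignores the frequency-dependent reactance the paper actually has to handle.

```python
import math

e_n = 1.0e-9     # amplifier voltage noise, V/sqrt(Hz) (assumed)
i_n = 1.0e-12    # amplifier current noise, A/sqrt(Hz) (assumed)
r_sensor = 10.0  # inductive sensor resistance at the band of interest, ohms

r_opt = e_n / i_n                          # optimum source resistance, ~1 kilohm
turns_ratio = math.sqrt(r_opt / r_sensor)  # 1:n transformer that noise-matches
```

With these numbers a 1:10 transformer makes the 10-ohm coil look like the amplifier's 1-kilohm noise optimum, which is the single-frequency version of the broadband matching networks the paper develops.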

  15. Technical guidance for the development of a solid state image sensor for human low vision image warping

    Science.gov (United States)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower cost solution. By using an architecture with row and column decoders one has the added flexibility of addressing the pixels at random, or read out only part of the image.

  16. Low-Power Radio and Image-Sensor Package Project

    Data.gov (United States)

    National Aeronautics and Space Administration — One of the most effective sensor modalities for situational awareness is imagery. While typically high bandwidth and relegated to analog wireless communications,...

  17. Nanoimprinted distributed feedback dye laser sensor for real-time imaging of small molecule diffusion

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2014-01-01

    distributed feedback (DFB) dye laser sensor for real-time label-free imaging without any moving parts enabling a frame rate of 12 Hz is presented. The presence of molecules on the laser surface results in a wavelength shift which is used as sensor signal. The unique DFB laser structure comprises several areas...... molecules in water....

  18. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. A feature which has better discriminating ability on images derived from a certain sensor may not be suited to segmenting images derived from other sensors, which degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation features, which refers to a feature's ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, comprising a first-level feature evaluation based on segmentation error rate and a second-level feature evaluation based on a decision tree. The proposed method is performed on a number of fingerprint databases obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results achieve better segmentation accuracies on images originating from different sensors.
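The first evaluation level can be sketched as scoring one candidate feature by the lowest block-classification error rate any threshold achieves against ground truth. The feature values below are synthetic stand-ins for quantities like block variance or coherence; the paper's second, decision-tree level is not reproduced here.

```python
import numpy as np

def best_error_rate(feature, labels):
    """Lowest error over all thresholds; labels: 1 = foreground, 0 = background."""
    best = 1.0
    for t in np.unique(feature):
        pred = (feature >= t).astype(int)
        err = float(np.mean(pred != labels))
        best = min(best, err, 1.0 - err)   # allow either threshold polarity
    return best

rng = np.random.default_rng(0)
fg = rng.normal(5.0, 1.0, 200)             # feature on foreground blocks
bg = rng.normal(0.0, 1.0, 200)             # feature on background blocks
feature = np.concatenate([fg, bg])
labels = np.concatenate([np.ones(200, int), np.zeros(200, int)])
score = best_error_rate(feature, labels)
```

Running the same scoring on databases from different sensors would expose the interoperability gap the paper studies: a feature with a low error rate on one sensor's images may score much worse on another's.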

  19. BOREAS RSS-02 Level-1b ASAS Image Data: At-sensor Radiance in BSQ Format

    Data.gov (United States)

    National Aeronautics and Space Administration — The BOREAS RSS-02 team used the ASAS instrument, mounted on the NASA C-130 aircraft, to create at-sensor radiance images of various sites as a function of spectral...

  20. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to detect the...

  1. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds dataset was collected by the LIS instrument on the ISS used to detect the...

  2. Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds dataset was collected by the LIS instrument on the ISS...

  3. A High-Speed CMOS Image Sensor with Global Electronic Shutter Pixels Using Pinned Diodes

    Science.gov (United States)

    Yasutomi, Keita; Tamura, Toshihiro; Furuta, Masanori; Itoh, Shinya; Kawahito, Shoji

    This paper describes a high-speed CMOS image sensor with a new type of global electronic shutter pixel. A global electronic shutter is necessary for imaging fast-moving objects without motion blur or distortion. The proposed pixel has two potential wells with a pinned diode structure for two-stage charge transfer, which enables global electronic shuttering and reset noise canceling. A prototype high-speed image sensor fabricated in a 0.18-µm standard CMOS image sensor process consists of the proposed pixel array, 12-bit column-parallel cyclic ADC arrays and 192-channel digital outputs. The sensor achieves good linearity at low light intensity, demonstrating complete charge transfer between the two pinned diodes. The input-referred noise of the proposed pixel is measured to be 6.3 e-.

  4. GPM GROUND VALIDATION SPECIAL SENSOR MICROWAVE IMAGER/SOUNDER (SSMI/S) LPVEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Special Sensor Microwave Imager/Sounder (SSMI/S) LPVEx dataset contains brightness temperature data processed from the NOAA CLASS QC...

  5. Extended Special Sensor Microwave Imager (SSM/I) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  6. Process for the Development of Image Quality Metrics for Underwater Electro-Optic Sensors

    National Research Council Canada - National Science Library

    Taylor, Jr., James S; Cordes, Brett; Osofsky, Sam; Domnich, Ann

    2002-01-01

    .... These sensors produce two and three-dimensional images that will be used by operators to make the all-important decision regarding use of neutralization systems against sonar contacts classified as mine-like...

  7. Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Abstract:The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are instruments onboard the Landsat 8 satellite, which was launched in February of...

  8. Gimbal Integration to Small Format, Airborne, MWIR and LWIR Imaging Sensors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is for enhanced sensor performance and high resolution imaging for Long Wave InfraRed (LWIR) and Medium Wave IR (MWIR) camera systems used in...

  9. Hyperspectral Imaging Sensor with Real-Time Processor Performing Principle Components Analyses for Gas Detection

    National Research Council Canada - National Science Library

    Hinnrichs, Michele

    2000-01-01

    .... With support from the US Air Force and Navy, Pacific Advanced Technology has developed a small man portable hyperspectral imaging sensor with an embedded DSP processor for real time processing...

  10. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sensor Data Records (SDRs), or Level 1b data, from the Visible Infrared Imaging Radiometer Suite (VIIRS) are the calibrated and geolocated radiance and reflectance...

  11. Researchers develop CCD image sensor with 20ns per row parallel readout time

    CERN Multimedia

    Bush, S

    2004-01-01

    "Scientists at the Rutherford Appleton Laboratory (RAL) in Oxfordshire have developed what they claim is the fastest CCD (charge-coupled device) image sensor, with a readout time which is 20ns per row" (1/2 page)

  12. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembly of cell groups on the CMOS sensor surface allows large-field (6.66 mm × 5.32 mm, the entire active area of the CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells over a large field area based on color imaging.
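The white/blue discrimination reduces to a colour-channel criterion: trypan blue-stained (non-viable) cells appear blue-dominant, unstained cells roughly neutral. The sketch below encodes that idea with an invented blue-to-red ratio threshold and invented pixel averages; it is not the paper's calibrated classifier.

```python
def classify_cell(rgb_mean, ratio=1.3):
    """rgb_mean: average (R, G, B) over a cell's footprint on the sensor."""
    r, _, b = rgb_mean
    return "stained" if b > ratio * r else "unstained"

white_cell = (200.0, 200.0, 205.0)   # roughly neutral -> viable, unstained
blue_cell = (60.0, 90.0, 180.0)      # blue-dominant -> trypan blue-stained
```

Applied per detected cell footprint over the full sensor area, a rule like this gives the one-shot viability readout the abstract describes.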

  13. A counting pixel chip and sensor system for X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, P.; Hausmann, J.; Helmich, A.; Lindner, M.; Wermes, N. [Universitaet Bonn (Germany). Physikalisches Institut; Blanquart, L. [CNRS, Marseille (France). Centre de Physique des Particules

    1999-08-01

    Results obtained with a (photon) counting pixel imaging chip connected to a silicon pixel sensor using the bump and flip-chip technology are presented. The performance of the chip electronics is characterized by an average equivalent noise charge (ENC) below 135 e and a threshold spread of less than 35 e after individual threshold adjust, both measured with a sensor attached. First results on the imaging performance are also reported.

  14. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    Science.gov (United States)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites gives the image sensor very high sensitivity but also makes it prone to noise effects such as hot-pixels. As in earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
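The hot-pixel procedure above can be sketched directly: pixels in a dark frame brighter than the threshold (9 DN in the study) are treated as hot and replaced by the local median, with the 7 × 7 window following the reported optimum. The dark-frame values below are synthetic.

```python
import numpy as np

def remove_hot_pixels(dark, threshold=9, win=7):
    """Replace dark-frame pixels above `threshold` DN with the local median."""
    pad = win // 2
    padded = np.pad(dark, pad, mode="reflect")
    out = dark.copy()
    hot = np.argwhere(dark > threshold)
    for y, x in hot:
        # window in `padded` centered on the original pixel (y, x)
        out[y, x] = np.median(padded[y:y + win, x:x + win])
    return out, len(hot)

rng = np.random.default_rng(1)
dark = rng.integers(0, 4, size=(32, 32))     # baseline dark noise, 0-3 DN
dark[5, 5] = 250                             # injected hot pixel
dark[20, 9] = 80                             # injected hot pixel
cleaned, n_hot = remove_hot_pixels(dark)
```

Because only the flagged pixels are replaced, the filter suppresses hot-pixel outliers without smoothing the rest of the frame, which is why the study's threshold-then-filter order matters.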

  15. Particle detection and classification using commercial off the shelf CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Martín [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Lipovetzky, Jose, E-mail: lipo@cab.cnea.gov.ar [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Sofo Haro, Miguel; Sidelnik, Iván; Blostein, Juan Jerónimo; Alcalde Bessia, Fabricio; Berisso, Mariano Gómez [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina)

    2016-08-11

    In this paper we analyse the response of two different commercial off-the-shelf CMOS image sensors as particle detectors. The sensors were irradiated with X-ray photons, gamma photons, beta particles and alpha particles from diverse sources. The amount of charge produced by different particles and the size of the spot registered on the sensor are compared and analysed by an algorithm to classify them. For a known incident energy spectrum, the employed sensors provide a dose resolution below one microgray, showing their potential for radiation protection, area monitoring and medical applications.
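A rule-of-thumb sketch of classifying detected events by deposited charge and cluster (spot) size; the boundaries below are invented for illustration, not the paper's fitted values.

```python
def classify_event(charge_ke, spot_px):
    """charge_ke: collected charge in kilo-electrons; spot_px: cluster size in pixels."""
    if charge_ke > 100 and spot_px > 20:
        return "alpha"     # heavily ionizing: large, often saturated spot
    if spot_px > 6:
        return "beta"      # electron track spanning several pixels
    return "photon"        # X/gamma: small, localized charge deposit

events = [(500, 40), (15, 10), (5, 2)]
labels = [classify_event(q, s) for q, s in events]
```

Summing the per-event charge over an exposure, with a calibration for the known spectrum, is what turns such a classifier into the sub-microgray dosimeter the abstract describes.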

  16. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    Science.gov (United States)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

    This paper introduces new high-resolution amorphous silicon (a-Si) image sensors specifically configured for demonstrating film-quality medical X-ray imaging capabilities. The device utilizes an X-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 µm × 127 µm with an approximately page-size imaging area of 244 mm × 195 mm and an array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm × 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely the largest image sensor array and highest-pixel-count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light-sensitive area of 57% and optimized quantum efficiency for X-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have managed to reduce extraneous leakage currents on these devices to a few fA per pixel, which allows a very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large arrays. At the electronics level, we have adopted a new generation of low-noise, charge-sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to demonstrate a large dynamic range (over 4,000:1) for medical imaging applications. 
Through a combination of low data lines capacitance

  17. Optimal Magnetic Sensor Vests for Cardiac Source Imaging

    Directory of Open Access Journals (Sweden)

    Stephan Lau

    2016-05-01

    Full Text Available Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize the positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangentially to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics.
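The lead-field condition number used above as the robustness criterion can be illustrated with a small sketch; the 32 × 13 shape matches the abstract's sensor and dipole counts, but the random lead field and the degraded comparison array are illustrative assumptions, not the paper's torso model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in lead field: 32 sensor measurements x 13 (scalar) cardiac sources.
# A real lead field follows from the torso/sensor geometry; a random matrix
# is used here purely to illustrate the metric.
L = rng.standard_normal((32, 13))

# Condition number of the lead field matrix: ratio of largest to smallest
# singular value. Lower values indicate a more robust source reconstruction.
cond = np.linalg.cond(L)

# A badly designed array -- e.g. all sensors clustered so their rows are
# nearly identical -- is close to rank one and conditions poorly.
L_bad = np.tile(L[0], (32, 1)) + 1e-6 * rng.standard_normal((32, 13))
cond_bad = np.linalg.cond(L_bad)

print(cond, cond_bad)   # the clustered array is orders of magnitude worse
```

Minimizing this quantity over sensor positions and orientations is what drives the spread of sensors over the whole torso in the optimized setups.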

  18. A Solar Position Sensor Based on Image Vision.

    Science.gov (United States)

    Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Acuña, Alexis; Rosales, Pedro; Suastegui, José

    2017-07-29

    Solar collector technologies perform best when the Sun's beam direction is normal to the capturing surface; to maintain this despite the Sun's relative movement, solar tracking systems are used. Rules and standards therefore specify a minimum accuracy for the tracking systems used in evaluating solar collectors, and achieving that accuracy is not easy. This paper presents the design, construction and characterization of a sensor based on a vision system that measures the relative azimuth and elevation error with respect to the solar position of interest; with these characteristics, the sensor can serve as a reference in control systems and their evaluation. The proposed sensor is built around a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, and it obtains the angle of incidence of the Sun's rays as well as the tilt and position of the sensor. Characterization showed that a focus error or Sun position can be measured with an accuracy of 0.0426° and an uncertainty of 0.986%, and that the design can be modified to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the vision-based solar tracking sensor meets the Sun-detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool for photovoltaic installations and solar collectors.
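The solar-position geometry such a sensor must track can be sketched with simplified declination and hour-angle formulas. This is an illustrative approximation, not the paper's vision-based algorithm; it assumes local solar time and ignores atmospheric refraction.

```python
import math

def sun_elevation_deg(day_of_year, solar_hour, latitude_deg):
    """Approximate solar elevation from a simplified declination /
    hour-angle model (assumes solar time, no refraction correction)."""
    # Approximate solar declination for the given day of year (degrees).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(v) for v in (latitude_deg, decl, hour_angle))
    elev = math.asin(math.sin(lat) * math.sin(d) +
                     math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(elev)

# Near the March equinox at the equator, the noon Sun is close to zenith.
print(sun_elevation_deg(80, 12.0, 0.0))
```

A tracking error sensor of the kind described compares the direction implied by such a model (or by its clock and geolocation) against the Sun's position measured in the image.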

  19. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    Directory of Open Access Journals (Sweden)

    Victor Lawrence

    2012-07-01

    Full Text Available Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. We therefore propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. The framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. For the IR image we adopt the theoretical point spread function (PSF) proposed by Hardie et al., which combines the modulation transfer function (MTF) of a uniform detector array with the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for this PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than superimposed ones. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available.
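Steps (2) and (4) of the framework, edge detection on the EO image and blending onto the registered IR image, can be sketched as follows. The finite-difference edge detector, the blending weight and the random stand-in frames are assumptions for illustration; the inverse-filter transformation and registration steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
eo = rng.random((64, 64))          # stand-in high-resolution EO frame
ir = rng.random((64, 64))          # stand-in, already registered IR frame

# Edge map of the EO image from finite-difference gradients (a simple
# stand-in for the edge detector a real pipeline would use).
gy, gx = np.gradient(eo)
edges = np.hypot(gx, gy)
edges /= edges.max()               # normalise edge strength to [0, 1]

# Blend the EO edges onto the IR image; alpha controls edge visibility.
alpha = 0.3
blended = (1.0 - alpha) * ir + alpha * edges

print(blended.shape)
```

Superimposing would instead add the edge map on top of the IR values directly; the paper's simulations found the weighted blend gives the better-quality result.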

  20. An improved Ras sensor for highly sensitive and quantitative FRET-FLIM imaging.

    Directory of Open Access Journals (Sweden)

    Ana F Oliveira

    Full Text Available Ras is a signaling protein involved in a variety of cellular processes. Hence, studying Ras signaling with high spatiotemporal resolution is crucial to understanding the roles of Ras in many important cellular functions. Previously, fluorescence lifetime imaging (FLIM) of fluorescence resonance energy transfer (FRET)-based Ras activity sensors, FRas and FRas-F, has been demonstrated to be useful for measuring the spatiotemporal dynamics of Ras signaling in subcellular micro-compartments. However, the predominantly nuclear localization of the sensors' acceptor has limited their sensitivity. Here, we have overcome this limitation and developed two variants of the existing FRas sensor with different affinities: FRas2-F (Kd ∼ 1.7 µM) and FRas2-M (Kd ∼ 0.5 µM). We demonstrate that, under 2-photon fluorescence lifetime imaging microscopy, FRas2 sensors provide higher sensitivity than previous sensors in 293T cells and neurons.

  1. Performance analysis of gamma-ray-irradiated color complementary metal oxide semiconductor digital image sensors

    CERN Document Server

    Kang, A G; Liu, J Q; You, Z

    2003-01-01

    The performance parameters of dark output images captured from color complementary metal oxide semiconductor (CMOS) digital image sensors before and after gamma-ray irradiation were studied. The changes in the red, green and blue color parameters of dark output images with different gamma-ray doses and exposure times were analyzed with our computer software. The response of the blue channel was significantly affected even at a lower dose. The dark current density of the sensors increases by three orders of magnitude at >60 krad compared to that of unirradiated sensors. The maximum and minimum analog output voltages both increase with irradiation dose and are almost the same at >120 krad. The signal-to-noise ratio is 48 dB before irradiation and 35 dB after irradiation to 180 krad. The radiation-tolerance threshold for these sensors is about 100 krad. A preliminary explanation for these changes and for the degradation of the device performance parameters is presented. (author)

  2. Time comparison in image processing: APS sensors versus an artificial retina based vision system

    Science.gov (United States)

    Elouardi, A.; Bouaziz, S.; Dupret, A.; Lacassagne, L.; Klein, J. O.; Reynaud, R.

    2007-09-01

    To address the computational complexity of computer vision algorithms, one solution is to perform some low-level image processing on the sensor focal plane, turning the device into a smart sensor called a retina. This concept makes vision systems more compact and increases performance by reducing data-flow exchanges with external circuits. This paper presents a comparison between two different vision system architectures. The first involves a smart sensor including analogue processors allowing on-chip image processing; an external microprocessor is used to control the on-chip dataflow and the integrated operators. The second implements a logarithmic CMOS/APS sensor interfaced to the same microprocessor, in which all computations are carried out. We have designed two vision systems as proof of concept. The comparison concerns image processing time.

  3. Zero-Transition Serial Encoding for Image Sensors

    OpenAIRE

    Jahier Pagliari, Daniele; Macii, Enrico; Poncino, Massimo

    2017-01-01

    Off-chip serial buses are the most common interfaces between sensors and processing elements in embedded systems. Due to their length, these connections dissipate a large amount of energy, contributing significantly to the total consumption of the system. The error-tolerant nature of many sensor applications can be leveraged to reduce this energy contribution by means of an approximate serial data encoding. In this paper, we propose one such encoding, called Serial T0, which is particularly effective...

  4. Biomedical Applications of the Information-efficient Spectral Imaging Sensor (ISIS)

    Energy Technology Data Exchange (ETDEWEB)

    Gentry, S.M.; Levenson, R.

    1999-01-21

    The Information-efficient Spectral Imaging Sensor (ISIS) approach to spectral imaging seeks to bridge the gap between tuned multispectral and fixed hyperspectral imaging sensors. By allowing the definition of completely general spectral filter functions, truly optimal measurements can be made for a given task. These optimal measurements significantly improve signal-to-noise ratio (SNR) and speed, and minimize data volume and data rate, while preserving classification accuracy. The following paper investigates the application of the ISIS sensing approach in two sample biomedical applications: prostate and colon cancer screening. It is shown that in these applications, two to three optimal measurements are sufficient to capture the majority of classification information for critical sample constituents. In the prostate cancer example, the optimal measurements allow an 8% relative improvement in classification accuracy of critical cell constituents over a red, green, blue (RGB) sensor. In the colon cancer example, use of optimal measurements boosts the classification accuracy of critical cell constituents by 28% relative to the RGB sensor. In both cases, optimal measurements match the performance achieved by the entire hyperspectral data set. The paper concludes that an ISIS-style spectral imager can acquire these optimal spectral images directly, allowing improved classification accuracy over an RGB sensor. Compared to a hyperspectral sensor, the ISIS approach can achieve similar classification accuracy using a significantly lower number of spectral samples, thus minimizing overall sample classification time and cost.
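The claim that a handful of general linear spectral measurements can capture most of the information in a hyperspectral cube can be illustrated with a principal-component sketch; the synthetic three-constituent spectra below are an assumption, and PCA stands in for the task-specific optimal filter functions ISIS would actually use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in hyperspectral data: 200 samples x 100 bands, generated from
# three underlying spectral constituents (illustrative, not ISIS data).
bands = np.linspace(0.0, 1.0, 100)
constituents = np.stack([np.exp(-((bands - c) / 0.1) ** 2)
                         for c in (0.3, 0.5, 0.7)])
abundances = rng.random((200, 3))
spectra = abundances @ constituents + 0.01 * rng.standard_normal((200, 100))

# "Optimal measurements" here are the top principal components: a few
# general linear filter functions that capture most of the variance.
centered = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)

print(explained[:3])   # two to three measurements capture nearly everything
```

An ISIS-style imager would apply such filter functions directly in hardware, acquiring two or three projected images instead of the full 100-band cube.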

  5. Toward One Giga Frames per Second — Evolution of in Situ Storage Image Sensors

    Directory of Open Access Journals (Sweden)

    Edoardo Charbon

    2013-04-01

    Full Text Available The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS, past and near-future, is reviewed and forecast. Because the storage area must be covered with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons into the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed and named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, ISAS, is proposed. The ISAS adds a new function, in-pixel signal accumulation, to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed, in which the photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations; the highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed; a fine TSV process is the key technology to realize the structure.

  6. A three-phase time-correlation image sensor using pinned photodiode active pixels

    Science.gov (United States)

    Han, Sangman; Iwahori, Tomohiro; Sawada, Tomonari; Kawahito, Shoji; Ando, Shigeru

    2010-01-01

    A time-correlation (TC) image sensor is a device that produces 3-phase time-correlated signals between the incident light intensity and three reference signals. A conventional implementation of the TC image sensor using a standard CMOS technology works at low frequency and with low sensitivity. In order to achieve a higher modulation frequency and high sensitivity, a TC image sensor with a dual potential structure using a pinned diode is proposed. The dual potential structure is created by changing the impurity doping concentration in the two different potential regions. In this structure, high-frequency modulation can be achieved while maintaining a sufficient light-receiving area. A prototype TC image sensor with 366 × 390 pixels is implemented in a 0.18-μm 1P4M CMOS image sensor technology. Each pixel, with a size of 12 μm × 12 μm, has one pinned photodiode with the dual potential structure, 12 transistors and 3 capacitors to implement three-parallel-output active pixel circuits. A fundamental operation of the implemented TC sensor is demonstrated.
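The 3-phase time-correlation principle can be sketched numerically: correlate the modulated intensity with three references shifted by 120° and recover the modulation phase from the three correlation outputs. The sinusoidal model, modulation depth and frequency are illustrative assumptions, not the sensor's analog implementation.

```python
import numpy as np

# Incident light: sinusoidally modulated intensity with an unknown phase.
f = 1.0e6                      # modulation frequency, Hz (illustrative)
t = np.arange(0.0, 1e-3, 1e-8) # integration interval, many full cycles
true_phase = 0.7               # radians, to be recovered
intensity = 1.0 + 0.5 * np.cos(2 * np.pi * f * t - true_phase)

# Three reference signals shifted by 0, 120 and 240 degrees.
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
corr = np.array([np.mean(intensity * np.cos(2 * np.pi * f * t - p))
                 for p in phases])   # the three correlation outputs

# Recover the modulation phase from the three correlations: for
# corr_k ∝ cos(phase - phi_k), summing against sin/cos of phi_k
# isolates sin(phase) and cos(phase).
est = np.arctan2(np.sum(corr * np.sin(phases)),
                 np.sum(corr * np.cos(phases)))
print(est)
```

This three-phase demodulation is the same arithmetic that underlies lock-in detection and time-of-flight range imaging with three-tap pixels.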

  7. Gamma-ray irradiation tests of CMOS sensors used in imaging techniques

    Directory of Open Access Journals (Sweden)

    Cappello Salvatore G.

    2014-01-01

    Full Text Available Technologically-enhanced electronic image sensors are used in various fields, from diagnostic techniques in medicine to space applications. In the latter case the devices can be exposed to intense radiation fluxes over time, which may impair the functioning of the equipment. In this paper we report the results of gamma-ray irradiation tests on CMOS image sensors simulating space radiation over a long time period. The tests were carried out at the IGS-3 gamma irradiation facility of Palermo University, based on 60Co sources with different activities. To reduce the dose rate and produce a narrow gamma-ray beam, a lead collimation system was purpose-built; it permits dose rates below 10 mGy/s and allows the CMOS image sensors to be irradiated during operation. The total ionizing dose to the CMOS image sensors was monitored in situ, during irradiation, up to 1000 Gy, and images were acquired every 25 Gy. At the end of the tests, the sensors continued to operate despite increased background noise and some completely saturated pixels. These effects, however, involve only isolated pixels and should therefore not affect image quality.

  8. Image sensor for security applications with on-chip data authentication

    Science.gov (United States)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2006-04-01

    Sensors used for security applications in a networked environment can be jeopardized by man-in-the-middle or address-spoofing attacks. Such attacks can be thwarted through authentication and secure transmission of the sensor's data stream, by fusing the image sensor with the digital encryption and authentication circuitry needed to fulfil the three standard requirements of cryptography: data integrity, confidentiality and non-repudiation. This paper presents the development done by AIM, which led to the unique sensor SECVGA, a high-performance monochrome (B/W) CMOS active pixel image sensor. The device captures still and motion images with a resolution of 800×600 active pixels and converts them into a digital data stream. In addition to standard imaging, an on-chip cryptographic engine provides authentication of the sensor to the host, based on a one-way challenge/response protocol. The realized protocol uses the exchange of a session key to secure the subsequent video data transmission. To achieve this, we calculate a cryptographic checksum derived from a message authentication code (MAC) over a complete image frame. The imager is equipped with an EEPROM so that it can be personalized with a unique and unchangeable identity. A two-wire I2C-compatible serial interface allows programming of the imager's functions, i.e. various operating modes, including the authentication procedure, control of the integration time, sub-frames and the frame rate.
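The per-frame MAC and the one-way challenge/response idea can be sketched with a standard HMAC from the Python standard library; the key handling, nonce size and frame layout below are simplified assumptions, not the SECVGA's actual on-chip protocol.

```python
import hashlib
import hmac
import secrets

# Shared secret personalised into the imager (stand-in for the EEPROM identity).
device_key = b"per-device-secret"

# One-way challenge/response: the host sends a nonce, the sensor answers
# with its keyed MAC, and the host verifies it with the same key.
challenge = secrets.token_bytes(16)
response = hmac.new(device_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(
    response, hmac.new(device_key, challenge, hashlib.sha256).digest())

# Per-frame authentication: a cryptographic checksum over a complete frame.
frame = bytes(800 * 600)                      # dummy 800x600 8-bit frame
frame_mac = hmac.new(device_key, frame, hashlib.sha256).hexdigest()

# A tampered frame no longer verifies against the original MAC.
tampered = b"\x01" + frame[1:]
tampered_mac = hmac.new(device_key, tampered, hashlib.sha256).hexdigest()
print(frame_mac != tampered_mac)
```

The constant-time `hmac.compare_digest` comparison matters in practice: a naive byte-by-byte comparison leaks timing information an attacker can exploit.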

  9. Multi-Sensor Image Fusion for Target Recognition in the Environment of Network Decision Support Systems

    Science.gov (United States)

    2015-12-01

    Only fragments of this record are available: multi-spectral image fusion of thermal and visual imagery data for target recognition yielded the best classification results, with feature extraction via Speeded-Up Robust Features (SURF). Cited reference: E. Liggins, David L. Hall, Handbook of Multisensor Data Fusion: Theory and Practice, 2nd ed. Boca Raton, Florida: CRC Press, 2009.

  10. VLC-Based Positioning System for an Indoor Environment Using an Image Sensor and an Accelerometer Sensor.

    Science.gov (United States)

    Huynh, Phat; Yoo, Myungsik

    2016-05-28

    High-power LEDs, the core components of visible light communication (VLC) systems, are increasingly taking over both lighting and communication roles. In this paper, taking advantage of VLC, we propose a novel design for an indoor positioning system using LEDs, an image sensor (IS) and an accelerometer sensor (AS) from mobile devices. The proposed system, which provides a high-precision indoor position, consists of four LEDs mounted on the ceiling transmitting their own three-dimensional (3D) world coordinates and an IS at an unknown position receiving and demodulating the signals. Based on the 3D world coordinates and the 2D image coordinates of the LEDs, the position of the mobile device is determined. Compared to existing algorithms, the proposed algorithm requires only one IS. In addition, by using an AS, the mobile device is allowed to have an arbitrary orientation. Last but not least, a mechanism for reducing image sensor noise is proposed to further improve the accuracy of the positioning algorithm. A simulation is conducted to verify the performance of the proposed algorithm.
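A minimal sketch of recovering the receiver position from ceiling LEDs, assuming the accelerometer has already been used to rectify the camera so its optical axis points straight up; the LED layout, focal length and pinhole model are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Ceiling LED world coordinates in metres (illustrative layout; all LEDs
# are assumed to sit at the same ceiling height).
leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
f = 800.0                                 # focal length in pixels (assumed)
true_xy, true_h = np.array([1.5, 2.0]), 2.5   # ground truth to recover

# Pinhole projection of each LED into the image plane of an upward camera.
u = f * (leds[:, 0] - true_xy[0]) / true_h
v = f * (leds[:, 1] - true_xy[1]) / true_h

# Each LED gives two equations that are linear in the unknowns (x, y, h):
#   X_i = x + (u_i / f) * h,    Y_i = y + (v_i / f) * h
A = np.zeros((8, 3))
b = np.zeros(8)
A[0::2, 0], A[0::2, 2], b[0::2] = 1.0, u / f, leds[:, 0]
A[1::2, 1], A[1::2, 2], b[1::2] = 1.0, v / f, leds[:, 1]
x, y, h = np.linalg.lstsq(A, b, rcond=None)[0]
print(round(x, 3), round(y, 3), round(h, 3))
```

With noisy image coordinates the same least-squares solve still applies; the paper's noise-reduction mechanism serves to shrink exactly those residuals.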

  11. Pesticide residue quantification analysis by hyperspectral imaging sensors

    Science.gov (United States)

    Liao, Yuan-Hsun; Lo, Wei-Sheng; Guo, Horng-Yuh; Kao, Ching-Hua; Chou, Tau-Meu; Chen, Junne-Jih; Wen, Chia-Hsien; Lin, Chinsu; Chen, Hsian-Min; Ouyang, Yen-Chieh; Wu, Chao-Cheng; Chen, Shih-Yu; Chang, Chein-I.

    2015-05-01

    Pesticide residue detection in agricultural crops is challenging, and quantifying pesticide residues in agricultural produce and fruits is even more difficult. This paper conducts a series of baseline experiments designed for three specific pesticides commonly used in Taiwan. The materials used for the experiments are single leaves of vegetable produce contaminated with various concentrations of pesticides. Two sensors are used to collect data. One is a Fourier transform infrared (FTIR) spectroscope. The other is a hyperspectral sensor, the Geophysical and Environmental Research (GER) 2600 spectroradiometer, a battery-operated, field-portable spectroradiometer with full real-time data acquisition from 350 nm to 2500 nm. In order to quantify data with different levels of pesticide residue concentration, several measures of spectral discrimination are developed. More specifically, new measures for calculating the relative power of the two sensors are designed to evaluate the effectiveness of each sensor in quantifying the pesticide residues used. The experimental results show that the GER is a better sensor than the FTIR for pesticide residue quantification.
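One simple spectral discrimination measure of the kind the paper develops is the spectral angle between two spectra; the stand-in leaf spectra below are assumptions chosen only to illustrate the computation, with the wavelength grid matching the GER 2600's 350-2500 nm range.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two reflectance spectra -- a basic
    spectral discrimination measure (illustrative, not the paper's exact
    relative-power measure)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

wav = np.linspace(350.0, 2500.0, 500)            # GER 2600-like range, nm
clean = np.exp(-((wav - 800.0) / 300.0) ** 2)    # stand-in clean leaf spectrum
treated = clean + 0.1 * np.exp(-((wav - 1700.0) / 100.0) ** 2)  # + residue band

print(spectral_angle(clean, clean))     # identical spectra give angle 0
print(spectral_angle(clean, treated))   # contaminated spectrum gives angle > 0
```

Because the angle ignores overall brightness, it separates spectral-shape changes caused by the residue from illumination differences between measurements.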

  12. The coronagraphic Modal Wavefront Sensor: a hybrid focal-plane sensor for the high-contrast imaging of circumstellar environments

    Science.gov (United States)

    Wilby, M. J.; Keller, C. U.; Snik, F.; Korkiakoski, V.; Pietrow, A. G. M.

    2017-01-01

    The raw coronagraphic performance of current high-contrast imaging instruments is limited by the presence of a quasi-static speckle (QSS) background resulting from instrumental Non-Common Path Errors (NCPEs). Rapid development of efficient speckle-subtraction techniques in data reduction has enabled final contrasts of up to 10⁻⁶ to be obtained; however, it remains preferable to eliminate the underlying NCPEs at the source. In this work we introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront sensor suitable for real-time NCPE correction. It combines the Apodizing Phase Plate (APP) coronagraph with a holographic modal wavefront sensor to provide simultaneous coronagraphic imaging and focal-plane wavefront sensing with the science point-spread function. We first characterise the baseline performance of the cMWS via idealised closed-loop simulations, showing that the sensor successfully recovers diffraction-limited coronagraph performance over an effective dynamic range of ±2.5 radians root-mean-square (rms) wavefront error within 2-10 iterations, with performance independent of the specific choice of mode basis. We then present the results of initial on-sky testing at the William Herschel Telescope, which demonstrate that the sensor is capable of NCPE sensing under realistic seeing conditions via the recovery of known static aberrations to an accuracy of 10 nm (0.1 radians) rms error in the presence of a dominant atmospheric speckle foreground. We also find that the sensor is capable of real-time measurement of broadband atmospheric wavefront variance (50% bandwidth, 158 nm rms wavefront error) at a cadence of 50 Hz over an uncorrected telescope sub-aperture. When combined with a suitable closed-loop adaptive optics system, the cMWS holds the potential to deliver an improvement of up to two orders of magnitude over the uncorrected QSS floor. Such a sensor would be eminently suitable for the direct imaging and spectroscopy of

  13. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    Science.gov (United States)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information and its non-contact, non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass interference filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, the major cause being Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. FPN correction is therefore critical to enhance crime scene image quality, and it also helps de-correlate spatial-spectral noise. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and the Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component, so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
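The pixel-wise linear model DC(i,j) = G(i,j)·L + Z(i,j) and its inversion can be sketched as follows; the FPN magnitudes and the two-flat-field calibration are illustrative assumptions (a real calibration would average many frames to suppress temporal noise, and the paper further decomposes the gain into row, column and defect components).

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64

# Stand-in pixel-wise linear model  DC[i,j] = G[i,j] * L + Z[i,j]:
# gain with row and column FPN components plus a DSNU offset.
gain = (1.0 + 0.05 * rng.standard_normal((h, 1))       # row FPN component
            + 0.05 * rng.standard_normal((1, w))       # column FPN component
            + 0.01 * rng.standard_normal((h, w)))      # per-pixel residue
dsnu = 5.0 * rng.random((h, w))                        # Z[i,j]

radiance = 100.0                     # uniform (flat-field) incident radiance
dc = gain * radiance + dsnu          # raw digital counts, non-uniform

# Calibrate from a dark frame and one flat field of known radiance, then
# invert the per-pixel model to estimate the incident radiance.
dark = gain * 0.0 + dsnu             # dark frame gives Z[i,j]
flat = gain * 100.0 + dsnu           # known flat field gives G[i,j]
g_est = (flat - dark) / 100.0
corrected = (dc - dark) / g_est      # L_est = (DC - Z) / G

print(float(dc.std()), float(corrected.std()))   # FPN removed by correction
```

The drop in standard deviation across the flat-field frame is exactly the spatial-uniformity enhancement the paper quantifies.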

  14. Simulating The Performance Of Imaging Sensors For Use In Realistic Tactical Environments

    Science.gov (United States)

    Matise, Brian K.; Rogne, Timothy J.; Gerhart, Grant R.; Graziano, James M.

    1985-10-01

    An imaging sensor simulation model is described which allows a modeled or measured scene radiance map to be displayed on a video monitor as it would be seen if viewed through a simulated sensor under simulated environmental conditions. The model includes atmospheric effects (transmittance, path radiance, and single-scattered solar radiance) by incorporating a modified version of the LOWTRAN 6 code. Obscuration and scattered radiance introduced into the scene by battlefield induced contaminants are represented by a battlefield effects module. This module treats smoke clouds as a series of Gaussian puffs whose transport and diffusion are modeled in a semi-random fashion to simulate atmospheric turbulence. The imaging sensor is modeled by rigorous application of appropriate optical transfer functions with appropriate insertion of random system noise. The simulation includes atmospheric turbulence transfer functions according to the method of Fried. Of particular use to sensor designers, the various effects may be applied individually or in sequence to observe which effects are responsible for image distortion. Sensor parameters may be modified interactively, or recalled from a sensor library. The range of the sensor from a measured scene may be varied in the simulation, and background and target radiance maps may be combined into a single image. The computer model itself is written in FORTRAN IV so that it may be transported between a wide variety of computer installations. Currently, versions of the model are running on a VAX 11/750 and an Amdahl 5860. The model is menu driven allowing for convenient operation. The model has been designed to output processed images to a COMTAL image processing system for observer interpretation. Preliminary validation of the simulation using unbiased observer interpretation of minimum resolvable temperature (MRT)-type bar patterns is presented.
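The core operation of the simulation chain above, rigorous application of a transfer function to a radiance map followed by insertion of random system noise, can be sketched as a frequency-domain multiply; the Gaussian MTF and noise level are illustrative stand-ins for the model's optics, detector and turbulence transfer functions.

```python
import numpy as np

rng = np.random.default_rng(4)
scene = rng.random((128, 128))            # stand-in scene radiance map

# Gaussian MTF as a stand-in for the cascaded optics / detector /
# atmospheric-turbulence transfer functions the simulation applies.
fy = np.fft.fftfreq(128)[:, None]         # spatial frequencies, cycles/pixel
fx = np.fft.fftfreq(128)[None, :]
mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))

# Apply the transfer function by multiplication in the frequency domain,
# then insert random system noise into the blurred image.
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * mtf))
noisy = blurred + 0.01 * rng.standard_normal(scene.shape)

print(float(scene.std()), float(blurred.std()))   # low-pass reduces contrast
```

Because the individual transfer functions multiply in the frequency domain, each effect can be applied or omitted independently, which is what lets a designer see which term is responsible for the image distortion.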

  15. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  16. A novel design of subminiature star sensor's imaging system based on TMS320DM3730

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Wen, Desheng; Yang, Shaodong

    2017-02-01

    Development of next-generation star sensors is trending toward miniaturization, low cost and low power consumption, so the FPGA-based imaging systems of the past can no longer meet these requirements. A novel digital imaging system design is discussed in this paper. Based on the MT9P031 CMOS image sensor's timing sequence and working modes, the sensor driving circuit and image data memory circuit were implemented around the TMS320DM3730 main control unit. To keep the hardware small and lightweight, a miniaturized design was adopted. Software simulation and experimental results demonstrated that the imaging system design is sound: tunable integration time and selectable window readout modes were realized, and communication with a computer was exact. The system offers powerful image processing and is compact, stable, reliable and low-power. The whole system volume is 40 mm × 40 mm × 40 mm, its weight is 105 g, and its power consumption is below 1 W. This design provides a feasible solution for realizing a subminiature star sensor's imaging system.

  17. Construction, imaging, and analysis of FRET-based tension sensors in living cells.

    Science.gov (United States)

    LaCroix, Andrew S; Rothenberg, Katheryn E; Berginski, Matthew E; Urs, Aarti N; Hoffman, Brenton D

    2015-01-01

    Due to an increased appreciation for the importance of mechanical stimuli in many biological contexts, an interest in measuring the forces experienced by specific proteins in living cells has recently emerged. The development and use of Förster resonance energy transfer (FRET)-based molecular tension sensors has enabled these types of studies and led to important insights into the mechanisms those cells utilize to probe and respond to the mechanical nature of their surrounding environment. The process for creating and utilizing FRET-based tension sensors can be divided into three main parts: construction, imaging, and analysis. First we review several methods for the construction of genetically encoded FRET-based tension sensors, including restriction enzyme-based methods as well as the more recently developed overlap extension or Gibson Assembly protocols. Next, we discuss the intricacies associated with imaging tension sensors, including optimizing imaging parameters as well as common techniques for estimating artifacts within standard imaging systems. Then, we detail the analysis of such data and describe how to extract useful information from a FRET experiment. Finally, we provide a discussion on identifying and correcting common artifacts in the imaging of FRET-based tension sensors. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Experiment on digital CDS with 33-M pixel 120-fps super hi-vision image sensor

    Science.gov (United States)

    Yonai, J.; Yasue, T.; Kitamura, K.; Hayashida, T.; Watabe, T.; Shimamoto, H.; Kawahito, S.

    2014-03-01

    We have developed a CMOS image sensor with 33 million pixels and 120 frames per second (fps) for Super Hi-Vision (SHV, the 8K version of UHDTV). The fixed pattern noise (FPN) of CMOS image sensors can be reduced by digital correlated double sampling (digital CDS), but digital CDS methods require high-speed analog-to-digital conversion and are not applicable to conventional UHDTV image sensors because of their speed limits. Our image sensor, on the other hand, has a very fast analog-to-digital converter (ADC) using a "two-stage cyclic ADC" architecture capable of being driven at 120 fps, double the normal frame rate for TV. In this experiment, we performed digital CDS using this high-frame-rate UHDTV image sensor. By reading the same row twice at 120 fps and subtracting dark pixel signals from accumulated pixel signals, we obtained a 60-fps equivalent video signal with digital noise reduction. The results showed that the VFPN was effectively reduced from 24.25 e-rms to 0.43 e-rms.
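The digital CDS operation described above, reading the same row dark and with signal and subtracting the two, can be sketched as follows; the per-column FPN model and noise magnitudes are illustrative assumptions, not the sensor's measured values.

```python
import numpy as np

rng = np.random.default_rng(5)
h, w = 32, 32

# Per-column fixed pattern offset (a typical vertical-FPN source in
# column-parallel ADC CMOS sensors) -- illustrative magnitudes.
fpn = 10.0 * rng.random((1, w))
signal = 50.0 * np.ones((h, w))            # accumulated photo-signal

def read(frame):
    """One 120-fps readout: frame plus FPN plus temporal read noise."""
    return frame + fpn + 0.5 * rng.standard_normal((h, w))

# Digital CDS: read each row twice -- once dark (reset level), once with
# the accumulated signal -- and subtract, cancelling the fixed pattern.
dark_read = read(np.zeros((h, w)))
signal_read = read(signal)
cds = signal_read - dark_read              # 60-fps equivalent output

print(float(signal_read.std()), float(cds.std()))
```

The subtraction cancels the fixed pattern exactly but adds the temporal noise of the two reads in quadrature, which is why the technique trades frame rate (120 fps down to a 60-fps equivalent) for FPN suppression.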

  19. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    Science.gov (United States)

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distributions of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE) are used as selective elements to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their prevention capability is demonstrated. The results are used to design the spacing among the enzyme-immobilized pixels and the null H+ sensor so as to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. The proposed bio-image sensor opens the possibility of customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. An Ultrahigh-Resolution Digital Image Sensor with Pixel Size of 50 nm by Vertical Nanorod Arrays.

    Science.gov (United States)

    Jiang, Chengming; Song, Jinhui

    2015-07-01

    The pixel size limit of existing digital image sensors is successfully overcome by using vertically aligned semiconducting nanorods as the 3D photosensing pixels. On this basis, an unprecedentedly high-resolution digital image sensor with a pixel size of 50 nm and a resolution of 90 nm is fabricated. The ultrahigh-resolution digital image sensor can heavily impact the field of visual information. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  2. Simulation and measurement of total ionizing dose radiation induced image lag increase in pinned photodiode CMOS image sensors

    Science.gov (United States)

    Liu, Jing; Chen, Wei; Wang, Zujun; Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao

    2017-06-01

    This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly severe with increasing TID. Combined with the TCAD simulation results, we can confirm that the junction of the PPD and the transfer gate (TG) is an important region for image lag formation during irradiation. The simulations demonstrate that TID can generate a potential pocket leading to incomplete charge transfer.

  3. Simulation and measurement of total ionizing dose radiation induced image lag increase in pinned photodiode CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jing [School of Materials Science and Engineering, Xiangtan University, Hunan (China); State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Chen, Wei, E-mail: chenwei@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Wang, Zujun, E-mail: wangzujun@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China)

    2017-06-01

    This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly severe with increasing TID. Combined with the TCAD simulation results, we can confirm that the junction of the PPD and the transfer gate (TG) is an important region for image lag formation during irradiation. The simulations demonstrate that TID can generate a potential pocket leading to incomplete charge transfer.

  4. Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, H. [PBI-Dansensor A/S (Denmark); Toft Soerensen, O. [Risoe National Lab., Materials Research Dept. (Denmark)

    1999-10-01

    A new type of ceramic oxygen sensor based on semiconducting oxides was developed in this project. The advantage of these sensors compared to standard ZrO2 sensors is that they do not require a reference gas and that they can be produced in small sizes. The sensor design and the techniques developed for production of these sensors are judged suitable by the participating industry for niche production of a new generation of oxygen sensors. Materials research on new oxygen-ion conductors, both for applications in oxygen sensors and in fuel cells, was also performed in this project, and finally a new process was developed for fabrication of ceramic tubes by dip-coating. (EHS)

  5. Sensors

    CERN Document Server

    Pigorsch, Enrico

    1997-01-01

    This is the 5th edition of the Metra Martech Directory "EUROPEAN CENTRES OF EXPERTISE - SENSORS." The entries represent a survey of European sensors development. The new edition contains 425 detailed profiles of companies and research institutions in 22 countries. This is reflected in the diversity of sensors development programmes described, from sensors for physical parameters to biosensors and intelligent sensor systems. We do not claim that all European organisations developing sensors are included, but this is a good cross section from an invited list of participants. If you see gaps or omissions, or would like your organisation to be included, please send details. The data base invites the formation of effective joint ventures by identifying and providing access to specific areas in which organisations offer collaboration. This issue is recognised to be of great importance and most entrants include details of collaboration offered and sought. We hope the directory on Sensors will help you to find the ri...

  6. Imaging the tissue distribution of glucose in livers using a PARACEST sensor.

    Science.gov (United States)

    Ren, Jimin; Trokowski, Robert; Zhang, Shanrong; Malloy, Craig R; Sherry, A Dean

    2008-11-01

    Noninvasive imaging of glucose in tissues could provide important insights about glucose gradients in tissue, the origins of gluconeogenesis, or perhaps differences in tissue glucose utilization in vivo. Direct spectral detection of glucose in vivo by (1)H NMR is complicated by interfering signals from other metabolites and the much larger water signal. One potential way to overcome these problems is to use an exogenous glucose sensor that reports glucose concentrations indirectly through the water signal by chemical exchange saturation transfer (CEST). Such a method is demonstrated here in mouse liver perfused with a Eu(3+)-based glucose sensor containing two phenylboronate moieties as the recognition site. Activation of the sensor by applying a frequency-selective presaturation pulse at 42 ppm resulted in a 17% decrease in water signal in livers perfused with 10 mM sensor and 10 mM glucose compared with livers with the same amount of sensor but without glucose. It was shown that livers perfused with 5 mM sensor but no glucose can detect glucose exported from hepatocytes after hormonal stimulation of glycogenolysis. CEST images of livers perfused in the magnet responded to changes in glucose concentrations demonstrating that the method has potential for imaging the tissue distribution of glucose in vivo.

  7. Multimass velocity-map imaging with the Pixel Imaging Mass Spectrometry (PImMS) sensor: an ultra-fast event-triggered camera for particle imaging.

    Science.gov (United States)

    Clark, Andrew T; Crooks, Jamie P; Sedgwick, Iain; Turchetta, Renato; Lee, Jason W L; John, Jaya John; Wilman, Edward S; Hill, Laura; Halford, Edward; Slater, Craig S; Winter, Benjamin; Yuen, Wei Hao; Gardiner, Sara H; Lipciuc, M Laura; Brouard, Mark; Nomerotski, Andrei; Vallance, Claire

    2012-11-15

    We present the first multimass velocity-map imaging data acquired using a new ultrafast camera designed for time-resolved particle imaging. The PImMS (Pixel Imaging Mass Spectrometry) sensor allows particle events to be imaged with time resolution as high as 25 ns over data acquisition times of more than 100 μs. In photofragment imaging studies, this allows velocity-map images to be acquired for multiple fragment masses on each time-of-flight cycle. We describe the sensor architecture and present bench-testing data and multimass velocity-map images for photofragments formed in the UV photolysis of two test molecules: Br(2) and N,N-dimethylformamide.

  8. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    Directory of Open Access Journals (Sweden)

    Chulhee Park

    2016-05-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  9. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    Science.gov (United States)

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
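    The core of such a decomposition can be illustrated with a toy sketch. This is not the authors' estimation procedure: the per-channel leakage coefficients `k` below are invented, whereas the paper estimates the NIR contributions from the measured spectral characteristics of the MSFA sensor.

```python
import numpy as np

# Toy NIR decomposition: with an RGBN sensor and no IRCF, each color channel
# carries an unwanted NIR term. Assuming per-channel mixing coefficients k
# (hypothetical values here), the visible component of each RGB channel is
# recovered by subtracting the scaled N-channel measurement.
k = np.array([0.6, 0.5, 0.4])           # hypothetical NIR leakage for R, G, B

raw = np.array([[120.0, 90.0, 60.0]])   # measured R, G, B (visible + NIR)
nir = np.array([[50.0]])                # measured N channel

visible = np.clip(raw - k * nir, 0.0, 255.0)  # desaturation removed
```

Subtracting the NIR term restores saturation because the leakage adds a nearly equal offset to all three channels, pulling colors toward gray.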

  10. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine the weight of each regressor, is achieved through multivariate regression. The framework is described and illustrated with applications to cement kiln systems that are characterized by off-line quality measurements and on-line analyzers with limited reliability. Image features are extracted with a multivariate analysis technique from RGB pictures. The color information is also transformed to hue, saturation and intensity components. Both sets of image features are combined with traditional process measurements to obtain an inferential model by partial least squares (PLS) regression. A dynamic PLS model...

  11. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    Science.gov (United States)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation, utilising techniques such as active edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick, with a last-pixel-to-edge distance of either 50 or 100 μm, and were fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance of the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the 5th pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm), subject to biasing conditions. The imaging characteristics of all sensor types investigated are stable over time, and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.

  12. Plasmonics-Based Multifunctional Electrodes for Low-Power-Consumption Compact Color-Image Sensors.

    Science.gov (United States)

    Lin, Keng-Te; Chen, Hsuen-Li; Lai, Yu-Sheng; Chi, Yi-Min; Chu, Ting-Wei

    2016-03-01

    High pixel density, efficient color splitting, a compact structure, superior quantum efficiency, and low power consumption are all important features for contemporary color-image sensors. In this study, we developed a surface plasmonics-based color-image sensor displaying a high photoelectric response, a microlens-free structure, and a zero-bias working voltage. Our compact sensor comprised only (i) a multifunctional electrode based on a single-layer structured aluminum (Al) film and (ii) an underlying silicon (Si) substrate. This approach significantly simplifies the device structure and fabrication processes; for example, the red, green, and blue color pixels can be prepared simultaneously in a single lithography step. Moreover, such Schottky-based plasmonic electrodes perform multiple functions, including color splitting, optical-to-electrical signal conversion, and photogenerated carrier collection for color-image detection. Our multifunctional, electrode-based device could also avoid the interference phenomenon that degrades the color-splitting spectra found in conventional color-image sensors. Furthermore, the device took advantage of the near-field surface plasmonic effect around the Al-Si junction to enhance the optical absorption of Si, resulting in a significant photoelectric current output even under low-light surroundings and zero bias voltage. These plasmonic Schottky-based color-image devices could convert a photocurrent directly into a photovoltage and provided sufficient voltage output for color-image detection even under a light intensity of only several femtowatts per square micrometer. Unlike conventional color image devices, using voltage as the output signal decreases the area of the periphery read-out circuit because it does not require a current-to-voltage conversion capacitor or its related circuit. Therefore, this strategy has great potential for direct integration with complementary metal-oxide-semiconductor (CMOS)-compatible circuit

  13. Image Centroid Algorithms for Sun Sensors with Super Wide Field of View

    Directory of Open Access Journals (Sweden)

    ZHAN Yinhu

    2015-10-01

    Sun image centroiding is one of the key technologies of celestial navigation with sun sensors, and it directly determines the precision of the sensor. Conventional centroid algorithms are not suitable for the non-circular sun images produced by sun sensors with a super wide field of view. Therefore, an ellipse fitting algorithm is first proposed for elliptical or sub-elliptical sun images. Then a spherical circle fitting algorithm is put forward. Based on the projection model and distortion model of the camera, the spherical circle fitting algorithm obtains the edge points of the sun in object space and then determines the centroid of the sun by fitting the edge points as a spherical circle. In order to estimate the precision of the spherical circle fitting algorithm, the centroid of the sun is projected back to image space. In theory, the spherical circle fitting algorithm no longer needs to take the shape of the sun image into account, so it is more precise. Results on practical sun images demonstrate that the ellipse fitting algorithm is more suitable for sun images with a 70°~80.3° half angle of view, with a mean precision of about 0.075 pixels, while the spherical circle fitting algorithm is more suitable for sun images with a half angle of view larger than 80.3°, with a mean precision of about 0.082 pixels.
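    The core least-squares fitting step behind both algorithms can be sketched with an algebraic (Kåsa-style) circle fit. This is only the fitting kernel under simplifying assumptions: the paper's spherical fit first maps edge points through the camera projection and distortion models, which is omitted here.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit to edge points (x, y).

    Uses the identity x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    which is linear in the unknowns (cx, cy, c), so it can be solved with
    ordinary least squares.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# synthetic sun-edge points on a circle of radius 1.5 centred at (3, -2)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3.0 + 1.5 * np.cos(theta)
y = -2.0 + 1.5 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
```

On noise-free points the fit recovers the centre and radius exactly; with noisy edge points it returns the least-squares estimate used as the centroid.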

  14. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    Energy Technology Data Exchange (ETDEWEB)

    Baker, J.E.

    1994-09-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques attempt to resolve some of these ambiguities by appropriately coupling complementary images to eliminate possible inverse mappings. What constitutes the best MSI technique is dependent on the given application domain, available sensors, and task requirements. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail.

  15. Integrated sensor with frame memory and programmable resolution for light adaptive imaging

    Science.gov (United States)

    Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2004-01-01

    An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
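    The patch-summation idea above can be sketched in a few lines. This is an illustration of the principle only: the patch size `n` and frame contents are invented, and the real device performs the summation in column-parallel analog integrators rather than in software.

```python
import numpy as np

# Adaptive-resolution binning sketch: sum signals over an n x n pixel patch.
# Signal grows as n^2 while uncorrelated noise grows only as n, so SNR
# improves by a factor of n at the cost of spatial resolution.
def bin_pixels(frame, n):
    h, w = frame.shape
    assert h % n == 0 and w % n == 0, "frame must tile evenly into patches"
    # group rows and columns into n-sized blocks, then sum within each block
    return frame.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

frame = np.ones((8, 8))          # uniform unit-signal frame
binned = bin_pixels(frame, 2)    # 4x4 output; each element is a 2x2 patch sum
```

A larger `n` would be selected under low light, trading resolution for brightness and SNR, exactly the adjustment the sensor makes per received light level.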

  16. Synthetic SAR Image Generation using Sensor, Terrain and Target Models

    DEFF Research Database (Denmark)

    Kusk, Anders; Abulaitijiang, Adili; Dall, Jørgen

    2016-01-01

    A tool to generate synthetic SAR images of objects set on a clutter background is described. The purpose is to generate images for training Automatic Target Recognition and Identification algorithms. The tool employs a commercial electromagnetic simulation program to calculate radar cross sections...

  17. CMOS active pixel sensor type imaging system on a chip

    Science.gov (United States)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  18. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    Directory of Open Access Journals (Sweden)

    Haoting Liu

    2017-02-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system operates, a large amount of typical lighting-image data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of these objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is implemented to achieve an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes.

  19. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    Science.gov (United States)

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system operates, a large amount of typical lighting-image data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of these objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is implemented to achieve an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes. PMID:28208781

  20. Crosstalk in multi-collection-gate image sensors and its improvement

    Science.gov (United States)

    Nguyen, A. Q.; Dao, V. T. S.; Shimonomura, K.; Kamakura, Y.; Etoh, T. G.

    2017-02-01

    Crosstalk in the backside-illuminated multi-collection-gate (BSI-MCG) image sensor was analyzed by means of Monte Carlo simulation. The BSI-MCG image sensor was proposed to achieve the temporal resolution of 1 ns. In this sensor, signal electrons generated by incident light near the back side travel to the central area of the pixel on the front side. Most of the signal electrons are collected by a collecting gate, to which a higher voltage is applied than that of other collection gates. However, due to spatial and temporal diffusion, some of the signal electrons migrate to other collection gates than the collecting gate, resulting in spatiotemporal crosstalk, i.e., mixture of signal electrons at neighboring collection gates and/or pixels. To reduce the crosstalk, the BSI-MCG structure is modified and the performance is preliminarily evaluated by Monte Carlo simulation. An additional donut-shaped N type implantation at the collection-gate area improves the potential gradient to the collecting gate, which reduces the crosstalk caused by the spatial diffusion. A multi-framing camera based on the BSI-MCG image sensor can be applied to Fluorescence Lifetime Imaging Microscopy (FLIM). In this case, crosstalk reduces accuracy in estimation of the lifetimes of fluorophore samples. The inaccuracy is compensated in a post image processing based on a proposed impulse response method.

  1. Noise suppression algorithm of short-wave infrared star image for daytime star sensor

    Science.gov (United States)

    Wang, Wenjie; Wei, Xinguo; Li, Jian; Wang, Gangyi

    2017-09-01

    As an important development trend in star sensor technology, research on daytime star sensors can expand the applications of star sensors from spacecraft to airborne vehicles. The biggest problem for a daytime star sensor is the detection of dim stars against strong atmospheric background radiation. The use of short-wave infrared (SWIR) technology has been proven to be an effective approach to this problem. However, SWIR star images inevitably contain stripe nonuniformity noise and defective pixels, which degrade the quality of the acquired images and seriously affect subsequent star spot extraction and star centroiding accuracy. Because the characteristics of the stripe nonuniformity and defective pixels in SWIR star images change with time during long-term continuous operation, one-time off-line calibration is not applicable. To solve this problem, a noise suppression algorithm for SWIR star images is proposed. It first extracts non-background pixels by one-dimensional mean filtering. Then, using a one-dimensional feature point descriptor to distinguish bright star spot pixels from defective pixels, various types of defective pixels are accurately detected. Finally, moment matching is adopted to remove the stripe nonuniformity, and the defective pixels are compensated effectively. Simulation results indicate that the proposed algorithm can adaptively and effectively suppress the influence of stripe nonuniformity and defective pixels in SWIR star images, which is beneficial to obtaining higher star centroiding accuracy.
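    The moment matching step can be sketched as follows. This is a generic sketch, not the paper's exact implementation: it rescales every column to the global image moments, whereas the paper applies moment matching only after star spots and defective pixels have been excluded.

```python
import numpy as np

# Moment matching for stripe (column-wise) nonuniformity: rescale each column
# so its mean and standard deviation match reference moments (here the global
# image moments), removing column-to-column gain and offset stripes.
def moment_match(img):
    ref_mean, ref_std = img.mean(), img.std()
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    col_std[col_std == 0] = 1.0          # guard against constant columns
    return (img - col_mean) / col_std * ref_std + ref_mean

rng = np.random.default_rng(1)
clean = rng.normal(100.0, 10.0, size=(64, 32))
# apply per-column gain and offset errors to synthesize stripe noise
striped = clean * rng.uniform(0.8, 1.2, size=32) + rng.uniform(-5, 5, size=32)
fixed = moment_match(striped)

stripe_before = striped.mean(axis=0).std()  # column-mean spread = stripe strength
stripe_after = fixed.mean(axis=0).std()
```

After matching, all columns share the same mean and standard deviation by construction, so the stripe pattern collapses to numerical zero.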

  2. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    Science.gov (United States)

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

    An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus guide more precise lighting control. Before the system operates, a large amount of typical lighting-image data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of these objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control is implemented to achieve an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environmental luminance changes.

  3. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
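    The principle of PCA-based patch denoising can be sketched in simplified form. This is not the paper's algorithm: it uses a grayscale image, global (not spatially adaptive) PCA over non-overlapping patches, and hard truncation of components instead of the paper's CFA-aware adaptive processing.

```python
import numpy as np

# PCA patch-denoising sketch: express patches in the principal-component
# basis of the patch ensemble and keep only the strongest components, where
# signal concentrates while noise spreads evenly over all components.
def pca_denoise(noisy, patch=4, keep=4):
    h, w = noisy.shape
    # gather non-overlapping patch x patch blocks as row vectors
    ps = noisy.reshape(h // patch, patch, w // patch, patch)
    ps = ps.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    mean = ps.mean(axis=0)
    centered = ps - mean
    # principal axes of the patch ensemble via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:keep]                     # retain the strongest components
    rec = (centered @ basis.T) @ basis + mean
    rec = rec.reshape(h // patch, w // patch, patch, patch)
    return rec.transpose(0, 2, 1, 3).reshape(h, w)

rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
clean = np.sin(4 * xx) + np.cos(4 * yy)        # smooth synthetic image
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
den = pca_denoise(noisy)

mse_noisy = float(((noisy - clean) ** 2).mean())
mse_den = float(((den - clean) ** 2).mean())
```

Because the smooth image occupies only a few principal components while the noise is spread across all of them, truncation lowers the reconstruction error relative to the noisy input.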

  4. Energy-Efficient Transmission of Wavelet-Based Images in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Vincent Lecuire

    2007-01-01

    We propose a self-adaptive image transmission scheme driven by energy efficiency considerations in order to be suitable for wireless sensor networks. It is based on wavelet image transform and semireliable transmission to achieve energy conservation. Wavelet image transform provides data decomposition in multiple levels of resolution, so the image can be divided into packets with different priorities. Semireliable transmission enables priority-based packet discarding by intermediate nodes according to their battery's state of charge. Such an image transmission approach provides a graceful tradeoff between reconstructed image quality and sensor node lifetime. An analytical study in terms of dissipated energy is performed to compare the self-adaptive image transmission scheme to a fully reliable scheme. Since image processing is computationally intensive and operates on a large data set, the cost of the wavelet image transform is included in the energy consumption analysis. Results show up to 80% reduction in energy consumption achieved by our proposal compared to a non-energy-aware one, with the guarantee that image quality remains lower-bounded.
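    The idea of splitting a wavelet-decomposed image into priority classes can be sketched with a one-level Haar transform: the coarse LL subband gets the highest priority, while detail subbands may be discarded by relay nodes with a low battery charge. This is a minimal illustration, not the paper's packetization scheme; the priority numbering is hypothetical.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def prioritized_packets(img):
    """Tag subbands with priorities: the LL approximation is essential,
    detail subbands are increasingly expendable (lower number = higher priority)."""
    ll, lh, hl, hh = haar2d(img)
    return [(0, ll), (1, lh), (1, hl), (2, hh)]
```

    Dropping only the highest-numbered packets degrades the reconstruction gracefully instead of losing arbitrary image regions.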

  5. Filter-free image sensor pixels comprising silicon nanowires with selective color absorption.

    Science.gov (United States)

    Park, Hyunsung; Dan, Yaping; Seo, Kwanyong; Yu, Young J; Duane, Peter K; Wober, Munib; Crozier, Kenneth B

    2014-01-01

    The organic dye filters of conventional color image sensors achieve the red/green/blue response needed for color imaging, but have disadvantages related to durability, low absorption coefficient, and fabrication complexity. Here, we report a new paradigm for color imaging based on all-silicon nanowire devices and no filters. We fabricate pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrate that their spectral sensitivities are governed by nanowire radius, and perform color imaging. Our approach is conceptually different from filter-based methods, as absorbed light is converted to photocurrent, ultimately presenting the opportunity for very high photon efficiency.

  6. Biologically motivated composite image sensor for deep-field target tracking

    Science.gov (United States)

    Melnyk, Pavlo B.; Messner, Richard A.

    2007-01-01

    The present work addresses the design of an image acquisition front end for target detection and tracking within a wide range of distances. Inspired by the vision of raptor birds, a novel design for a visual sensor is proposed. The sensor consists of two parts, each originating from studies of the biological vision systems of different species. The front end comprises a set of video cameras imitating a falconiform eye, in particular its optics and retina [1]. The back end is a software remapper that uses the log-polar model of retino-cortical projection in primates, which is popular in machine vision [2], [3], [4]. The output of this sensor is a composite log-polar image incorporating both near and far visual fields into a single homogeneous image space. In such a space it is easier to perform target detection and tracking for applications that deal with targets moving along the camera axis. The target object preserves its shape and size as it is handed off seamlessly between cameras, regardless of its distance to the composite sensor. A prototype of the proposed composite sensor has been created and is used as the front end in an experimental mobile vehicle detection and tracking system. It has been tested inside a driving simulator, and results are presented.
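    A log-polar remap of the kind performed by the back-end software remapper can be sketched with nearest-neighbour sampling: output rows sample radius on a logarithmic scale (dense near the fovea), columns sample angle. The ring and wedge counts are arbitrary illustrative parameters, not the system's actual resolution.

```python
import numpy as np

def log_polar_remap(img, n_rings=32, n_wedges=64):
    """Remap a square grayscale image to log-polar coordinates
    (a simple stand-in for the retino-cortical projection)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_rings, n_wedges), dtype=img.dtype)
    for i in range(n_rings):
        # logarithmic radius: fine sampling near the centre, coarse at the rim
        r = r_max ** ((i + 1) / n_rings)
        for j in range(n_wedges):
            theta = 2.0 * np.pi * j / n_wedges
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out
```

    In log-polar space, an object approaching along the camera axis produces a simple translation along the radial axis, which is what makes deep-field tracking easier.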

  7. Change Detection with GRASS GIS – Comparison of images taken by different sensors

    Directory of Open Access Journals (Sweden)

    Michael Fuchs

    2009-04-01

    Images from American military reconnaissance satellites of the 1960s (CORONA), in combination with modern sensors (SPOT, QuickBird), were used to detect changes in land use. The pilot area was located about 40 km northwest of Yemen's capital Sana'a and covered approximately 100 km². To produce comparable layers from images of distinctly different sources, the moving-window technique was applied using the diversity parameter. The resulting difference layers reveal plausible and interpretable change patterns, particularly in areas of urban sprawl. The comparison of CORONA images with images taken by modern sensors proved to be an additional tool to visualize and quantify major changes in land use. The results should serve as additional basic data, e.g., in regional planning. The computation sequence was executed in GRASS GIS.
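    The moving-window diversity measure (available in GRASS GIS as `r.neighbors` with `method=diversity`) counts the distinct class labels in the neighbourhood of each cell, which makes images from very different sensors comparable. A plain Python sketch, assuming a classified integer raster:

```python
import numpy as np

def window_diversity(classified, radius=1):
    """Count distinct class labels in the (2*radius+1)-square window
    around each cell, clipped at the raster edges."""
    h, w = classified.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = len(np.unique(classified[y0:y1, x0:x1]))
    return out
```

    Differencing the diversity layers of the CORONA and QuickBird classifications then highlights where the local texture of land use changed, rather than comparing raw pixel values across incompatible sensors.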

  8. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Gait is a unique biometric feature that is perceptible at large distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image; it can serve as a useful reference in the literature on video sensor-based gait representation approaches.

  9. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Science.gov (United States)

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique biometric feature that is perceptible at large distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image; it can serve as a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935

  10. A Target Tracking System Based on Imaging Sensor Network with Wi-Fi

    Directory of Open Access Journals (Sweden)

    Aiqun Chen

    2014-05-01

    With the rapid development of network communication technology, a variety of networking and communication technologies have been integrated into our lives and work, bringing great convenience. Wireless sensor network technology is a particularly active research area because it enables communication between objects, and between people and things; its applications have greatly expanded our ability to obtain information, with important implications for individuals and society. Building on these capabilities, this paper focuses on the design and implementation of a target tracking system based on image sensor networks with Wi-Fi.

  11. Active-Pixel Image Sensor With Analog-To-Digital Converters

    Science.gov (United States)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during the time allocated to scanning a row, outputs of all active pixel sensors in that row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
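    The per-column A/D conversion relies on first-order Sigma-Delta modulation, whose principle is that the converter emits a bitstream whose duty cycle tracks the input level. The sketch below is the textbook modulator for a constant input, not the chip's actual circuit.

```python
def sigma_delta_bits(x, n_bits):
    """First-order sigma-delta modulation of a constant input x in [0, 1].

    An integrator accumulates the input; each time it crosses the
    threshold, a 1 is emitted and the reference is subtracted. The
    average of the bitstream approximates x.
    """
    bits, integrator = [], 0.0
    for _ in range(n_bits):
        integrator += x
        if integrator >= 1.0:
            bits.append(1)
            integrator -= 1.0
        else:
            bits.append(0)
    return bits
```

    Decimating (averaging) the bitstream in the digital domain then recovers a multi-bit sample per pixel, which is why the scheme works well at the slow per-column rates of a row-at-a-time scan.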

  12. Automatic Characterization of Electro-Optical Sensors with Image Processing, using the Triangle Orientation Discrimination (TOD) Method

    NARCIS (Netherlands)

    Lange, D.J. de; Valeton, J.M.; Bijl, P.

    2000-01-01

    The objective characterization of electro-optical sensors and that of image enhancement techniques has always been a difficult task. Up to now the sensor is characterized using the minimum resolvable temperature difference (MRTD) or the minimum resolvable contrast (MRC). The performance of image

  13. Measuring the Contractile Response of Isolated Tissue Using an Image Sensor

    Directory of Open Access Journals (Sweden)

    David Díaz-Martín

    2015-04-01

    Isometric or isotonic transducers have traditionally been used to study the contractile/relaxation effects of drugs on isolated tissues. However, these mechanical sensors are expensive and delicate, and they are associated with certain disadvantages when performing experiments in the laboratory. In this paper, a method that uses an image sensor to measure the contractile effect of drugs on blood vessel rings and other luminal organs is presented. The new method is based on an image-processing algorithm, and it provides a fast, easy and inexpensive way to analyze the effects of such drugs. In our tests, we have obtained dose-response curves from rat aorta rings that are equivalent to those achieved with classical mechanical sensors.

  14. Low-Power Smart Imagers for Vision-Enabled Sensor Networks

    CERN Document Server

    Fernández-Berni, Jorge; Rodríguez-Vázquez, Ángel

    2012-01-01

    This book presents a comprehensive, systematic approach to the development of vision system architectures that employ sensory-processing concurrency and parallel processing to meet the autonomy challenges posed by a variety of safety and surveillance applications. Coverage includes a thorough analysis of resistive diffusion networks embedded within an image sensor array. This analysis supports a systematic approach to the design of spatial image filters and their implementation as vision chips in CMOS technology. The book also addresses system-level considerations pertaining to the embedding of these vision chips into vision-enabled wireless sensor networks. Describes a system-level approach for designing vision devices and embedding them into vision-enabled, wireless sensor networks; Surveys state-of-the-art, vision-enabled WSN nodes; Includes details of specifications and challenges of vision-enabled WSNs; Explains architectures for low-energy CMOS vision chips with embedded, programmable spatial f...

  15. Determining approximate age of digital images using sensor defects

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav

    2011-02-01

    The goal of temporal forensics is to establish temporal relationship among two or more pieces of evidence. In this paper, we focus on digital images and describe a method using which an analyst can estimate the acquisition time of an image given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated on experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line or when the header cannot be trusted. Reliable methods for establishing temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
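    Once the defect onsets have been estimated from the dated reference images, the admissible acquisition window of a query image is bounded by the defects it does and does not exhibit: the image must postdate every defect it shows and predate every defect it lacks. The sketch below is a simplified deterministic version of that reasoning; the paper's estimator is maximum-likelihood and tolerates detection errors, and the names here are illustrative.

```python
def acquisition_window(onsets, present):
    """Bound the acquisition time of a query image from pixel-defect onsets.

    onsets:  dict defect_id -> estimated onset time.
    present: set of defect ids detected in the query image.
    Returns (lower_bound, upper_bound) on the acquisition time.
    """
    lower = max((t for d, t in onsets.items() if d in present),
                default=float("-inf"))
    upper = min((t for d, t in onsets.items() if d not in present),
                default=float("inf"))
    return lower, upper
```

    With many defects whose onsets are spread over the camera's service life, these bounds can become quite tight even without EXIF timestamps.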

  16. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors.

    Science.gov (United States)

    Nie, Kaiming; Wang, Xinlei; Qiao, Jun; Xu, Jiangtao

    2016-01-27

    This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor records and reads out only the effective time and position information by adopting the full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the arrival times of photons, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to evaluate the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5-20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when the center-of-mass method (CMM) is employed to estimate lifetimes.
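    The center-of-mass method (CMM) used for lifetime estimation reduces, for a mono-exponential decay observed over a window much longer than the lifetime, to the mean photon arrival time. The sketch below ignores the finite 100 ns quantization window, TDC binning, and pile-up, so it is only the idealized core of the estimator.

```python
import numpy as np

def cmm_lifetime(timestamps):
    """Centre-of-mass lifetime estimate: for an exponential decay with
    rate 1/tau, the mean arrival time converges to tau."""
    return float(np.mean(timestamps))

# Simulated photon arrivals from a 10 ns lifetime decay (illustrative data).
rng = np.random.default_rng(1)
arrivals = rng.exponential(scale=10.0, size=200_000)
tau = cmm_lifetime(arrivals)
```

    In the real sensor, the mean is computed over TDC codes within the window, and a window-length correction is applied; the event-driven readout matters precisely because only photon events, not empty bins, contribute to this sum.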

  17. A graph reader using a CCD image sensor | Seeti | Nigerian Journal ...

    African Journals Online (AJOL)

    Nigerian Journal of Physics, Vol 20, No 1 (2008). A graph reader using a CCD image sensor. ML Seeti. No abstract available.

  18. Plasmonic color filters to decrease ambient light errors on active type dual band infrared image sensors

    Science.gov (United States)

    Lyu, Hong-Kun; Park, Young-Jin; Cho, Hui-Sup; Jo, Sung-Hyun; Lee, Hee-Ho; Shin, Jang-Kyoo

    2014-09-01

    In this paper, we propose plasmonic color filters to decrease ambient light errors in active-type dual-band infrared image sensors for a large-area multi-touch display system. Although such touch display systems are a strong fit for education and exhibition settings, ambient light imposes limits: when unexpected ambient light falls on the display, the touch recognition system can misclassify touch points in the affected area. We propose a new touch recognition image sensor system to decrease ambient light errors and investigate the optical transmission properties of plasmonic color filters for the IR image sensor. To find a proper structure for the plasmonic color filters, we used a commercial simulation tool based on the finite-difference time-domain (FDTD) method, varying the film thickness and modeling the filter both with and without a cover passivation layer. Gold (Au) was used for the metal film, and its dispersion was described by the Lorentz-Drude model. We also describe the mechanism by which the dual-band filter is applied to the IR image sensors.

  19. Toward one giga frames per second : Evolution of in Situ storage image sensors

    NARCIS (Netherlands)

    Etoh, T.G.; Son, D.V.T.; Yamada, T.; Charbon, E.

    2013-01-01

    The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecasted. To cover the storage area with a light shield, the conventional frontside illuminated ISIS has a limited fill factor. To achieve higher sensitivity,

  20. Visualizing the evolution of image features in time-series: supporting the exploration of sensor data

    NARCIS (Netherlands)

    Turdukulov, U.D.

    2007-01-01

    Sensor image repositories are becoming the fastest growing archives of spatio-temporal information and they are only projected to grow through the twenty-first century. This continuous data flow leads to large time-series and accordingly, geoscientists are often confronted with the amount of data

  1. Photoacoustic imaging of blood vessels with a double-ring sensor featuring a narrow angular aperture

    NARCIS (Netherlands)

    Kolkman, R.G.M.; Hondebrink, Erwin; Steenbergen, Wiendelt; van Leeuwen, Ton; de Mul, F.F.M.

    2004-01-01

    A photoacoustic double-ring sensor, featuring a narrow angular aperture, is developed for laser-induced photoacoustic imaging of blood vessels. An integrated optical fiber enables reflection-mode detection of ultrasonic waves. By using the cross-correlation between the signals detected by the two

  2. A directly converting high-resolution intra-oral X-ray imaging sensor

    CERN Document Server

    Spartiotis, K; Schulman, T; Puhakka, K; Muukkonen, K

    2003-01-01

    A digital intra-oral X-ray imaging sensor with an active area of 3.6 × 2.9 cm² and consisting of six charge-integrating CMOS signal readout circuits bump bonded to one high-resistivity silicon pixel detector has been developed and tested. The pixel size is 35 µm. The X-rays entering the sensor window are converted directly to electrical charge in the depleted detector material yielding minimum lateral signal spread and maximum image sharpness. The signal charge is collected on the gates of the input field effect transistors of the CMOS signal readout circuits. The analog signal readout is performed by multiplexing in the current mode independent of the signal charge collection enabling multiple readout cycles with negligible dead time and thus imaging with wide dynamic range. Since no intermediate conversion material of X-rays to visible light is needed, the sensor structure is very compact. The analog image signals are guided from the sensor output through a thin cable to signal processing, AD conversio...

  3. Non-invasive mechanical properties estimation of embedded objects using tactile imaging sensor

    Science.gov (United States)

    Saleheen, Firdous; Oleksyuk, Vira; Sahu, Amrita; Won, Chang-Hee

    2013-05-01

    Non-invasive mechanical property estimation of an embedded object (tumor) can be used in medicine to discriminate between malignant and benign lesions. We developed a tactile imaging sensor which is capable of detecting the mechanical properties of inclusions. Studies show that tumor stiffness is a key physiological parameter for discerning malignancy. As our sensor compresses the tumor from the surface, the sensing probe deforms and the light scatters, forming the tactile image. Using the features of this image, we can estimate mechanical properties such as the size, depth, and elasticity of the embedded object. To test the performance of the method, a phantom study was performed. Silicone rubber balls were used as embedded objects inside a tissue-mimicking substrate made of polydimethylsiloxane. The average relative errors for size, depth, and elasticity were found to be 67.5%, 48.2%, and 69.1%, respectively. To test the feasibility of the sensor in estimating tumor elasticity, a pilot clinical study was performed on twenty breast cancer patients, and the estimated elasticity was correlated with the biopsy results. Preliminary results show a sensitivity of 67% and a specificity of 91.7% for elasticity. The results of the clinical study suggest that the tactile imaging sensor may be used as a tumor malignancy characterization tool.

  4. Multi-Sensor Fusion of Landsat 8 Thermal Infrared (TIR) and Panchromatic (PAN) Images

    Directory of Open Access Journals (Sweden)

    Hyung-Sup Jung

    2014-12-01

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.

  5. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.

    Science.gov (United States)

    Jung, Hyung-Sup; Park, Sung-Whan

    2014-12-18

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
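    A common way to realize PAN/TIR fusion with a scaling factor is high-pass modulation: inject the PAN band's high-frequency residual into the upsampled TIR band, with `k` trading spatial detail against fidelity to the original temperature values. This is a generic sketch of that family of methods, not the paper's optimal-k derivation.

```python
import numpy as np

def fuse_pan_tir(pan, tir_upsampled, k=0.3):
    """High-pass modulation fusion of a PAN band into an upsampled TIR band.

    pan, tir_upsampled: 2-D arrays on the same grid.
    k: scaling factor controlling how much PAN detail is injected.
    """
    # crude low-pass: 3x3 box filter built from shifted sums
    h, w = pan.shape
    p = np.pad(pan, 1, mode="edge")
    low = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    detail = pan - low          # high-pass residual carrying spatial detail
    return tir_upsampled + k * detail
```

    With k = 0 the fused image is just the resampled TIR band (pure thermal fidelity); larger k sharpens edges at the cost of perturbing radiometric values, which is the trade-off the paper's optimal scaling factor controls.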

  6. Ferromagnetic particles as magnetic resonance imaging temperature sensors.

    Science.gov (United States)

    Hankiewicz, J H; Celinski, Z; Stupic, K F; Anderson, N R; Camley, R E

    2016-08-09

    Magnetic resonance imaging is an important technique for identifying different types of tissues in a body or spatial information about composite materials. Because temperature is a fundamental parameter reflecting the biological status of the body and individual tissues, it would be helpful to have temperature maps superimposed on spatial maps. Here we show that small ferromagnetic particles with a strong temperature-dependent magnetization, can be used to produce temperature-dependent images in magnetic resonance imaging with an accuracy of about 1 °C. This technique, when further developed, could be used to identify inflammation or tumours, or to obtain spatial maps of temperature in various medical interventional procedures such as hyperthermia and thermal ablation. This method could also be used to determine temperature profiles inside nonmetallic composite materials.

  7. Reliable Asynchronous Image Transfer Protocol in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    In-Bum Jung

    2010-02-01

    In the paper, we propose a reliable asynchronous image transfer protocol, RAIT. RAIT applies a double sliding window method to node-to-node transfer, with one sliding window for the receiving queue, which is used to prevent packet loss caused by communication failure between nodes, and another sliding window for the sending queue, which prevents packet loss caused by network congestion. The routing node prevents packet loss between nodes by preemptive scheduling of multiple packets for a given image. RAIT implements a double sliding window method by means of a cross-layer design between the RAIT layer, routing layer, and queue layer. We demonstrate that RAIT guarantees a higher reliability of image transmission compared to the existing protocols.

  8. Reliable asynchronous image transfer protocol in wireless multimedia sensor networks.

    Science.gov (United States)

    Lee, Joa-Hyoung; Jung, In-Bum

    2010-01-01

    In the paper, we propose a reliable asynchronous image transfer protocol, RAIT. RAIT applies a double sliding window method to node-to-node transfer, with one sliding window for the receiving queue, which is used to prevent packet loss caused by communication failure between nodes, and another sliding window for the sending queue, which prevents packet loss caused by network congestion. The routing node prevents packet loss between nodes by preemptive scheduling of multiple packets for a given image. RAIT implements a double sliding window method by means of a cross-layer design between the RAIT layer, routing layer, and queue layer. We demonstrate that RAIT guarantees a higher reliability of image transmission compared to the existing protocols.
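    The double-sliding-window idea, one window on the sending queue and one on the receiving queue, amounts to bounded queues that back-pressure the producer instead of silently dropping packets when a link is congested or failing. A minimal single-queue sketch follows; class and method names are illustrative, not from RAIT.

```python
from collections import deque

class SlidingWindowQueue:
    """A bounded packet queue: admission is refused while the window is
    full, so the caller retries later rather than losing the packet."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.pending = deque()

    def try_send(self, packet):
        """Admit a packet if the window has room; return success."""
        if len(self.pending) >= self.window_size:
            return False            # window full: back-pressure the sender
        self.pending.append(packet)
        return True

    def ack(self):
        """Acknowledge the oldest in-flight packet, sliding the window."""
        if self.pending:
            self.pending.popleft()
```

    In RAIT one such window guards against loss from link failures on the receive side and another against congestion on the send side, coordinated across the RAIT, routing, and queue layers.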

  9. Single photon imaging and timing array sensor apparatus and method

    Science.gov (United States)

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus is comprised of a photon source for emitting a photon at a target. The emitted photons are received by a photon receiver for receiving the photon when reflected from the target. The photon receiver determines a reflection time of the photon and further determines an arrival position of the photon on the photon receiver. An analyzer is communicatively coupled to the photon receiver, wherein the analyzer generates a three-dimensional image of the object based upon the reflection time and the arrival position.
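    The reconstruction combines each photon's round-trip time (which gives range) with its arrival position on the receiver (which gives the ray direction). Under a pinhole-camera assumption, that mapping can be sketched as below; the parameter names are illustrative, not from the patent.

```python
C = 299_792_458.0   # speed of light, m/s

def photon_to_point(round_trip_s, px, py, focal_px, cx, cy):
    """Convert a photon's round-trip time and arrival pixel to a 3-D point.

    Range is half the round trip at light speed; the pixel offset from
    the principal point (cx, cy), divided by the focal length in pixels,
    gives the ray direction through that pixel.
    """
    rng = C * round_trip_s / 2.0
    dx = (px - cx) / focal_px
    dy = (py - cy) / focal_px
    dz = 1.0
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    return (rng * dx / norm, rng * dy / norm, rng * dz / norm)
```

    Accumulating one such point per detected photon over the array yields the three-dimensional image the analyzer produces.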

  10. Detection in Urban Scenario using Combined Airborne Imaging Sensors

    NARCIS (Netherlands)

    Renhorn, I.; Axelsson, M.; Benoist, K.W.; Bourghys, D.; Boucher, Y.; Xavier Briottet, X.; Sergio De CeglieD, S. De; Dekker, R.J.; Dimmeler, A.; Dost, R.; Friman, O.; Kåsen, I.; Maerker, J.; Persie, M. van; Resta, S.; Schwering, P.B.W.; Shimoni, M.; Vegard Haavardsholm, T.

    2012-01-01

    The EDA project “Detection in Urban scenario using Combined Airborne imaging Sensors” (DUCAS) is in progress. The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne imagery for several defense applications in the urban area. The

  11. Detection in Urban Scenario Using Combined Airborne Imaging Sensors

    NARCIS (Netherlands)

    Renhorn, I.; Axelsson, M.; Benoist, K.W.; Bourghys, D.; Boucher, Y.; Xavier Briottet, X.; Sergio De CeglieD, S. De; Dekker, R.J.; Dimmeler, A.; Dost, R.; Friman, O.; Kåsen, I.; Maerker, J.; Persie, M. van; Resta, S.; Schwering, P.B.W.; Shimoni, M.; Vegard Haavardsholm, T.

    2012-01-01

    The EDA project “Detection in Urban scenario using Combined Airborne imaging Sensors” (DUCAS) is in progress. The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne imagery for several defense applications in the urban area. The

  12. Bioinspired Polarization Imaging Sensors: From Circuits and Optics to Signal Processing Algorithms and Biomedical Applications: Analysis at the focal plane emulates nature's method in sensors to image and diagnose with polarized light.

    Science.gov (United States)

    York, Timothy; Powell, Samuel B; Gao, Shengkui; Kahan, Lindsey; Charanya, Tauseef; Saha, Debajit; Roberts, Nicholas W; Cronin, Thomas W; Marshall, Justin; Achilefu, Samuel; Lake, Spencer P; Raman, Baranidharan; Gruev, Viktor

    2014-10-01

    In this paper, we present recent work on bioinspired polarization imaging sensors and their applications in biomedicine. In particular, we focus on three different aspects of these sensors. First, we describe the electro-optical challenges in realizing a bioinspired polarization imager, and in particular, we provide a detailed description of a recent low-power complementary metal-oxide-semiconductor (CMOS) polarization imager. Second, we focus on signal processing algorithms tailored for this new class of bioinspired polarization imaging sensors, such as calibration and interpolation. Third, the emergence of these sensors has enabled rapid progress in characterizing polarization signals and environmental parameters in nature, as well as several biomedical areas, such as label-free optical neural recording, dynamic tissue strength analysis, and early diagnosis of flat cancerous lesions in a murine colorectal tumor model. We highlight results obtained from these three areas and discuss future applications for these sensors.

  13. Simulation of Image Performance Characteristics of the Landsat Data Continuity Mission (LDCM) Thermal Infrared Sensor (TIRS)

    Science.gov (United States)

    Schott, John; Gerace, Aaron; Brown, Scott; Gartley, Michael; Montanaro, Matthew; Reuter, Dennis C.

    2012-01-01

    The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic "on-orbit" TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.

  14. Determination of sensor oversize for stereo-pair mismatch compensation and image stabilization

    Science.gov (United States)

    Kulkarni, Prajit

    2013-03-01

    Stereoscopic cameras consist of two camera modules that in theory are mounted parallel to each other at a fixed distance along a single plane. Practical tolerances in the manufacturing and assembly process can, however, cause mismatches in the relative orientation of the modules. One solution to this problem is to design sensors that image a larger field-of-view than is necessary to meet system specifications. This requires the computation of the sensor oversize needed to compensate for the various types of mismatch. This work presents a mathematical framework to determine these oversize values for mismatch along each of the six degrees of freedom. One module is considered as the reference and the extreme rays of the field-of-view of the second sensor are traced in order to derive equations for the required horizontal and vertical oversize. As a further application, by modeling user hand-shake as the displacement of the sensor from its intended position, these deterministic equations could be used to estimate the sensor oversize required to stabilize images that are captured using cell phones.

  15. Impedance Sensors for Fast Multiphase Flow Measurement and Imaging

    OpenAIRE

    Da Silva, Marco Jose

    2008-01-01

    Multiphase flow denotes the simultaneous flow of two or more physically distinct and immiscible substances and it can be widely found in several engineering applications, for instance, power generation, chemical engineering and crude oil extraction and processing. In many of those applications, multiphase flows determine safety and efficiency aspects of processes and plants where they occur. Therefore, the measurement and imaging of multiphase flows has received much attention in recent years...

  16. Microfluidic oxygen imaging using integrated optical sensor layers and a color camera.

    Science.gov (United States)

    Ungerböck, Birgit; Charwat, Verena; Ertl, Peter; Mayr, Torsten

    2013-04-21

In this work we present a high resolution oxygen imaging approach, which can be used to study 2D oxygen distribution inside microfluidic environments. The presented setup comprises a fabrication process of microfluidic chips with integrated luminescent sensing films combined with referenced oxygen imaging applying a color CCD-camera. Enhancement of the sensor performance was achieved by applying the principle of light harvesting. This principle enabled ratiometric imaging employing the red and the green channel of a color CCD-camera. The oxygen sensitive emission of platinum(II)-5,10,15,20-tetrakis-(2,3,4,5,6-pentafluorophenyl)-porphyrin (PtTFPP) was detected by the red channel, while the emission of a reference dye was detected by the green channel. This measurement setup allowed for accurate real-time 2D oxygen imaging with superior quality compared to intensity imaging. The sensor films were subsequently used to measure the respiratory activity of human cell cultures (HeLa carcinoma cells and normal human dermal fibroblasts) in a microfluidic system. The sensor setup is well suited for different applications from spatially and temporally resolving oxygen concentration inside microfluidic channels to parallelization of oxygen measurements and paves the way to novel cell based assays, e.g. in tissue engineering, tumor biology and hypoxia reperfusion phenomena.
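The red/green ratio from such a sensor is typically converted to an oxygen level with a Stern-Volmer model, the standard calibration law for luminescence quenching probes such as PtTFPP. The sketch below is illustrative only; the function name and the calibration constants `r0` and `ksv` are hypothetical values, not taken from the paper.

```python
def o2_from_ratio(red, green, r0, ksv):
    """Convert a red/green intensity ratio to oxygen concentration
    using a simple Stern-Volmer model: r0 / r = 1 + ksv * [O2].
    r0 is the ratio at zero oxygen; ksv is the quenching constant
    (both hypothetical calibration values)."""
    r = red / green  # ratiometric signal, reference-corrected
    return (r0 / r - 1.0) / ksv

# Hypothetical calibration: r0 = 2.0, ksv = 0.02 per %O2.
o2 = o2_from_ratio(red=120.0, green=100.0, r0=2.0, ksv=0.02)
```

Because the reference dye in the green channel is insensitive to oxygen, dividing the two channels cancels variations in illumination and sensor-film thickness before the calibration is applied.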

  17. CMOS Image Sensor with a Built-in Lane Detector

    Directory of Open Access Journals (Sweden)

    Li-Chen Fu

    2009-03-01

Full Text Available This work develops a new current-mode mixed-signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform built around an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, which needs no extra Analog-to-Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 × 2,389.8 μm, and the package is a 40-pin Dual In-line Package (DIP). The pixel cell size is 18.45 × 21.8 μm and the core size of the photodiode is 12.45 × 9.6 μm; the resulting fill factor is 29.7%.

  18. CMOS Image Sensor with a Built-in Lane Detector.

    Science.gov (United States)

    Hsiao, Pei-Yung; Cheng, Hsien-Chein; Huang, Shih-Shinh; Fu, Li-Chen

    2009-01-01

This work develops a new current-mode mixed-signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform built around an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, which needs no extra Analog-to-Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 × 2,389.8 μm, and the package is a 40-pin Dual In-line Package (DIP). The pixel cell size is 18.45 × 21.8 μm and the core size of the photodiode is 12.45 × 9.6 μm; the resulting fill factor is 29.7%.

  19. Scene correction (precision techniques) of ERTS sensor data using digital image processing techniques

    Science.gov (United States)

    Bernstein, R.

    1974-01-01

Techniques have been developed, implemented, and evaluated to process ERTS Return Beam Vidicon (RBV) and Multispectral Scanner (MSS) sensor data using digital image processing techniques. The RBV radiometry has been corrected to remove shading effects, and the MSS geometry and radiometry have been corrected to remove internal and external radiometric and geometric errors. The results achieved show that geometric mapping accuracy of about one picture element RMS and two picture elements (maximum) can be achieved by the use of nine ground control points. Radiometric correction of MSS and RBV sensor data has been performed to eliminate striping and shading effects to within about one count. Image processing times on general-purpose computers of the IBM 370/145 to 168 class range from 29 down to 3.2 minutes per MSS scene (4 bands). Photographic images of the fully corrected and annotated scenes have been generated from the processed data and have demonstrated excellent quality and information extraction potential.
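The ground-control-point correction works by fitting a mapping from image coordinates to map coordinates and resampling through it. The sketch below is a minimal illustration of that idea, not the actual ERTS processing chain: it fits only a six-parameter affine transform by least squares (the production system may have used higher-order mapping functions), and the control points are synthetic.

```python
import numpy as np

def fit_affine(gcp_img, gcp_map):
    """Least-squares affine transform (6 parameters) mapping image
    pixel coordinates to map coordinates from ground control points.
    gcp_img, gcp_map: (N, 2) point lists, N >= 3."""
    src = np.asarray(gcp_img, float)
    dst = np.asarray(gcp_map, float)
    # Design matrix rows: [x, y, 1] for each control point.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve separately for the map-x and map-y rows of the transform.
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef  # (3, 2): apply as [x, y, 1] @ coef

# Nine synthetic control points on a grid, mapped by a known
# scale + translation; the fit should recover that mapping.
img_pts = [(x, y) for x in (0, 500, 1000) for y in (0, 500, 1000)]
map_pts = [(2.0 * x + 10.0, 2.0 * y - 5.0) for x, y in img_pts]
T = fit_affine(img_pts, map_pts)
pred = np.array([100.0, 200.0, 1.0]) @ T
```

With nine well-distributed control points the system is overdetermined, so the least-squares fit also averages down random measurement error in the individual points.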

  20. Modeling of Potential Distribution of Electrical Capacitance Tomography Sensor for Multiphase Flow Image

    Directory of Open Access Journals (Sweden)

    S. Sathiyamoorthy

    2007-09-01

Full Text Available Electrical Capacitance Tomography (ECT) was used to develop images of various multiphase flows of gas-liquid-solid in a closed pipe. The principal difficulties in obtaining real-time images from an ECT sensor are that the relationship between the permittivity distribution and the measured capacitances is nonlinear, that the electric field is distorted by the material present, and that the measurements are sensitive to errors and noise. This work gives a detailed description of the method employed for image reconstruction from the capacitance measurements. A discretization and iterative algorithm is developed to improve the predictions with minimum error. The author analyzed an eight-electrode square-sensor ECT system with two-phase water-gas and solid-gas flows.
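Iterative ECT reconstruction of this kind is commonly implemented as Landweber iteration on a linearized forward model. The abstract does not name its algorithm, so the block below is a generic Landweber-style sketch under that assumption: a precomputed sensitivity matrix `S` relates the normalized pixel permittivities `g` to the normalized capacitances `c`, and each step moves `g` against the residual.

```python
import numpy as np

def landweber(S, c, n_iter=200, alpha=None):
    """Projected Landweber iteration for linearized ECT:
    g <- clip(g + alpha * S^T (c - S g), 0, 1).
    S: (m, n) sensitivity matrix, c: (m,) normalized capacitances,
    g: (n,) normalized permittivity image."""
    S = np.asarray(S, float)
    c = np.asarray(c, float)
    if alpha is None:
        # A step size below 2 / ||S||^2 guarantees convergence.
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    g = S.T @ c  # linear back-projection as the initial image
    for _ in range(n_iter):
        g = np.clip(g + alpha * S.T @ (c - S @ g), 0.0, 1.0)
    return g

# Tiny synthetic example: 4 electrode-pair measurements, 6 pixels.
rng = np.random.default_rng(0)
S = rng.random((4, 6))
g_true = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0])
c = S @ g_true
g_rec = landweber(S, c, n_iter=2000)
```

The clipping step encodes the physical prior that normalized permittivity lies between the two phases; real ECT systems have far more measurements (e.g. 28 independent pairs for 8 electrodes) and pixels than this toy example.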

  1. Comparison of Leica ADS40 and Z/I imaging DMC high-resolution airborne sensors

    Science.gov (United States)

    Craig, John C.

    2005-01-01

The Leica ADS40 is a line scanning sensor that collects stereo panchromatic imagery and 4 discrete multispectral bands in a 12,000 pixel-wide swath. The Z/I Imaging DMC is a frame based sensor that produces 13,824 × 7,680 pixel panchromatic images and 3,072 × 2,048 pixel multispectral images, which are normally pan-sharpened to produce high resolution RGB and color infrared products. The suitability of the two systems for multispectral remote sensing and photogrammetric applications is compared, and contrasted with other film and digital alternatives. Results indicate that the DMC has an advantage for large scale photogrammetry applications, and the ADS40 is superior for remote sensing applications.

  2. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.

    Science.gov (United States)

    Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro

    2017-02-11

    Object detection and classification have countless applications in human-robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.

  3. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots

    Science.gov (United States)

    Gutiérrez, Marco A.; Manso, Luis J.; Pandya, Harit; Núñez, Pedro

    2017-01-01

    Object detection and classification have countless applications in human–robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches. PMID:28208671

  4. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots

    Directory of Open Access Journals (Sweden)

    Marco A. Gutiérrez

    2017-02-01

    Full Text Available Object detection and classification have countless applications in human–robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.

  5. Studies of prototype DEPFET sensors for the Wide Field Imager of Athena

    Science.gov (United States)

    Treberspurg, Wolfgang; Andritschke, Robert; Bähr, Alexander; Behrens, Annika; Hauser, Günter; Lechner, Peter; Meidinger, Norbert; Müller-Seidlitz, Johannes; Treis, Johannes

    2017-08-01

The Wide Field Imager (WFI) of ESA's next X-ray observatory Athena will combine a high count rate capability with a large field of view, both with state-of-the-art spectroscopic performance. To meet these demands, specific DEPFET active pixel detectors have been developed and operated. Due to the intrinsic amplification of detected signals, they are best suited to achieve high-speed and low-noise performance. Different fabrication technologies and transistor geometries have been implemented on a dedicated prototype production in the course of the development of the DEPFET sensors. The main differences between the sensors concern the shape of the transistor gate (layout) and the thickness of the gate oxide (technology). To facilitate the fabrication and testing of the resulting variety of sensors, the presented studies were carried out with 64 × 64 pixel detectors. The detector comprises a control ASIC (Switcher-A), a readout ASIC (VERITAS-2) and the sensor. In this paper we give an overview of the evaluation of the different prototype sensors. The most important results, which have been decisive for the identification of the optimal fabrication technology and transistor layout for subsequent sensor productions, are summarized. It will be shown that the developments result in an excellent performance of spectroscopic X-ray DEPFETs, with typical noise values below 2.5 ENC at 2.5 μs/row.

  6. The Algorithm of CFNN Image Data Fusion in Multi-sensor Data Fusion

    Directory of Open Access Journals (Sweden)

    Xiaohong ZENG

    2014-03-01

Full Text Available The CFNN hybrid system for multi-sensor data fusion combines fuzzy logic reasoning with the adaptive, self-learning ability of neural networks. By using fuzzy neurons, the network can appropriately adjust the input and output fuzzy membership functions, and a compensated logic algorithm allows it to dynamically optimize the fuzzy reasoning globally, making the network more fault-tolerant and stable and speeding up training. This paper introduces a mathematical model of image data fusion and elaborates the CFNN image data fusion algorithm; simulation results show that the method significantly improves the quality of image data fusion compared with other existing fusion algorithms.

  7. Confocal FLIM of genetically encoded FRET sensors for quantitative Ca2+ imaging.

    Science.gov (United States)

    Sauer, Benjamin; Tian, Qinghai; Lipp, Peter; Kaestner, Lars

    2014-12-01

    Fluorescence lifetime imaging (FLIM) is a powerful imaging mode that can be combined with confocal imaging. Changes in the fluorescence decay time of a donor in an intramolecular Förster resonance energy transfer (FRET)-based biosensor provide intrinsic quantitative data. Here, we describe a protocol using both the Ca(2+) sensor TN-XL, which uses troponin C, as the Ca(2+)-sensing unit, and the FLIM technology based on time-correlated single-photon counting. © 2014 Cold Spring Harbor Laboratory Press.
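In such FRET sensors the donor decay shortens when energy transfer occurs, so the quantitative read-out is the fitted lifetime per pixel. The block below is a minimal sketch of mono-exponential lifetime estimation from a TCSPC decay histogram by log-linear regression; it is illustrative only, since real FLIM analysis additionally deconvolves the instrument response function and weights for Poisson counting noise.

```python
import math

def lifetime_from_decay(t, counts):
    """Estimate a mono-exponential lifetime tau from a TCSPC decay
    histogram by linear regression on log(counts):
    counts ~ A * exp(-t / tau)  =>  log(counts) = log A - t / tau."""
    pts = [(ti, math.log(ci)) for ti, ci in zip(t, counts) if ci > 0]
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return -1.0 / slope  # slope of log(counts) vs t is -1/tau

# Synthetic noiseless decay with tau = 2.5 ns, 0.1 ns time bins.
t = [0.1 * i for i in range(100)]
counts = [1000.0 * math.exp(-ti / 2.5) for ti in t]
tau = lifetime_from_decay(t, counts)
```

A drop of the fitted donor lifetime between pixels then maps directly to FRET efficiency, and hence to Ca2+ binding, without the intensity-calibration problems of ratiometric read-outs.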

  8. Imaging intracellular pH in live cells with a genetically encoded red fluorescent protein sensor.

    Science.gov (United States)

    Tantama, Mathew; Hung, Yin Pun; Yellen, Gary

    2011-07-06

    Intracellular pH affects protein structure and function, and proton gradients underlie the function of organelles such as lysosomes and mitochondria. We engineered a genetically encoded pH sensor by mutagenesis of the red fluorescent protein mKeima, providing a new tool to image intracellular pH in live cells. This sensor, named pHRed, is the first ratiometric, single-protein red fluorescent sensor of pH. Fluorescence emission of pHRed peaks at 610 nm while exhibiting dual excitation peaks at 440 and 585 nm that can be used for ratiometric imaging. The intensity ratio responds with an apparent pK(a) of 6.6 and a >10-fold dynamic range. Furthermore, pHRed has a pH-responsive fluorescence lifetime that changes by ~0.4 ns over physiological pH values and can be monitored with single-wavelength two-photon excitation. After characterizing the sensor, we tested pHRed's ability to monitor intracellular pH by imaging energy-dependent changes in cytosolic and mitochondrial pH.
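The excitation ratio of a single-pKa ratiometric probe like pHRed is conventionally converted to pH with a sigmoidal calibration. The sketch below illustrates that standard formula; the calibration endpoints `r_min` and `r_max` are hypothetical values, not pHRed's published calibration.

```python
import math

def ph_from_ratio(r, r_min, r_max, pka):
    """pH from a dual-excitation intensity ratio using the standard
    sigmoidal calibration for a single-pKa ratiometric probe:
    pH = pKa + log10((r - r_min) / (r_max - r)).
    r_min / r_max are the ratios in the fully protonated /
    fully deprotonated states."""
    return pka + math.log10((r - r_min) / (r_max - r))

# Hypothetical calibration spanning ratios 0.2..2.2 with the
# apparent pKa of 6.6 quoted for pHRed. At the midpoint ratio
# the recovered pH equals the pKa.
mid = (0.2 + 2.2) / 2.0
ph = ph_from_ratio(mid, 0.2, 2.2, 6.6)
```

Because the formula uses only the ratio of the two excitation intensities, it is insensitive to sensor concentration and path length, which is what makes a single-protein ratiometric sensor quantitative in live cells.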

  9. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    Science.gov (United States)

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm(2). This constitutes a digital image with ~0.7 Billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ± 50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm(3) across a sample volume of ~5 mm(3), which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  10. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    Directory of Open Access Journals (Sweden)

    Serhan O Isikman

Full Text Available We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm(2). This constitutes a digital image with ~0.7 Billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ± 50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm(3) across a sample volume of ~5 mm(3), which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  11. Giga-Pixel Lensfree Holographic Microscopy and Tomography Using Color Image Sensors

    Science.gov (United States)

    Coskun, Ahmet F.; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ∼350 nm lateral resolution, corresponding to a numerical aperture of ∼0.8, across a field-of-view of ∼20.5 mm2. This constitutes a digital image with ∼0.7 Billion effective pixels in both amplitude and phase channels (i.e., ∼1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ∼0.35 µm×0.35 µm×∼2 µm, in x, y and z, respectively, creating an effective voxel size of ∼0.03 µm3 across a sample volume of ∼5 mm3, which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode. PMID:22984606

  12. High-Performance Motion Estimation for Image Sensors with Video Compression

    OpenAIRE

    Weizhi Xu; Shouyi Yin; Leibo Liu; Zhiyong Liu; Shaojun Wei

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed...
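The motion estimation being accelerated here is classic block matching: for each block of the current frame, search a window of the reference frame for the displacement minimizing the sum of absolute differences (SAD). The data-reuse schemes above change how those reference pixels are fetched, not the search itself. A minimal full-search sketch (frame size, block size, and search radius are arbitrary illustrative choices):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def full_search(cur, ref, bx, by, n=4, radius=2):
    """Full-search block matching: find the motion vector (dx, dy)
    minimizing SAD between the n*n block at (bx, by) in the current
    frame and candidate blocks in the reference frame."""
    block = [row[bx:bx + n] for row in cur[by:by + n]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block outside the frame
            cand = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best

# 8x8 reference frame with a bright 4x4 patch; the current frame
# shows the same patch shifted by (+1, +1), so the best match in
# the reference lies at displacement (-1, -1).
ref = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        ref[y][x] = 10 + x + y
cur = [[0] * 8 for _ in range(8)]
for y in range(3, 7):
    for x in range(3, 7):
        cur[y][x] = 10 + (x - 1) + (y - 1)
mv = full_search(cur, ref, bx=3, by=3, n=4)
```

The nested candidate loop is why ME dominates encoding time: adjacent blocks and adjacent frames read overlapping reference windows, which is exactly the redundancy the intra-frame and inter-frame data-reuse schemes exploit.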

  13. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    OpenAIRE

    Hao Wang; Jie Jiang; Guangjun Zhang

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibil...

  14. Airborne measurements in the longwave infrared using an imaging hyperspectral sensor

    Science.gov (United States)

    Allard, Jean-Pierre; Chamberland, Martin; Farley, Vincent; Marcotte, Frédérick; Rolland, Matthias; Vallières, Alexandre; Villemaire, André

    2008-08-01

Emerging applications in Defense and Security require sensors with state-of-the-art sensitivity and capabilities. Among these sensors, the imaging spectrometer is an instrument yielding a large amount of rich information about the measured scene. Standoff detection, identification and quantification of chemicals in the gaseous state is one important application. Analysis of the surface emissivity as a means to classify ground properties and usage is another one. Imaging spectrometers have unmatched capabilities to meet the requirements of these applications. Telops has developed the FIRST, a LWIR hyperspectral imager. The FIRST is based on the Fourier Transform technology yielding high spectral resolution and enabling high accuracy radiometric calibration. The FIRST, a man-portable sensor, provides datacubes of up to 320 x 256 pixels at 0.35 mrad spatial resolution over the 8-12 μm spectral range at spectral resolutions of up to 0.25 cm-1. The FIRST has been used in several field campaigns, including the demonstration of standoff chemical agent detection [http://dx.doi.org/10.1117/12.795119.1]. More recently, an airborne system integrating the FIRST has been developed to provide airborne hyperspectral measurement capabilities. The airborne system and its capabilities are presented in this paper. The FIRST sensor modularity enables operation in various configurations such as tripod-mounted and airborne. In the airborne configuration, the FIRST can be operated in push-broom mode, or in staring mode with image motion compensation. This paper focuses on the airborne operation of the FIRST sensor.
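A Fourier-transform spectrometer such as the FIRST recovers each pixel's spectrum as the Fourier transform of its measured interferogram, with the optical path step setting the wavenumber axis. The block below is a minimal sketch of that relationship for an ideal double-sided interferogram; real instruments additionally apodize, phase-correct, and radiometrically calibrate, and the step size used here is an illustrative assumption.

```python
import numpy as np

def spectrum_from_interferogram(ifg, opd_step_cm):
    """Recover a spectrum from an ideal double-sided FTS
    interferogram: the spectrum is the magnitude of the Fourier
    transform of the interferogram. opd_step_cm is the optical
    path difference step, so frequencies come out in cm^-1."""
    ifg = np.asarray(ifg, float)
    spec = np.abs(np.fft.rfft(ifg - ifg.mean()))  # remove DC term
    wn = np.fft.rfftfreq(len(ifg), d=opd_step_cm)  # wavenumber axis
    return wn, spec

# Synthetic monochromatic source at 1000 cm^-1: the interferogram
# is a cosine in optical path difference.
step = 1e-4  # 1 um OPD step -> Nyquist wavenumber of 5000 cm^-1
x = np.arange(4096) * step
ifg = 1.0 + np.cos(2 * np.pi * 1000.0 * x)
wn, spec = spectrum_from_interferogram(ifg, step)
peak_wn = wn[np.argmax(spec)]
```

The spectral resolution is the reciprocal of the maximum optical path difference, which is why a Fourier-transform design can reach the fine (sub-cm^-1) resolutions quoted above simply by scanning a longer path.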

  15. Automatic fusion of multiple-sensor and multiple-season images

    Science.gov (United States)

    Lutsiv, Vadim R.; Malyshev, Igor A.; Pepelka, Vadim

    2001-08-01

The aim of this investigation was to develop data fusion algorithms for aerial and space pictures taken in different seasons, from differing viewpoints, or formed by differing kinds of sensors (visible, IR, SAR). This task could not be solved using traditional correlation-based approaches, so we chose structural juxtaposition of the stable characteristic details of pictures as the general technique for image matching and fusion. Structural matching has usually been applied in expert systems, where rather reliable results were based on target-specific algorithms. In contrast to such classifiers, our algorithm deals with aerial and space photographs of arbitrary content, for which application-specific algorithms cannot be used. To handle arbitrary images we chose a structural description alphabet based on simple contour components: arcs, angles, segments of straight lines, and line branchings. This alphabet is applicable to arbitrary images, and its elements, due to their simplicity, are stable under different image transformations and distortions. To distinguish between similar simple elements in the huge multitude of image contours we applied hierarchical contour descriptions: we grouped the contour elements belonging to uninterrupted lines or to separate image regions. Different types of structural matching were applied: one based on simulated annealing and one on a restricted examination of all hypotheses. The matching results were reliable both for multiple-season and multiple-sensor images.

  16. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, his personality, and his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities at the Berkeley Applied Nuclear Physics program reflecting some of the breadth of radiation detection technologies and their applications ranging from fundamental studies in physics to biomedical imaging and to nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities as a result of the events at the Dai-ichi nuclear power plant in Fukushima, Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • Institute for Resilient Communities combines science, education, and communities to minimize impact of disastrous events.

  17. Self-mixing imaging sensor using a monolithic VCSEL array with parallel readout.

    Science.gov (United States)

    Lim, Yah Leng; Nikolic, Milan; Bertling, Karl; Kliese, Russell; Rakić, Aleksandar D

    2009-03-30

The advent of two-dimensional arrays of Vertical-Cavity Surface-Emitting Lasers (VCSELs) opened a range of potential sensing applications for nanotechnology and the life sciences. With each laser independently addressable, there is scope for the development of high-resolution full-field imaging systems with electronic scanning. We report on the first implementation of a self-mixing imaging system with parallel readout based on a monolithic VCSEL array. The self-mixing Doppler signal was acquired from the variation in VCSEL junction voltage rather than from the conventional variation in laser power, thus markedly reducing the system complexity. The sensor was validated by imaging the velocity distribution on the surface of a rotating disc. The results obtained demonstrate that monolithic arrays of Vertical-Cavity lasers present a powerful tool for the advancement of self-mixing sensors into parallel imaging paradigms and provide a stepping stone to the implementation of full-field self-mixing sensor systems.
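The velocity read-out of a self-mixing channel follows from the Doppler relation f_d = 2v/λ (normal incidence). The block below is a minimal sketch of recovering velocity from one pixel's digitized junction-voltage signal via its dominant spectral peak; the sampling rate, wavelength, and clean sinusoidal signal model are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def velocity_from_self_mixing(signal, fs, wavelength):
    """Surface velocity from a self-mixing Doppler signal: locate
    the dominant frequency f_d in the signal spectrum and apply
    v = f_d * wavelength / 2 (normal-incidence Doppler relation)."""
    sig = np.asarray(signal, float)
    sig = sig - sig.mean()  # remove DC offset before the FFT
    spec = np.abs(np.fft.rfft(sig))
    f_d = np.fft.rfftfreq(len(sig), d=1.0 / fs)[np.argmax(spec)]
    return f_d * wavelength / 2.0

# Synthetic 850 nm channel observing a 10 mm/s target:
# f_d = 2 * v / lambda, about 23.5 kHz.
fs = 200e3
t = np.arange(4096) / fs
v_true = 0.010  # m/s
sig = np.sin(2 * np.pi * (2 * v_true / 850e-9) * t)
v = velocity_from_self_mixing(sig, fs, 850e-9)
```

With an independently addressable array, running this per-pixel estimate across all channels yields the velocity map of the surface, which is how the rotating-disc validation above can be read out electronically rather than by mechanical scanning.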

  18. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

Full Text Available This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor records and reads out only effective time and position information by adopting the full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the time of photons' arrival, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to evaluate the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5-20 ns, and the average error of estimated fluorescence lifetimes is below 1% when employing the center-of-mass method (CMM) to estimate lifetimes.
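The center-of-mass method (CMM) used above estimates the lifetime from the mean photon arrival time, which for an exponential decay equals τ; with a finite measurement window the raw mean is biased low and can be corrected. The block below is an illustrative sketch of that idea, not the chip's on- or off-line pipeline; the fixed-point bias correction, window length, and test values are assumptions.

```python
import math

def cmm_lifetime(arrival_times, window=None):
    """Center-of-mass method (CMM): for an exponential decay the
    mean photon arrival time equals the lifetime tau. With a finite
    measurement window T the mean of the truncated decay satisfies
    mean = tau - T / (exp(T / tau) - 1), so a few fixed-point
    iterations of tau = mean + T / (exp(T / tau) - 1) undo the bias."""
    m = sum(arrival_times) / len(arrival_times)
    if window is None:
        return m  # window much longer than tau: mean ~ tau
    tau = m
    for _ in range(50):  # fixed-point bias correction
        tau = m + window / (math.exp(window / tau) - 1.0)
    return tau

# Deterministic synthetic photons: quantile-sample a truncated
# exponential with tau = 5 ns inside a 20 ns window.
tau_true, T, N = 5.0, 20.0, 10000
scale = 1.0 - math.exp(-T / tau_true)
times = [-tau_true * math.log(1.0 - (i + 0.5) / N * scale)
         for i in range(N)]
tau = cmm_lifetime(times, window=T)
```

CMM needs only a running sum and a count per pixel, which is why it suits high-frame-rate hardware better than iterative curve fitting.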

  19. A sprayable luminescent pH sensor and its use for wound imaging in vivo.

    Science.gov (United States)

    Schreml, Stephan; Meier, Robert J; Weiß, Katharina T; Cattani, Julia; Flittner, Dagmar; Gehmert, Sebastian; Wolfbeis, Otto S; Landthaler, Michael; Babilas, Philipp

    2012-12-01

    Non-invasive luminescence imaging is of great interest for studying biological parameters in wound healing, tumors and other biomedical fields. Recently, we developed the first method for 2D luminescence imaging of pH in vivo on humans, and a novel method for one-stop-shop visualization of oxygen and pH using the RGB read-out of digital cameras. Both methods make use of semitransparent sensor foils. Here, we describe a sprayable ratiometric luminescent pH sensor, which combines properties of both these methods. Additionally, a major advantage is that the sensor spray is applicable to very uneven tissue surfaces due to its consistency. A digital RGB image of the spray on tissue is taken. The signal of the pH indicator (fluorescein isothiocyanate) is stored in the green channel (G), while that of the reference dye [ruthenium(II)-tris-(4,7-diphenyl-1,10-phenanthroline)] is stored in the red channel (R). Images are processed by ratioing luminescence intensities (G/R) to produce pseudocolor pH maps of tissues, e.g. wounds. © 2012 John Wiley & Sons A/S.
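
    The G/R ratioing step described above can be sketched in a few lines; the calibration function below is a hypothetical linear stand-in, since the real ratio-to-pH curve would have to be measured with buffer solutions of known pH:

```python
def ratiometric_ph_map(rgb_image, calibrate):
    """Convert an RGB image of the sensor spray into a pH map by
    ratioing the green (indicator) and red (reference) channels."""
    ph = []
    for row in rgb_image:
        ph_row = []
        for r, g, b in row:
            ratio = g / r          # indicator / reference intensity
            ph_row.append(calibrate(ratio))
        ph.append(ph_row)
    return ph

def calibrate(ratio):
    # Hypothetical linear calibration for illustration only; a real
    # sensor calibration is nonlinear and buffer-derived.
    return 4.0 + 4.0 * ratio

# Tiny 2x2 "image" of (R, G, B) pixel values.
image = [[(100, 50, 10), (100, 100, 10)],
         [(200, 50, 10), (80, 120, 10)]]
ph_map = ratiometric_ph_map(image, calibrate)
print(ph_map[0][0])  # 6.0 (ratio 0.5 -> pH 4 + 4*0.5)
```

    Because only the ratio enters the map, uneven illumination and spray thickness largely cancel, which is the point of the ratiometric design.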

  20. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenyà, Guillem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...... of the sensors. Performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancelation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs...

  1. Radiometric inter-sensor cross-calibration uncertainty using a traceable high accuracy reference hyperspectral imager

    Science.gov (United States)

    Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig

    2017-08-01

    Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite-derived information, and is particularly critical for long-time-base climate-change studies and for enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue by re-calibrating itself in orbit directly to a primary standard of the international system of units (SI), and then extending this SI traceability to other sensors through in-flight cross-calibration over a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allow, this results in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies developed and used to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration against a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as those of TRUTHS. In this study, the Multi-Spectral Imager (MSI) of Sentinel-2 and the Landsat-8 Operational Land Imager (OLI) are evaluated as examples; however, the analysis is readily translatable to larger-footprint sensors such as the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Visible Infrared Imaging Radiometer Suite (VIIRS). The study considers the criticality of the instrumental and observational characteristics for pixel-level reflectance factors within a defined spatial region of interest (ROI) within the target site, and quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool
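
    When the individual contributors are uncorrelated, an uncertainty budget like the one described combines in quadrature (root-sum-of-squares). The component names and values below are illustrative, not the paper's budget:

```python
def combined_standard_uncertainty(components):
    """Uncorrelated uncertainty contributions combine in quadrature,
    following standard uncertainty-propagation practice (the GUM)."""
    return sum(u ** 2 for u in components) ** 0.5

# Hypothetical per-pixel reflectance-factor uncertainty budget (%):
budget = {
    "reference sensor radiometry": 0.3,
    "spectral band mismatch": 0.4,
    "spatial co-registration": 0.2,
    "temporal / BRDF difference": 0.5,
}
u_total = combined_standard_uncertainty(budget.values())
print(round(u_total, 2))  # 0.73
```

    The quadrature sum is dominated by the largest terms, which is why the paper's focus on quantifying each domain (spectral, spatial, temporal) separately matters.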

  2. A novel dual gating approach using joint inertial sensors: implications for cardiac PET imaging

    Science.gov (United States)

    Jafari Tadi, Mojtaba; Teuho, Jarmo; Lehtonen, Eero; Saraste, Antti; Pänkäälä, Mikko; Koivisto, Tero; Teräs, Mika

    2017-10-01

    Positron emission tomography (PET) is a non-invasive imaging technique which may be considered the state of the art for the examination of cardiac inflammation due to atherosclerosis. A fundamental limitation of PET is that cardiac and respiratory motions reduce the quality of the achieved images. Current approaches for motion compensation involve gating the PET data based on the timing of quiescent periods of the cardiac and respiratory cycles. In this study, we present a novel gating method called microelectromechanical (MEMS) dual gating, which relies on joint non-electrical sensors, i.e. a tri-axial accelerometer and a gyroscope. This approach can be used for optimized selection of the quiescent phases of the cardiac and respiratory cycles. Cardiomechanical activity was investigated against echocardiography observations to confirm whether this dual-sensor solution can provide accurate trigger timings for cardiac gating. Additionally, longitudinal chest motions originating from breathing were measured by accelerometer-derived and gyroscope-derived respiratory (ADR and GDR) tracking. The ADR and GDR signals were evaluated against Varian real-time position management (RPM) signals in terms of amplitude and phase, and high linear correlation and agreement were achieved between the reference electrocardiography, RPM, and measured MEMS signals. We also performed a Ge-68 phantom study to evaluate possible metal artifacts caused by the integrated read-out electronics, including the mechanical sensors and semiconductors. The reconstructed phantom images did not reveal any image artifacts. Thus, it was concluded that MEMS-driven dual gating can be used in PET studies without an effect on the quantitative or visual accuracy of the PET images. Finally, the applicability of MEMS dual gating for cardiac PET imaging was investigated with two atherosclerosis patients. Dual-gated PET images were successfully reconstructed using only MEMS signals and both qualitative and quantitative

  3. Sensor for real-time determining the polarization state distribution in the object images

    Science.gov (United States)

    Kilosanidze, Barbara; Kakauridze, George; Kvernadze, Teimuraz; Kurkhuli, Georgi

    2015-10-01

    An innovative real-time polarimetric method is presented, based on the integral polarization-holographic diffraction element developed by us. This element is suggested for real-time analysis of the polarization state of light, for example to help highlight military equipment in a scene. In the process of diffraction, the element decomposes the incident light onto orthogonal circular and linear bases. Simultaneous measurement of the intensities of the four diffracted beams by photodetectors, together with the appropriate software, allows the polarization state of the analyzed light (all four Stokes parameters) and its changes to be obtained in real time. The element with photodetectors and software constitutes a polarization-state sensor. Such a sensor allows the point-by-point distribution of the polarization state in the images of objects to be determined. The spectral working range of the element is 530–1600 nm. The sensor is compact, lightweight and relatively cheap, and can be easily installed on space and airborne platforms. It has no mechanically moving or electronically controlled elements, and its operating speed is limited only by computer processing. Such a sensor is proposed for characterizing object surfaces in optical remote sensing: the distribution of the polarization state of light in the image of a recognizable object, and the dispersion of this distribution, provide additional information for identifying an object. Detection of a useful signal of a predetermined polarization against the statistically random noise of an underlying surface is also possible. The sensor is further considered for the nondestructive determination of the distribution of the stressed state in different constructions, based on the distribution of the polarization state of light reflected from the object under
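
    With four measured intensities, all four Stokes parameters follow from linear combinations. The sketch below uses the textbook four-measurement scheme (0°, 90°, 45° linear and right-circular analyzers) as a stand-in for the element's actual orthogonal circular and linear bases:

```python
def stokes_from_intensities(i0, i90, i45, irc):
    """Stokes vector from four analyzer measurements:
    linear 0 deg, linear 90 deg, linear 45 deg, right circular."""
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical linear
    s2 = 2 * i45 - s0      # +45 vs. -45 linear
    s3 = 2 * irc - s0      # right vs. left circular
    return (s0, s1, s2, s3)

# Horizontally polarized light: all power in the 0-degree channel,
# half the power in the 45-degree and circular channels.
print(stokes_from_intensities(1.0, 0.0, 0.5, 0.5))
# (1.0, 1.0, 0.0, 0.0)
```

    Applying this per pixel to the four diffracted-beam images yields the point-by-point polarization-state distribution the abstract describes.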

  4. Simulation of Meteosat Third Generation-Lightning Imager through tropical rainfall measuring mission: Lightning Imaging Sensor data

    Science.gov (United States)

    Biron, Daniele; De Leonibus, Luigi; Laquale, Paolo; Labate, Demetrio; Zauli, Francesco; Melfi, Davide

    2008-08-01

    The Centro Nazionale di Meteorologia e Climatologia Aeronautica recently hosted a fellowship sponsored by Galileo Avionica, with the intent to study and simulate the behavior of the Meteosat Third Generation - Lightning Imager (MTG-LI) sensor using Tropical Rainfall Measuring Mission - Lightning Imaging Sensor (TRMM-LIS) data. For the next generation of geostationary earth observation satellites, major operating agencies are planning to include an optical imaging mission that continuously observes lightning pulses in the atmosphere; EUMETSAT has decided in recent years that one of the three candidate missions to be flown on MTG is LI, a Lightning Imager. The MTG-LI mission has no Meteosat Second Generation heritage, so users need to evaluate the possible real-time data output of the instrument before agreeing to include it in the MTG payload. The authors took the expected LI design from the MTG Mission Requirement Document and reprocessed a real lightning dataset, acquired from space by the TRMM-LIS instrument, to produce a simulated MTG-LI lightning dataset. The simulation was performed in several runs, varying the Minimum Detectable Energy and taking into account processing steps from event detection to the final lightning information. A definition of the specific meteorological requirements is given from the potential use of the final lightning information in meteorology for convection estimation and numerical cloud modeling. The study results show the range of instrument-requirement relaxations that leads to a minimal reduction in the final lightning information.

  5. New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems

    Science.gov (United States)

    Eckardt, Andreas; Börner, Anko; Lehmann, Frank

    2007-10-01

    The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Technology changes in detector development, together with significant improvements in manufacturing accuracy and ongoing engineering research, define the next generation of spaceborne sensor systems for Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments enables high-resolution sensor systems, in terms of both geometry and radiometry, and data products such as 3D virtual reality. Systematic approaches are essential for designing such complex sensor systems for dedicated tasks. System-level modeling of the instrument inside a simulated environment is the starting point of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified, and suitable procedures must be defined at component, module and system level for the assembly, test and verification process. This development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote-sensing purposes.

  6. A CMOS image sensor using high-speed lock-in pixels for stimulated Raman scattering

    Science.gov (United States)

    Lioe, DeXing; Mars, Kamel; Takasawa, Taishi; Yasutomi, Keita; Kagawa, Keiichiro; Hashimoto, Mamoru; Kawahito, Shoji

    2016-03-01

    A CMOS image sensor using high-speed lock-in pixels for stimulated Raman scattering (SRS) spectroscopy is presented in this paper. The effective SRS signal arising from the stimulated emission in the SRS mechanism is very small compared with the offset of the probing laser source, at a ratio of 10^-4 to 10^-5. In order to extract this signal, the common offset component is removed, and the small difference component is sampled using a switched-capacitor integrator with a fully differential amplifier. The sampling is performed over many integration cycles to achieve appropriate amplification. The lock-in pixels utilize a high-speed lateral electric field charge modulator (LEFM) to demodulate the SRS signal, which is modulated at a high frequency of 20 MHz. A prototype chip is implemented in 0.11 μm CMOS image sensor technology.
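
    The lock-in principle behind these pixels, extracting a tiny component modulated at a known frequency from a large DC offset, can be sketched numerically. The sampling rate and modulation depth below are illustrative, not the chip's parameters:

```python
import math

def lockin_extract(samples, f_mod, f_samp):
    """Lock-in detection: correlate the signal with in-phase and
    quadrature references at the modulation frequency, so the tiny
    modulated component accumulates while the large DC offset
    averages out over whole modulation periods."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * f_mod * k / f_samp)
            for k, s in enumerate(samples)) * 2 / n
    q = sum(s * math.sin(2 * math.pi * f_mod * k / f_samp)
            for k, s in enumerate(samples)) * 2 / n
    return math.hypot(i, q)

# Probe offset 1.0 carrying an SRS modulation 1e-4 of it at f_mod.
f_mod, f_samp, n = 20e6, 80e6, 4000
signal = [1.0 + 1e-4 * math.cos(2 * math.pi * f_mod * k / f_samp)
          for k in range(n)]
amp = lockin_extract(signal, f_mod, f_samp)
print(round(amp / 1e-4, 2))  # 1.0: the 1e-4 modulation is recovered
```

    The LEFM pixel performs the equivalent demodulation in the charge domain before readout, which is what makes the 10^-4-level signal recoverable at all.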

  7. Optical palpation: optical coherence tomography-based tactile imaging using a compliant sensor.

    Science.gov (United States)

    Kennedy, Kelsey M; Es'haghian, Shaghayegh; Chin, Lixin; McLaughlin, Robert A; Sampson, David D; Kennedy, Brendan F

    2014-05-15

    We present optical palpation, a tactile imaging technique for mapping micrometer- to millimeter-scale mechanical variations in soft tissue. In optical palpation, a stress sensor consisting of translucent, compliant silicone with known stress-strain behavior is placed on the tissue surface and a compressive load is applied. Optical coherence tomography (OCT) is used to measure the local strain in the sensor, from which the local stress at the sample surface is calculated and mapped onto an image. We present results in tissue-mimicking phantoms, demonstrating the detection of a feature embedded 4.7 mm below the sample surface, well beyond the depth range of OCT. We demonstrate the use of optical palpation to delineate the boundary of a region of tumor in freshly excised human breast tissue, validated against histopathology.
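
    The core computation of optical palpation, converting an OCT-measured local strain in the compliant layer into local stress via the layer's known stress-strain behavior, can be sketched as a piecewise-linear lookup. The calibration values below are illustrative, not the paper's silicone data:

```python
def stress_from_strain(strain, curve):
    """Piecewise-linear lookup of stress (kPa) for a measured strain,
    using the pre-characterized stress-strain curve of the compliant
    silicone layer. `curve` is a sorted list of (strain, stress) pairs."""
    for (s0, p0), (s1, p1) in zip(curve, curve[1:]):
        if s0 <= strain <= s1:
            return p0 + (p1 - p0) * (strain - s0) / (s1 - s0)
    raise ValueError("strain outside calibrated range")

# Hypothetical calibration of a soft silicone (illustrative values).
curve = [(0.00, 0.0), (0.05, 2.0), (0.10, 5.0), (0.20, 14.0)]
print(round(stress_from_strain(0.075, curve), 2))  # 3.5 kPa
```

    Mapping this stress value pixel by pixel over the sensor surface produces the tactile image; a stiff inclusion below the surface shows up as a region of elevated stress even when it lies beyond the OCT depth range.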

  8. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    Science.gov (United States)

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

    In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise in high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase, while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by measurements on our designed test structure, fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692
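
    The motivation for performing gamma correction during conversion can be illustrated numerically: applying gamma digitally after a uniform ADC wastes codes in the dark region, whereas a non-uniform (gamma-shaped) quantizer, which the VCO-based ADC effectively realizes, preserves them. A simplified model, not the paper's circuit:

```python
def gamma(x, g=2.2):
    """Standard power-law gamma curve on normalized input [0, 1]."""
    return x ** (1.0 / g)

def digital_pipeline(x, bits=8):
    """Uniform ADC first, gamma applied digitally afterwards."""
    levels = 2 ** bits
    code = round(x * (levels - 1))            # uniform quantization
    return gamma(code / (levels - 1))

def analog_gamma_pipeline(x, bits=8):
    """Gamma applied during conversion: the quantizer levels are
    spaced uniformly in gamma(x), i.e. non-uniformly in x."""
    levels = 2 ** bits
    code = round(gamma(x) * (levels - 1))
    return code / (levels - 1)

# Count distinct output codes for dark scene values (x in [0, 0.01]).
xs = [i / 10000 for i in range(101)]
d = len({digital_pipeline(x) for x in xs})
a = len({analog_gamma_pipeline(x) for x in xs})
print(d, a)  # the gamma-domain quantizer keeps far more dark codes
```

    The digital pipeline collapses the whole dark range onto a handful of codes (quantization noise the gamma step cannot undo), while quantizing in the gamma domain spreads the same range over dozens of codes.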

  9. Defense Industrial Base Assessment: U.S. Imaging and Sensors Industry

    Science.gov (United States)

    2006-10-01

    welding, and the C-17 aircraft program. Background: In the past, highly sophisticated imaging and sensors applications were mainly used for... The Human-Centered Systems Lab research programs investigate human performance issues related...

  10. Electrodynamics sensor for the image reconstruction process in an electrical charge tomography system.

    Science.gov (United States)

    Rahmat, Mohd Fua'ad; Isa, Mohd Daud; Rahim, Ruzairi Abdul; Hussin, Tengku Ahmad Raja

    2009-01-01

    Electrical charge tomography (EChT) is a non-invasive imaging technique that aims to reconstruct an image of the material being conveyed from data measured by electrodynamics sensors installed around the pipe. Image reconstruction in electrical charge tomography is vital and has not been widely studied before. Three methods have been introduced previously: the linear back projection method, the filtered back projection method and the least squares method. These methods normally face ill-posed problems, and their solutions are unstable and inaccurate. To ensure stability and accuracy, a special solution should be applied to obtain a meaningful reconstruction. In this paper, a new image reconstruction method, least squares with regularization (LSR), is introduced to reconstruct the image of material in a gravity-mode conveyor pipeline for electrical charge tomography. Numerical analysis based on simulation data indicates that this algorithm efficiently overcomes the numerical instability. The results show that the accuracy of the images reconstructed with the proposed algorithm is enhanced and similar to images captured by a CCD camera. As a result, an efficient method for electrical charge tomography image reconstruction has been introduced.
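
    Least squares with regularization in the Tikhonov sense solves min_x ||Ax − b||² + λ||x||², i.e. the regularized normal equations (AᵀA + λI)x = Aᵀb; the λI term is what stabilizes the ill-posed inversion the abstract refers to. A self-contained sketch on a toy near-singular system (the sensitivity matrix and λ are illustrative):

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k]
                              for k in range(r + 1, n))) / A[r][r]
    return x

def lsr_reconstruct(A, b, lam):
    """Least squares with regularization: solve (A^T A + lam*I) x = A^T b."""
    rows, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(rows)) +
            (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(rows)) for i in range(n)]
    return solve(AtA, Atb)

# Toy sensitivity matrix with nearly dependent columns (ill-posed)
# and noise-free measurements consistent with x = (1, 1).
A = [[1.0, 1.0], [1.0, 1.0001], [2.0, 2.0]]
b = [2.0, 2.0001, 4.0]
x = lsr_reconstruct(A, b, lam=1e-3)
print([round(v, 3) for v in x])  # both components near 1.0
```

    Without the λ term the normal-equations matrix is nearly singular and small measurement noise would blow up the solution; the regularized solve stays stable at the cost of a slight bias.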

  11. Versatile, Compact, Low-Cost, MEMS-Based Image Stabilization for Imaging Sensor Performance Enhancement Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LW Microsystems proposes to develop a compact, low-cost image stabilization system suitable for use with a wide range of focal-plane imaging systems in remote...

  12. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    J. Rodolfo Maestre-Rendon

    2017-11-01

    Full Text Available Manual measurements of foot anthropometry can lead to errors since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses that are given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.

  13. Column-parallel correlated multiple sampling circuits for CMOS image sensors and their noise reduction effects.

    Science.gov (United States)

    Suh, Sungho; Itoh, Shinya; Aoyama, Satoshi; Kawahito, Shoji

    2010-01-01

    For low-noise complementary metal-oxide-semiconductor (CMOS) image sensors, reducing pixel source-follower noise is becoming very important. Column-parallel high-gain readout circuits are useful for low-noise CMOS image sensors. This paper presents column-parallel high-gain signal readout circuits, correlated multiple sampling (CMS) circuits, and their noise reduction effects. In CMS, the gain of the noise cancelling is controlled by the number of samplings. It has an effect similar to that of an amplified CDS for thermal noise, but is somewhat more effective for 1/f and RTS noises. Two types of CMS, with simple integration and with folding integration, are proposed. In the folding integration, the output signal swing is suppressed by negative feedback using a comparator and a one-bit D-to-A converter. The CMS circuit using the folding integration technique achieves a very low noise level while maintaining a wide dynamic range. The noise reduction effects of these circuits have been investigated through noise analysis and an implementation in a 1 Mpixel pinned-photodiode CMOS image sensor. Using 16 samplings, a dynamic range of 59.4 dB with a noise level of 1.9 e- for the simple-integration CMS, and 75 dB with 2.2 e- for the folding-integration CMS, are obtained.
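
    The noise-reduction mechanism of CMS, with thermal read noise averaging down as roughly 1/√M when each of the reset and signal levels is sampled M times, can be illustrated with a simple simulation (1/f and RTS components, where CMS helps somewhat more, are not modeled here):

```python
import random

def cms_readout(signal, read_noise, m, rng):
    """Correlated multiple sampling: average m reset samples and m
    signal samples, then take the difference. Each sample carries
    independent Gaussian read noise."""
    reset = sum(rng.gauss(0.0, read_noise) for _ in range(m)) / m
    sig = sum(signal + rng.gauss(0.0, read_noise) for _ in range(m)) / m
    return sig - reset

def std(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(42)
read_noise = 10.0  # e- rms per raw sample (illustrative)
single = [cms_readout(100.0, read_noise, 1, rng) for _ in range(4000)]
multi = [cms_readout(100.0, read_noise, 16, rng) for _ in range(4000)]

print(round(std(single), 1), round(std(multi), 1))  # ~14.1 vs ~3.5
```

    With m = 1 the difference of two noisy samples has noise σ√2; with m = 16 it drops by a factor of four, matching the 1/√M scaling that motivates the 16-sampling results quoted above.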

  14. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques.

    Science.gov (United States)

    Maestre-Rendon, J Rodolfo; Rivera-Roman, Tomas A; Sierra-Hernandez, Juan M; Cruz-Aceves, Ivan; Contreras-Medina, Luis M; Duarte-Galvan, Carlos; Fernandez-Jaramillo, Arturo A

    2017-11-22

    Manual measurements of foot anthropometry can lead to errors since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses that are given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedist, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated in 0.99 with the measurements from the digitalized image of the ink mat.

  15. A low-noise wide dynamic range CMOS image sensor with low and high temperatures resistance

    Science.gov (United States)

    Mizobuchi, Koichi; Adachi, Satoru; Tejada, Jose; Oshikubo, Hiromichi; Akahane, Nana; Sugawa, Shigetoshi

    2008-02-01

    A temperature-resistant 1/3-inch SVGA (800×600 pixels), 5.6 μm pixel pitch, wide-dynamic-range (WDR) CMOS image sensor has been developed using a lateral overflow integration capacitor (LOFIC) in each pixel. The sensor chips are fabricated in a 0.18 μm 2P3M process with a fully optimized front end of line (FEOL) and back end of line (BEOL) for lower dark current. By implementing a low-electric-field potential design for the photodiodes, reducing damage, recovering crystal defects and terminating interface states in the FEOL and BEOL, the dark current is improved to 12 e-/pixel·s at 60 °C, a 50% reduction from the previous very-low-dark-current (VLDC) FEOL, and its contribution to the temporal noise is improved. Furthermore, design optimizations of the readout circuits, especially the signal- and noise-hold circuit and the programmable gain amplifier (PGA), are also implemented. The measured temporal noise is 2.4 e- rms at 60 fps (36 MHz operation). The dynamic range is extended to 100 dB with a 237 ke- full-well capacity. To secure the temperature resistance, the sensor chip also receives an inorganic cap on the micro lenses and a metal hermetic-seal package assembly. Image samples at low and high temperatures show significant improvement in image quality.

  16. Color filters including infrared cut-off integrated on CMOS image sensor.

    Science.gov (United States)

    Frey, Laurent; Parrein, Pascale; Raby, Jacques; Pellé, Catherine; Hérault, Didier; Marty, Michel; Michailos, Jean

    2011-07-04

    A color image was taken with a CMOS image sensor without any infrared cut-off filter, using red, green and blue metal/dielectric filters arranged in a Bayer pattern with 1.75 µm pixel pitch. The three colors were obtained by a thickness variation of only two layers in the 7-layer stack, with a technological process including four photolithography levels. The thickness of the filter stack was only half that of traditional color resists, potentially enabling a reduction of optical crosstalk for smaller pixels. Both the color errors and the signal-to-noise ratio derived from the optimized spectral responses are expected to be similar to those of color resists combined with an infrared filter.

  17. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One factor affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided as commercial software, open-source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms for achieving an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and

  18. Energy-Constrained Quality Optimization for Secure Image Transmission in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2007-01-01

    Full Text Available Resource allocation for multimedia selective encryption and energy-efficient transmission has not been fully investigated in the literature for wireless sensor networks (WSNs). In this article, we propose a new cross-layer approach to optimize selectively encrypted image transmission quality in WSNs under a strict energy constraint. A new selective image encryption approach favorable for unequal error protection (UEP) is proposed, which reduces encryption overhead considerably by controlling the structure of the image bitstreams. Also, a novel cross-layer UEP scheme based on cipher-plaintext diversity is studied. In this UEP scheme, resources are unequally and optimally allocated across the encrypted bitstream structure, including data-position information and magnitude-value information. Simulation studies demonstrate that the proposed approach can simultaneously achieve improved image quality and assured energy efficiency with secure transmission over WSNs.

  19. High frame rate multi-resonance imaging refractometry with distributed feedback dye laser sensor

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2015-01-01

    High frame rate and highly sensitive imaging of refractive index changes on a surface is very promising for studying the dynamics of dissolution, mixing and biological processes without the need for labeling. Here, a highly sensitive distributed feedback (DFB) dye laser sensor for high frame rate...... by analyzing laser light from all areas in parallel with an imaging spectrometer. With this multi-resonance imaging refractometry method, the spatial position in one direction is identified from the horizontal, i.e., spectral position of the multiple laser lines which is obtained from the spectrometer charge-coupled device (CCD) array. The orthogonal spatial position is obtained from the vertical spatial position on the spectrometer CCD array as in established spatially resolved spectroscopy. Here, the imaging technique is demonstrated by monitoring the motion of small sucrose molecules upon dissolution......

  20. Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model

    Science.gov (United States)

    Lemeshewsky, G.P.; Schowengerdt, R.A.; ,

    2001-01-01

    The enhanced thematic mapper plus (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV, instead of 120-m, and a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher-spatial-resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge-contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60 m, given degraded (from 30-m to 60-m) spatial resolution input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate coarser-resolution sensor imagery from higher-resolution imagery. For comparison, the second degradation method is simply Gaussian low-pass filtering and subsampling, wherein the Gaussian filter approximates the full-width-at-half-maximum amplitude characteristics of the TF-based spatial filter. Two fixed-size NNs (that is, with the same number of weights and processing elements) were trained separately with the degraded-resolution data, and the sharpening results compared. The comparison evaluates the relative influence of the degradation technique employed and whether or not it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvements for the sensor model-based technique. Further
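
    The scale-transfer training idea (learn the band-to-TIR mapping at the coarse scale, then apply it at full resolution) can be sketched with a trivial linear model standing in for the paper's feedforward NN, a box filter standing in for the degradation step, and synthetic 1-D data:

```python
def degrade(xs):
    """Stand-in for the sensor-TF / Gaussian degradation step:
    2x box filter and subsample (30 m -> 60 m)."""
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]

def fit_linear(inputs, targets):
    """Least-squares slope and intercept: a trivial stand-in for the
    multilayer feedforward NN used in the paper."""
    n = len(inputs)
    mx, my = sum(inputs) / n, sum(targets) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(inputs, targets)) /
         sum((x - mx) ** 2 for x in inputs))
    return a, my - a * mx

# Synthetic 1-D "scene": band-7 radiance and a correlated TIR signal
# (the cross-band correlation is assumed, as in the paper's premise).
band7_30m = [10.0, 12.0, 20.0, 22.0, 40.0, 42.0, 41.0, 39.0]
tir_60m = [0.5 * v + 3 for v in degrade(band7_30m)]

# Train at the coarse scale: degraded band vs. 60 m TIR...
a, b = fit_linear(degrade(band7_30m), tir_60m)
# ...then apply at full resolution: a sharpened 30 m TIR estimate.
tir_30m = [a * v + b for v in band7_30m]
print([round(t, 1) for t in tir_30m[:2]])  # [8.0, 9.0]
```

    The key assumption, made explicit by the toy data, is that the band-to-TIR relationship learned at 60 m still holds at 30 m; the paper's TF-versus-Gaussian comparison is about how faithfully the degradation step mimics the real sensor so that this assumption transfers.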

  1. Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to...

  2. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks (WMSNs) with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is developed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The cluster head node receives the compressed images from the ordinary nodes and transmits them to the station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme yields higher quality recovered images and lower total node energy consumption. It reduces the energy burden and prolongs the life of the whole network system, which has great significance for practical applications of WMSNs.
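
    The NMF factorization at the heart of such a scheme can be sketched with the classic Lee-Seung multiplicative updates; this numpy-only sketch (the rank, iteration count, and block size are illustrative, not the paper's settings) shows why transmitting the factors W and H compresses the data:

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor a non-negative matrix V (m x n) into W (m x rank) @ H (rank x n)
    using Lee-Seung multiplicative updates. A sensor node would transmit the
    small factors W and H instead of the full block V."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-4
    H = rng.random((rank, n)) + 1e-4
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H

block = np.random.default_rng(1).random((32, 32))  # one image block
W, H = nmf(block, rank=8)
err = np.linalg.norm(block - W @ H) / np.linalg.norm(block)
```

    For a 32 x 32 block and rank 8, W and H together hold 512 values versus 1024 in the original block, a 2:1 reduction before any entropy coding.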

  3. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    Science.gov (United States)

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One evaluated the function of the ultrahigh-sensitive camera; the other tested the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was very similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescent images under high illumination in the field of laparoscopic surgery.

  4. Development and evaluation of a digital radiographic system based on CMOS image sensor

    Science.gov (United States)

    Kim, Ho Kyung; Cho, Gyuseong; Lee, Seung Wook; Shin, Young Hoon; Cho, Hyo Sung

    2001-06-01

    A cost-effective digital radiographic system with a large field-of-view (FOV) of 17" × 17" has been developed. The cascaded imaging system consists mainly of three parts: (1) a phosphor screen to convert incident X-rays into visible photons; (2) an 8 × 8 matrix of lens assemblies to efficiently collect the visible photons emitted by the phosphor screen; and (3) 8 × 8 complementary metal-oxide-semiconductor (CMOS) image sensors, each aligned to the corresponding lens assembly. Although low in cost due to the economical CMOS image sensors, the overall performance is comparable with other commercial digital radiographic systems. Analysis of signal and noise propagation shows that the system is not an "X-ray quantum-limited" system; rather, it has a secondary quantum sink at the light-collecting stage. The system resolution is about 2 line pairs per millimeter, as determined both from X-ray images of a line-pair test pattern and from calculation of the modulation transfer function. Detailed experimental and theoretical analyses of performance are discussed.
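
    The "secondary quantum sink" claim can be illustrated with a simple quantum accounting calculation: propagate the mean number of quanta per incident X-ray through the imaging cascade and find the stage with the fewest quanta, which dominates the noise. The stage gains below are hypothetical, chosen only to show a sink at the light-collection and detection stages rather than at X-ray absorption:

```python
# Hypothetical stage gains for a lens-coupled phosphor/CMOS chain.
stages = [
    ("X-rays absorbed in phosphor", 0.6),        # absorption efficiency
    ("light photons per absorbed X-ray", 1500),  # phosphor gain
    ("lens collection efficiency", 0.001),       # the lens-coupling bottleneck
    ("sensor quantum efficiency", 0.4),
]

quanta = 1.0  # mean quanta per incident X-ray
history = []
for name, gain in stages:
    quanta *= gain
    history.append((name, quanta))

# The minimum of the quantum accounting curve marks the quantum sink;
# if it falls after the first (absorption) stage, the sink is "secondary".
sink_stage, sink_quanta = min(history, key=lambda t: t[1])
```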

  5. Planetary exploration with optical imaging systems review: what is the best sensor for future missions

    Science.gov (United States)

    Michaelis, H.; Behnke, T.; Bredthauer, R.; Holland, A.; Janesick, J.; Jaumann, R.; Keller, H. U.; Magrin, D.; Greggio, D.; Mottola, Stefano; Thomas, N.; Smith, P.

    2017-11-01

    When we talk about planetary exploration missions, most people spontaneously think of fascinating images from other planets or close-up pictures of small planetary bodies such as asteroids and comets. Such images come in most cases from VIS/NIR imaging systems, simply called `cameras', which were typically built by institutes in collaboration with industry. Until now, they have nearly all been based on silicon CCD sensors, with filter wheels and often power-hungry electronics. The question is what the challenges for future missions are, and what can be done to improve performance and scientific output. The exploration of Mars is ongoing. NASA and ESA are planning future missions to the outer planets, such as to the icy Jovian moons. The exploration of asteroids and comets is the focus of several recent and future missions. Furthermore, the detection and characterization of exo-planets will keep us busy for generations to come. The paper discusses the challenges and visions of imaging sensors for future planetary exploration missions, with a focus on monolithic VIS/NIR detectors.

  6. Fluorescence lifetime imaging using a single photon avalanche diode array sensor (Conference Presentation)

    Science.gov (United States)

    Wargocki, Piotr M.; Spence, David J.; Goldys, Ewa M.; Charbon, Edoardo; Bruschini, Claudio E.; Antalović, Ivan Michel; Burri, Samuel

    2017-02-01

    Single-photon detectors allow us to work with the weakest fluorescence signals. Single-photon arrays, combined with ps-controlled gating, allow us to create image maps of fluorescence lifetimes, which can be used for in-vivo discrimination of tissue activity. Here we present fluorescence lifetime imaging using the `SwissSPAD' sensor, a 512-by-128-pixel array of gated single-photon detectors fabricated in a standard high-voltage 0.35 μm CMOS process. We present a protocol for spatially resolved lifetime measurements in which the lifetime can be retrieved for each pixel. We demonstrate the system by imaging patterns of Fluorescein and Rhodamine B on test slides, as well as by measuring mixed samples to retrieve both components of the decay lifetime. The single-photon sensitivity of the sensor makes it a valuable instrument for live cell or live animal (in vivo) measurements of weak autofluorescent signals, for example distinguishing unlabelled free and bound NADH. Our ultimate goal is to create a real-time fluorescence lifetime imaging system, possibly integrated into augmented reality goggles, which could allow immediate discrimination of in vivo tissues.
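
    One common way to turn gated photon counts into a per-pixel lifetime (a sketch of the general rapid-lifetime-determination idea, not necessarily the protocol used with SwissSPAD) is the two-gate method: for a mono-exponential decay, counts in two identical gates separated by a known delay give the lifetime in closed form:

```python
import numpy as np

def two_gate_lifetime(n1, n2, gate_delay_ns):
    """Rapid lifetime determination: for a mono-exponential decay,
    counts n1 and n2 in two identical gates separated by gate_delay_ns
    satisfy n2 = n1 * exp(-delay/tau), so tau = delay / ln(n1/n2)."""
    return gate_delay_ns / np.log(n1 / n2)

# Simulated gated counts for a pixel with a 4 ns lifetime fluorophore.
tau_true = 4.0   # ns (illustrative)
delay = 2.0      # ns between the two gate openings
n1 = 1e5
n2 = n1 * np.exp(-delay / tau_true)
tau_est = two_gate_lifetime(n1, n2, delay)
```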

  7. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor.

    Science.gov (United States)

    Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi

    2016-11-25

    Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel tracks the sun perpendicularly, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the brightest region in the sky image represents the location of the sun. The center of the brightest region is then taken as the sun center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information of the sun center is used by the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun-tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and building shelter. A practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and that the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time.
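
    The brightest-region approach can be sketched in a few lines of numpy (the threshold fraction and the synthetic sky image are illustrative, not the paper's parameters): threshold near the image maximum and take the centroid of the resulting region as the sun center:

```python
import numpy as np

def sun_center(sky, threshold_frac=0.95):
    """Locate the sun as the centroid of the brightest region:
    the pixels above threshold_frac of the image maximum."""
    mask = sky >= threshold_frac * sky.max()
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic sky: a bright Gaussian disc centered at row 30, column 70.
yy, xx = np.mgrid[0:100, 0:100]
sky = 255.0 * np.exp(-((yy - 30) ** 2 + (xx - 70) ** 2) / 50.0)
cy, cx = sun_center(sky)
```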

  8. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor

    Directory of Open Access Journals (Sweden)

    Ching-Chuan Wei

    2016-11-01

    Full Text Available Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel tracks the sun perpendicularly, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the brightest region in the sky image represents the location of the sun. The center of the brightest region is then taken as the sun center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information of the sun center is used by the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun-tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and building shelter. A practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and that the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time.

  9. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than in the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, to minimize aliasing at the cost of blurring, and the SNR is very high, to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field, and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation, also improve the signal information density, by up to about 30 percent at high SNRs.

  10. Gold nanoparticle flow sensors designed for dynamic X-ray imaging in biofluids.

    Science.gov (United States)

    Ahn, Sungsook; Jung, Sung Yong; Lee, Jin Pyung; Kim, Hae Koo; Lee, Sang Joon

    2010-07-27

    X-ray-based imaging is one of the most powerful and convenient methods in terms of versatility in applicable energy and high performance in use. Unlike conventional nuclear medicine imaging, X-ray imaging requires contrast agents, especially for effectively targeted and molecularly specific functions. Here, in contrast to the much-reported static accumulation of contrast agents in targeted organs, dynamic visualization in a living organism is successfully accomplished by particle-traced X-ray imaging for the first time. Flow phenomena across perforated end walls of xylem vessels in rice are monitored using gold nanoparticles (AuNPs, approximately 20 nm in diameter) as flow-tracing sensors working in nontransparent biofluids. The AuNPs are surface-modified to control hydrodynamic properties such as hydrodynamic size (DH), zeta potential, and surface plasmonic properties in aqueous conditions. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray nanoscopy (XN), and X-ray microscopy (XM) are used to correlate the interparticle interactions with X-ray absorption ability. Cluster formation and the X-ray contrast ability of the AuNPs, evaluated as flow-tracing sensors, are successfully modulated by controlling the interparticle interactions.

  11. A radiographic imaging system based upon a 2-D silicon microstrip sensor

    CERN Document Server

    Papanestis, A; Corrin, E; Raymond, M; Hall, G; Triantis, F A; Manthos, N; Evagelou, I; Van den Stelt, P; Tarrant, T; Speller, R D; Royle, G F

    2000-01-01

    A high resolution, direct-digital detector system based upon a 2-D silicon microstrip sensor has been designed, built and is undergoing evaluation for applications in dentistry and mammography. The sensor parameters and image requirements were selected using Monte Carlo simulations. Sensors selected for evaluation have a strip pitch of 50 μm on the p-side and 80 μm on the n-side. Front-end electronics and data acquisition are based on the APV6 chip and were adapted from systems used at CERN for high-energy physics experiments. The APV6 chip is not self-triggering, so data acquisition is done at a fixed trigger rate. This paper describes the mammographic evaluation of the double-sided microstrip sensor. Raw-data correction procedures were implemented to remove the effects of dead strips and non-uniform response. Standard test objects (TORMAX) were used to determine limiting spatial resolution and detectability. MTFs were determined using the edge response. The results indicate that the spatial resolution of the...
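
    The edge-response MTF measurement mentioned above follows a standard recipe: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the normalized magnitude of its Fourier transform. A minimal sketch on a synthetic Gaussian-blurred edge (the blur width is illustrative):

```python
import math
import numpy as np

def mtf_from_edge(esf):
    """MTF from an edge spread function: differentiate to get the
    line spread function, then take the normalized FFT magnitude."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic edge blurred by a Gaussian with sigma = 2 pixels:
# ESF(x) = 0.5 * (1 + erf((x - x0) / (sigma * sqrt(2)))).
x = np.arange(256)
esf = np.array([0.5 * (1 + math.erf((xi - 128) / (2 * math.sqrt(2)))) for xi in x])
mtf = mtf_from_edge(esf)  # mtf[k] is at spatial frequency k/256 cycles/pixel
```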

  12. Characterisation of a novel reverse-biased PPD CMOS image sensor

    Science.gov (United States)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of a reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices can be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.
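
    The photon transfer curve characterisation mentioned above rests on a simple shot-noise relation: in the shot-noise-limited regime, the conversion gain (e-/DN) equals mean over variance, with the variance usually estimated from the difference of two flat fields so that fixed-pattern noise cancels. A minimal simulation (the gain and signal level are made-up values, not those of this sensor):

```python
import numpy as np

def conversion_gain(flat1, flat2):
    """Photon-transfer estimate of conversion gain (e-/DN) from a
    flat-field pair: differencing the flats cancels fixed-pattern
    noise, and in the shot-noise regime gain = mean / variance."""
    mean = 0.5 * (flat1.mean() + flat2.mean())
    var = np.var(flat1 - flat2) / 2.0
    return mean / var

rng = np.random.default_rng(0)
g = 2.0  # ground-truth conversion gain for the simulation, e-/DN
electrons = rng.poisson(10000, size=(2, 512, 512))  # Poisson shot noise
f1, f2 = electrons / g                              # signals in DN
gain_est = conversion_gain(f1, f2)
```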

  13. Development of an ordered array of optoelectrochemical individually readable sensors with submicrometer dimensions: application to remote electrochemiluminescence imaging.

    Science.gov (United States)

    Chovin, Arnaud; Garrigue, Patrick; Vinatier, Philippe; Sojic, Neso

    2004-01-15

    A novel array of optoelectrochemical submicrometer sensors for remote electrochemiluminescence (ECL) imaging is presented. This device was fabricated by chemical etching of a coherent optical fiber bundle to produce a nanotip array. The surface of the etched bundle was sputter-coated with a thin layer of indium tin oxide in order to create a transparent and electrically conductive surface, which was then insulated with a new electrophoretic paint everywhere except the apex of each tip. These fabrication steps produced an ordered array of optoelectrochemical sensors with submicrometer dimensions that retains the optical fiber bundle architecture. The electrochemical behavior of the sensor array was independently characterized by cyclic voltammetry and ECL experiments. The steady-state current indicates that the sensors are diffusively independent. This sensor array was further studied with the co-reactant ECL model system Ru(bpy)(3)(2+)/TPrA. We clearly observed an ordered array of individual ECL micrometer spots, which corresponds to the sensor array structure. While the sensors of the array are not individually addressable electrochemically, we could establish that they are optically independent and individually readable. Finally, we show that remote ECL imaging can be performed quantitatively through the optoelectrochemical sensor array itself.

  14. Imaging Voltage in Genetically Defined Neuronal Subpopulations with a Cre Recombinase-Targeted Hybrid Voltage Sensor.

    Science.gov (United States)

    Bayguinov, Peter O; Ma, Yihe; Gao, Yu; Zhao, Xinyu; Jackson, Meyer B

    2017-09-20

    Genetically encoded voltage indicators create an opportunity to monitor electrical activity in defined sets of neurons as they participate in the complex patterns of coordinated electrical activity that underlie nervous system function. Taking full advantage of genetically encoded voltage indicators requires a generalized strategy for targeting the probe to genetically defined populations of cells. To this end, we have generated a mouse line with an optimized hybrid voltage sensor (hVOS) probe within a locus designed for efficient Cre recombinase-dependent expression. Crossing this mouse with Cre drivers generated double transgenics expressing hVOS probe in GABAergic, parvalbumin, and calretinin interneurons, as well as hilar mossy cells, new adult-born neurons, and recently active neurons. In each case, imaging in brain slices from male or female animals revealed electrically evoked optical signals from multiple individual neurons in single trials. These imaging experiments revealed action potentials, dynamic aspects of dendritic integration, and trial-to-trial fluctuations in response latency. The rapid time response of hVOS imaging revealed action potentials with high temporal fidelity, and enabled accurate measurements of spike half-widths characteristic of each cell type. Simultaneous recording of rapid voltage changes in multiple neurons with a common genetic signature offers a powerful approach to the study of neural circuit function and the investigation of how neural networks encode, process, and store information. SIGNIFICANCE STATEMENT Genetically encoded voltage indicators hold great promise in the study of neural circuitry, but realizing their full potential depends on targeting the sensor to distinct cell types. Here we present a new mouse line that expresses a hybrid optical voltage sensor under the control of Cre recombinase. Crossing this line with Cre drivers generated double-transgenic mice, which express this sensor in targeted cell types. In

  15. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry.

    Science.gov (United States)

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-04

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.
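
    The energy-dispersive color imaging described above can be sketched as per-photon classification: convert each measured photon energy to a wavelength via λ = hc/E and bin it into a color channel. The channel boundaries below are approximate, illustrative choices, not those of the TES system:

```python
H_C_EV_NM = 1239.84  # h*c in eV*nm

def classify_photon(energy_ev):
    """Map a measured photon energy to a coarse color channel, as an
    energy-dispersive detector such as a TES permits per photon."""
    wavelength_nm = H_C_EV_NM / energy_ev
    if wavelength_nm < 490:
        return "blue"
    if wavelength_nm < 580:
        return "green"
    if wavelength_nm < 700:
        return "red"
    return "near-IR"   # TES sensitivity extends to ~2800 nm

# Four example photon events: 449 nm, 551 nm, 649 nm, 1550 nm.
channels = [classify_photon(e) for e in (2.76, 2.25, 1.91, 0.80)]
```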

  16. Novel image fusion quality metrics based on sensor models and image statistics

    Science.gov (United States)

    Smith, Forrest A.; Chari, Srikant; Halford, Carl E.; Fanning, Jonathan; Reynolds, Joseph P.

    2009-05-01

    This paper presents progress in image fusion modeling. One fusion quality metric based on the Targeting Task Performance (TTP) metric and another based on entropy are presented. A human perception test was performed with fused imagery to determine the effectiveness of the metrics in predicting image fusion quality. Both fusion metrics first establish which of two source images is ideal in a particular spatial frequency pass band. The fused output of a given algorithm is then measured against this ideal in each pass band. The entropy-based fusion quality metric (E-FQM) uses statistical information (entropy) from the images, while the Targeting Task Performance fusion quality metric (TTP-FQM) uses the TTP metric value in each spatial frequency band. This TTP metric value is the measure of available excess contrast determined by the Contrast Threshold Function (CTF) of the source system and the target contrast. The paper also proposes an image fusion algorithm that chooses source image contributions using a quality measure similar to the TTP-FQM. To test the effectiveness of the TTP-FQM and E-FQM in predicting human image quality preferences, SWIR and LWIR imagery of tanks was fused using four different algorithms. A paired comparison test was performed with both source and fused imagery as stimuli. Eleven observers were asked to select which image enabled them to better identify the target. Over the ensemble of test images, the experiment showed that both the TTP-FQM and the E-FQM were capable of identifying the fusion algorithms most and least preferred by human observers. Analysis also showed that the TTP-FQM and E-FQM identify human image preferences better than existing fusion quality metrics such as the Weighted Fusion Quality Index and Mutual Information.
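
    The entropy statistic underlying the E-FQM can be computed per image (or per pass band) as the Shannon entropy of the gray-level histogram; the metric itself also involves band-wise comparison against an ideal source, which this minimal sketch omits:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram,
    the statistic underlying an entropy-based fusion quality metric."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

flat = np.full((64, 64), 128.0)       # one gray level: zero entropy
noisy = np.random.default_rng(0).integers(0, 256, (64, 64))
h_flat, h_noisy = image_entropy(flat), image_entropy(noisy)
```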

  17. Transmission-Type 2-Bit Programmable Metasurface for Single-Sensor and Single-Frequency Microwave Imaging

    Science.gov (United States)

    Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun

    2016-03-01

    The programmable and digital metamaterials or metasurfaces presented recently have huge potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with existing single-sensor imagers composed of active spatial modulators whose units are controlled independently, we introduce a randomly programmable metasurface to transform the masks of the modulators, in which rows and columns are controlled simultaneously, so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using frequency agility, the proposed imaging system makes use of variable modulators at a single frequency, which avoids object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which can support enough modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results at microwave frequencies are presented, validating our new single-sensor and single-frequency imaging system.
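
    The single-sensor measurement model can be sketched as a linear inverse problem: each random metasurface pattern acts as a mask, the sensor records one number per pattern, and the scene is recovered from the full set of measurements. This idealized, noiseless numpy sketch (scene size and mask count are arbitrary) uses plain least squares rather than the authors' inverse-scattering solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64    # 8x8 scene, flattened
n_masks = 128    # more measurements than unknowns

scene = rng.random(n_pixels)
# Random binary masks, one row per metasurface pattern.
masks = rng.integers(0, 2, size=(n_masks, n_pixels)).astype(float)

# Each single-sensor measurement is the scene summed through one mask.
measurements = masks @ scene

# Recover the scene by solving the overdetermined linear system.
recovered, *_ = np.linalg.lstsq(masks, measurements, rcond=None)
```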

  18. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process, thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and several combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  19. Radiometric, geometric, and image quality assessment of ALOS AVNIR-2 and PRISM sensors

    Science.gov (United States)

    Saunier, S.; Goryl, P.; Chander, G.; Santer, R.; Bouvet, M.; Collet, B.; Mambimba, A.; Kocaman, Aksakal S.

    2010-01-01

    The Advanced Land Observing Satellite (ALOS) was launched on January 24, 2006, by a Japan Aerospace Exploration Agency (JAXA) H-IIA launcher. It carries three remote-sensing sensors: 1) the Advanced Visible and Near-Infrared Radiometer type 2 (AVNIR-2); 2) the Panchromatic Remote-Sensing Instrument for Stereo Mapping (PRISM); and 3) the Phased-Array type L-band Synthetic Aperture Radar (PALSAR). Within the framework of ALOS Data European Node, as part of the European Space Agency (ESA), the European Space Research Institute worked alongside JAXA to provide contributions to the ALOS commissioning phase plan. This paper summarizes the strategy that was adopted by ESA to define and implement a data verification plan for missions operated by external agencies; these missions are classified by the ESA as third-party missions. The ESA was supported in the design and execution of this plan by GAEL Consultant. The verification of ALOS optical data from PRISM and AVNIR-2 sensors was initiated 4 months after satellite launch, and a team of principal investigators assembled to provide technical expertise. This paper includes a description of the verification plan and summarizes the methodologies that were used for radiometric, geometric, and image quality assessment. The successful completion of the commissioning phase has led to the sensors being declared fit for operations. The consolidated measurements indicate that the radiometric calibration of the AVNIR-2 sensor is stable and agrees with the Landsat-7 Enhanced Thematic Mapper Plus and the Envisat MEdium-Resolution Imaging Spectrometer calibration. The geometrical accuracy of PRISM and AVNIR-2 products improved significantly and remains under control. The PRISM modulation transfer function is monitored for improved characterization.

  20. An Efficient Image Enlargement Method for Image Sensors of Mobile in Embedded Systems

    OpenAIRE

    Hua Hua; Xiaomin Yang; Binyu Yan; Kai Zhou; Wei Lu

    2016-01-01

    Main challenges for image enlargement methods in embedded systems come from the requirements of good performance, low computational cost, and low memory usage. This paper proposes an efficient image enlargement method which can meet these requirements in embedded systems. Firstly, to improve the performance of enlargement methods, the method extracts different kinds of features for different morphologies with different approaches. Then, various dictionaries based on different kinds of features ...

  1. Low-complex energy-aware image communication in visual sensor networks

    Science.gov (United States)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complex, low-bit-rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks, applied for surveillance, battlefield, habitat monitoring, etc., is presented, where voluminous amounts of image data have to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code, without any floating point operations. Experiments were performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by a conventional DCT and only 6% of the energy needed by the Independent JPEG Group (fast) version, making it well suited for embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without the need for distributed processing, as was traditionally required in existing algorithms.
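
    The zonal DCT idea can be illustrated as follows. Note that for clarity this sketch zeroes the out-of-zone coefficients after a full floating-point transform, whereas the energy-aware algorithm described above never computes them at all (and works in binary/integer arithmetic); the zone size and test block are arbitrary:

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II basis matrix.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0] /= np.sqrt(2.0)

def zonal_dct(block, zone=4):
    """2-D DCT of an 8x8 block, keeping only the low-frequency 'zone'
    (coefficients with row + col < zone). In zonal coding only these
    few coefficients are retained and transmitted."""
    coeffs = D @ block @ D.T
    r, c = np.indices((N, N))
    coeffs[r + c >= zone] = 0.0
    return coeffs

# A smooth (bilinear) test block: its energy concentrates in the zone.
block = np.outer(np.linspace(0, 1, N), np.linspace(1, 2, N)) * 100
kept = zonal_dct(block)
approx = D.T @ kept @ D       # inverse DCT of the zonal coefficients
```

    With zone = 4, at most 10 of the 64 coefficients are kept, yet the smooth block is reconstructed with only a few percent error.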

  2. Simultaneous live cell imaging using dual FRET sensors with a single excitation light.

    Directory of Open Access Journals (Sweden)

    Yusuke Niino

    Full Text Available Fluorescence resonance energy transfer (FRET) between fluorescent proteins is a powerful tool for visualization of signal transduction in living cells, and recently some strategies for imaging two FRET pairs in a single cell have been reported. However, these necessitate switching the excitation light between two different wavelengths to avoid spectral overlap, resulting in sequential detection with a lag time. Thus, to follow fast signal dynamics or signal changes in highly motile cells, a single-excitation dual-FRET method is required. Here we report such a method, using four-color imaging with a single excitation light and subsequent linear unmixing to distinguish the fluorescent proteins. We constructed new FRET sensors with Sapphire/RFP to combine with CFP/YFP, and accomplished simultaneous imaging of cAMP and cGMP in single cells. We confirmed that the signal amplitude of our dual FRET measurement is comparable to that of a conventional single FRET measurement. Finally, we demonstrated simultaneous monitoring of intracellular Ca(2+) and cAMP in highly motile cardiac myocytes. By cancelling out artifacts caused by movement of the cell, this method expands the applicability of the combined use of dual FRET sensors to cell samples with high motility.
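
    Linear unmixing of four fluorophores from four detection channels reduces to solving a small linear system per pixel. The emission-signature matrix below is entirely hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical emission signatures of four fluorescent proteins across
# four detection channels (rows: channels; columns: fluorophores).
S = np.array([
    [0.80, 0.15, 0.02, 0.01],   # CFP channel
    [0.15, 0.70, 0.08, 0.02],   # YFP channel
    [0.04, 0.10, 0.75, 0.12],   # Sapphire channel
    [0.01, 0.05, 0.15, 0.85],   # RFP channel
])

def unmix(measured):
    """Linear unmixing: solve S @ abundances = measured for one pixel,
    recovering each fluorophore's contribution from 4-channel data."""
    abundances, *_ = np.linalg.lstsq(S, measured, rcond=None)
    return abundances

true_abund = np.array([1.0, 0.5, 2.0, 0.0])
pixel = S @ true_abund        # simulated 4-channel measurement
est = unmix(pixel)
```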

  3. Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors

    Directory of Open Access Journals (Sweden)

    Matteo Perenzoni

    2016-05-01

    Full Text Available This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high-spatial-resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns, and timestamping with 368 ps jitter are achieved.

  4. IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels.

    Science.gov (United States)

    Yokogawa, Sozo; Oshiyama, Itaru; Ikeda, Harumi; Ebiko, Yoshiki; Hirano, Tomoyuki; Saito, Suguru; Oinoue, Takashi; Hagimoto, Yoshiya; Iwamoto, Hayato

    2017-06-19

    We report on the IR sensitivity enhancement of a back-illuminated CMOS image sensor (BI-CIS) with a 2-dimensional diffractive inverted pyramid array (IPA) structure on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinite c-Si with 2D IPAs of pitch over 400 nm on its surface show more than 30% improvement in light absorption at λ = 850 nm, with a maximum enhancement of 43% at that wavelength for a 540 nm pitch. A prototype BI-CIS sample with a 1.2 μm square pixel containing 400 nm pitch IPAs shows 80% sensitivity enhancement at λ = 850 nm compared to a reference sample with a flat surface, owing to diffraction by the IPA and total reflection at the pixel boundary. NIR images taken by a demo camera equipped with a C-mount lens show 75% sensitivity enhancement over the λ = 700-1200 nm wavelength range with negligible spatial-resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears applicable to many image sensor applications, including security cameras, personal authentication, and range-finding time-of-flight cameras with IR illumination.

  5. First tests of CHERWELL, a Monolithic Active Pixel Sensor: A CMOS Image Sensor (CIS) using 180 nm technology

    Energy Technology Data Exchange (ETDEWEB)

    Mylroie-Smith, James, E-mail: j.mylroie-smith@qmul.ac.uk [Queen Mary, University of London (United Kingdom); Kolya, Scott; Velthuis, Jaap [University of Bristol (United Kingdom); Bevan, Adrian; Inguglia, Gianluca [Queen Mary, University of London (United Kingdom); Headspith, Jon; Lazarus, Ian; Lemon, Roy [Daresbury Laboratory, STFC (United Kingdom); Crooks, Jamie; Turchetta, Renato; Wilson, Fergus [Rutherford Appleton Laboratory, STFC (United Kingdom)

    2013-12-11

    Cherwell is a 4T CMOS sensor in 180 nm technology developed for the detection of charged particles. Here, the different test structures on the sensor are described and first results from tests of the reference pixel variant are shown. The sensors were shown to have a noise of 12 e{sup −} and a signal-to-noise ratio of up to 150 with a {sup 55}Fe source.

  6. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders - from Optical Triangulation to the Automotive Field.

    Science.gov (United States)

    Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air

    2008-03-13

    With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
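The simple triangulation relation behind such an active range finder can be sketched as follows; the baseline, focal length, and pixel pitch below are illustrative values, not those of the paper's prototype.

```python
def triangulation_distance(baseline_m, focal_mm, pixel_pitch_um, pixel_shift):
    """Distance from the lateral shift of a projected laser spot.

    Simple active triangulation: d = b * f / x, where b is the
    camera-to-laser baseline, f the lens focal length, and x the spot
    displacement on the image sensor.
    """
    x_mm = pixel_shift * pixel_pitch_um / 1000.0     # shift in mm on the sensor
    return baseline_m * focal_mm / x_mm              # distance in metres

# A 40-pixel shift with a 5 um pitch sensor, 16 mm lens, 10 cm baseline:
d = triangulation_distance(0.1, 16.0, 5.0, 40)       # -> 8.0 m
```

Because the pixel shift appears in the denominator, resolution degrades with distance, which is why exposure-time tuning to sharpen the spot image matters at the far end of the range.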

  7. A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors

    Energy Technology Data Exchange (ETDEWEB)

    Plaud-Ramos, K. O.; Freeman, M. S.; Wei, W.; Guardincerri, E.; Bacon, J. D.; Cowan, J.; Durham, J. M.; Huang, D.; Gao, J.; Hoffbauer, M. A.; Morley, D. J.; Morris, C. L.; Poulson, D. C.; Wang, Zhehui, E-mail: zwang@lanl.gov [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2016-11-15

    Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a {sup 90}Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.

  8. Visible Wavelength Color Filters Using Dielectric Subwavelength Gratings for Backside-Illuminated CMOS Image Sensor Technologies.

    Science.gov (United States)

    Horie, Yu; Han, Seunghoon; Lee, Jeong-Yub; Kim, Jaekwan; Kim, Yongsung; Arbabi, Amir; Shin, Changgyun; Shi, Lilong; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Lee, Hong-Seok; Hwang, Sungwoo; Faraon, Andrei

    2017-05-10

    We report transmissive color filters based on subwavelength dielectric gratings that can replace the conventional dye-based color filters used in backside-illuminated CMOS image sensor (BSI CIS) technologies. The filters are patterned in an 80 nm-thick polysilicon film on a 115 nm-thick SiO2 spacer layer. They are optimized to operate at the primary RGB colors, exhibit peak transmittance of 60-80%, and have a nearly angle-insensitive response over a ±20° angular range. This technology enables shrinking pixel sizes down to about one micrometer.

  9. Modelling the influence of noise of the image sensor for blood cells recognition in computer microscopy

    Science.gov (United States)

    Nikitaev, V. G.; Nagornov, O. V.; Pronichev, A. N.; Polyakov, E. V.; Dmitrieva, V. V.

    2017-12-01

    The first stage of blood cancer diagnostics is the analysis of blood smears. Decision-support systems would reduce the subjectivity of the diagnostic process and help avoid errors that often result in irreversible changes in the patient's condition, so solving this problem requires modern technology. Texture features are among the tools for automated classification of blood cells, and finding the informative ones among them is a promising task. This paper investigates, using methods of mathematical modelling, the effect of image sensor noise on informative texture features.
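One common texture feature, the contrast of a grey-level co-occurrence matrix (GLCM), can be used to illustrate how sensor noise perturbs such features. This is a generic sketch with a synthetic ramp image and additive Gaussian noise; the paper's actual feature set and noise model are not reproduced here.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the horizontal grey-level co-occurrence matrix (GLCM).

    The image is quantized to `levels` grey levels; co-occurrences are
    counted for horizontally adjacent pixel pairs, and contrast is the
    squared-level-difference-weighted sum of the normalized matrix.
    """
    q = np.clip((img / 256 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

rng = np.random.default_rng(0)
clean = np.tile(np.arange(64.0) * 4, (64, 1))           # smooth horizontal ramp
noisy = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255)
```

On the smooth ramp the clean contrast is low; additive sensor noise pushes adjacent pixels into different grey levels and inflates the feature, which is the kind of distortion the modelling quantifies.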

  10. Adaptive Gain and Analog Wavelet Transform for Low-Power Infrared Image Sensors

    Directory of Open Access Journals (Sweden)

    P. Villard

    2012-01-01

    Full Text Available A decorrelation and analog-to-digital conversion scheme aiming to reduce the power consumption of infrared image sensors is presented in this paper. To exploit both intra-frame redundancy and the inherent photon shot-noise characteristics, a column-based 1D Haar analog wavelet transform combined with variable-gain amplification prior to A/D conversion is used. This allows an 11-bit ADC to be used instead of a 13-bit one and saves 15% of the data transfer. An 8×16-pixel test circuit demonstrates this functionality.
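The column-based 1D Haar decorrelation can be sketched digitally as follows (the sensor performs it in the analog domain; pixel values here are illustrative): neighbouring pixels are highly correlated, so the detail coefficients are small and need fewer ADC bits than the raw samples.

```python
import numpy as np

def haar_1d(col):
    """One level of the 1D Haar wavelet transform on a pixel column.

    Returns (approximation, detail) as averages and half-differences of
    adjacent pixel pairs. The original pair is recovered exactly as
    (approx + detail, approx - detail).
    """
    col = np.asarray(col, dtype=float)
    approx = (col[0::2] + col[1::2]) / 2
    detail = (col[0::2] - col[1::2]) / 2
    return approx, detail

a, d = haar_1d([10, 12, 50, 54])
# a = [11, 52], d = [-1, -2]: the detail terms have a much smaller range
```

Variable-gain amplification of the small detail coefficients before conversion is what lets the design drop from a 13-bit to an 11-bit ADC.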

  11. The electromagnetic-trait imaging computation of traveling wave method in breast tumor microwave sensor system.

    Science.gov (United States)

    Tao, Zhi-Fu; Han, Zhong-Ling; Yao, Meng

    2011-01-01

    Using the difference in dielectric constant between malignant tumor tissue and normal breast tissue, the breast tumor microwave sensor system (BRATUMASS) determines the electromagnetic characteristics of the imaging target by analyzing the properties of the back wave from the target tissue obtained after near-field microwave irradiation. The key to relating the obtained target properties and reconstructing the detected space is to analyze the characteristics of the whole process from microwave transmission to back-wave reception. Using the traveling wave method, we derive the spatial transmission properties and the relationship between the distances of the detected points, and evaluate the properties of each unit by statistical estimation theory. This chapter gives the experimental data analysis results.

  12. Dark Current Random Telegraph Signals in Solid-State Image Sensors

    OpenAIRE

    Virmontois, Cédric; Goiffon, Vincent; Mark S Robbins; Tauziède, Laurie; Geoffray, Hervé; Raine, Mélanie; Girard, Sylvain; Gilard, Olivier; Magnan, Pierre; Bardoux, Alain

    2013-01-01

    This paper focuses on the Dark Current Random Telegraph Signal (DC-RTS) in solid-state image sensors. The DC-RTS is investigated in several bulk materials, for different surface interfaces and for different trench isolation interfaces. The main parameter used to characterize the DC-RTS is the maximum transition amplitude, which appears to be the most appropriate for studying the phenomenon and identifying its origin. Proton, neutron, and Co-60 gamma-ray irradiations are used to study DC-RTS induce...

  13. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    Science.gov (United States)

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure that serves as both photo-sensing device and source-follower transistor and can be controlled to store and evacuate charge. Our investigation of this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. The model has been validated by comparing theoretical predictions with experimental results. The 1T pixel structure has also been implemented in different configurations, including rectangular-gate and ring-gate designs and variations of the oxidation parameters of the fabrication process. The pixel characteristics are presented and discussed.

  14. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Robert K. Henderson

    2012-05-01

    Full Text Available We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performance is demonstrated on data obtained from 0.13 μm CMOS SPAD arrays and on multiple-decay data obtained from scanning PMT systems. In vivo two-photon fluorescence lifetime imaging data of the FITC-albumin-labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how the different algorithms perform on bi-exponential decay data. The proposed techniques are capable of producing lifetime images with sufficient contrast.
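One of the classic time-domain techniques compared in such reviews, Rapid Lifetime Determination (RLD), fits in a few lines: for a mono-exponential decay, two equal adjacent time gates suffice to recover the lifetime. The gate width and lifetime below are illustrative, and noise is ignored in this sketch.

```python
import math

def rld_lifetime(i1, i2, gate_width_ns):
    """Rapid Lifetime Determination (RLD) from two adjacent time gates.

    For a mono-exponential decay, tau = dt / ln(I1 / I2), where I1 and I2
    are the intensities collected in two equal gates of width dt.
    """
    return gate_width_ns / math.log(i1 / i2)

# Synthetic mono-exponential decay, tau = 2.5 ns, two 1 ns gates
tau, dt = 2.5, 1.0
i1 = 1.0 - math.exp(-dt / tau)                       # integral over [0, dt)
i2 = math.exp(-dt / tau) - math.exp(-2 * dt / tau)   # integral over [dt, 2*dt)
```

Because RLD needs only two gated counts and a logarithm per pixel, it maps naturally onto massively parallel SPAD arrays; bi-exponential decays, as in the rat window-chamber data, require the more elaborate algorithms the paper reviews.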

  15. Estimation of the Image Interpretability of ZY-3 Sensor Corrected Panchromatic Nadir Data

    Directory of Open Access Journals (Sweden)

    Lin Li

    2014-05-01

    Full Text Available Image quality is important for taking full advantage of satellite data. As a common indicator, the National Imagery Interpretability Rating Scale (NIIRS) is widely used for image quality assessment and provides a comprehensive representation of image quality from the perspective of interpretability. The ZY-3 (Ziyuan-3) satellite, launched in 2012, is the first civil high-resolution mapping satellite in China. So far, there have been no reports adopting NIIRS as a common indicator for the quality assessment of its image data. This lack of a common quality indicator creates a gap between satellite data users worldwide and those in China regarding the quality and usability of ZY-3 data. To overcome this gap, this study uses the general image-quality equation (GIQE) to evaluate ZY-3 sensor-corrected (SC) panchromatic nadir (NAD) data in terms of NIIRS. To resolve the uncertainty caused by the ground sample distance (GSD) of ZY-3 data (2.1 m) exceeding the range for which the GIQE is valid (less than 2.03 m), eight images are used to establish the relationship between manually obtained NIIRS and GIQE-predicted NIIRS. An adjusted GIQE based on this relationship is verified with another five images. Our study demonstrates that the adjusted GIQE can be used for the quality assessment of ZY-3 satellite images and reveals that the NIIRS value of ZY-3 SC NAD data is about 2.79.
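The GIQE prediction step can be sketched with the published GIQE version-4 form; the coefficients below are the standard GIQE-4 values, but the input parameters (RER, overshoot, gain, SNR) are illustrative placeholders, not the calibrated ZY-3 values from the study.

```python
import math

def giqe4_niirs(gsd_m, rer, h=1.0, g=1.0, snr=50.0):
    """Predicted NIIRS from the General Image Quality Equation, version 4.

    gsd_m: ground sample distance in metres (GIQE-4 expects inches);
    rer: relative edge response; h: edge overshoot; g: noise gain;
    snr: signal-to-noise ratio. Coefficient pairs switch at RER = 0.9
    per the published GIQE-4 definition.
    """
    gsd_in = gsd_m / 0.0254
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)
```

Running this with the ZY-3 GSD of 2.1 m shows why the study needed an adjustment: the raw prediction sits above the manually assessed value of about 2.79, and the 2.1 m GSD lies outside the equation's stated validity range.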

  16. Atomic-Scale Nuclear Spin Imaging Using Quantum-Assisted Sensors in Diamond

    Science.gov (United States)

    Ajoy, A.; Bissbort, U.; Lukin, M. D.; Walsworth, R. L.; Cappellaro, P.

    2015-01-01

    Nuclear spin imaging at the atomic level is essential for the understanding of fundamental biological phenomena and for applications such as drug discovery. The advent of novel nanoscale sensors promises to achieve the long-standing goal of single-protein, high spatial-resolution structure determination under ambient conditions. In particular, quantum sensors based on the spin-dependent photoluminescence of nitrogen-vacancy (NV) centers in diamond have recently been used to detect nanoscale ensembles of external nuclear spins. While NV sensitivity is approaching single-spin levels, extracting relevant information from a very complex structure is a further challenge since it requires not only the ability to sense the magnetic field of an isolated nuclear spin but also to achieve atomic-scale spatial resolution. Here, we propose a method that, by exploiting the coupling of the NV center to an intrinsic quantum memory associated with the nitrogen nuclear spin, can reach a tenfold improvement in spatial resolution, down to atomic scales. The spatial resolution enhancement is achieved through coherent control of the sensor spin, which creates a dynamic frequency filter selecting only a few nuclear spins at a time. We propose and analyze a protocol that would allow not only sensing individual spins in a complex biomolecule, but also unraveling couplings among them, thus elucidating local characteristics of the molecule structure.

  17. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival-time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival-time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary non-collinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source-location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
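The arrival-time part of a planar linear retrieval can be sketched as follows: squaring the range equations and differencing against one reference station yields a system that is linear in the source position and emission time. The station layout, source, and timing are simulated placeholders, not ALDF data, and bearing measurements are omitted from this sketch.

```python
import numpy as np

# Hypothetical planar network: four sensor positions (km), light speed (km/s)
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
c = 299792.458

# Simulate noiseless arrival times from a known source and emission time
src, t_emit = np.array([30.0, 60.0]), 0.002
t = t_emit + np.linalg.norm(sensors - src, axis=1) / c

# Each station i satisfies (x-xi)^2 + (y-yi)^2 = c^2 (ti - t0)^2.
# Subtracting station 0's equation cancels the quadratic terms in
# (x, y, t0), leaving a linear system in those three unknowns.
xi, yi, ti = sensors[:, 0], sensors[:, 1], t
A = np.column_stack([
    -2 * (xi[1:] - xi[0]),
    -2 * (yi[1:] - yi[0]),
    2 * c**2 * (ti[1:] - ti[0]),
])
b = c**2 * (ti[1:]**2 - ti[0]**2) - (xi[1:]**2 + yi[1:]**2 - xi[0]**2 - yi[0]**2)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # sol = [x, y, t0]
```

With more than four stations the same least-squares form absorbs timing noise, which is how the retrieval's accuracy is characterized against simulated datasets.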

  18. CZT sensors for Computed Tomography: from crystal growth to image quality

    Science.gov (United States)

    Iniewski, K.

    2016-12-01

    Recent advances in Traveling Heater Method (THM) growth and in device fabrication steps have dramatically improved hole transport properties and reduced polarization effects in Cadmium Zinc Telluride (CZT) material. As a result, high-flux operation of CZT sensors at rates in excess of 200 Mcps/mm² is now possible and has enabled multiple medical imaging companies to start building prototype Computed Tomography (CT) scanners. CZT sensors are also finding new commercial applications in non-destructive testing (NDT) and baggage scanning. To prepare for high-volume commercial production, we are moving from individual tile processing to whole-wafer processing using silicon methodologies such as waxless processing and cassette-based/touchless wafer handling. We have been developing parametric screening at the wafer stage to ensure high wafer quality before detector fabrication and thus maximize production yields. These process improvements enable us, and other CZT manufacturers pursuing similar developments, to provide high-volume production for photon-counting applications in an economically feasible manner. CZT sensors can deliver both high count rates and high-resolution spectroscopic performance, although achieving both simultaneously is challenging. The paper discusses the material challenges, detector design trade-offs, and ASIC architectures required to build cost-effective CZT-based detection systems. Photon-counting ASICs are an essential part of integrated module platforms, as the charge-sensitive electronics must deal with charge-sharing and pile-up effects.

  19. Proton magnetic resonance imaging using a nitrogen-vacancy spin sensor.

    Science.gov (United States)

    Rugar, D; Mamin, H J; Sherwood, M H; Kim, M; Rettner, C T; Ohno, K; Awschalom, D D

    2015-02-01

    Magnetic resonance imaging, with its ability to provide three-dimensional, elementally selective imaging without radiation damage, has had a revolutionary impact in many fields, especially medicine and the neurosciences. Although challenging, its extension to the nanometre scale could provide a powerful new tool for the nanosciences, especially if it can provide a means for non-destructively visualizing the full three-dimensional morphology of complex nanostructures, including biomolecules. To achieve this potential, innovative new detection strategies are required to overcome the severe sensitivity limitations of conventional inductive detection techniques. One successful example is magnetic resonance force microscopy, which has demonstrated three-dimensional imaging of proton NMR with resolution on the order of 10 nm, but with the requirement of operating at cryogenic temperatures. Nitrogen-vacancy (NV) centres in diamond offer an alternative detection strategy for nanoscale magnetic resonance imaging that is operable at room temperature. Here, we demonstrate two-dimensional imaging of (1)H NMR from a polymer test sample using a single NV centre in diamond as the sensor. The NV centre detects the oscillating magnetic field from precessing protons as the sample is scanned past the NV centre. A spatial resolution of ∼12 nm is shown, limited primarily by the scan resolution.

  20. Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging

    Science.gov (United States)

    2015-11-05

    …found sexually dimorphic polarized reflectance, polarization-dependent mate choice behavior, and differential polarization signaling across social… investigate alternative spectral imaging architectures based on my previous experience in this research area. I will develop nanowire polarization… influence the accuracy of this estimation. Presented here are a formal system of… experiments

  1. An Efficient Image Enlargement Method for Image Sensors of Mobile in Embedded Systems

    Directory of Open Access Journals (Sweden)

    Hua Hua

    2016-01-01

    Full Text Available The main challenges for image-enlargement methods in embedded systems are good performance, low computational cost, and low memory usage. This paper proposes an efficient image-enlargement method that meets these requirements. First, to improve performance, the method extracts different kinds of features for different morphologies using different approaches; various dictionaries based on these features are then learned, representing the image more efficiently. Second, to accelerate enlargement and reduce memory usage, the method divides the atoms of each dictionary into several clusters and calculates a separate projection matrix for each cluster, reformulating the problem as least-squares regression. The high-resolution (HR) images can then be reconstructed from a few projection matrices. Numerous experimental results show that this method is efficient and real-time and has a low memory cost, which makes it easy to implement in mobile embedded systems.
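The per-cluster projection idea can be sketched as a closed-form least-squares fit: for each cluster, a matrix P mapping low-resolution feature vectors to high-resolution patches is precomputed offline, so run-time reconstruction is a single matrix multiply. The feature and patch dimensions below are illustrative, and the training data is a synthetic linear mapping rather than real image patches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set for one cluster: 4-dim LR feature vectors (columns)
# and the 16-dim HR patches they should reconstruct to.
lr = rng.normal(size=(4, 200))
p_true = rng.normal(size=(16, 4))       # the mapping we hope to recover
hr = p_true @ lr

# Closed-form least-squares projection matrix for this cluster:
# P = HR * LR^T * (LR * LR^T)^-1
P = hr @ lr.T @ np.linalg.inv(lr @ lr.T)

# Run-time reconstruction of one patch: a single matrix multiply
hr_est = P @ lr[:, :1]
```

Storing one small P per cluster instead of a full dictionary-search step is what delivers the speed and memory savings the abstract claims.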

  2. False-color display of special sensor microwave/imager (SSM/I) data

    Science.gov (United States)

    Negri, Andrew J.; Adler, Robert F.; Kummerow, Christian D.

    1989-01-01

    Displays of multifrequency passive microwave data from the Special Sensor Microwave/Imager (SSM/I) flying on the Defense Meteorological Satellite Program (DMSP) spacecraft are presented. Observed brightness temperatures at 85.5 GHz (vertical and horizontal polarizations) and 37 GHz (vertical polarization) are respectively used to 'drive' the red, green, and blue 'guns' of a color monitor. The resultant false-color images can be used to distinguish land from water, highlight precipitation processes and structure over both land and water, and detail variations in other surfaces such as deserts, snow cover, and sea ice. The observations at 85.5 GHz also add a previously unavailable frequency to the problem of rainfall estimation from space. Examples of mesoscale squall lines, tropical and extra-tropical storms, and larger-scale land and atmospheric features as 'viewed' by the SSM/I are shown.
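The channel-to-gun mapping described above can be sketched as a simple composite: each brightness-temperature array is stretched independently and stacked into an RGB cube. The per-channel min-max stretch is an assumption for the sketch; the paper does not specify its display scaling.

```python
import numpy as np

def false_color(t85v, t85h, t37v):
    """Stack three brightness-temperature arrays into an RGB image.

    Channel-to-gun assignment follows the paper: 85.5 GHz vertical -> red,
    85.5 GHz horizontal -> green, 37 GHz vertical -> blue. Each channel is
    stretched independently to [0, 1]; inputs are in kelvin.
    """
    def stretch(a):
        a = np.asarray(a, dtype=float)
        return (a - a.min()) / (a.max() - a.min())
    return np.dstack([stretch(t85v), stretch(t85h), stretch(t37v)])
```

Because scattering by precipitation depresses the 85.5 GHz channels much more than 37 GHz, raining regions take on a distinct hue in the composite, which is what makes the display useful for separating surface types from precipitation.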

  3. System for Digital 1D-Image Processing with 1024 Pixel CCD Sensor

    Directory of Open Access Journals (Sweden)

    J. Misun

    1993-11-01

    Full Text Available The concept of a system for digital 1D-image processing with a digital CCD camera is presented. The system consists of three basic parts: a digital CCD camera with the linear image sensor CCD L133C, an 8-bit interface, and a personal computer. The scanning digital CCD camera generates a video signal, which is processed in the analog signal processor. The output signal is continually converted to 8-bit data words in an A/D converter. These data words can be transferred over a bus driver to the main memory of the personal computer by setting one of the three operating modes of the digital CCD camera. Some application possibilities and basic technical parameters of the system are given.

  4. A focusing method in the calibration process of image sensors based on IOFBs.

    Science.gov (United States)

    Fernández, Pedro R; Lázaro, José L; Gardel, Alfredo; Cano, Angel E; Bravo, Ignacio

    2010-01-01

    A focusing procedure used in the calibration of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described, using information extracted from the fibers. It differs from other known focusing methods because of the non-spatial input-output correspondence between fibers, which produces a natural codification of the transmitted image. Measuring focus is essential prior to calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two based on mean grey level and two based on variance. In this paper, these simple focus measures are defined and compared, and experimental results on the focus measure and the accuracy of the developed methods are discussed to demonstrate their effectiveness.
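Variance-style focus measures of the kind compared in the paper can be sketched as follows; the exact formulations used by the authors are not reproduced, and the test image is synthetic.

```python
import numpy as np

def focus_variance(img):
    """Variance focus measure: sharp images have larger grey-level spread."""
    img = np.asarray(img, dtype=float)
    return float(((img - img.mean()) ** 2).mean())

def focus_normalized_variance(img):
    """Variance normalized by mean grey level, reducing brightness dependence."""
    img = np.asarray(img, dtype=float)
    return float(((img - img.mean()) ** 2).mean() / img.mean())

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (32, 32)).astype(float)
# Simple 2x2 box average as a stand-in for defocus blur
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
```

Sweeping the lens position and maximizing such a measure is the usual way to find best focus before the fiber-decoding calibration runs.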

  5. Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy

    Science.gov (United States)

    Myers, Richard A.

    2008-01-01

    An improved sensor assembly has been developed for astronomical imaging at photon energies ranging from 1 to 100 keV. The assembly includes a thallium-doped cesium iodide scintillator divided into pixels and coupled to an array of high-gain avalanche photodiodes (APDs). Optionally, the array of APDs can be operated without the scintillator to detect photons at energies below 15 keV. The array of APDs is connected to compact electronic readout circuitry that includes, among other things, 64 independent channels for detection of photons in various energy ranges, up to a maximum energy of 100 keV, at a count rate up to 3 kHz. The readout signals are digitized and processed by imaging software that performs "on-the-fly" analysis. The sensor assembly has been integrated into an imaging spectrometer, along with a pair of coded apertures (Fresnel zone plates) that are used in conjunction with the pixel layout to implement a shadow-masking technique to obtain relatively high spatial resolution without having to use extremely small pixels. Angular resolutions of about 20 arc-seconds have been measured. Thus, for example, the imaging spectrometer can be used to (1) determine both the energy spectrum of a distant x-ray source and the angular deviation of the source from the nominal line of sight of an x-ray telescope in which the spectrometer is mounted or (2) study the spatial and temporal development of solar flares, repeating γ-ray bursters, and other phenomena that emit transient radiation in the hard-X-ray/soft-γ-ray region of the electromagnetic spectrum.

  6. Toward High Altitude Airship Ground-Based Boresight Calibration of Hyperspectral Pushbroom Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Aiwu Zhang

    2015-12-01

    Full Text Available Single-linear-array hyperspectral pushbroom imaging from a high-altitude airship (HAA) without a three-axis stabilized platform is much more complex than spaceborne or airborne imaging. Due to the effects of air pressure, temperature, and airflow, large pitch and roll angles appear frequently, producing pushbroom images with severe geometric distortions. The in-flight calibration procedure is therefore not appropriate for single-linear pushbroom sensors on an HAA without a three-axis stabilized platform. To address this problem, a new ground-based boresight calibration method is proposed. First, a coordinate transformation model is developed for direct georeferencing (DG) of the linear imaging sensor, and the linear error equation is derived from it using the Taylor expansion formula. Second, the boresight misalignments are estimated using an iterative least-squares method with a few ground control points (GCPs) and ground-based side-scanning experiments. The proposed method is demonstrated by three sets of experiments: (i) the stability and reliability of the method is verified through simulation-based experiments; (ii) the boresight calibration is performed using ground-based experiments; and (iii) validation is done by applying the method to the orthorectification of real hyperspectral pushbroom images from an HAA Earth-observation payload system developed by our research team, "LanTianHao". The test results show that the proposed boresight calibration approach significantly improves georeferencing quality by reducing the geometric distortions caused by boresight misalignments to a minimum level.

  7. Single-photon sampling architecture for solid-state imaging sensors.

    Science.gov (United States)

    van den Berg, Ewout; Candès, Emmanuel; Chinn, Garry; Levin, Craig; Olcott, Peter Demetri; Sing-Long, Carlos

    2013-07-23

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as light detection and ranging and positron-emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatiotemporal resolution, causing many contemporary designs to severely underuse the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs. We provide optimized design instances for various sensor parameters and compute explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization of a 60 × 60 photodiode sensor using only 142 TDCs. The design guarantees registration and unique recovery of up to four simultaneous photon arrivals using a fast decoding algorithm. By contrast, a cross-strip design requires 120 TDCs and cannot uniquely decode any simultaneous photon arrivals. Among other realistic simulations of scintillation events in clinical positron-emission tomography, the above design is shown to recover the spatiotemporal location of 99.98% of all detected photons.

  8. An alternative cost-effective image processing based sensor for continuous turbidity monitoring

    Science.gov (United States)

    Chai, Matthew Min Enn; Ng, Sing Muk; Chua, Hong Siang

    2017-03-01

Turbidity is the degree to which the optical clarity of water is reduced by impurities. High turbidity values in rivers and lakes promote the growth of pathogens, reduce dissolved oxygen levels and reduce light penetration. The conventional ways of on-site turbidity measurement involve optical sensors similar to those used in commercial turbidimeters. However, these instruments require frequent maintenance due to biological fouling on the sensors. Thus, image processing was proposed as an alternative technique for continuous turbidity measurement to reduce the frequency of maintenance. The camera is kept out of the water to avoid biofouling, while the parts of the system submerged in water can be coated with an anti-fouling surface. The setup developed consists of a webcam, a light source, a microprocessor and a motor used to control the depth of a reference object. The image processing algorithm quantifies the relationship between the number of circles detected on the reference object and the depth of the reference object. By relating the quantified data to turbidity, the setup was able to detect turbidity levels from 20 NTU to 380 NTU with a measurement error of 15.7 percent. The repeatability and sensitivity of the turbidity measurement were found to be satisfactory.
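The calibration step described above, relating the depth at which the reference circles stop being detected to a turbidity reading, can be sketched as a simple lookup table with interpolation. The table values below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration: extinction depth (cm) of the reference pattern
# vs. turbidity (NTU). Shallower extinction depth means murkier water.
depth_cm  = np.array([2.0, 4.0, 8.0, 16.0, 30.0])
turbidity = np.array([380.0, 200.0, 100.0, 50.0, 20.0])

def estimate_ntu(extinction_depth_cm):
    # np.interp requires ascending x values; depth_cm is already ascending
    return float(np.interp(extinction_depth_cm, depth_cm, turbidity))

print(estimate_ntu(4.0))   # 200.0, a calibration point
print(estimate_ntu(6.0))   # 150.0, linear interpolation between 200 and 100
```

A real deployment would fit this curve from measurements against a reference turbidimeter rather than use a hand-written table.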

  9. Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor

    Directory of Open Access Journals (Sweden)

    Fang Tang

    2014-01-01

Full Text Available Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 k μm²·cycles/sample.
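The redundant-bit correction can be illustrated with a simplified behavioral model. The bit allocation (a 4-bit coarse code, i.e. 3 data bits plus 1 redundant bit, overlapping an 8-bit fine code by one bit for an 11-bit result) is inferred from the abstract; the offset and error bound below are modeling assumptions, not the paper's circuit:

```python
import random

FULL = 2048          # 11-bit code range
COARSE_LSB = 128     # first stage resolves 4 bits (3 data + 1 redundant)

def two_step_adc(x, coarse_error):
    """Behavioral model of the two-step conversion with 1 redundant bit.
    coarse_error is the first-stage error in fine LSBs; |error| < 64,
    i.e. 3.125% of the 2048-code range, as tolerated per the abstract."""
    coarse = (x + coarse_error) // COARSE_LSB      # noisy 4-bit coarse code
    fine = x - coarse * COARSE_LSB + 64            # offset residue, 8-bit SAR
    assert 0 <= fine < 256                         # fits the SAR range
    return coarse * COARSE_LSB + fine - 64         # digital error correction

random.seed(1)
for _ in range(1000):
    x = random.randrange(64, FULL - 64)
    e = random.randrange(-63, 64)
    assert two_step_adc(x, e) == x                 # exact despite coarse error
print("coarse errors up to +/-63 fine LSBs corrected")
```

Because the fine stage spans twice the coarse step, a coarse decision that is off by up to half a coarse LSB still lands the residue inside the fine range, and the digital sum recovers the exact code.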

  10. Film cameras or digital sensors? The challenge ahead for aerial imaging

    Science.gov (United States)

    Light, D.L.

    1996-01-01

Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
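The 432-million-pixel figure follows directly from the stated scan parameters, assuming the standard 9 × 9 inch aerial film format (the frame size itself is not stated in the abstract):

```python
# Standard 9 x 9 inch aerial film frame (228.6 mm square) scanned at an
# 11 um spot size; the frame size is an assumption, the rest is as stated.
frame_mm = 9 * 25.4                       # 228.6 mm per side
spot_um = 11.0
pixels_per_side = frame_mm * 1000 / spot_um
total_pixels = pixels_per_side ** 2
print(f"{pixels_per_side:.0f} px per side, {total_pixels / 1e6:.0f} Mpx")
# -> 20782 px per side, 432 Mpx
```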

  11. Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors.

    Science.gov (United States)

    Malinowski, Pawel E; Georgitzikis, Epimitheas; Maes, Jorick; Vamvaka, Ioanna; Frazzica, Fortunato; Van Olmen, Jan; De Moor, Piet; Heremans, Paul; Hens, Zeger; Cheyns, David

    2017-12-10

Imaging in the infrared wavelength range has been fundamental in scientific, military and surveillance applications. Currently, it is a crucial enabler of new industries such as autonomous mobility (for obstacle detection), augmented reality (for eye tracking) and biometrics. Ubiquitous deployment of infrared cameras (on a scale similar to visible cameras) is however prevented by high manufacturing cost and low resolution related to the need of using image sensors based on flip-chip hybridization. One way to enable monolithic integration is by replacing expensive, small-scale III-V-based detector chips with narrow bandgap thin-films compatible with 8- and 12-inch full-wafer processing. This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QD) with tunable absorption peak. Photodiode with a 150-nm thick absorber in an inverted architecture shows dark current of 10−6 A/cm² at −2 V reverse bias and EQE above 20% at 1440 nm wavelength. Optical modeling for top illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling order-of-magnitude cost reduction for infrared sensors.
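For scale, the quoted dark current density can be translated into electrons per pixel per second, assuming a hypothetical 5 µm pixel pitch (the pitch is not given in the abstract):

```python
# J_dark = 1e-6 A/cm^2 at -2 V reverse bias (from the abstract); the
# 5 um x 5 um pixel area is a hypothetical value for illustration.
J_dark = 1e-6                      # A/cm^2
pixel_cm = 5e-4                    # 5 um expressed in cm
q = 1.602e-19                      # elementary charge, C
I_pixel = J_dark * pixel_cm ** 2   # dark current per pixel: 2.5e-13 A
electrons_per_s = I_pixel / q
print(f"{electrons_per_s:.2e} e-/s per pixel")   # ~1.56e+06
```

At video frame times of tens of milliseconds this is tens of thousands of dark electrons per pixel, which is why the abstract points to cooling (193 K) for improved sensitivity.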

  12. Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors

    Science.gov (United States)

    Georgitzikis, Epimitheas; Vamvaka, Ioanna; Frazzica, Fortunato; Van Olmen, Jan; De Moor, Piet; Heremans, Paul; Hens, Zeger; Cheyns, David

    2017-01-01

    Imaging in the infrared wavelength range has been fundamental in scientific, military and surveillance applications. Currently, it is a crucial enabler of new industries such as autonomous mobility (for obstacle detection), augmented reality (for eye tracking) and biometrics. Ubiquitous deployment of infrared cameras (on a scale similar to visible cameras) is however prevented by high manufacturing cost and low resolution related to the need of using image sensors based on flip-chip hybridization. One way to enable monolithic integration is by replacing expensive, small-scale III–V-based detector chips with narrow bandgap thin-films compatible with 8- and 12-inch full-wafer processing. This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QD) with tunable absorption peak. Photodiode with a 150-nm thick absorber in an inverted architecture shows dark current of 10−6 A/cm2 at −2 V reverse bias and EQE above 20% at 1440 nm wavelength. Optical modeling for top illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling order-of-magnitude cost reduction for infrared sensors. PMID:29232871

  13. Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors

    Directory of Open Access Journals (Sweden)

    Pawel E. Malinowski

    2017-12-01

Full Text Available Imaging in the infrared wavelength range has been fundamental in scientific, military and surveillance applications. Currently, it is a crucial enabler of new industries such as autonomous mobility (for obstacle detection), augmented reality (for eye tracking) and biometrics. Ubiquitous deployment of infrared cameras (on a scale similar to visible cameras) is however prevented by high manufacturing cost and low resolution related to the need of using image sensors based on flip-chip hybridization. One way to enable monolithic integration is by replacing expensive, small-scale III–V-based detector chips with narrow bandgap thin-films compatible with 8- and 12-inch full-wafer processing. This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QD) with tunable absorption peak. Photodiode with a 150-nm thick absorber in an inverted architecture shows dark current of 10−6 A/cm2 at −2 V reverse bias and EQE above 20% at 1440 nm wavelength. Optical modeling for top illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling order-of-magnitude cost reduction for infrared sensors.

  14. Cost-Efficient Wafer-Level Capping for MEMS and Imaging Sensors by Adhesive Wafer Bonding

    Directory of Open Access Journals (Sweden)

    Simon J. Bleiker

    2016-10-01

Full Text Available Device encapsulation and packaging often constitutes a substantial part of the fabrication cost of micro electro-mechanical systems (MEMS) transducers and imaging sensor devices. In this paper, we propose a simple and cost-effective wafer-level capping method that utilizes a limited number of highly standardized process steps as well as low-cost materials. The proposed capping process is based on low-temperature adhesive wafer bonding, which ensures full complementary metal-oxide-semiconductor (CMOS) compatibility. All necessary fabrication steps for the wafer bonding, such as cavity formation and deposition of the adhesive, are performed on the capping substrate. The polymer adhesive is deposited by spray-coating on the capping wafer containing the cavities. Thus, no lithographic patterning of the polymer adhesive is needed, and material waste is minimized. Furthermore, this process does not require any additional fabrication steps on the device wafer, which lowers the process complexity and fabrication costs. We demonstrate the proposed capping method by packaging two different MEMS devices. The two MEMS devices include a vibration sensor and an acceleration switch, which employ two different electrical interconnection schemes. The experimental results show wafer-level capping with excellent bond quality due to the re-flow behavior of the polymer adhesive. No impediment to the functionality of the MEMS devices was observed, which indicates that the encapsulation does not introduce significant tensile or compressive stresses. Thus, we present a highly versatile, robust, and cost-efficient capping method for components such as MEMS and imaging sensors.

  15. Optical fibre sensors embedded into technical textile for a continuous monitoring of patients under Magnetic Resonance Imaging.

    Science.gov (United States)

    De Jonckheere, J; Narbonneau, F; Kinet, D; Zinke, J; Paquet, B; Depre, A; Jeanne, M; Logier, R

    2008-01-01

The potential impact of optical fiber sensors embedded into medical textiles for continuous monitoring of the patient during Magnetic Resonance Imaging (MRI) is presented. We report on several purely optical sensing technologies for pulse oximetry and respiratory movement monitoring. The technique for pulse oximetry measurement is known as NIRS (Near Infra-Red Spectroscopy), used in a reflectance mode. For the respiratory motion measurements, we tested two different optical designs: a macro-bending sensor and a Bragg grating sensor, designed to measure the elongation of the thoracic and abdominal circumferences during breathing.

  16. Multispectral demosaicking considering out-of-focus problem for red-green-blue-near-infrared image sensors

    Science.gov (United States)

    Kwon, Ji Yong; Kang, Moon Gi

    2016-03-01

    A near-infrared (NIR) band provides information invisible to human eyes for discriminating and recognizing objects more clearly under low lighting conditions. To capture color and NIR images simultaneously, a multispectral filter array (MSFA) sensor is used. However, because lenses have different refractive indices for different wavelengths, lenses may fail to focus all rays to the same convergence. This is the reason an out-of-focus problem occurs and images are blurred. In this paper, a demosaicking algorithm that considers the out-of-focus problem is proposed. This algorithm is used by the MSFA of a red-green-blue-NIR image sensor to obtain color and NIR images. After the energies of the multispectral (MS) channels in the MSFA image are balanced to minimize aliasing, that image is filtered by the estimated low-pass kernel to generate a panchromatic (PAN) image. When an image is acquired, the out-of-focus problem and the formation process of the PAN image are modeled. The desired MS image is estimated by solving the least squares approach of the difference between the PAN and MS images based on the models. The experimental results show that the proposed algorithm performs well in estimating high-quality MS images and reduces the out-of-focus problem.
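The final estimation step, a least-squares fit of the desired MS image against the PAN image under a low-pass (out-of-focus) model, can be sketched on a 1-D toy signal. The circulant blur matrix and Tikhonov regularizer below are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
m_true = np.cumsum(rng.standard_normal(n)) * 0.1   # smooth toy MS channel

# Toy low-pass (defocus) kernel expressed as a circulant matrix H
h = np.array([0.25, 0.5, 0.25])
H = sum(np.roll(np.eye(n), k - 1, axis=1) * h[k] for k in range(3))

p = H @ m_true + 0.01 * rng.standard_normal(n)     # observed PAN-like signal

# Tikhonov-regularized least squares: argmin ||p - H m||^2 + lam ||m||^2
lam = 1e-2
m_est = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ p)
mse = float(np.mean((m_est - m_true) ** 2))
print(mse)                                         # small reconstruction error
```

The regularizer keeps the inversion stable where the blur kernel suppresses high frequencies, which is the same trade-off the paper's estimator has to manage on real MSFA data.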

  17. Amorphous and Polycrystalline Photoconductors for Direct Conversion Flat Panel X-Ray Image Sensors

    Science.gov (United States)

    Kasap, Safa; Frey, Joel B.; Belev, George; Tousignant, Olivier; Mani, Habib; Greenspan, Jonathan; Laperriere, Luc; Bubon, Oleksandr; Reznik, Alla; DeCrescenzo, Giovanni; Karim, Karim S.; Rowlands, John A.

    2011-01-01

    In the last ten to fifteen years there has been much research in using amorphous and polycrystalline semiconductors as x-ray photoconductors in various x-ray image sensor applications, most notably in flat panel x-ray imagers (FPXIs). We first outline the essential requirements for an ideal large area photoconductor for use in a FPXI, and discuss how some of the current amorphous and polycrystalline semiconductors fulfill these requirements. At present, only stabilized amorphous selenium (doped and alloyed a-Se) has been commercialized, and FPXIs based on a-Se are particularly suitable for mammography, operating at the ideal limit of high detective quantum efficiency (DQE). Further, these FPXIs can also be used in real-time, and have already been used in such applications as tomosynthesis. We discuss some of the important attributes of amorphous and polycrystalline x-ray photoconductors such as their large area deposition ability, charge collection efficiency, x-ray sensitivity, DQE, modulation transfer function (MTF) and the importance of the dark current. We show the importance of charge trapping in limiting not only the sensitivity but also the resolution of these detectors. Limitations on the maximum acceptable dark current and the corresponding charge collection efficiency jointly impose a practical constraint that many photoconductors fail to satisfy. We discuss the case of a-Se in which the dark current was brought down by three orders of magnitude by the use of special blocking layers to satisfy the dark current constraint. There are also a number of polycrystalline photoconductors, HgI2 and PbO being good examples, that show potential for commercialization in the same way that multilayer stabilized a-Se x-ray photoconductors were developed for commercial applications. We highlight the unique nature of avalanche multiplication in a-Se and how it has led to the development of the commercial HARP video-tube. An all solid state version of the HARP has been

  18. Amorphous and Polycrystalline Photoconductors for Direct Conversion Flat Panel X-Ray Image Sensors

    Directory of Open Access Journals (Sweden)

    Karim S. Karim

    2011-05-01

Full Text Available In the last ten to fifteen years there has been much research in using amorphous and polycrystalline semiconductors as x-ray photoconductors in various x-ray image sensor applications, most notably in flat panel x-ray imagers (FPXIs). We first outline the essential requirements for an ideal large area photoconductor for use in a FPXI, and discuss how some of the current amorphous and polycrystalline semiconductors fulfill these requirements. At present, only stabilized amorphous selenium (doped and alloyed a-Se) has been commercialized, and FPXIs based on a-Se are particularly suitable for mammography, operating at the ideal limit of high detective quantum efficiency (DQE). Further, these FPXIs can also be used in real-time, and have already been used in such applications as tomosynthesis. We discuss some of the important attributes of amorphous and polycrystalline x-ray photoconductors such as their large area deposition ability, charge collection efficiency, x-ray sensitivity, DQE, modulation transfer function (MTF) and the importance of the dark current. We show the importance of charge trapping in limiting not only the sensitivity but also the resolution of these detectors. Limitations on the maximum acceptable dark current and the corresponding charge collection efficiency jointly impose a practical constraint that many photoconductors fail to satisfy. We discuss the case of a-Se in which the dark current was brought down by three orders of magnitude by the use of special blocking layers to satisfy the dark current constraint. There are also a number of polycrystalline photoconductors, HgI2 and PbO being good examples, that show potential for commercialization in the same way that multilayer stabilized a-Se x-ray photoconductors were developed for commercial applications. We highlight the unique nature of avalanche multiplication in a-Se and how it has led to the development of the commercial HARP video-tube. An all solid state version of the

  19. Real time three-dimensional space video rate sensors for millimeter wave imaging based on very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real time MMW imaging. Real time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD lamp-based focal plane arrays (FPAs). The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both of them for direct detection and limited to fixed imaging. The last designed sensor is a 16 × 16 GDD FPA with multiplexed readout. It permits real time video rate imaging of 30 frames/s and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built over 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported

  20. BOREAS RSS-19 1996 CASI At-Sensor Radiance and Reflectance Images

    Science.gov (United States)

    Miller, John; Hall, Forrest G. (Editor); Nickerson, Jaime (Editor); Freemantle, Jim; Smith, David E. (Technical Monitor)

    2000-01-01

The BOREAS RSS-19 team collected CASI images from the Chieftain Navaho aircraft in order to observe the seasonal change in the radiometric reflectance properties of the boreal forest landscape. CASI was deployed as a site-specific optical sensor as part of BOREAS. The overall objective of the CASI deployment was to observe the seasonal change in the radiometric reflectance properties of the boreal forest landscape. In 1996, image data were collected with CASI on 15 days during a field campaign between 18 July and 01 August, primarily at flux tower sites located at study sites near Thompson, Manitoba, and Prince Albert, Saskatchewan. A variety of CASI data collection strategies were used to meet the following scientific objectives: 1) canopy bidirectional reflectance, 2) canopy biochemistry, 3) spatial variability, and 4) estimates of up and downwelling PAR spectral albedo, as well as changes along transects across lakes at the southern site and transects between the NSA and SSA. The images are stored as binary image files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  1. Spectral and temporal multiplexing for multispectral fluorescence and reflectance imaging using two color sensors.

    Science.gov (United States)

    Dimitriadis, Nikolas; Grychtol, Bartłomiej; Theuring, Martin; Behr, Tobias; Sippel, Christian; Deliolanis, Nikolaos C

    2017-05-29

Fluorescence imaging can reveal functional, anatomical or pathological features of high interest in medical interventions. We present a novel method to record and display, at video rate, multispectral color and fluorescence images over the visible and near infrared range. The fast acquisition in multiple channels is achieved through a combination of spectral and temporal multiplexing in a system with two standard color sensors. Accurate color reproduction and high fluorescence unmixing performance are experimentally demonstrated with a prototype system in a challenging imaging scenario. Through spectral simulation and optimization we show that the system is sensitive to all dyes emitting in the visible and near infrared region without changing filters, and that the SNR of multiple unmixed components can be kept high if parameters are chosen well. We propose a sensitive per-pixel metric of unmixing quality in a single image based on noise propagation, and present a method to visualize the high-dimensional data in a 2D graph, where up to three fluorescent components can be distinguished and segmented.
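The unmixing step at the core of such systems is commonly a linear least-squares fit of per-channel intensities against known dye emission signatures. A minimal sketch with hypothetical signatures (the paper's pipeline additionally covers color reproduction and its noise-propagation quality metric):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical emission signatures of 3 dyes across 6 spectral channels
# (rows: channels, columns: dyes); values are illustrative only.
S = rng.uniform(0.0, 1.0, size=(6, 3))

c_true = np.array([0.8, 0.3, 0.5])               # per-pixel dye abundances
y = S @ c_true + 0.005 * rng.standard_normal(6)  # noisy channel measurements

# Linear spectral unmixing via least squares
c_est, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.round(c_est, 2))                        # close to [0.8, 0.3, 0.5]
```

In a full system this fit runs per pixel, and the conditioning of the signature matrix S is what determines how well the SNR of the unmixed components holds up, matching the abstract's point about choosing parameters well.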

  2. Elasticity Signal and Image Processing Sensor and Algorithms for Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Jong-Ha LEE

    2014-01-01

Full Text Available A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on the elasticity image obtained at the surface of the tissue using an optical elasticity imaging sensor. A forward algorithm is designed to comprehensively predict the elasticity image based on the mechanical properties of the tissue inclusion using finite element modeling. This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the elasticity image. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated with a realistic tissue phantom containing stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 1.12%, and 0.51% relative errors, respectively. A small-scale study with breast cancer patients is also presented. The obtained results show that the proposed method has potential to become a screening and diagnostic method for breast tumors.

  3. High-Resolution Spin-on-Patterning of Perovskite Thin Films for a Multiplexed Image Sensor Array.

    Science.gov (United States)

    Lee, Woongchan; Lee, Jongha; Yun, Huiwon; Kim, Joonsoo; Park, Jinhong; Choi, Changsoon; Kim, Dong Chan; Seo, Hyunseon; Lee, Hakyong; Yu, Ji Woong; Lee, Won Bo; Kim, Dae-Hyeong

    2017-10-01

Inorganic-organic hybrid perovskite thin films have attracted significant attention as an alternative to silicon in photon-absorbing devices, mainly because of their superb optoelectronic properties. However, high-definition patterning of perovskite thin films, which is important for fabrication of the image sensor array, is difficult to accomplish owing to their extreme instability in general photolithographic solvents. Here, a novel patterning process for perovskite thin films is described: the high-resolution spin-on-patterning (SoP) process. This fast and facile process is compatible with a variety of spin-coated perovskite materials and perovskite deposition techniques. The SoP process is successfully applied to develop a high-performance, ultrathin, and deformable perovskite-on-silicon multiplexed image sensor array, paving the way toward next-generation image sensor arrays. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    Science.gov (United States)

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.

  5. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    Directory of Open Access Journals (Sweden)

    Ismail Cevik

    2015-03-01

Full Text Available An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.

  6. Design of a GaAs X-ray imaging sensor with integrated HEMT readout circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Boardman, D

    2002-01-01

A new monolithic semi-insulating (SI) GaAs sensor design for X-ray imaging applications between 10-100 keV has been proposed. Monolithic pixel detectors offer a number of advantages over hybrid bump-bonded detectors, such as high device yield, low costs and easier production of large scale arrays. In this thesis, an investigation is made of the use of a SI GaAs wafer as both a detector element and a substrate for epitaxially grown High Electron Mobility Transistors (HEMTs). The HEMT transistors, optimised for this application, were designed with the aid of the Silvaco 'Virtual Wafer Fab' simulation package. It was determined that the device characteristics should consist of a small positive threshold voltage, a low off-state drain current and a high transconductance. The final HEMT transistor design, to be integrated with a pixel detector, had a threshold voltage of 0.17 V, an off-state leakage current of approximately 1 nA and a transconductance of 7.4 mS. A number of test detectors were characterised using an ion beam induced charge technique. Charge collection efficiency maps of the test detectors were produced to determine their quality as an X-ray detection material. From the results, the inhomogeneity of SI GaAs, the homogeneity of epitaxial GaAs and the granular nature of polycrystalline GaAs were observed. The best of these detectors was used in conjunction with a commercial field effect transistor to produce a hybrid device. The charge switching nature of the hybrid device was shown and a sensitivity of 0.44 pC/µGy·mm², for a detector bias of 60 V, was found. The functionality of the hybrid sensor was the same as that proposed for the monolithic sensor. The fabrication of the monolithic sensor, with an integrated HEMT transistor and an external capacitor, was achieved. Reaching the next stage of producing a monolithic sensor that integrates charge requires further work on the design and the fabrication process. (author)

  7. Optical fiber sensors for image formation in radiodiagnostic - preliminary essays; Sensores a fibra optica para formacao de imagens em radiodiagnostico - ensaios preliminares

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Cesar C. de; Werneck, Marcelo M. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Biomedica

    1998-07-01

This work describes preliminary experiments that will provide the basis for analyzing the feasibility of a system able to capture radiological images with a new sensor system, comprising a fiber-optic (FO) scanning process and an I-CCD camera. The main objective of these experiments is to analyze the optical response of the FO bundle, with several types of scintillators associated with it, when it is submitted to medical x-ray exposure. (author)

  8. Development of a handheld widefield hyperspectral imaging (HSI) sensor for standoff detection of explosive, chemical, and narcotic residues

    Science.gov (United States)

    Nelson, Matthew P.; Basta, Andrew; Patil, Raju; Klueva, Oksana; Treado, Patrick J.

    2013-05-01

The utility of hyperspectral imaging (HSI) passive chemical detection employing wide field, standoff imaging continues to be advanced in detection applications. With the drive for reduced SWaP (Size, Weight, and Power), increased speed of detection and greater sensitivity, a handheld platform that is robust and user-friendly extends the detection capabilities of the end user. In addition, easy to use handheld detectors could improve the effectiveness of locating and identifying threats while reducing risks to the individual. ChemImage Sensor Systems (CISS) has developed the HSI Aperio™ sensor for real time, wide area surveillance and standoff detection of explosives, chemical threats, and narcotics for use in both government and commercial contexts. Employing liquid crystal tunable filter technology, the HSI system has an intuitive user interface that produces automated detections and real-time display of threats, with an end-user-created library of threat signatures that is easily updated to cover new hazardous materials. Unlike existing detection technologies that often require close proximity for sensing and so endanger operators and costly equipment, the handheld sensor allows the individual operator to detect threats from a safe distance. Uses of the sensor include locating production facilities of illegal drugs or IEDs by identification of materials on surfaces such as walls, floors, doors, deposits on production tools and residue on individuals. In addition, the sensor can be used for longer-range standoff applications such as hasty checkpoint or vehicle inspection of residue materials on surfaces or bulk material identification. The CISS Aperio™ sensor has faster data collection, faster image processing, and increased detection capability compared to previous sensors.

  9. Development and Application of Non-Linear Image Enhancement and Multi-Sensor Fusion Techniques for Hazy and Dark Imaging

    Science.gov (United States)

    Rahman, Zia-ur

    2005-01-01

The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted due to fog, haze or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery, such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. We felt that, by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and visible spectrum (VIS), we could substantially improve over the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) investigate the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) fuse the LWIR and SWIR images with the VIS images using the MSR framework to determine the best possible representation of the desired features; 3) evaluate different mixes of LWIR, SWIR and VIS bands for maximum fog and haze reduction, and low light level compensation; 4) modify the existing algorithms to work with video sequences. Over the course of the 3-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2.
The complete list of relevant publications during the research
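The MSR at the heart of this work has a compact form: each scale subtracts a log-domain Gaussian surround from the log image, and the scales are combined with weights. A minimal sketch follows; the surround scales and equal weights are illustrative defaults, not the values tuned in this research.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """Illustrative Multiscale Retinex: weighted sum over surround scales of
    log(image) minus log(Gaussian-blurred image). Scales and weights here are
    assumptions, not the optimized per-band values from the research."""
    img = img.astype(np.float64) + 1.0          # offset to avoid log(0)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        surround = gaussian_filter(img, sigma)  # single-scale surround estimate
        out += w * (np.log(img) - np.log(surround))
    return out
```

On a uniform image the surround equals the image, so the output is identically zero; the enhancement acts only on local contrast.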

  10. Toward 100 Mega-Frames per Second: Design of an Ultimate Ultra-High-Speed Image Sensor

    Directory of Open Access Journals (Sweden)

    Dao Vu Truong Son

    2009-12-01

Our experience in the design of an ultra-high speed image sensor targeting the theoretical maximum frame rate is summarized. The imager is the backside-illuminated in situ storage image sensor (BSI ISIS). It is confirmed that the critical factor limiting the highest frame rate is the signal electron transit time from the generation layer at the back side of each pixel to the input gate of the in situ storage area on the front side. The theoretical maximum frame rate is estimated at 100 Mega-frames per second (Mfps) by a transient simulation study. The sensor has a spatial resolution of 140,800 pixels with 126 linear storage elements installed in each pixel. Very high sensitivity is ensured by the application of backside illumination technology and cooling. The ultra-high frame rate is achieved by the in situ storage image sensor (ISIS) structure on the front side. In this paper, we summarize technologies developed to achieve the theoretical maximum frame rate, including: (1) a special p-well design with triple injections to generate a smooth electric field from the back side towards the collection gate on the front side, resulting in a much shorter electron transit time; (2) a design technique to reduce RC delay by employing an extra metal layer dedicated to the electrodes responsible for ultra-high-speed image capture; (3) a CCD-specific complementary on-chip inductance minimization technique with a pair of stacked differential bus lines.
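The headline figures imply simple timing budgets. Taking only the two numbers quoted in the abstract (100 Mfps and 126 in-pixel storage elements):

```python
# Timing implied by the abstract's figures: 100 Mfps maximum frame rate
# and 126 linear storage elements per pixel.
frame_rate_fps = 100e6                         # theoretical maximum frame rate
frame_period_ns = 1e9 / frame_rate_fps         # per-frame time budget: 10 ns
record_length_us = 126 / frame_rate_fps * 1e6  # continuous recording window: 1.26 us
```

So the electron transit time (plus readout into storage) must fit within a 10 ns frame period, and a full burst covers about 1.26 microseconds of the event.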

  11. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    Science.gov (United States)

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
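Star centroiding, whose accuracy the exposure parameters are optimized for, is commonly computed as an intensity-weighted centroid over a small window around the spot. The sketch below shows that standard sub-pixel estimator; it is not necessarily the paper's exact centroiding method.

```python
import numpy as np

def star_centroid(window, background=0.0):
    """Intensity-weighted centroid of a star-spot window (a common sub-pixel
    estimator; illustrative, not necessarily the paper's exact algorithm)."""
    w = np.clip(window.astype(np.float64) - background, 0.0, None)
    total = w.sum()
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total

# A synthetic Gaussian spot centered at pixel (x=5, y=3) in a 7x11 window:
ys, xs = np.mgrid[0:7, 0:11]
spot = np.exp(-((xs - 5) ** 2 + (ys - 3) ** 2) / 2.0)
cx, cy = star_centroid(spot)
```

For a symmetric, unsaturated spot the estimator recovers the true center to floating-point accuracy; exposure-dependent saturation and noise are what degrade it in practice.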

  12. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2017-04-01

The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

  13. Performance improvement of indoor positioning using light-emitting diodes and an image sensor for light-emitting diode communication

    Science.gov (United States)

    Hossen, Md. Sazzad; Park, Youngil; Kim, Ki-Doo

    2015-04-01

Light-emitting diodes (LEDs) are expected to replace existing lighting technologies in the near future because of the potential dual function of LED light (i.e., wireless communication and lighting) in the context of visible light communication (VLC). We propose a highly precise indoor positioning algorithm using lighting LEDs, an image sensor, and VLC. In the proposed algorithm, three LEDs transmit their three-dimensional coordinate information, which is received and demodulated by a single image sensor at an unknown position. The unknown position is then calculated from the geometrical relations of the LED images created on the image sensor plane. We describe the algorithm in detail. A simulation of the proposed algorithm is presented in this paper. We also compare the performance of this algorithm with that of our previously proposed algorithm. The comparison indicates significant improvement in positioning accuracy because of the simple algorithmic structure and low computational complexity. This technique does not require any angular measurement, which is needed in contemporary positioning algorithms using LEDs and an image sensor. The simulation results show that the proposed system can estimate the unknown position to an accuracy of 0.001 m inside the approximate positioning area when the pixel value is >3000.
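To illustrate the geometric idea of recovering a position from LED image points, here is a minimal pinhole-model sketch that assumes an upward-facing sensor with a vertical optical axis and ceiling-mounted LEDs. The function name, geometry, and parameters are assumptions for illustration; the paper's algorithm differs in detail.

```python
import numpy as np

def receiver_position(leds_xyz, pixels_uv, receiver_height, focal_px):
    """Hedged sketch: estimate the (x, y) of an upward-facing image sensor from
    known 3D LED coordinates and their pixel projections, assuming a pinhole
    camera with a vertical optical axis. Illustrates the geometric idea only."""
    estimates = []
    for (X, Y, Z), (u, v) in zip(leds_xyz, pixels_uv):
        d = Z - receiver_height                  # vertical distance to this LED
        estimates.append((X - u * d / focal_px,  # invert u = (X - x) * f / d
                          Y - v * d / focal_px))
    # Each LED gives an independent estimate; average them.
    return tuple(np.mean(estimates, axis=0))
```

Because each LED independently constrains the receiver's horizontal offset, three LEDs give redundancy without any angular measurement.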

  14. Imaging objects behind a partially reflective surface with a modified time-of-flight sensor

    Science.gov (United States)

    Geerardyn, D.; Kuijk, M.

    2014-05-01

Time-of-Flight (ToF) methods are used in different applications for depth measurements. There are two main types of ToF measurement: Pulsed Time-of-Flight and Continuous-Wave Time-of-Flight. Pulsed Time-of-Flight (PToF) techniques are mostly used in combination with a scanning mirror, which makes them poorly suited to imaging purposes. Continuous-Wave Time-of-Flight (CWToF) techniques are mostly used wide-field, hence they are much faster and better suited to imaging, but they cannot be used behind partially-reflective surfaces. In commercial applications, both ToF methods require specific hardware, which cannot be exchanged. In this paper, we discuss the transformation of a CWToF sensor into a PToF camera, which is able to make images and measure the distances of objects behind a partially-reflective surface, like the air-water interface in swimming pools when looking from above. We first created our own depth camera that is suitable for both CWToF and PToF. We describe the necessary hardware components for a normal ToF camera and compare them with the adapted components which make it a range-gating depth imager. Afterwards, we modeled the distances and images of one or more objects positioned behind a partially-reflective surface and combined them with measurement data of the optical pulse. A scene was virtualized and the rays from a raytracing software tool were exported to Matlab™. Subsequently, pulse deformations were calculated for every pixel, which resulted in the calculation of the depth information.
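The two ToF families recover distance differently: PToF from the round-trip time of a pulse, CWToF from the phase shift at the modulation frequency. Both standard conversions are sketched below; the refractive-index parameter matters in the paper's scenario, since part of the path lies in water behind the air-water interface.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def ptof_distance(round_trip_s, n=1.0):
    """Pulsed ToF: distance from the pulse round-trip time. n is the refractive
    index of the medium (about 1.33 for the water behind the interface)."""
    return (C / n) * round_trip_s / 2.0

def cwtof_distance(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: distance from the measured phase shift at the
    modulation frequency. Unambiguous only within half a modulation wavelength,
    which is one reason CWToF struggles with layered, partially-reflective scenes."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

A 20 ns round trip in air corresponds to about 3 m; the same computation with n = 1.33 shortens the inferred in-water path accordingly.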

  15. 3D imaging for ballistics analysis using chromatic white light sensor

    Science.gov (United States)

    Makrushin, Andrey; Hildebrandt, Mario; Dittmann, Jana; Clausing, Eric; Fischer, Robert; Vielhauer, Claus

    2012-03-01

The novel application of sensing technology based on chromatic white light (CWL) gives a new insight into the ballistic analysis of cartridge cases. The CWL sensor uses a beam of white light to acquire highly detailed topography and luminance data simultaneously. The proposed 3D imaging system combines the advantages of 3D and 2D image processing algorithms in order to automate the extraction of firearm-specific toolmarks left on fired specimens. The most important characteristics of a fired cartridge case are the type of the breech face marking as well as the size, shape and location of the extractor, ejector and firing pin marks. The feature extraction algorithm normalizes the casing surface and consistently searches for the appropriate distortions on the rim and on the primer. The location of the firing pin mark in relation to the lateral scratches on the rim provides unique rotation-invariant characteristics of the firearm mechanisms. Additional characteristics are the volume and shape of the firing pin mark. The experimental evaluation relies on a data set of 15 cartridge cases fired from three 9 mm firearms of different manufacturers. The results show the very high potential of 3D imaging systems for casing-based computer-aided firearm identification, which will prospectively support human expertise.

  16. Novel x-ray image sensor using CsBr:Eu phosphor for computed radiography

    Science.gov (United States)

    Nanto, H.; Takei, Y.; Nishimura, A.; Nakano, Y.; Shouji, T.; Yanagita, T.; Kasai, S.

    2006-03-01

CsBr phosphor ceramics doped with different luminescence centers, such as In2O3, Eu2O3, EuCl3, SmCl3, TbCl3, GdCl3 or NdCl3, as candidates for a new photostimulable phosphor for medical x-ray imaging sensors are prepared using a conventional ceramic fabrication process. It is found that x-ray-irradiated Eu-doped CsBr (CsBr:Eu) exhibits intense photostimulated luminescence (PSL). The peak wavelengths of the PSL emission and stimulation spectra of the CsBr:Eu phosphor ceramic sample are 450 nm and 690 nm, respectively. The dependence of the PSL properties on the preparation conditions of the phosphor ceramic samples, such as Eu concentration, sintering temperature and sintering time, is studied, and the optimum preparation conditions are determined. It is found that the PSL intensity of CsBr:Eu phosphor ceramics fabricated under the optimum preparation conditions is higher than that of a commercially available imaging plate (IP) using BaFBr:Eu. The image quality of the IP using CsBr:Eu phosphor film is better than that of the commercially available IP.

  17. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    Directory of Open Access Journals (Sweden)

    Giovanna Sansoni

    2009-01-01

3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state-of-the-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, cultural heritage, medicine, and criminal investigation applications.

  18. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.
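As a baseline for the data-reuse discussion, the ME kernel itself is full-search block matching with a sum-of-absolute-differences (SAD) criterion. The sketch below is a textbook reference implementation of that kernel, not the paper's buffered on-chip version; it is exactly the memory-access pattern whose reference-frame reads the proposed scheme keeps on-chip.

```python
import numpy as np

def full_search_me(cur, ref, block=8, srange=4):
    """Full-search block-matching motion estimation with a SAD criterion.
    Returns a dict mapping each block's top-left (y, x) to its best (dy, dx)."""
    H, W = cur.shape
    motion_vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int64)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-srange, srange + 1):
                for dx in range(-srange, srange + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        cand = ref[y:y + block, x:x + block].astype(np.int64)
                        sad = int(np.abs(blk - cand).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            motion_vectors[(by, bx)] = best_mv
    return motion_vectors
```

Every candidate offset re-reads an overlapping reference window, which is why reuse of reference (and, in the paper, reconstructed-frame) pixels dominates the memory traffic.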

  19. Atomic-scale nuclear spin imaging using quantum-assisted sensors in diamond

    Science.gov (United States)

    Ajoy, Ashok; Bissbort, Ulf; Liu, Yixiang; Marseglia, Luca; Saha, Kasturi; Cappellaro, Paola

    2015-05-01

    Recent developments in materials fabrication and coherent control have brought quantum magnetometers based on electronic spin defects in diamond close to single nuclear spin sensitivity. These quantum sensors have the potential to be a revolutionary tool in proteomics, thus helping drug discovery: They can overcome some of the challenges plaguing other experimental techniques (x-ray and NMR) and allow single protein reconstruction in their natural conditions. While the sensitivity of diamond-based magnetometers approaches the single nuclear spin level, the outstanding challenge is to resolve contributions arising from distinct nuclear spins in a dense sample and use the acquired signal to reconstruct their positions. This talk describes a strategy to boost the spatial resolution of NV-based magnetic resonance imaging, by combining the use of a quantum memory intrinsic to the NV system with Hamiltonian engineering by coherent quantum control. The proposed strategy promises to make diamond-based quantum sensors an invaluable technology for bioimaging, as they could achieve the reconstruction of biomolecules local structure without the need to crystallize them, to synthesize large ensembles or to alter their natural environment.

  20. Fabrication of CMOS-compatible nanopillars for smart bio-mimetic CMOS image sensors

    KAUST Repository

    Saffih, Faycal

    2012-06-01

In this paper, nanopillars with heights of 1 μm to 5 μm and widths of 250 nm to 500 nm have been fabricated with a near room temperature etching process. The nanopillars were achieved with a continuous deep reactive ion etching technique and utilizing PMMA (polymethylmethacrylate) and chromium as masking layers. As opposed to the conventional Bosch process, the usage of the unswitched deep reactive ion etching technique resulted in nanopillars with smooth sidewalls with a measured surface roughness of less than 40 nm. Moreover, undercut was nonexistent in the nanopillars. The proposed fabrication method achieves etch rates four times faster when compared to the state-of-the-art, leading to higher throughput and more vertical side walls. The fabrication of the nanopillars was carried out keeping the CMOS process in mind to ultimately obtain a CMOS-compatible process. This work serves as an initial step in the ultimate objective of integrating photo-sensors based on these nanopillars seamlessly along with the controlling transistors to build a complete bio-inspired smart CMOS image sensor on the same wafer. © 2012 IEEE.

  1. A new methodology for in-flight radiometric calibration of the MIVIS imaging sensor

    Directory of Open Access Journals (Sweden)

    G. Lechi

    2006-06-01

Sensor radiometric calibration is of great importance in computing physical values of radiance of the investigated targets, but airborne scanners are often not equipped with any in-flight radiometric calibration facility. Consequently, the radiometric calibration of airborne systems usually relies only on pre-flight and vicarious calibration or on indirect approaches. This paper introduces an experimental approach that makes use of on-board calibration techniques to perform the radiometric calibration of the CNR's MIVIS (Multispectral Infrared and Visible Imaging Spectrometer) airborne scanner. This approach relies on the use of an experimental optical test bench originally designed at Politecnico di Milano University (Italy), called the MIVIS Flying Test Bench (MFTB), to perform the first On-The-Fly (OTF) calibration of the MIVIS reflective spectral bands. The main task of this study is to estimate how large the effects introduced by aircraft motion (e.g., e.m. noise or vibrations) and by environmental conditions (e.g., environment temperature) are on the radiance values measured by the MIVIS sensor during flight. This paper describes the first attempt to perform an On-The-Fly (OTF) calibration of the MIVIS reflective spectral bands (ranging from 430 nm to 2,500 nm). Analysis of the results seems to point out limitations of the traditional radiometric calibration methodology based only on pre-flight approaches, with important implications for data quality assessment.

  2. High-Performance Motion Estimation for Image Sensors with Video Compression.

    Science.gov (United States)

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-08-21

It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.

  3. Creation of 3D multi-body orthodontic models by using independent imaging sensors.

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-02-05

In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  4. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-02-01

In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients’ mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  5. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416

  6. Low Dose X-Ray Sources and High Quantum Efficiency Sensors: The Next Challenge in Dental Digital Imaging?

    OpenAIRE

    Mistry, Arnav R.; Daniel Uzbelger Feldman; Jie Yang; Eric Ryterski

    2014-01-01

Objective(s). The major challenge encountered in decreasing the milliampere (mA) level in X-ray imaging systems is quantum noise. This investigation evaluated the dose exposure and image resolution of a low dose X-ray imaging (LDXI) prototype comprising a low mA X-ray source and a novel microlens-based sensor relative to current imaging technologies. Study Design. A LDXI in static (group 1) and dynamic (group 2) modes was compared to medical fluoroscopy (group 3), digital intraoral r...

  7. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Guo-Neng Lu

    2009-01-01

We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both the photo-sensing device and the source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions with experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.
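Conversion gain, which the paper models analytically for the floating-body structure, reduces to first order to q/C at the sense node. A sketch of that standard first-order relation follows; the capacitance value in the test is illustrative, not a figure from the paper.

```python
ELEMENTARY_CHARGE_C = 1.602176634e-19  # charge of one electron, coulombs

def conversion_gain_uV_per_e(node_capacitance_F):
    """First-order conversion gain (microvolts per electron) for a given
    sense-node capacitance. The q/C form is the standard textbook model;
    the paper derives a more detailed expression for its 1T structure."""
    return ELEMENTARY_CHARGE_C / node_capacitance_F * 1e6
```

For example, a sense-node capacitance around 1.6 fF corresponds to roughly 100 µV per electron, which shows why shrinking the node capacitance is the usual route to higher conversion gain.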

  8. A Container Horizontal Positioning Method with Image Sensors for Cranes in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    FU Yonghua

    2014-03-01

Automation is a trend for large container terminals nowadays, and container positioning techniques are a key factor in the automation process. Vision-based positioning techniques are inexpensive and rather accurate in nature, although their performance under insufficient illumination remains in question. This paper proposes a vision-based procedure with image sensors to determine the position of a container in the horizontal plane. The points found by the edge detection operator are clustered, and only the peak points in the parameter space of the Hough transformation are selected, so that the effect of noise is greatly reduced. The effectiveness of our procedure is verified in experiments, in which the efficiency of the procedure is also investigated.
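The peak-only selection in Hough space can be sketched with a small accumulator: each edge point votes for every (rho, theta) line passing through it, and only the strongest bin is kept, so isolated noise points never accumulate enough votes to matter. This is a generic illustration of the idea, not the paper's exact clustering-plus-peak procedure.

```python
import math
import numpy as np

def hough_peak(points, img_diag, n_theta=180):
    """Vote edge points into a (rho, theta) Hough accumulator and return the
    strongest line as (rho, theta_degrees). img_diag bounds |rho|."""
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-img_diag, img_diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    for x, y in points:
        # rho = x*cos(theta) + y*sin(theta), quantized to integer bins
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + img_diag
        acc[r, np.arange(n_theta)] += 1
    ri, ti = np.unravel_index(acc.argmax(), acc.shape)
    return int(rhos[ri]), math.degrees(thetas[ti])
```

Collinear points all vote into the same bin, while scattered noise spreads its votes thinly across the accumulator.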

  9. Initial Comparison of the Lightning Imaging Sensor (LIS) with Lightning Detection and Ranging (LDAR)

    Science.gov (United States)

    Ushio, Tomoo; Driscoll, Kevin; Heckman, Stan; Boccippio, Dennis; Koshak, William; Christian, Hugh

    1999-01-01

The mapping of the lightning optical pulses detected by the Lightning Imaging Sensor (LIS) is compared with the radiation sources located by Lightning Detection and Ranging (LDAR) and the National Lightning Detection Network (NLDN) for three thunderstorms observed during overpasses on 15 August 1998. The comparison involves 122 flashes, including 42 ground and 80 cloud flashes. For ground flashes, the LIS recorded the subsequent strokes and changes inside the cloud. For cloud flashes, the LIS recorded those with higher-altitude sources and a larger number of sources. The discrepancies between the LIS and LDAR flash locations are about 4.3 km for cloud flashes and 12.2 km for ground flashes. The reason for these differences remains a mystery.

  10. New radiological material detection technologies for nuclear forensics: Remote optical imaging and graphene-based sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Richard Karl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Jeffrey B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wiemann, Dora K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Choi, Junoh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Howell, Stephen W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    We developed new detector technologies to identify the presence of radioactive materials for nuclear forensics applications. First, we investigated an optical radiation detection technique based on imaging nitrogen fluorescence excited by ionizing radiation. We demonstrated optical detection in air under indoor and outdoor conditions for alpha particles and gamma radiation at distances up to 75 meters. We also contributed to the development of next generation systems and concepts that could enable remote detection at distances greater than 1 km, and originated a concept that could enable daytime operation of the technique. A second area of research was the development of room-temperature graphene-based sensors for radiation detection and measurement. In this project, we observed tunable optical and charged particle detection, and developed improved devices. With further development, the advancements described in this report could enable new capabilities for nuclear forensics applications.

  11. Hybrid Si nanowire/amorphous silicon FETs for large-area image sensor arrays.

    Science.gov (United States)

    Wong, William S; Raychaudhuri, Sourobh; Lujan, René; Sambandan, Sanjiv; Street, Robert A

    2011-06-08

Silicon nanowire (SiNW) field-effect transistors (FETs) were fabricated from nanowire mats mechanically transferred from a donor growth wafer. Top- and bottom-gate FET structures were fabricated using a doped a-Si:H thin film as the source/drain (s/d) contact. With a graded doping profile for the a-Si:H s/d contacts, the off-current for the hybrid nanowire/thin-film devices was found to decrease by 3 orders of magnitude. Devices with the graded contacts had on/off ratios of ∼10^5, field-effect mobility of ∼50 cm^2/(V·s), and subthreshold swing of 2.5 V/decade. A 2 in. diagonal 160 × 180 pixel image sensor array was fabricated by integrating the SiNW backplane with an a-Si:H p-i-n photodiode.
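The reported subthreshold swing of 2.5 V/decade follows the standard definition S = dVgs/d(log10 Id). A sketch of that extraction on synthetic subthreshold data constructed to have exactly that slope (the data are illustrative, not measurements from the paper):

```python
import numpy as np

def subthreshold_swing_v_per_decade(vgs, ids):
    """Subthreshold swing S = dVgs / d(log10 Id), in V/decade, averaged over
    I-V samples taken in the subthreshold (exponential) region."""
    log_i = np.log10(np.asarray(ids, dtype=float))
    return float(np.mean(np.diff(vgs) / np.diff(log_i)))

# Synthetic exponential subthreshold characteristic with a 2.5 V/decade slope,
# matching the figure reported for the graded-contact devices:
vgs = np.linspace(0.0, 5.0, 11)
ids = 1e-12 * 10 ** (vgs / 2.5)
swing = subthreshold_swing_v_per_decade(vgs, ids)
```

A smaller swing means less gate voltage is needed per decade of current, so 2.5 V/decade (large by crystalline-Si standards) reflects the hybrid device's interface and contact properties.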

  12. Monitoring of Freeze-Thaw Cycles in Concrete Using Embedded Sensors and Ultrasonic Imaging

    Directory of Open Access Journals (Sweden)

    Javier Ranz

    2014-01-01

    This paper studies the damage produced during freeze-thaw (F-T) cycles using two non-destructive measurement approaches: the first performs continuous monitoring with embedded sensors during the cycles, and the second performs ultrasonic imaging before and after the cycles. Both methodologies were tested on two types of concrete specimens, with and without air-entraining agents. In the first approach, the size and distribution of pores were estimated using a thermoporometric model together with continuous measurements of temperature and ultrasonic velocity along the cycles; these estimates were compared with the results obtained by mercury porosimetry testing. In the second approach, the damage due to F-T cycles was evaluated by automated ultrasonic transmission and pulse-echo inspections made before and after the cycles, from which the variations in dimensions, velocity and attenuation caused by the accelerated F-T cycles were determined.

  13. A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel.

    Science.gov (United States)

    Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien

    2017-12-05

    A submicron pixel's light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e⁻/s at 60 °C, an ultra-low read noise of 0.90 e⁻ rms, a high full well capacity (FWC) of 4100 e⁻, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed.

  14. Interferometric Reflectance Imaging Sensor (IRIS): A Platform Technology for Multiplexed Diagnostics and Digital Detection

    Directory of Open Access Journals (Sweden)

    Oguzhan Avci

    2015-07-01

    Over the last decade, the growing need in disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostic tools and the necessity for further improvements. Optical biosensors can lay the foundations for future-generation diagnostics by providing means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, the Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble) biomolecular mass measurements and (ii) high-magnification (digital) detection of individual nanoparticles, along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphisms, quantification of transcription factor DNA binding, and high-sensitivity digital sensing and characterization of nanoparticles and viruses.

  15. Use of LST images from MODIS/AQUA sensor as an indication of frost occurrence in RS

    Directory of Open Access Journals (Sweden)

    Débora de S. Simões

    2015-10-01

    Although frost occurrence causes severe losses in agriculture, especially in the south of Brazil, the minimum air temperature (Tmin) data currently available for monitoring and predicting frosts have insufficient spatial distribution. This study aimed to evaluate the MYD11A1 (LST – Land Surface Temperature) product from the MODIS sensor on board the AQUA satellite as an estimator of frost occurrence in the southeast of the state of Rio Grande do Sul, Brazil. LST images from the nighttime overpass of the MODIS/AQUA sensor for the months of June, July and August from 2006 to 2012, together with data from three conventional weather stations of the National Institute of Meteorology (INMET), were used. Consistency was observed between Tmin data measured at the weather stations and LST data obtained from the MODIS sensor. According to the results, LSTs below 3 °C recorded by the MODIS/AQUA sensor indicate a scenario favorable to frost occurrence.

  16. Bolometric properties of reactively sputtered TiO2-x films for thermal infrared image sensors

    Science.gov (United States)

    Reddy, Y. Ashok Kumar; Kang, In-Ku; Shin, Young Bong; Lee, Hee Chul

    2015-09-01

    A heat-sensitive TiO2-x layer was deposited by RF reactive magnetron sputtering for infrared (IR) image sensors at different relative oxygen mass-flow (R_O2) levels. The deposition rate decreased as the R_O2 percentage increased from 3.4% to 3.7%. TiO2-x samples deposited at room temperature exhibited amorphous characteristics. Oxygen deficiency causes a change in the oxidation state and is assumed to decrease the Ti4+ component on the surfaces of the TiO2-x films. The oxygen stoichiometry (x) in the TiO2-x films decreased from 0.35 to 0.05 as the R_O2 level increased from 3.4% to 3.7%. In TiO2-x test-patterned samples, the resistivity decreased with temperature, confirming typical semiconducting behavior. The bolometric properties, namely the resistivity, the temperature coefficient of resistance (TCR) and the flicker (1/f) noise parameter, were determined at different x values in the TiO2-x samples. The dependence of the TCR on the 1/f noise parameter defines a universal bolometric parameter (β), acting as the dynamic element in a bolometer; β is high when a sample has a relatively low resistivity (0.82 Ω·cm) and a low 1/f noise parameter (3.16 × 10⁻¹²). The results of this study indicate that reactively sputtered TiO2-x is a viable bolometric material for uncooled IR image sensor devices.
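
    As a rough illustration of the key figure of merit above, TCR(T) = (1/R)·dR/dT can be estimated directly from resistance-temperature samples, since d(ln R)/dT equals the TCR. The values below are made up for illustration and are not measurements from the paper; they mimic the negative TCR typical of a semiconducting bolometer film.

```python
import numpy as np

# Hypothetical resistance-vs-temperature samples for a semiconducting
# bolometer film (illustrative values, not data from the paper).
T = np.array([290.0, 295.0, 300.0, 305.0, 310.0])       # temperature, K
R = np.array([1.30e6, 1.10e6, 0.94e6, 0.81e6, 0.70e6])  # resistance, ohm

# TCR(T) = (1/R) * dR/dT. Since d(ln R)/dT = (1/R) dR/dT, the numerical
# gradient of ln(R) with respect to T is the TCR directly.
tcr = np.gradient(np.log(R), T)   # per kelvin
print(tcr * 100)                  # percent per kelvin (negative: R falls with T)
```

A resistivity that falls with temperature, as in the abstract, yields a negative TCR of a few percent per kelvin in this sketch.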

  17. Development of a 55 μm pitch 8 inch CMOS image sensor for the high resolution NDT application

    Science.gov (United States)

    Kim, M. S.; Kim, G.; Cho, G.; Kim, D.

    2016-11-01

    A CMOS image sensor (CIS) with a large area for high-resolution X-ray imaging was designed. The sensor has an active area of 125 × 125 mm², comprising 2304 × 2304 pixels with a pixel size of 55 × 55 μm². First batch samples were fabricated using an 8 inch silicon CMOS image sensor process with a stitching method. To evaluate the performance of the first batch samples, an electro-optical test and an X-ray test after coupling with an image intensifier screen were performed. The primary results showed that the performance of the manufactured sensors was limited by a large stray capacitance arising from the long path between the analog multiplexer on the chip and the bank ADC on the data acquisition board. The measured speed and dynamic range were limited to 12 frames per second and 55 dB, respectively, but other parameters such as the MTF, NNPS and DQE showed good results, as designed. Based on this study, a new X-ray CIS with ~50 μm pitch and ~150 cm² active area is being designed for high-resolution X-ray NDT equipment for semiconductor and PCB inspections.

  18. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics

    Directory of Open Access Journals (Sweden)

    Sheng-Hsun Hsieh

    2016-11-01

    For many practical applications of image sensors, how to extend the depth of field (DoF) is an important research topic; if successfully implemented, it could benefit various applications, from photography to biometrics. In this work, we examine the feasibility and practicality of a well-known extended-DoF (EDoF) technique, wavefront coding, by building real-time long-range iris recognition and performing large-scale iris recognition. The keys to successful long-range iris recognition are a long DoF and image-quality invariance across object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian subjects as the database, 400 mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF factor achieved a result 3.71 times better than the original system without loss of recognition accuracy.

  19. Preoperative implant selection for unilateral breast reconstruction using 3D imaging with the Microsoft Kinect sensor.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine; Taylor, Christopher J; Gandhi, Ashu; Astley, Susan M

    2017-08-01

    This study aimed to investigate whether breast volume measured preoperatively using a Kinect 3D sensor could be used to determine the most appropriate implant size for reconstruction. Ten patients underwent 3D imaging before and after unilateral implant-based reconstruction. Imaging used seven configurations, varying patient pose and Kinect location, which were compared regarding suitability for volume measurement. Four methods of defining the breast boundary for automated volume calculation were compared, and repeatability assessed over five repetitions. The most repeatable breast boundary annotation used an ellipse to track the inframammary fold and a plane describing the chest wall (coefficient of repeatability: 70 ml). The most reproducible imaging position comparing pre- and postoperative volume measurement of the healthy breast was achieved for the sitting patient with elevated arms and Kinect centrally positioned (coefficient of repeatability: 141 ml). Optimal implant volume was calculated by correcting used implant volume by the observed postoperative asymmetry. It was possible to predict implant size using a linear model derived from preoperative volume measurement of the healthy breast (coefficient of determination R(2) = 0.78, standard error of prediction 120 ml). Mastectomy specimen weight and experienced surgeons' choice showed similar predictive ability (both: R(2) = 0.74, standard error: 141/142 ml). A leave one-out validation showed that in 61% of cases, 3D imaging could predict implant volume to within 10%; however for 17% of cases it was >30%. This technology has the potential to facilitate reconstruction surgery planning and implant procurement to maximise symmetry after unilateral reconstruction. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
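
    The prediction scheme in the abstract, a linear model fitted to preoperative volumes and checked by leave-one-out validation, can be sketched as follows. All numbers here are fabricated for illustration; they are not the patient data from the study.

```python
import numpy as np

# Hypothetical data: preoperative healthy-breast volume (ml) versus the
# asymmetry-corrected implant volume actually needed (ml).
preop = np.array([310., 420., 365., 500., 275., 450., 390., 340., 480., 410.])
implant = np.array([280., 395., 330., 470., 250., 430., 360., 310., 455., 385.])

def fit_line(x, y):
    """Least-squares fit implant ≈ a * preop + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Leave-one-out validation: predict each case from a model fitted on the rest.
errors = []
for i in range(len(preop)):
    mask = np.arange(len(preop)) != i
    a, b = fit_line(preop[mask], implant[mask])
    pred = a * preop[i] + b
    errors.append(abs(pred - implant[i]) / implant[i])  # relative error

within_10pct = np.mean(np.array(errors) < 0.10)
print(f"fraction predicted to within 10%: {within_10pct:.2f}")
```

With real patient data the fraction within 10% was 61%; the fabricated data above are nearly collinear, so the sketch only demonstrates the validation mechanics.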

  20. Design and Implementation of a Novel Compatible Encoding Scheme in the Time Domain for Image Sensor Communication

    Directory of Open Access Journals (Sweden)

    Trang Nguyen

    2016-05-01

    This paper presents a modulation scheme in the time domain based on On-Off Keying and proposes various compatibility supports for different types of image sensors. The article is a sub-proposal to the IEEE 802.15.7r1 Task Group (TG7r1) aimed at Optical Wireless Communication (OWC) using an image sensor as the receiver. Compatibility support is indispensable for Image Sensor Communications (ISC) because the rolling-shutter image sensors currently available have different frame rates, shutter speeds, sampling rates, and resolutions. Focusing on unidirectional communications (i.e., data broadcasting, beacons), an asynchronous communication prototype is also discussed in the paper. Owing to the physical limitations of typical image sensors (including low and varying frame rates, long exposures, and low shutter speeds), link speed performance is considered critically. Based on practical measurements of camera response to modulated light, an operating frequency range is suggested, along with the corresponding system architecture, decoding procedure, and algorithms. A significant feature of our novel data frame structure is that it can support both typical frame rate cameras (in the oversampling mode) and very low frame rate cameras (in the error detection mode, for a camera whose frame rate is lower than the transmission packet rate). A high frame rate camera, i.e., no less than 20 fps, is supported in an oversampling mode in which a majority voting scheme is applied to decode the data. A low frame rate camera, i.e., one whose frame rate drops below 20 fps at certain times, is supported by an error detection mode in which any missing data sub-packet is detected during decoding and later corrected by an external code. Numerical results and analysis are also included to indicate the capability of the proposed schemes.
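
    The majority-voting idea used in the oversampling mode can be sketched in a few lines: when the camera samples each transmitted On-Off-Keying bit several times, the decoder takes the most common value in each sample window. This is a minimal sketch of the voting step only, not the paper's full frame structure or decoding procedure.

```python
from collections import Counter

def majority_vote_decode(samples, oversample):
    """Decode an oversampled On-Off-Keying stream: each transmitted bit is
    represented by `oversample` consecutive samples, and the decoded bit is
    the majority value within that window."""
    bits = []
    for i in range(0, len(samples) - oversample + 1, oversample):
        window = samples[i:i + oversample]
        bits.append(Counter(window).most_common(1)[0][0])
    return bits

# Transmitted bits 1, 0, 1 sampled 5x per bit, with one flipped sample per
# bit to mimic camera noise:
rx = [1,1,0,1,1, 0,0,1,0,0, 1,0,1,1,1]
print(majority_vote_decode(rx, 5))  # [1, 0, 1]
```

An odd oversampling factor guarantees no ties for binary data, which is one reason oversampling ratios are usually chosen odd in such schemes.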

  1. Feasibility of fiber optic displacement sensor scanning system for imaging of dental cavity

    Science.gov (United States)

    Rahman, Husna Abdul; Che Ani, Adi Izhar; Harun, Sulaiman Wadi; Yasin, Moh.; Apsari, Retna; Ahmad, Harith

    2012-07-01

    The purpose of this study is to investigate the potential of an intensity-modulated fiber optic displacement sensor scanning system for the imaging of dental cavities. We discuss our preliminary results in the imaging of cavities on various tooth surfaces, as well as measurement of the diameter of the cavities, which are represented by holes drilled into the tooth surfaces. Based on the analysis of displacement measurement, the sensitivities and linear ranges for the molar, canine, hybrid composite resin, and acrylic surfaces are 0.09667 mV/mm and 0.45 mm; 0.775 mV/mm and 0.4 mm; 0.5109 mV/mm and 0.5 mm; and 0.25 mV/mm and 0.5 mm, respectively, with a good linearity of more than 99%. The results also show a clear distinction between the cavity and the surrounding tooth region. The stability, simplicity of design, and low fabrication cost make the system suitable for restorative dentistry.

  2. The Integration of the Image Sensor with a 3-DOF Pneumatic Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2016-07-01

    This study integrates an image sensor with a three-axis pneumatic parallel manipulator that can pick and place objects automatically using feature information extracted from images by the SURF algorithm. The SURF algorithm is adopted to define and match the features of a target object against an object database. To accurately mark the center of the target and strengthen the feature matching results, the random sample consensus method (RANSAC) is utilized. The ASUS Xtion Pro Live depth camera, which can directly estimate the 3-D location of the target point, is used in this study. A set of coordinate estimation calibrations is developed to enhance the accuracy of target location estimation. The study also presents hand gesture recognition exploiting skin detection and noise elimination to determine the active finger count, which serves as the input signal for the parallel manipulator. The end-effector of the parallel manipulator can be driven to the desired poses according to the measured finger count. Finally, the proposed methods successfully achieve feature recognition and pick-and-place of the target object.
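
    The RANSAC step mentioned above can be illustrated with a self-contained sketch: given putative feature matches contaminated by outliers, repeatedly hypothesize a model from a minimal sample and keep the hypothesis with the most inliers. The paper matches SURF features and estimates a full transform; the sketch below simplifies this to a 2D translation on synthetic matches, so the function and data are illustrative assumptions only.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, thresh=2.0, rng=None):
    """Estimate a 2D translation dst ≈ src + t from point matches with
    RANSAC: hypothesize t from one random match, count points whose
    residual is below `thresh`, and keep the best hypothesis."""
    rng = rng or np.random.default_rng(0)
    best_t, best_inliers = None, 0
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # minimal-sample model
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# 40 synthetic matches shifted by (12, -7); the last 10 are gross outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (40, 2))
dst = src + np.array([12.0, -7.0])
dst[30:] += rng.uniform(-50, 50, (10, 2))          # corrupt 10 matches
t, n_in = ransac_translation(src, dst, rng=rng)
print(np.round(t, 1), n_in)
```

Despite 25% outliers, the recovered translation matches the true shift, which is why RANSAC is the standard way to clean up feature matches before localizing a target.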

  3. Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.

    Science.gov (United States)

    Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang

    2015-08-24

    The imaging resolution of complementary metal oxide semiconductor (CMOS) image sensors (CIS) keeps increasing, to approximately 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and the spatial optical crosstalk in a CIS model. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of the nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in a low metal layer of a standard CMOS process, the crosstalk is reduced significantly, making the filters comparable to the pigmented ones in a state-of-the-art backside-illumination CIS.

  4. Calibration of Binocular Vision Sensors Based on Unknown-Sized Elliptical Stripe Images

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2017-12-01

    Most existing calibration methods for binocular stereo vision sensors (BSVS) depend on a high-accuracy target with feature points that are difficult and costly to manufacture. In complex light conditions, optical filters are used for BSVS, but they affect imaging quality. Hence, the use of a high-accuracy target with certain-sized feature points for calibration is not feasible under such conditions. To solve these problems, a calibration method based on unknown-sized elliptical stripe images is proposed. With known intrinsic parameters, the proposed method adopts elliptical stripes located on parallel planes as a medium to calibrate the BSVS online. In comparison with common calibration methods, the proposed method avoids using a high-accuracy target with certain-sized feature points; it is therefore easy to implement and is a realistic method for the calibration of a BSVS with an optical filter. Changing the size of the elliptical curves projected on the target resolves the difficulty of applying the proposed method at different fields of view and distances. Simulations and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.03 mm, which is comparable with that of Zhang's method.

  5. Sensor fusion of electron paramagnetic resonance and magnetorelaxometry data for quantitative magnetic nanoparticle imaging

    Science.gov (United States)

    Coene, A.; Leliaert, J.; Crevecoeur, G.; Dupré, L.

    2017-03-01

    Magnetorelaxometry (MRX) imaging and electron paramagnetic resonance (EPR) are two non-invasive techniques capable of recovering the magnetic nanoparticle (MNP) distribution. Both techniques solve an ill-posed inverse problem to find the spatial MNP distribution. Much research has been devoted to increasing the stability of these inverse problems, with the main objective of improving the quality of MNP imaging. In this paper a proof of concept is presented in which the sensor data of both techniques are fused into EPR-MRX, with the intention of stabilizing the inverse problem. First, the two techniques are compared by reconstructing several phantoms of different sizes at various noise levels and calculating stability, sensitivity and reconstruction-quality parameters for these cases. This study reveals that the two techniques are sensitive to different information in the MNP distributions and generate complementary measurement data; as such, merging them can stabilize the inverse problem. In a next step we investigated how the techniques should be combined to reduce their respective drawbacks, such as a high number of required measurements and reduced stability, and to improve MNP reconstructions. We were able to stabilize both techniques, increase reconstruction quality by an average of 5% and reduce measurement times by 88%. These improvements could make EPR-MRX a valuable and accurate technique in a clinical environment.
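
    The fusion idea, two linear sensor models contributing complementary equations to one regularized inversion, can be sketched on a toy problem. The random matrices below are stand-ins for the MRX and EPR forward operators (which in reality come from the physics of each technique), and the phantom and regularization choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                     # voxels in a toy 1-D volume
x_true = np.zeros(n); x_true[6:10] = 1.0   # phantom MNP distribution

A_mrx = rng.normal(size=(15, n))           # assumed MRX-like operator
A_epr = rng.normal(size=(15, n))           # assumed EPR-like operator

def measure(A, x, noise=0.01):
    return A @ x + noise * rng.normal(size=A.shape[0])

def tikhonov(A, y, lam):
    """Solve min ||A x - y||^2 + lam^2 ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ y)

y_mrx, y_epr = measure(A_mrx, x_true), measure(A_epr, x_true)
A_fused = np.vstack([A_mrx, A_epr])        # sensor fusion = stacked systems
y_fused = np.concatenate([y_mrx, y_epr])
x_rec = tikhonov(A_fused, y_fused, lam=0.1)
print(np.linalg.norm(x_rec - x_true))      # reconstruction error
```

Note that each 15-equation system alone is underdetermined for 20 unknowns, while the stacked system is overdetermined; this is the simplest sense in which fusing complementary measurements stabilizes the inverse problem.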

  6. In-flight radiometric calibration of the Advanced Land Imager and Hyperion sensors on the EO-1 platform and comparisons with other earth observing sensors

    Science.gov (United States)

    Biggar, Stuart F.; Thome, Kurtis J.; Wisniewski, Wit T.

    2002-09-01

    The radiometric calibration of the two optical sensors on the Earth Observing One satellite has been studied as a function of time since launch. The calibration has been determined by ground reference calibrations at well-characterized field sites, such as White Sands Missile Range and dry playas, and by reference to other sensors such as the Enhanced Thematic Mapper Plus (ETM+) on Landsat 7. The ground reference calibrations of the Advanced Land Imager (ALI) give results consistent with the on-board solar calibrator and show a significant shift since preflight calibration in the short-wavelength bands. Similarly, the ground reference calibrations of Hyperion show a change since preflight calibration; however, for Hyperion the largest changes are in the short-wave infrared region of the spectrum. Cross-calibration of ALI with ETM+ is consistent with the ground reference calibrations in the visible and near infrared. Results showing the changes in radiometric calibration are presented.

  7. Imaging Spectroscopy Techniques for Rapid Assessment of Geologic and Cryospheric Science Data from future Satellite Sensors

    Science.gov (United States)

    Calvin, W. M.; Hill, R.

    2016-12-01

    Several efforts are currently underway to develop and launch the next generation of imaging spectrometer systems on satellite platforms for a wide range of Earth observation goals. Systems that include the reflected solar wavelength range up to 2.5 μm will be capable of detailed mapping of the composition of the Earth's surface. Sensors under development include EnMAP, HISUI, PRISMA, HERO, and HyspIRI. These systems are expected to provide global data for insights into and constraints on fundamental geological processes, natural and anthropogenic hazards, and water, energy and mineral resource assessments. Coupled with the development of these sensors is the challenge of bringing a multi-channel user community (from Landsat, MODIS, and ASTER) into the rich science return available from imaging spectrometer systems. Most data end users will never be spectroscopy experts, so making the derived science products accessible to a wide user community is imperative. Simple band parameterizations have been developed for the CRISM instrument at Mars, including mafic and alteration minerals and frost and volatile ice indices. These products enhance and augment the use of that data set by a broader group of scientists. Summary products for terrestrial geologic and water resource applications would help build a wider user base for future satellite systems and rapidly key spectral experts to important regions for detailed spectral mapping. Summary products take advantage of imaging spectroscopy's narrow spectral channels with band depth calculations, in addition to the band ratios commonly used by multi-channel systems (e.g., NDVI, NDWI, NDSI). We are testing summary products for Earth geologic and snow scenes over California using AVIRIS data at 18 m/pixel. This has resulted in several algorithms for rapid mineral discrimination and mapping, and data collected over the melting Sierra snowpack in spring 2016 are expected to generate algorithms for snow grain size and surface
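
    The two kinds of summary products mentioned above, normalized-difference band ratios and continuum-relative band depths, can be sketched as follows. The reflectance values and the 2.2 μm "clay feature" band centers are illustrative assumptions, not AVIRIS data.

```python
# Normalized-difference ratio (NDVI/NDWI/NDSI-style): two broad bands.
def normalized_difference(b1, b2):
    return (b1 - b2) / (b1 + b2)

# Band depth: absorption strength relative to a linear continuum drawn
# between two shoulder bands; `frac` is the center band's fractional
# position between the shoulders.
def band_depth(r_left, r_center, r_right, frac=0.5):
    continuum = (1 - frac) * r_left + frac * r_right
    return 1.0 - r_center / continuum

# Illustrative reflectances:
nir, red = 0.45, 0.08                    # broad bands for NDVI
ndvi = normalized_difference(nir, red)   # vegetation index

r2120, r2200, r2250 = 0.40, 0.28, 0.38   # hypothetical 2.2-um clay feature
depth = band_depth(r2120, r2200, r2250)  # continuum-removed absorption depth
print(round(ndvi, 3), round(depth, 3))   # 0.698 0.282
```

The band-depth form is only possible with an imaging spectrometer's narrow contiguous channels, which is exactly the advantage over multi-channel systems noted in the abstract.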

  8. Comparison of Three Non-Imaging Angle-Diversity Receivers as Input Sensors of Nodes for Indoor Infrared Wireless Sensor Networks: Theory and Simulation

    Directory of Open Access Journals (Sweden)

    Beatriz R. Mendoza

    2016-07-01

    In general, the use of angle-diversity receivers makes it possible to reduce the impact of ambient light noise, path loss and multipath distortion, in part by exploiting the fact that they often receive the desired signal from different directions. Angle-diversity detection can be performed using a composite receiver with multiple detector elements looking in different directions. These are called non-imaging angle-diversity receivers. In this paper, a comparison of three non-imaging angle-diversity receivers as input sensors of nodes for an indoor infrared (IR) wireless sensor network is presented. The receivers considered are the conventional angle-diversity receiver (CDR), the sectored angle-diversity receiver (SDR), and the self-orienting receiver (SOR), which have been proposed or studied by research groups in Spain. To this end, the effective signal-collection area of the three receivers is modelled and a Monte-Carlo-based ray-tracing algorithm is implemented which allows us to investigate the effect on the signal to noise ratio and main IR channel parameters, such as path loss and rms delay spread, of using the three receivers in conjunction with different combination techniques in IR links operating at low bit rates. Based on the results of the simulations, we show that the use of a conventional angle-diversity receiver in conjunction with the equal-gain combining technique provides the solution with the best signal to noise ratio, the lowest computational capacity and the lowest transmitted power requirements, which comprise the main limitations for sensor nodes in an indoor infrared wireless sensor network.
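
    The equal-gain combining technique singled out above simply sums the branch outputs with unit weight. A toy baseband sketch (with made-up signal and noise levels, not the paper's channel model) shows why this improves the signal to noise ratio when the branch noises are independent:

```python
import numpy as np

# Equal-gain combining (EGC) across the detector branches of an
# angle-diversity receiver: branch outputs are summed with unit weight.
rng = np.random.default_rng(0)
n_branches, n_samples = 3, 100_000
signal = 1.0                                   # desired level per branch
noise = rng.normal(0, 0.5, (n_branches, n_samples))
branches = signal + noise                      # each branch: signal + noise

combined = branches.sum(axis=0)                # EGC: unit-gain sum

snr_branch = signal**2 / noise[0].var()        # single-branch SNR
snr_egc = (n_branches * signal)**2 / combined.var()
print(round(snr_egc / snr_branch, 1))          # ≈ n_branches
```

With equal signal levels and independent equal-power noise, the signal amplitude adds coherently while the noise powers add, so EGC gains roughly a factor of the branch count, at essentially zero computational cost, which matches the paper's conclusion about low complexity.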

  9. Comparison of Three Non-Imaging Angle-Diversity Receivers as Input Sensors of Nodes for Indoor Infrared Wireless Sensor Networks: Theory and Simulation.

    Science.gov (United States)

    Mendoza, Beatriz R; Rodríguez, Silvestre; Pérez-Jiménez, Rafael; Ayala, Alejandro; González, Oswaldo

    2016-07-14

    In general, the use of angle-diversity receivers makes it possible to reduce the impact of ambient light noise, path loss and multipath distortion, in part by exploiting the fact that they often receive the desired signal from different directions. Angle-diversity detection can be performed using a composite receiver with multiple detector elements looking in different directions. These are called non-imaging angle-diversity receivers. In this paper, a comparison of three non-imaging angle-diversity receivers as input sensors of nodes for an indoor infrared (IR) wireless sensor network is presented. The receivers considered are the conventional angle-diversity receiver (CDR), the sectored angle-diversity receiver (SDR), and the self-orienting receiver (SOR), which have been proposed or studied by research groups in Spain. To this end, the effective signal-collection area of the three receivers is modelled and a Monte-Carlo-based ray-tracing algorithm is implemented which allows us to investigate the effect on the signal to noise ratio and main IR channel parameters, such as path loss and rms delay spread, of using the three receivers in conjunction with different combination techniques in IR links operating at low bit rates. Based on the results of the simulations, we show that the use of a conventional angle-diversity receiver in conjunction with the equal-gain combining technique provides the solution with the best signal to noise ratio, the lowest computational capacity and the lowest transmitted power requirements, which comprise the main limitations for sensor nodes in an indoor infrared wireless sensor network.

  10. A New Sensor for Surface Process Quantification in the Geosciences - Image-Assisted Tacheometers

    Science.gov (United States)

    Vicovac, Tanja; Reiterer, Alexander; Rieke-Zapp, Dirk

    2010-05-01

    The quantification of earth surface processes in the geosciences requires precise measurement tools. Typical applications for precise measurement systems involve deformation monitoring for geo-risk management, detection of erosion rates, etc. Laser scanners, photogrammetric sensors and image-assisted tacheometers are often employed for such applications. Image-assisted tacheometers offer the user (a metrology expert) an image-capturing system (CCD/CMOS camera) in addition to 3D point measurements. The images of the telescope's visual field are projected onto the camera's chip. The camera is capable of capturing panoramic image mosaics through camera rotation if the axes of the measurement system are driven by computer-controlled motors. With appropriate calibration, these images are accurately geo-referenced and oriented, since the horizontal and vertical angles of rotation are continuously measured and fed into the computer. The oriented images can then be used directly for direction measurements, with no need for control points in object space or further photogrammetric orientation processes. In such a system, viewing angles must be mapped to chip pixels inside the optical field of view. Hence dedicated calibration methods have to be applied, an autofocus unit has to be added to the optical path, and special digital image processing procedures have to be used to detect the points of interest on the objects to be measured. We present such a new optical measurement system for measuring and describing 3D surfaces for the geosciences. Besides the technique and methods, some practical examples will be shown. The system was developed at the Vienna University of Technology (Institute of Geodesy and Geophysics); two interdisciplinary research projects, i-MeaS and SedyMONT, have been launched with the purpose of measuring and interpreting 3D surfaces and surface processes. For the in situ measurement of bedrock erosion the level of surveying accuracy required for recurring sub

  11. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    Directory of Open Access Journals (Sweden)

    M. Hess

    2014-06-01

    An independent means of 3D image quality assessment is introduced, addressed to non-professional users of sensors and freeware, a field largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close-range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through a quantitative assessment of 3D imaging sensors. It will enable users to give precise specifications of the spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, from which a possible winner will emerge.

  12. Fast responsive fluorescence turn-on sensor for Cu²⁺ and its application in live cell imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jiaoliang, E-mail: wangjiaoliang@126.com [College of Chemistry and Environment Engineering, Hunan City University, Yiyang 413000 (China); Li Hao; Long Liping; Xiao Guqing; Xie Dan [College of Chemistry and Environment Engineering, Hunan City University, Yiyang 413000 (China)

    2012-09-15

    A new effective fluorescent sensor based on rhodamine was synthesized, which is induced by Cu²⁺ in aqueous media to produce turn-on fluorescence. The new sensor 1 exhibited good selectivity for Cu²⁺ over other heavy and transition metal (HTM) ions in H₂O/CH₃CN (7:3, v/v). Upon addition of Cu²⁺, a remarkable color change from colorless to pink was easily observed by the naked eye, and the dramatic fluorescence turn-on was corroborated. Furthermore, a kinetic assay indicates that sensor 1 could be used for real-time tracking of Cu²⁺ in cells and organisms. In addition, the turn-on fluorescent change upon the addition of Cu²⁺ was also applied in bioimaging. - Highlights: • A new effective fluorescent sensor based on rhodamine was developed to detect Cu²⁺. • The sensor exhibited fast response and good selectivity at physiological pH. • The sensor is an effective intracellular Cu²⁺ ion imaging agent.

  13. Temperature field reconstruction for minimally invasive cryosurgery with application to wireless implantable temperature sensors and/or medical imaging.

    Science.gov (United States)

    Thaokar, Chandrajit; Rabin, Yoed

    2012-12-01

    There is an undisputed need for temperature-field reconstruction during minimally invasive cryosurgery. The current line of research focuses on developing miniature, wireless, implantable temperature sensors to enable temperature-field reconstruction in real time. This project combines two parallel efforts: (i) developing the hardware necessary for implantable sensors, and (ii) developing mathematical techniques for temperature-field reconstruction in real time, the subject matter of the current study. In particular, this study proposes an approach for temperature-field reconstruction combining data obtained from medical imaging, cryoprobe-embedded sensors, and miniature, wireless, implantable sensors, the development of which is currently underway. This study discusses possible strategies for laying out implantable sensors and approaches for data integration. In particular, prostate cryosurgery is presented as a developmental model and a two-dimensional proof of concept is discussed. It is demonstrated that the lethal temperature can be predicted to a significant degree of certainty with implantable sensors and the technique proposed in the current study, a capability that is as yet unavailable. Copyright © 2012 Elsevier Inc. All rights reserved.
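To illustrate the kind of interpolation step such a reconstruction requires, the sketch below estimates a 2D temperature field from sparse point sensors by inverse-distance weighting. This is an illustrative stand-in only: the paper's actual method fuses imaging data with the sensor readings, and the coordinates, temperatures, and weighting exponent here are assumptions.

```python
import numpy as np

def idw_reconstruct(sensor_xy, sensor_T, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted estimate of a 2D temperature field
    from sparse point sensors (illustrative stand-in for the paper's
    reconstruction, which also integrates medical imaging data)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx, dtype=float)
    for i in np.ndindex(gx.shape):
        d = np.hypot(sensor_xy[:, 0] - gx[i], sensor_xy[:, 1] - gy[i])
        if d.min() < 1e-9:              # grid point coincides with a sensor
            field[i] = sensor_T[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            field[i] = np.sum(w * sensor_T) / np.sum(w)
    return field

# Example: two cryoprobe-embedded sensors at -40 °C and one tissue
# sensor at body temperature (positions in mm, purely hypothetical)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
T = np.array([-40.0, -40.0, 37.0])
field = idw_reconstruct(xy, T, np.linspace(0, 10, 11), np.linspace(0, 8, 9))
```

A useful property of this weighting for a quick sanity check is that the reconstruction is bounded by the minimum and maximum sensor readings.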

  14. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition

    Science.gov (United States)

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell’Orto, Vittorio; Savoini, Giovanni

    2017-01-01

    In Italy, organic egg production farms use free-range housing systems with a large outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of the housing areas could be performed to reduce the levels of atmospheric ammonia and bacterial load without risk, given ozone's toxicity, to either hens or workers. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but limits related to the time necessary to detect a hen were observed. In this study, significant improvements to this sensor are proposed. They were achieved with an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, which were carried out over three weeks. The first week was used to set up the sensor. Different templates for the pattern recognition were studied and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with a size of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The results showed an improvement in the sensor's detection accuracy, which reached sensitivity and specificity values of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a larger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risk to either animals or humans. PMID:28538654
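The detection pipeline described above, thresholding the thermographic image at the floor temperature plus a one-degree shift and then matching an elliptical template, can be sketched as below. The template size, scoring rule and threshold here are toy assumptions (the paper's template is 135 × 63 pixels and its matching procedure is not detailed in the abstract).

```python
import numpy as np

def elliptical_template(h, w):
    """Binary ellipse inscribed in an h x w box (cf. the paper's
    135 x 63-pixel elliptical template)."""
    y, x = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return (((y - cy) / (h / 2)) ** 2 + ((x - cx) / (w / 2)) ** 2) <= 1.0

def detect_hen(thermal, floor_temp, shift=1.0, template=None, score_thresh=0.6):
    """Toy sketch: pixels warmer than floor_temp + shift are foreground;
    a hen is declared where the foreground overlaps the elliptical
    template well enough. Sizes and the scoring rule are assumptions."""
    if template is None:
        template = elliptical_template(9, 5)
    fg = thermal > (floor_temp + shift)
    th, tw = template.shape
    best = 0.0
    for r in range(thermal.shape[0] - th + 1):
        for c in range(thermal.shape[1] - tw + 1):
            overlap = np.logical_and(fg[r:r + th, c:c + tw], template).sum()
            best = max(best, overlap / template.sum())
    return bool(best >= score_thresh)

# Synthetic thermal frame: a warm elliptical blob on a 20 °C floor
img = np.full((20, 20), 20.0)
img[5:14, 8:13][elliptical_template(9, 5)] = 38.0
```

A full implementation would slide the template with normalized cross-correlation rather than this brute-force overlap score, but the thresholding logic is the same.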

  15. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    Science.gov (United States)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current-generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current-generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: a slow area search rate (due to small laser spot sizes) and a lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  16. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition.

    Science.gov (United States)

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell'Orto, Vittorio; Savoini, Giovanni

    2017-05-24

    In Italy, organic egg production farms use free-range housing systems with a large outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of the housing areas could be performed to reduce the levels of atmospheric ammonia and bacterial load without risk, given ozone's toxicity, to either hens or workers. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but limits related to the time necessary to detect a hen were observed. In this study, significant improvements to this sensor are proposed. They were achieved with an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, which were carried out over three weeks. The first week was used to set up the sensor. Different templates for the pattern recognition were studied and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with a size of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The results showed an improvement in the sensor's detection accuracy, which reached sensitivity and specificity values of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a larger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risk to either animals or humans.

  17. An ultrasensitive method of real time pH monitoring with complementary metal oxide semiconductor image sensor.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2015-02-09

    CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH-indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH evaluation was obtained by visual readout. Further, the CMOS image sensor captures the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH range of the solution. A chromaticity diagram and Euclidean distances represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors by in situ sensing. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner. Copyright © 2014 Elsevier B.V. All rights reserved.
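The hue conversion and Euclidean-distance differentiation described above can be sketched in a few lines. The reference RGB triples below are hypothetical calibration points, not values from the paper; a real system would calibrate them against known pH buffers.

```python
import colorsys

# Hypothetical reference RGB readings for a few calibration pH values;
# actual film colors must come from the sensor's own calibration.
REFERENCE = {1: (200, 40, 60), 7: (120, 160, 80), 14: (40, 60, 200)}

def classify_ph(rgb):
    """Assign a measured RGB triple to the nearest calibration point
    by Euclidean distance in RGB space, as the abstract describes."""
    dist = {ph: sum((a - b) ** 2 for a, b in zip(rgb, ref)) ** 0.5
            for ph, ref in REFERENCE.items()}
    return min(dist, key=dist.get)

def rgb_to_hue(rgb):
    """Hue (0-1) of an 8-bit RGB triple, via the stdlib colorsys HSV
    conversion; in the sensor this value would be digitized by the ADC."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]
```

With denser calibration points, the same nearest-neighbor rule discriminates the full pH 1-14 range.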

  18. Accuracy assessment for the radiometric calibration of imaging sensors using preflight techniques relying on the sun as a source

    Science.gov (United States)

    Thome, K.; Czapla-Myers, J.; Kuester, M.; Anderson, N.

    2008-08-01

    The Remote Sensing Group (RSG) at the University of Arizona has performed high-accuracy radiometric calibration in the laboratory for more than 20 years in support of vicarious calibration of space-borne and airborne imaging sensors. Typical laboratory calibration relies on lamp-based sources which, while convenient to operate and control, do not simulate the solar spectrum that is the basic energy source for many imaging systems. Using the sun as a source for preflight radiometric calibration reduces uncertainties caused by the spectral mismatch between the preflight and inflight calibrations, especially when a solar diffuser is the inflight calibration method. Difficulties in using the sun include varying atmospheric conditions, a solar angle that changes during the day and with season, and ensuring traceability to national standards. This paper presents several approaches using the sun as a radiometric calibration source, together with the expected traceable accuracies for each method. The methods include direct viewing of the solar disk with the sensor of interest, illumination of the sensor's inflight solar diffuser by the sun, and illumination of an external diffuser that is imaged by the sensor. The results of the error analysis show that it is feasible to achieve preflight calibration using the sun as a source at the same level of uncertainty as lamp-based approaches. The error analysis is evaluated and compared to solar-radiation-based calibrations of one of RSG's laboratory-grade radiometers.

  19. A novel CMOS image sensor system for quantitative loop-mediated isothermal amplification assays to detect food-borne pathogens.

    Science.gov (United States)

    Wang, Tiantian; Kim, Sanghyo; An, Jeong Ho

    2017-02-01

    Loop-mediated isothermal amplification (LAMP) is considered one of the alternatives to conventional PCR, offering an inexpensive, portable diagnostic system with minimal power consumption. The present work describes the application of LAMP to real-time photon detection and quantitative analysis of nucleic acids, integrated with a disposable complementary metal-oxide semiconductor (CMOS) image sensor. This novel system works as an amplification-coupled detection platform, relying on a CMOS image sensor with the aid of computerized circuitry controlling the temperature and light sources. The CMOS image sensor captures the light passing through the sensor surface and converts it into digital units using an analog-to-digital converter (ADC). The system monitors the real-time photon variation caused by the color changes during amplification. Escherichia coli O157 was used as a proof-of-concept target for quantitative analysis and compared with Staphylococcus aureus and Salmonella enterica to confirm the efficiency of the system. The system detected various DNA concentrations of E. coli O157 in a short time (45 min), with a detection limit of 10 fg/μL. The low-cost, simple, and compact design, with low power consumption, represents a significant advance in the development of a portable, sensitive, user-friendly, real-time, and quantitative analytical tool for point-of-care diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
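Real-time isothermal amplification curves are commonly quantified by a time-to-threshold (Tt) that is log-linear in starting concentration. The sketch below shows that generic analysis, not the paper's own pipeline (which the abstract does not detail); the standard-curve coefficients are assumptions that would come from a dilution series.

```python
def time_to_threshold(times, signal, threshold):
    """Interpolated time at which an amplification curve first crosses
    a signal threshold - a common way to quantify real-time LAMP."""
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i]:
            frac = (threshold - signal[i - 1]) / (signal[i] - signal[i - 1])
            return times[i - 1] + frac * (times[i] - times[i - 1])
    return None  # threshold never reached (e.g., below detection limit)

def concentration_from_tt(tt, slope, intercept):
    """Standard curve: Tt is typically linear in log10(concentration),
    so log10(C) = (Tt - intercept) / slope. Slope/intercept here are
    hypothetical fit coefficients from a calibration dilution series."""
    return 10 ** ((tt - intercept) / slope)
```

For example, a curve crossing its threshold halfway between 10 and 20 minutes gives Tt = 15 min, which the (assumed) standard curve maps back to a starting concentration.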

  20. Wavelength-Scanning SPR Imaging Sensors Based on an Acousto-Optic Tunable Filter and a White Light Laser

    Directory of Open Access Journals (Sweden)

    Youjun Zeng

    2017-01-01

    Full Text Available A fast surface plasmon resonance (SPR) imaging biosensor system based on wavelength interrogation using an acousto-optic tunable filter (AOTF) and a white light laser is presented. The system combines the merits of a wide dynamic detection range and the high sensitivity offered by the spectral approach with multiplexed high-throughput data collection from a two-dimensional (2D) biosensor array. The key feature is the use of the AOTF to realize a wavelength scan from a white laser source and thus achieve fast tracking of the SPR dip movement caused by target molecules binding to the sensor surface. Experimental results show that the system is capable of completing an SPR dip measurement within 0.35 s; to the best of our knowledge, this is the fastest time reported in the literature for imaging spectral interrogation. Based on a spectral window approximately 100 nm wide, a dynamic detection range of 4.63 × 10−2 refractive index units (RIU) and a resolution of 1.27 × 10−6 RIU, achieved with a 2D-array sensor, are reported here. The spectral SPR imaging sensor scheme is capable of fast, high-throughput detection of biomolecular interactions from 2D sensor arrays. The design has no mechanical moving parts, making the scheme completely solid-state.
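Tracking the SPR dip across a wavelength scan amounts to locating the reflectance minimum with sub-sample precision. A common trick, shown below as a sketch (the paper's own fitting method is not specified in the abstract), is to fit a parabola through the three samples around the minimum.

```python
import numpy as np

def spr_dip_wavelength(wavelengths, reflectance):
    """Locate the SPR resonance dip with sub-sample precision by fitting
    a parabola to the three points around the sampled minimum - a
    standard dip-tracking approach, used here for illustration."""
    i = int(np.argmin(reflectance))
    i = min(max(i, 1), len(reflectance) - 2)   # keep a 3-point window
    x = wavelengths[i - 1:i + 2]
    y = reflectance[i - 1:i + 2]
    a, b, _ = np.polyfit(x, y, 2)              # exact fit through 3 points
    return -b / (2 * a)                        # parabola vertex

# Synthetic dip at 653.4 nm on a 1-nm wavelength grid
wl = np.linspace(600, 700, 101)
refl = 1.0 - 0.8 * np.exp(-((wl - 653.4) / 10.0) ** 2)
```

Because each AOTF scan yields one such dip position per array spot, a shift in the recovered wavelength maps directly to a refractive-index change at that spot.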

  1. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper, the development of a solid-state-sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33 × 33 rows and columns, equally spaced to cover an area of approximately 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP control units and streamed to a host computer via a USB 2.0 interface, where the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium; it also allows real-time visualization and determination of the magnetic field distribution emitted by permanent magnets or current-carrying geometries. Although initially developed for the NDE arena, the system could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  2. Carbazole-azine based fluorescence 'off-on' sensor for selective detection of Cu2+ and its live cell imaging.

    Science.gov (United States)

    Christopher Leslee, Denzil Britto; Karuppannan, Sekar; Vengaian, Karmegam Muthu; Gandhi, Sivaraman; Subramanian, Singaravadivel

    2017-11-01

    A new carbazole-azine based fluorescent sensor was synthesized and characterized. The selectivity of the sensor for Cu2+ over other competing ions in a dimethyl sulfoxide/H2O mixture was shown through fluorescence enhancement, an 'off-on' transformation. The specificity of the probe towards Cu2+ was evident in ultraviolet/visible, fluorescence, Fourier transform infrared and mass studies. Application of the probe to the imaging of living cells and its cytotoxicity toward them are illustrated. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones

    Directory of Open Access Journals (Sweden)

    Jeremy H. -S. Wang

    2017-10-01

    Full Text Available This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, aiming to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens, by actuating its voice coil motors (VCMs) at the required speed, to the position that significantly compensates for imaging blur caused by hand shaking. The proposed compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, followed by designing a simple lead-lag controller, based on the established nonlinear EOMs, for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation shows the favorable performance of the designed OIS: it is able to stabilize the lens holder to the desired position within 0.02 s, much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2–2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.

  4. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, aiming to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens, by actuating its voice coil motors (VCMs) at the required speed, to the position that significantly compensates for imaging blur caused by hand shaking. The proposed compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, followed by designing a simple lead-lag controller, based on the established nonlinear EOMs, for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation shows the favorable performance of the designed OIS: it is able to stabilize the lens holder to the desired position within 0.02 s, much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
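A lead-lag compensator C(s) = K(s + z)/(s + p) is simple enough to discretize for fixed-step FPGA computation. The sketch below uses the Tustin (bilinear) transform to turn it into a two-tap difference equation; the gains are placeholders, since the paper derives its own from the nonlinear EOMs.

```python
class LeadLag:
    """Discrete lead-lag compensator C(s) = K*(s + z)/(s + p),
    discretized with the Tustin (bilinear) transform, s -> (2/T)(1-q)/(1+q),
    at sample time T. Gains here are illustrative placeholders."""

    def __init__(self, K, z, p, T):
        c = 2.0 / T
        self.b0 = K * (c + z)   # numerator taps
        self.b1 = K * (z - c)
        self.a0 = c + p         # denominator taps
        self.a1 = p - c
        self.u_prev = 0.0
        self.y_prev = 0.0

    def step(self, u):
        """One sample of y[n] = (b0*u[n] + b1*u[n-1] - a1*y[n-1]) / a0."""
        y = (self.b0 * u + self.b1 * self.u_prev - self.a1 * self.y_prev) / self.a0
        self.u_prev, self.y_prev = u, y
        return y

# With a constant input the output settles to the DC gain K*z/p,
# while the first sample shows the lead network's high-frequency boost.
ll = LeadLag(K=1.0, z=10.0, p=100.0, T=0.001)
y = 0.0
for _ in range(20000):
    y = ll.step(1.0)
```

The Tustin form preserves the DC gain exactly and needs only one state per channel, which is why it suits a per-sample FPGA update loop.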

  5. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.

    Science.gov (United States)

    Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru

    2017-11-09

    Raman imaging eliminates the need for staining procedures, providing label-free imaging for studying biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speeds and hyperspectral imaging. However, there has been a lack of detectors suitable for parallel detection at MHz modulation rates, capable of detecting multiple small SRS signals while rejecting the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that obtains the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The small SRS signal generated is extracted and amplified in each pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel chain comprising a low-pass filter, a sample-and-hold circuit and a switched-capacitor integrator built around a fully differential amplifier. A prototype chip was fabricated in a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples were successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.

  6. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering

    Directory of Open Access Journals (Sweden)

    Kamel Mars

    2017-11-01

    Full Text Available Raman imaging eliminates the need for staining procedures, providing label-free imaging for studying biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speeds and hyperspectral imaging. However, there has been a lack of detectors suitable for parallel detection at MHz modulation rates, capable of detecting multiple small SRS signals while rejecting the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that obtains the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The small SRS signal generated is extracted and amplified in each pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel chain comprising a low-pass filter, a sample-and-hold circuit and a switched-capacitor integrator built around a fully differential amplifier. A prototype chip was fabricated in a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples were successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
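The lock-in principle the pixel implements, cancel a huge constant offset by differencing modulation-synchronous on/off samples, can be demonstrated numerically. This mimics only the principle, not the actual LEFM charge-domain circuit; the offset, SRS gain, and noise level below are arbitrary assumptions.

```python
import numpy as np

def lockin_difference(samples, on_mask):
    """Lock-in demodulation sketch: average the difference between
    Stokes-on and Stokes-off samples so the large DC offset from direct
    laser light cancels and only the small SRS modulation remains."""
    on = samples[on_mask].mean()
    off = samples[~on_mask].mean()
    return on - off

rng = np.random.default_rng(0)
n = 20000
on_mask = (np.arange(n) % 2) == 0          # alternating Stokes-on/off samples
offset, srs = 1000.0, 0.05                 # huge laser offset, tiny SRS gain
samples = offset + srs * on_mask + rng.normal(0.0, 0.5, n)
est = lockin_difference(samples, on_mask)
```

Here a signal 20,000 times smaller than the offset is recovered simply because the offset is common to both phases; averaging over many modulation cycles beats down the remaining noise.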

  7. Proximity gettering technology for advanced CMOS image sensors using carbon cluster ion-implantation technique. A review

    Energy Technology Data Exchange (ETDEWEB)

    Kurita, Kazunari; Kadono, Takeshi; Okuyama, Ryousuke; Shigemastu, Satoshi; Hirose, Ryo; Onaka-Masada, Ayumi; Koga, Yoshihiro; Okuda, Hidehiko [SUMCO Corporation, Saga (Japan)

    2017-07-15

    A new technique is described for manufacturing advanced silicon wafers with the highest capability yet reported for gettering transition-metal, oxygen, and hydrogen impurities in CMOS image sensor fabrication processes. Carbon and hydrogen are localized in the projection range of the silicon wafer by implantation of ion clusters from a hydrocarbon molecular gas source. Furthermore, during heat treatment these wafers can getter oxygen impurities, out-diffused from the Czochralski-grown silicon wafer substrate toward the device active regions, into the carbon-cluster ion projection range. They can therefore reduce the formation of transition-metal and oxygen-related defects in the device active regions and improve electrical performance characteristics, such as dark current, white spot defects, pn-junction leakage current, and image lag. The new technique enables the formation of high-gettering-capability sinks for transition-metal, oxygen, and hydrogen impurities under the device active regions of CMOS image sensors. Wafers formed by this technique have the potential to significantly improve electrical device performance in advanced CMOS image sensors. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  8. Image Quality Assessment of a CMOS/Gd2O2S:Pr,Ce,F X-Ray Sensor

    Directory of Open Access Journals (Sweden)

    Christos Michail

    2015-01-01

    Full Text Available The aim of the present study was to examine the image quality performance of a CMOS digital imaging optical sensor coupled to custom-made gadolinium oxysulfide powder scintillators doped with praseodymium, cerium, and fluorine (Gd2O2S:Pr,Ce,F). The screens, with coating thicknesses of 35.7 and 71.2 mg/cm2, were prepared in our laboratory from Gd2O2S:Pr,Ce,F powder (Phosphor Technology, Ltd.) by sedimentation on silica substrates and were placed in direct contact with the optical sensor. Image quality was determined through a single-index parameter (information capacity, IC) and spatial-frequency-dependent parameters, by assessing the Modulation Transfer Function (MTF) and the Normalized Noise Power Spectrum (NNPS). The MTF was measured using the slanted-edge method. The CMOS sensor/Gd2O2S:Pr,Ce,F screen combinations were irradiated under the RQA-5 (IEC 62220-1) beam quality. The detector response function was linear over the exposure range under investigation. Under general radiography conditions, both Gd2O2S:Pr,Ce,F screen/CMOS combinations exhibited moderate imaging properties, in terms of IC, compared with previously published scintillators such as CsI:Tl, Gd2O2S:Tb, and Gd2O2S:Eu.
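The core of slanted-edge MTF estimation is differentiating the edge-spread function (ESF) into a line-spread function and Fourier-transforming it. The sketch below shows that simplified chain on a synthetic Gaussian-blurred edge; a standards-compliant implementation (IEC 62220-1 / ISO 12233) additionally oversamples the ESF using the edge angle, which is omitted here.

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, dx=1.0):
    """Simplified slanted-edge chain: differentiate the edge-spread
    function into the line-spread function, then take the magnitude of
    its FFT normalized at zero frequency. Edge-angle oversampling from
    the full slanted-edge method is deliberately omitted."""
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)   # cycles per pixel
    return freqs, mtf / mtf[0]

# Synthetic edge: ideal step blurred by a Gaussian PSF with sigma = 2 px,
# whose ESF is the error function
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([(1 + erf(v / (sigma * 2 ** 0.5))) / 2 for v in x])
freqs, mtf = mtf_from_esf(esf)
```

For this Gaussian blur the analytic MTF is exp(-2·pi²·sigma²·f²), so the curve should start at 1 and be nearly gone by 0.25 cycles/pixel, a handy self-check before applying the chain to measured edges.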

  9. Optical Demonstration of a Medical Imaging System with an EMCCD-Sensor Array for Use in a High Resolution Dynamic X-ray Imager.

    Science.gov (United States)

    Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T; Cartwright, Alexander N; Titus, Albert H; Bednarek, Daniel R; Rudin, Stephen

    2010-10-30

    Use of an extensible array of Electron-Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise relative to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table, with a pulsed LED bar providing uniform diffuse light onto the film to simulate x-ray projection images. The system can run at up to 17.5 frames per second, or at even higher frame rates with binning. The integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated-double-sampling AD converters were used to digitize the images, which were acquired in real time by a National Instruments dual-channel Camera Link PC board. A user-friendly interface was programmed using LabVIEW to save and display digital images with a 2K × 1K pixel matrix. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains, as well as fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high-resolution, high-dynamic-range images stitched together with minimal adjustment needed. The EMCCD array design allows for expansion to an M × N array for an arbitrarily larger FOV while maintaining high resolution and large dynamic range.

  10. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-11-01

    Full Text Available This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the focused areas, yielding a decision map. The map is then used to guide construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperforms various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
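The Sum-Modified-Laplacian (SML) focus measure used by the fusion rules above is easy to state concretely: the modified Laplacian at each pixel, summed over a small window. The sketch below is a minimal version assuming step size 1 and a 3x3 window; the paper's exact parameters are not given in the abstract.

```python
import numpy as np

def sum_modified_laplacian(img, win=1):
    """Sum-Modified-Laplacian focus measure: the modified Laplacian
    |2I - I_left - I_right| + |2I - I_up - I_down| at each pixel,
    box-summed over a (2*win+1)^2 neighborhood. Step size 1 assumed."""
    img = img.astype(float)
    ml = np.zeros_like(img)
    ml[1:-1, :] += np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
    ml[:, 1:-1] += np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
    pad = np.pad(ml, win, mode="constant")
    sml = np.zeros_like(img)
    for dy in range(-win, win + 1):            # box-sum the ML map
        for dx in range(-win, win + 1):
            sml += pad[win + dy:win + dy + img.shape[0],
                       win + dx:win + dx + img.shape[1]]
    return sml

sharp = np.zeros((16, 16)); sharp[:, 8:] = 100.0           # sharp step edge
blurred = np.linspace(0, 100, 16)[None, :].repeat(16, 0)   # smooth ramp
```

In a fusion rule, the coefficient (or block) from whichever source image has the larger SML response is kept, since large SML indicates in-focus detail.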

  11. Microwave Imaging Sensor Using Compact Metamaterial UWB Antenna with a High Correlation Factor.

    Science.gov (United States)

    Islam, Md Moinul; Islam, Mohammad Tariqul; Faruque, Mohammad Rashed Iqbal; Samsuzzaman, Md; Misran, Norbahiah; Arshad, Haslina

    2015-07-23

    The design of a compact metamaterial ultra-wideband (UWB) antenna intended for application in microwave imaging systems for detecting unwanted cells in human tissue, such as in breast cancer, heart failure and brain stroke detection, is proposed. The proposed UWB antenna is made of four metamaterial unit cells, where each cell is an integration of a modified split ring resonator (SRR), a capacitive loaded strip (CLS) and a wire, to attain a design layout that simultaneously exhibits both negative magnetic permeability and negative electrical permittivity. This design results in a negative refractive index that enables amplification of the antenna's radiated power, and therefore high antenna performance. A low-cost FR4 substrate is used to design and print the antenna, with the following characteristics: thickness of 1.6 mm, relative permeability of one, relative permittivity of 4.60 and loss tangent of 0.02. The overall antenna size is 19.36 mm × 27.72 mm × 1.6 mm, corresponding to an electrical dimension of 0.20 λ × 0.28 λ × 0.016 λ at the 3.05 GHz lower frequency band. Voltage Standing Wave Ratio (VSWR) measurements show that this antenna exhibits an impedance bandwidth from 3.05 GHz to more than 15 GHz for VSWR < 2, with an average gain of 4.38 dBi throughout the operating frequency band. The simulations (both HFSS and computer simulation technology, CST) and the measurements are in close agreement. A high correlation factor and the capability of detecting tumour simulants confirm that the reported UWB antenna can be used as an imaging sensor.
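The VSWR < 2 bandwidth criterion above follows directly from the reflection coefficient: VSWR = (1 + |Γ|)/(1 − |Γ|) with |Γ| = 10^(S11/20), so VSWR = 2 corresponds to S11 of about −9.54 dB. A minimal helper illustrating the relation:

```python
def vswr(s11_db):
    """VSWR from the magnitude of S11 in dB:
    |Gamma| = 10**(S11/20), VSWR = (1 + |Gamma|) / (1 - |Gamma|).
    The abstract's VSWR < 2 criterion corresponds to S11 below
    roughly -9.54 dB (|Gamma| = 1/3)."""
    gamma = 10 ** (s11_db / 20.0)
    return (1 + gamma) / (1 - gamma)
```

For example, a measured return loss of 20 dB (S11 = −20 dB) gives VSWR ≈ 1.22, comfortably inside the matched band.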

  12. Coseismic displacements from SAR image offsets between different satellite sensors: Application to the 2001 Bhuj (India) earthquake

    KAUST Repository

    Wang, Teng

    2015-09-05

    Synthetic aperture radar (SAR) image offset tracking is increasingly being used for measuring ground displacements, e.g., due to earthquakes and landslide movement. However, this technique has so far been applied only to images acquired by the same or identical satellites. Here we propose a novel approach for determining offsets between images acquired by different satellite sensors, extending the usability of existing SAR image archives. The offsets are measured between two multi-image reflectivity maps obtained from different SAR data sets, which provide significantly better results than single pre-event and post-event images. Application to the 2001 Mw7.6 Bhuj earthquake reveals, for the first time, its near-field deformation using multiple pre-earthquake ERS and post-earthquake Envisat images. The rupture model estimated from these cross-sensor offsets and teleseismic waveforms shows a compact fault slip pattern with fairly short rise times (<3 s) and a large stress drop (20 MPa), explaining the intense shaking observed in the earthquake.
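    [Note: offset tracking of the kind described rests on locating the cross-correlation peak between image patches. A brute-force integer-pixel numpy sketch follows; real processors use oversampled sub-pixel matching on the multi-image reflectivity maps, which is not shown here.]

    ```python
    import numpy as np

    def offset_by_xcorr(ref, moved, max_shift=5):
        """Return the (dy, dx) integer shift of `moved` relative to `ref` that
        maximizes the normalized cross-correlation over a small search window."""
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                # undo the candidate shift (circularly, for simplicity)
                shifted = np.roll(np.roll(moved, -dy, axis=0), -dx, axis=1)
                a = ref - ref.mean()
                b = shifted - shifted.mean()
                ncc = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)
                if ncc > best:
                    best, best_shift = ncc, (dy, dx)
        return best_shift
    ```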

  13. Improved accuracy and speed in scanning probe microscopy by image reconstruction from non-gridded position sensor data.

    Science.gov (United States)

    Ziegler, Dominik; Meyer, Travis R; Farnham, Rodrigo; Brune, Christoph; Bertozzi, Andrea L; Ashby, Paul D

    2013-08-23

    Scanning probe microscopy (SPM) has facilitated many scientific discoveries utilizing its strengths of spatial resolution, non-destructive characterization and realistic in situ environments. However, quantitative applications require accurate spatial data, which is challenging for SPM, especially when imaging at higher frame rates. We present a new operation mode for scanning probe microscopy that uses advanced image processing techniques to render accurate images based on position sensor data. This technique, which we call sensor inpainting, frees the scanner from the requirement of being at a specific location at a given time. This drastically reduces the engineering effort of position control and enables the use of scan waveforms that are better suited to the high-inertia nanopositioners of SPM. Whereas in raster scanning typically only trace or retrace images are used for display, in Archimedean spiral scans 100% of the data can be displayed, and at least a two-fold increase in temporal or spatial resolution is achieved. In the new mode, the grid size of the final generated image is an independent variable. Inpainting to a few times more pixels than there are samples creates images that more accurately represent the ground truth.
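    [Note: rendering an image from non-gridded position-sensor samples starts by binning the samples onto the target grid. A minimal numpy sketch follows; the paper's inpainting step, which fills the empty pixels, is only indicated in the comment and all names are illustrative.]

    ```python
    import numpy as np

    def grid_from_samples(xs, ys, values, grid_n):
        """Bin non-gridded (x, y, value) samples (coordinates in [0, 1)) onto a
        grid_n x grid_n image by averaging the samples that fall into each pixel;
        empty pixels are NaN. An inpainting step would then fill the NaN pixels
        from their neighbors."""
        acc = np.zeros((grid_n, grid_n))
        cnt = np.zeros((grid_n, grid_n))
        ix = np.clip((np.asarray(xs) * grid_n).astype(int), 0, grid_n - 1)
        iy = np.clip((np.asarray(ys) * grid_n).astype(int), 0, grid_n - 1)
        np.add.at(acc, (iy, ix), values)   # accumulate sample values per pixel
        np.add.at(cnt, (iy, ix), 1)        # count samples per pixel
        return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
    ```

    Making the output grid size an argument mirrors the paper's point that the final pixel count is an independent variable of the reconstruction.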

  14. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors

    Directory of Open Access Journals (Sweden)

    Soo-Chul Lim

    2015-07-01

    Full Text Available Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose expanding the touch region of the screen to the back of the user’s hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum-intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of a finger touching the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces.
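    [Note: the dynamic time warping mentioned for gesture recognition aligns two sequences of possibly unequal length. A classic textbook sketch follows; a deployed recognizer would compare gyroscope traces against stored gesture templates, which is beyond this fragment.]

    ```python
    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D sequences."""
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # extend the cheapest of the three admissible warping moves
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]
    ```

    A gesture is accepted when its DTW distance to a template falls below a tuned threshold, which is how unintended wrist motion can be filtered out.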

  15. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-12-01

    Full Text Available Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically within the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, where it was shown that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection.

  16. Effect of Cu pad morphology on direct-Cu pillar formation in CMOS image sensors

    Science.gov (United States)

    Choi, Eunmi; Kim, Areum; Cui, Eunwha; Lee, Ukjae; Son, Hyung Bin; Hahn, Sang June; Pyo, Sung Gyu

    2014-09-01

    We report the feasibility of forming Ni bumps directly on Cu pads in CMOS image sensor (CIS) logic elements formed by Cu wires with diameters of less than 65 nm. The direct Ni bump process proposed in this study simplifies fabrication and reduces costs by eliminating the need for an Al pad process. In addition, this process secures the margin of the final layer, enabling the realization of thin camera modules. In this study, we evaluated the effect of pad annealing on the direct formation of Ni bumps over Cu pads. The results suggest that the morphology of the Cu pad varies depending on the annealing sequence, and post-passivation annealing resulted in fewer defects than pad etch annealing. The shear stress of the Ni bumps was 57.77 mgf/m2, six times greater than the corresponding reference value. Furthermore, we evaluated the reliability of a chip with an anisotropic conductive film (ACF) and a non-conducting paste (NCP) by using high-temperature storage (HTS), thermal cycling (TC), and wet high-temperature storage (WHTS) reliability tests. The evaluation results suggest the absence of abnormalities in all samples.

  17. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    Science.gov (United States)

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-tempo-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.
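    [Note: the nonlocal-means (NLM) step can be illustrated on a single pixel of a plain grayscale image. The numpy sketch below is deliberately simplified; it ignores the CFA sub-sampling and the PBD refinement that are central to the paper, and all names and parameters are illustrative.]

    ```python
    import numpy as np

    def nlm_denoise_pixel(img, y, x, patch=1, search=3, h=0.5):
        """Nonlocal-means estimate of one pixel: average of search-window pixels
        weighted by exp(-||patch difference||^2 / h^2)."""
        H, W = img.shape

        def get_patch(cy, cx):
            return img[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]

        ref = get_patch(y, x)
        num = den = 0.0
        for cy in range(max(patch, y - search), min(H - patch, y + search + 1)):
            for cx in range(max(patch, x - search), min(W - patch, x + search + 1)):
                d2 = ((get_patch(cy, cx) - ref) ** 2).mean()  # patch dissimilarity
                w = np.exp(-d2 / h ** 2)
                num += w * img[cy, cx]
                den += w
        return num / den
    ```

    The CFA variant in the paper computes such patch differences per color plane of the mosaic, so that red, green and blue samples are only compared with like samples.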

  18. Demonstration of plant fluorescence by imaging technique and Intelligent FluoroSensor

    Science.gov (United States)

    Lenk, Sándor; Gádoros, Patrik; Kocsányi, László; Barócsi, Attila

    2015-10-01

    Photosynthesis is a process that converts carbon dioxide into organic compounds, especially sugars, using the energy of sunlight. The absorbed light energy is used mainly for photosynthesis initiated at the reaction centers of chlorophyll-protein complexes, but part of it is lost as heat and chlorophyll fluorescence. Therefore, measurement of the latter can be used to estimate photosynthetic activity. The basic method, in which intact leaves are illuminated with strong light after a dark adaptation of at least 20 minutes, resulting in a transient change of the fluorescence emission of the fluorophore chlorophyll-a known as the `Kautsky effect', is demonstrated with an imaging setup. The experimental kit includes a high-radiance blue LED and a CCD camera (or a human eye) equipped with a red transmittance filter to detect the changing fluorescence radiation. However, for the measurement of several fluorescence parameters describing the plant physiological processes in detail, a variety of excitation light sources and an adequate detection method are needed. Several fluorescence induction protocols (e.g. traditional Kautsky, pulse amplitude modulated and excitation kinetic) are realized in the Intelligent FluoroSensor instrument. Using it, students are able to measure different plant fluorescence induction curves, quantitatively determine characteristic parameters and qualitatively interpret the measured signals.
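    [Note: a typical quantity derived from such induction curves is the maximum PSII quantum yield Fv/Fm. The abstract does not name which parameters the instrument reports, so the following is a generic example of that standard calculation.]

    ```python
    def fv_fm(f0, fm):
        """Maximum PSII quantum yield from dark-adapted minimal (F0) and
        maximal (Fm) chlorophyll fluorescence: Fv/Fm = (Fm - F0) / Fm."""
        if fm <= 0 or fm <= f0:
            raise ValueError("Fm must be positive and greater than F0")
        return (fm - f0) / fm

    # Healthy dark-adapted leaves typically give Fv/Fm around 0.8;
    # stressed plants show markedly lower values.
    ```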

  20. Handheld and mobile hyperspectral imaging sensors for wide-area standoff detection of explosives and chemical warfare agents

    Science.gov (United States)

    Gomer, Nathaniel R.; Gardner, Charles W.; Nelson, Matthew P.

    2016-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the investigation and analysis of targets in complex backgrounds with a high degree of autonomy. HSI is beneficial for the detection of threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Two HSI techniques that have proven to be valuable are Raman and shortwave infrared (SWIR) HSI. Unfortunately, current-generation HSI systems have numerous size, weight, and power (SWaP) limitations that make their integration onto a handheld or field-portable platform difficult. The systems that are field-portable achieve this by sacrificing system performance, typically by providing an inefficient area search rate, requiring close proximity to the target for screening, and/or eliminating the potential to conduct real-time measurements. To address these shortcomings, ChemImage Sensor Systems (CISS) is developing a variety of wide-field hyperspectral imaging systems. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rates (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications of these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  1. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensor misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensor or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software package including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain, utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensor and two intra-sensor use cases and show registration results in the sub-pixel range, with root mean square errors of around 0.3 pixels or better.
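    [Note: the phase-correlation core of such a matcher is compact. A minimal integer-pixel numpy sketch using the Fourier shift theorem follows; AROSICS itself adds windowing, sub-pixel peak estimation and validation filtering, none of which are shown here.]

    ```python
    import numpy as np

    def phase_correlation_shift(ref, target):
        """Estimate the integer (dy, dx) shift of `target` relative to `ref`.

        The normalized cross-power spectrum of two shifted images inverse-
        transforms to a delta function located at the displacement."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(target)
        cross = np.conj(F1) * F2
        cross /= np.abs(cross) + 1e-12        # keep the phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrap-around indices to signed shifts
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return int(dy), int(dx)
    ```

    Running this inside a moving window over an image pair yields the dense grid of shift vectors described in the abstract.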

  2. Laser beam welding quality monitoring system based in high-speed (10 kHz) uncooled MWIR imaging sensors

    Science.gov (United States)

    Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo

    2015-05-01

    The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automating processes towards zero-defect manufacturing demand smarter heads, where lasers, optics, actuators, sensors and electronics are integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process. Temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and spatial distribution. This work describes the results of using an innovative low-cost high-speed infrared imager based on the first quantum infrared imager on the market monolithically integrated with a Si-CMOS ROIC. The sensor is able to provide low-resolution images at frame rates up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. In order to demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melting-pool images to be registered at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored. The classifier was fed with the experimental images obtained. ...

  3. Functional tomographic fluorescence imaging of pH microenvironments in microbial biofilms by use of silica nanoparticle sensors.

    Science.gov (United States)

    Hidalgo, Gabriela; Burns, Andrew; Herz, Erik; Hay, Anthony G; Houston, Paul L; Wiesner, Ulrich; Lion, Leonard W

    2009-12-01

    Attached bacterial communities can generate three-dimensional (3D) physicochemical gradients that create microenvironments where local conditions are substantially different from those in the surrounding solution. Given their ubiquity in nature and their impacts on issues ranging from water quality to human health, better tools for understanding biofilms and the gradients they create are needed. Here we demonstrate the use of functional tomographic imaging via confocal fluorescence microscopy of ratiometric core-shell silica nanoparticle sensors (C dot sensors) to study the morphology and temporal evolution of pH microenvironments in axenic Escherichia coli PHL628 and mixed-culture wastewater biofilms. Testing of 70-, 30-, and 10-nm-diameter sensor particles reveals a critical size for homogeneous biofilm staining, with only the 10-nm-diameter particles capable of successfully generating high-resolution maps of biofilm pH and distinct local heterogeneities. Our measurements revealed pH values that ranged from 5 to >7, confirming the heterogeneity of the pH profiles within these biofilms. pH was also analyzed following glucose addition to both suspended and attached cultures. In both cases, the pH became more acidic, likely due to glucose metabolism causing the release of tricarboxylic acid cycle acids and CO2. These studies demonstrate that the combination of 3D functional fluorescence imaging with well-designed nanoparticle sensors provides a powerful tool for in situ characterization of chemical microenvironments in complex biofilms.
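    [Note: ratiometric sensing of this kind maps a dye/reference intensity ratio to pH through a sigmoidal calibration curve. The sketch below uses an entirely illustrative calibration; r_min, r_max and the pKa are invented for the example and are not from the paper.]

    ```python
    import math

    def ph_from_ratio(ratio, r_min=0.2, r_max=2.0, pka=6.5):
        """Invert a sigmoidal ratiometric calibration of the form
        R(pH) = r_min + (r_max - r_min) / (1 + 10**(pka - pH)),
        returning the pH for a measured sensing/reference intensity ratio R."""
        if not (r_min < ratio < r_max):
            raise ValueError("ratio outside calibrated range")
        return pka - math.log10((r_max - r_min) / (ratio - r_min) - 1)
    ```

    Applying such an inversion pixel by pixel to the two confocal channels is what turns a ratio image into the pH maps described above.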

  4. Assessments of F16 Special Sensor Microwave Imager and Sounder Antenna Temperatures at Lower Atmospheric Sounding Channels

    OpenAIRE

    Banghua Yan; Fuzhong Weng

    2009-01-01

    The main reflector of the Special Sensor Microwave Imager/Sounder (SSMIS) aboard the Defense Meteorological Satellite Program (DMSP) F-16 satellite emits variable radiation, and the SSMIS warm calibration load is intruded upon by direct and indirect solar radiation. These contamination sources produce antenna brightness temperature anomalies of around 2 K at SSMIS sounding channels, which are obviously inappropriate for assimilation into numerical weather prediction models and remote sensing retrievals ...

  5. Use of LST images from MODIS/AQUA sensor as an indication of frost occurrence in RS

    OpenAIRE

    Débora de S. Simões; Denise C. Fontana; Matheus B. Vicari

    2015-01-01

    Although frost occurrence causes severe losses in agriculture, especially in the south of Brazil, the minimum air temperature (Tmin) data currently available for monitoring and predicting frosts show insufficient spatial distribution. This study aimed to evaluate the MDY11A1 (LST, Land Surface Temperature) product from the MODIS sensor on board the AQUA satellite as an estimator of frost occurrence in the southeast of the state of Rio Grande do Sul, Brazil. LST images from ...

  6. Digital Image Sensor-Based Assessment of the Status of Oat (Avena sativa L.) Crops after Frost Damage

    OpenAIRE

    Isidro Villegas-Romero; Matilde Santos; Antonia Macedo-Cruz; Gonzalo Pajares

    2011-01-01

    The aim of this paper is to classify land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a CCD-based digital colour camera sensor. Unsupervised classification methods are applied because the plants present different spectral signatures, depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of the colour ...

  8. Low Dose X-Ray Sources and High Quantum Efficiency Sensors: The Next Challenge in Dental Digital Imaging?

    Directory of Open Access Journals (Sweden)

    Arnav R. Mistry

    2014-01-01

    Full Text Available Objective(s). The major challenge encountered in decreasing the milliampere (mA) level in X-ray imaging systems is the quantum noise phenomenon. This investigation evaluated dose exposure and image resolution of a low dose X-ray imaging (LDXI) prototype, comprising a low-mA X-ray source and a novel microlens-based sensor, relative to current imaging technologies. Study Design. An LDXI in static (group 1) and dynamic (group 2) modes was compared to medical fluoroscopy (group 3), digital intraoral radiography (group 4), and CBCT scan (group 5) using a dental phantom. Results. The Mann-Whitney test showed no statistical significance (α=0.01) in dose exposure between groups 1 and 3 and groups 1 and 4, or in timing exposure (seconds) between groups 1 and 5 and groups 2 and 3. The image resolution test showed group 1 > group 4 > group 2 > group 3 > group 5. Conclusions. The LDXI proved the concept of obtaining high-definition image resolution for static and dynamic radiography at lower or similar dose exposure and smaller pixel size, respectively, when compared to current imaging technologies. The principles of lower mA at the X-ray source and high QE at the detector level with microlenses could be applied to current imaging technologies to considerably reduce dose exposure without compromising image resolution in the near future.

  9. Low-Voltage 96 dB Snapshot CMOS Image Sensor with 4.5 nW Power Dissipation per Pixel

    Directory of Open Access Journals (Sweden)

    Orly Yadid-Pecht

    2012-07-01

    Full Text Available Modern “smart” CMOS sensors have penetrated various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low-power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, selective power-down, transistors with different threshold voltages, non-ratioed logic, and a low-voltage static memory. A combination of all these approaches has enabled the design of a low-voltage “smart” image sensor capable of reaching a remarkable dynamic range while consuming very low power. The proposed power-saving solutions have allowed the standard architecture of the sensor to be maintained, reducing both the time and the cost of the design. In order to maintain image quality, the relation between sensor performance and power has been analyzed, and a mathematical model describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies is proposed. The described sensor was implemented in a 0.18 μm CMOS process and successfully tested in the laboratory. An SNR of 48 dB and a DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.
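    [Note: the 96 dB figure follows the usual definition of sensor dynamic range in decibels. A one-line conversion is sketched below; it is the generic definition, not a formula taken from the paper.]

    ```python
    import math

    def dynamic_range_db(max_signal, noise_floor):
        """Dynamic range in dB: 20*log10(largest non-saturating signal / noise floor)."""
        return 20 * math.log10(max_signal / noise_floor)

    # A 96 dB range therefore corresponds to a signal-to-floor ratio of roughly 63000:1,
    # far beyond the ~60-70 dB of a conventional single-exposure pixel.
    ```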

  10. Chip-scale fluorescence microscope based on a silo-filter complementary metal-oxide semiconductor image sensor.

    Science.gov (United States)

    Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei

    2013-06-01

    We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.

  11. The Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS) on the Landsat Data Continuity Mission (LDCM)

    Science.gov (United States)

    Reuter, Dennis; Irons, James; Lunsford, Allen; Montanero, Matthew; Pellerano, Fernando; Richardson, Cathleen; Smith, Ramsey; Tesfaye, Zelalem; Thome, Kurtis

    2011-01-01

    The Landsat Data Continuity Mission (LDCM), a joint NASA and United States Geological Survey (USGS) mission, is scheduled for launch in December 2012. The LDCM instrument payload will consist of the Operational Land Imager (OLI), provided by Ball Aerospace and Technology Corporation (BATC) under contract to NASA, and the Thermal Infrared Sensor (TIRS), provided by NASA's Goddard Space Flight Center (GSFC). This paper will describe the design, capabilities and status of the OLI and TIRS instruments. The OLI will provide 8-channel multispectral images at a spatial resolution of 30 meters and panchromatic images at 15-meter spatial resolution. The TIRS is a 100-meter spatial resolution push-broom imager whose two spectral channels, centered at 10.8 and 12 microns, split the ETM+ thermal band. The two channels allow the use of the "split-window" technique to aid in atmospheric correction. The TIRS focal plane consists of three Quantum Well Infrared Photodetector (QWIP) arrays to span the 185 km swath width. The OLI and TIRS instruments will be operated independently but in concert with each other. Data from both instruments will be merged into a single data stream at the USGS Earth Resources Observation and Science (EROS) facility. The ground system, being developed by USGS, includes an Image Assessment System (IAS), similar to Landsat-7's, to operationally monitor, characterize and update the calibrations of the two sensors.
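    [Note: the "split-window" technique exploits the differential water-vapor absorption between the two thermal channels. The generic linear form is sketched below; the coefficients a and b are purely illustrative, as operational algorithms fit them to atmospheric and emissivity conditions.]

    ```python
    def split_window_lst(t11, t12, a=2.0, b=0.0):
        """Generic split-window surface temperature estimate:
        Ts = T11 + a*(T11 - T12) + b, where T11 and T12 are the brightness
        temperatures (K) of the ~10.8 um and ~12 um channels. The channel
        difference grows with atmospheric water vapor, so it corrects the
        more transparent 10.8 um channel toward the true surface value."""
        return t11 + a * (t11 - t12) + b
    ```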

  12. Color imaging via nearest neighbor hole coupling in plasmonic color filters integrated onto a complementary metal-oxide semiconductor image sensor.

    Science.gov (United States)

    Burgos, Stanley P; Yokogawa, Sozo; Atwater, Harry A

    2013-11-26

    State-of-the-art CMOS imagers are composed of very small pixels, so it is critical for plasmonic imaging to understand the optical response of finite-size hole arrays and their coupling efficiency to CMOS image sensor pixels. Here, we demonstrate that the transmission spectra of finite-size hole arrays can be accurately described by only accounting for up to the second nearest-neighbor scattering-absorption interactions of hole pairs, thus making hole arrays appealing for close-packed color filters for imaging applications. Using this model, we find that the peak transmission efficiency of a square-shaped hole array with a triangular lattice reaches ∼90% that of an infinite array at an extent of ∼6 × 6 μm², the smallest size array showing near-infinite array transmission properties. Finally, we experimentally validate our findings by investigating the transmission and imaging characteristics of a 360 × 320 pixel plasmonic color filter array composed of 5.6 × 5.6 μm² RGB color filters integrated onto a commercial black and white 1/2.8 in. CMOS image sensor, demonstrating full-color high resolution plasmonic imaging. Our results show good color fidelity with a 6-color-averaged color difference metric (ΔE) in the range of 16.6-19.3, after white balancing and color-matrix correcting raw images taken with f-numbers ranging from 1.8 to 16. The integrated peak filter transmission efficiencies are measured to be in the 50% range, with a FWHM of 200 nm for all three RGB filters, in good agreement with the spectral response of isolated unmounted color filters.
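    [Note: in its simplest (CIE76) form, the color-difference metric ΔE reported above is the Euclidean distance in CIELAB. The abstract does not state which ΔE variant was averaged, so the sketch below shows the CIE76 form as an assumption.]

    ```python
    import math

    def delta_e_cie76(lab1, lab2):
        """CIE76 color difference: Euclidean distance between two (L*, a*, b*) triples.
        Values below ~2 are generally considered imperceptible; the 16.6-19.3 range
        above indicates a clearly visible but usable color shift."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
    ```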

  13. A combined sensor for simultaneous high resolution 2-D imaging of oxygen and trace metals fluxes

    DEFF Research Database (Denmark)

    Stahl, Henrik; Warnken, Kent W.; Sochaczewski, Lukasz

    2012-01-01

    A new sandwich sensor, consisting of an O2 planar optode overlain by a thin (90 μm) DGT layer, is presented. This sensor can simultaneously resolve 2-D O2 dynamics and trace metal fluxes in benthic substrates at high spatial resolution. The DGT layer accumulates metals on a small particle si...

  14. A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller

    Science.gov (United States)

    2017-03-01

    A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller (Jong Hwan Ko et al.). The node targets applications such as military surveillance, combining noise-robust moving object detection with a region-of-interest based rate controller. Keywords: moving object detection, region of interest, rate control. In wireless image sensor nodes for moving object surveillance, energy efficiency can be ...

  15. The Laser Vegetation Imaging Sensor (LVIS): An Airborne Laser Altimeter for Mapping Vegetation and Topography

    Science.gov (United States)

    Bryan, J.; Rabine, David L.

    1998-01-01

    The Laser Vegetation Imaging Sensor (LVIS) is an airborne laser altimeter designed to quickly and extensively map surface topography as well as the relative heights of other reflecting surfaces within the laser footprint. Since 1997, this instrument has primarily been used as the airborne simulator for the Vegetation Canopy Lidar (VCL) mission, a spaceborne mission designed to measure tree height, vertical structure and ground topography (including sub-canopy topography). LVIS is capable of operating from 500 m to 10 km above ground level with footprint sizes from 1 to 60 m. Laser footprints can be randomly spaced within the 7 degree telescope field-of-view, constrained only by the operating frequency of the Nd:YAG Q-switched laser (500 Hz). A significant innovation of the LVIS altimeter is that all ranging, waveform recording, and range gating are performed using a single digitizer, clock base, and detector. A portion of the outgoing laser pulse is fiber-optically fed into the detector used to collect the return signal, and this entire time history of the outgoing and return pulses is digitized at 500 Msamp/sec. The ground return is then located using software digital signal processing, even in the presence of visibly opaque clouds. The surface height distribution of all reflecting surfaces within the laser footprint can be determined, for example, tree height and ground elevation. To date, the LVIS system has been used to monitor topographic change at Long Valley caldera, CA, as part of NASA's Topography and Surface Change program, and to map tree structure and sub-canopy topography at the La Selva Biological Research Station in Costa Rica, as part of the pre-launch calibration activities for the VCL mission. We present results that show the laser altimeter consistently and accurately maps surface topography, including sub-canopy topography, and vegetation height and structure. These results confirm the measurement concept of VCL and highlight the benefits of
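The ranging step described above — digitizing the outgoing pulse and the return on the same clock, then locating the ground in software — can be sketched as follows. This is a toy stand-in for the LVIS processing: it thresholds the waveform against the pre-trigger noise floor and takes the last above-threshold sample as the lowest reflecting surface; the pre-trigger window length, threshold factor `k`, and the synthetic waveform are all illustrative assumptions, not details from the paper.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def ground_range(waveform, t0_index, sample_rate_hz=500e6, k=5.0):
    """Locate the ground return as the LAST above-threshold sample of a
    digitized lidar waveform and convert its delay relative to the
    recorded outgoing pulse (t0_index) into a one-way range."""
    noise = waveform[:32]                       # assumed pre-trigger samples
    threshold = noise.mean() + k * noise.std()
    above = np.flatnonzero(waveform > threshold)
    if above.size == 0:
        return None
    dt = (above[-1] - t0_index) / sample_rate_hz
    return 0.5 * C * dt                         # two-way time -> one-way range

# Synthetic waveform: outgoing pulse at sample 100, canopy return at 400,
# ground return at 500 (one sample = 2 ns at 500 Msamp/s, ~0.3 m of range).
rng = np.random.default_rng(42)
wf = rng.normal(0.0, 0.1, 1000)
wf[100] += 50.0   # outgoing pulse
wf[400] += 30.0   # canopy return
wf[500] += 50.0   # ground return
print(ground_range(wf, t0_index=100))
```

Because the canopy return arrives earlier than the ground return, taking the last peak yields the sub-canopy surface; the spread of above-threshold samples between the two returns is what encodes vegetation height.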

  16. System and method for three-dimensional image reconstruction using an absolute orientation sensor

    KAUST Repository

    Giancola, Silvio

    2018-01-18

    A three-dimensional image reconstruction system includes an image capture device, an inertial measurement unit (IMU), and an image processor. The image capture device captures image data. The inertial measurement unit (IMU) is affixed to the image capture device and records IMU data associated with the image data. The image processor includes one or more processing units and memory for storing instructions that are executed by the one or more processing units, wherein the image processor receives the image data and the IMU data as inputs and utilizes the IMU data to pre-align the first image and the second image, and wherein the image processor utilizes a registration algorithm to register the pre-aligned first and second images.

  17. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors

    Science.gov (United States)

    Kawahito, Shoji; Seo, Min-Woong

    2016-01-01

    This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms). PMID:27827972
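For white read noise, averaging M samples of both the reset and signal levels makes the differential read noise scale as sqrt(2/M) of the single-read noise, which is consistent with the roughly sqrt(64)-fold improvement between the reported CMS gains of two and 128. A minimal simulation of that scaling (white noise only; the single-read noise value is illustrative, and real devices see the improvement flatten at large M because of 1/f noise):

```python
import numpy as np

def cms_read(read_noise_e, n_samples, rng):
    """One correlated-multiple-sampling readout: average n_samples reads of
    the reset level and of the (here zero) signal level, then subtract."""
    reset = rng.normal(0.0, read_noise_e, n_samples).mean()
    signal = rng.normal(0.0, read_noise_e, n_samples).mean()
    return signal - reset

rng = np.random.default_rng(0)
sigma = 2.4  # e-rms per single read (assumed, not the paper's device value)
for m in (2, 16, 128):
    trials = [cms_read(sigma, m, rng) for _ in range(20000)]
    print(m, round(float(np.std(trials)), 2))  # scales as sigma * sqrt(2/m)
```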

  18. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Shoji Kawahito

    2016-11-01

    This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms).

  19. A Negative Index Metamaterial-Inspired UWB Antenna with an Integration of Complementary SRR and CLS Unit Cells for Microwave Imaging Sensor Applications.

    Science.gov (United States)

    Islam, Mohammad Tariqul; Islam, Md Moinul; Samsuzzaman, Md; Faruque, Mohammad Rashed Iqbal; Misran, Norbahiah

    2015-05-20

    This paper presents a negative index metamaterial incorporated UWB antenna with an integration of complementary SRR (split-ring resonator) and CLS (capacitive loaded strip) unit cells for microwave imaging sensor applications. This metamaterial UWB antenna sensor consists of four unit cells along one axis, where each unit cell incorporates a complementary SRR and CLS pair. This integration enables a design layout that allows both a negative value of permittivity and a negative value of permeability simultaneously, resulting in a durable negative index that enhances the antenna sensor performance for microwave imaging sensor applications. The proposed MTM antenna sensor was designed and fabricated on an FR4 substrate having a thickness of 1.6 mm and a dielectric constant of 4.6. The electrical dimensions of this antenna sensor are 0.20 λ × 0.29 λ at the lower frequency of 3.1 GHz. This antenna sensor achieves a 131.5% bandwidth (VSWR < 2) covering the frequency bands from 3.1 GHz to more than 15 GHz with a maximum gain of 6.57 dBi. The high fidelity factor and gain, smooth surface-current distribution and nearly omni-directional radiation patterns with low cross-polarization confirm that the proposed negative index UWB antenna is a promising entrant in the field of microwave imaging sensors.

  20. MHz rate X-Ray imaging with GaAs:Cr sensors using the LPD detector system

    Science.gov (United States)

    Veale, M. C.; Booker, P.; Cline, B.; Coughlan, J.; Hart, M.; Nicholls, T.; Schneider, A.; Seller, P.; Pape, I.; Sawhney, K.; Lozinskaya, A. D.; Novikov, V. A.; Tolbanov, O. P.; Tyazhev, A.; Zarubin, A. N.

    2017-02-01

    The STFC Rutherford Appleton Laboratory (U.K.) and Tomsk State University (Russia) have been working together to develop and characterise detector systems based on chromium-compensated gallium arsenide (GaAs:Cr) semiconductor material for high frame rate X-ray imaging. Previous work has demonstrated the spectroscopic performance of the material and its resistance to damage induced by high fluxes of X-rays. In this paper, recent results from experiments at the Diamond Light Source Synchrotron have demonstrated X-ray imaging with GaAs:Cr sensors at a frame rate of 3.7 MHz using the Large Pixel Detector (LPD) ASIC, developed by STFC for the European XFEL. Measurements have been made using a monochromatic 20 keV X-ray beam delivered in a single hybrid pulse with an instantaneous flux of up to ~1 × 10¹⁰ photons s⁻¹ mm⁻². The response of 500 μm GaAs:Cr sensors is compared to that of the standard 500 μm thick LPD Si sensors.

  1. Derivation of 2.5D image models from one-dimensional x-ray image sensors

    Science.gov (United States)

    Evans, J. Paul O.; Godber, Simon X.; Robinson, Max

    1996-04-01

    This paper describes on-going research into the development of a 2.5D image modeling technique based on the extraction of relative depth information from stereoscopic x-ray images. This research was initiated in order to aid operators of security x-ray screening equipment in the interpretation of complex radiographic images. It can be shown that a stereoscopic x-ray image can be thought of as a series of depth planes or slice images which are similar in some respects to tomograms produced by computed tomography systems. Thus, if the slice images can be isolated, the resulting 3D data set can be used for image reconstruction. Conceptually, the production of a 2.5D image from a stereoscopic image can be thought of as the process of replacing the physiological depth cue of binocular parallax, inherent in a stereoscopic image, with psychological depth cues such as occlusion and rotation. Once the data is represented in this form it is envisaged that, for instance in a security imaging scenario, a suspicious object could be electronically unpacked. The work presented in this paper is based on images obtained from a stereoscopic folded-array dual-energy x-ray screening system, designed and developed by the Nottingham Trent University group.

  2. Interferometric microstructured polymer optical fiber ultrasound sensor for optoacoustic endoscopic imaging in biomedical applications

    DEFF Research Database (Denmark)

    Gallego, Daniel; Sáez-Rodríguez, David; Webb, David

    2014-01-01

    We report a characterization of the acoustic sensitivity of microstructured polymer optical fiber interferometric sensors at ultrasonic frequencies from 100 kHz to 10 MHz. The use of wide-band ultrasonic fiber optic sensors in biomedical ultrasonic and optoacoustic applications is an open alternative...... to conventional piezoelectric transducers. These kinds of sensors, made of biocompatible polymers, are good candidates for the sensing element in an optoacoustic endoscope because of their high sensitivity, their shape and their non-brittle, non-electric nature. The acoustic sensitivity of the intrinsic fiber optic...... interferometric sensors depends strongly on the material of which they are composed. In this work we compare experimentally the intrinsic ultrasonic sensitivities of a PMMA mPOF with three other optical fibers: a single-mode silica optical fiber, a single-mode polymer optical fiber and a multimode graded...

  3. Interferometric microstructured polymer optical fiber ultrasound sensor for optoacoustic endoscopic imaging in biomedical applications

    Science.gov (United States)

    Gallego, Daniel; Sáez-Rodríguez, David; Webb, David; Bang, Ole; Lamela, Horacio

    2014-05-01

    We report a characterization of the acoustic sensitivity of microstructured polymer optical fiber interferometric sensors at ultrasonic frequencies from 100 kHz to 10 MHz. The use of wide-band ultrasonic fiber optic sensors in biomedical ultrasonic and optoacoustic applications is an open alternative to conventional piezoelectric transducers. These kinds of sensors, made of biocompatible polymers, are good candidates for the sensing element in an optoacoustic endoscope because of their high sensitivity, their shape and their non-brittle, non-electric nature. The acoustic sensitivity of the intrinsic fiber optic interferometric sensors depends strongly on the material of which they are composed. In this work we compare experimentally the intrinsic ultrasonic sensitivities of a PMMA mPOF with three other optical fibers: a single-mode silica optical fiber, a single-mode polymer optical fiber and a multimode graded-index perfluorinated polymer optical fiber.

  4. Noise reduction effect and analysis through serial multiple sampling in a CMOS image sensor with floating diffusion boost-driving

    Science.gov (United States)

    Wakabayashi, Hayato; Yamaguchi, Keiji; Yamagata, Yuuki

    2017-04-01

    We have developed a 1/2.3-in. 10.3-megapixel back-illuminated CMOS image sensor utilizing serial multiple sampling. This sensor achieves an RMS random noise of 1.3 e− and a row temporal noise (RTN) of 0.19 e−. Serial multiple sampling is realized with a column inline averaging technique without the need for additional processing circuitry. Pixel readout is accomplished utilizing a 4-shared-pixel floating diffusion (FD) boost-driving architecture. The RTN caused by column parallel readout was analyzed considering the transfer function at the system level, and the developed model was verified against measurement data taken at each sampling time. This model demonstrates an RTN improvement of −1.6 dB in a parallel multiple readout architecture.

  5. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    Directory of Open Access Journals (Sweden)

    Thomas C. Wilkes

    2016-10-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  6. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    Science.gov (United States)

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  7. Quasi-pixel structured nanocrystalline Gd2O3(Eu) scintillation screens and imaging performance for indirect X-ray imaging sensors

    Science.gov (United States)

    Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Seo, Chang-Woo; Jeon, Sungchae; Huh, Young

    2011-08-01

    A novel quasi-pixel structured scintillation screen based on nanocrystalline Gd2O3:Eu particles was introduced for indirect X-ray imaging sensors with high sensitivity and high spatial resolution. A nanocrystalline Gd2O3:Eu scintillating phosphor with an average particle size of 100 nm was used as the material converting incident X-rays into optical photons. In this work, silicon-based pixel structures with pixel sizes of 100 and 50 μm, a wall width of 10 μm and a thickness of 120 μm were fabricated by standard photolithography and a deep reactive ion etching (DRIE) process. The pixelated scintillation screen was fabricated by filling the synthesized nanocrystalline Gd2O3:Eu scintillating phosphor into the pixel-structured silicon arrays, and the X-ray imaging performance of the fabricated samples, such as relative light intensity, X-ray-to-light response and spatial resolution in terms of the modulation transfer function (MTF), was measured. Although high spatial resolution imaging was achieved by the pixel-structured nanocrystalline Gd2O3:Eu scintillation screens, the X-ray sensitivity was still too low for medical imaging applications. As a result, novel quasi-pixel structured screens with an additional thin Gd2O2S:Tb scintillating layer were proposed for X-ray imaging detectors with suitable sensitivity and spatial resolution in comparison with pixel-structured screens, and the X-ray imaging performance of the quasi-pixel structured nanocrystalline Gd2O3:Eu scintillating screens was investigated.

  8. Lightning Imaging Sensor (LIS) on the International Space Station (ISS): Launch, Installation, Activation, and First Results

    Science.gov (United States)

    Blakeslee, R. J.; Christian, H. J., Jr.; Mach, D. M.; Buechler, D. E.; Koshak, W. J.; Walker, T. D.; Bateman, M. G.; Stewart, M. F.; O'Brien, S.; Wilson, T. O.; Pavelitz, S. D.; Coker, C.

    2016-12-01

    Over the past 20 years, the NASA Marshall Space Flight Center, the University of Alabama in Huntsville, and their partners developed and demonstrated the effectiveness and value of space-based lightning observations as a remote sensing tool for Earth science research and applications, and, in the process, established a robust global lightning climatology. The observations included measurements from the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) and its Optical Transient Detector (OTD) predecessor that acquired global observations of total lightning (i.e., intracloud and cloud-to-ground discharges) spanning a period from May 1995 through April 2015. As an exciting follow-on to these prior missions, a space-qualified LIS built as a flight spare for TRMM will be delivered to the International Space Station (ISS) for a 2 year or longer mission, flown as a hosted payload on the Department of Defense (DoD) Space Test Program-Houston 5 (STP-H5) mission. The STP-H5 payload containing LIS is scheduled to launch from NASA's Kennedy Space Center to the ISS in November 2016, aboard the SpaceX Cargo Resupply Services-10 (SpaceX-10) mission, installed in the unpressurized "trunk" of the Dragon spacecraft. After the Dragon is berthed to ISS Node 2, the payload will be removed from the trunk and robotically installed in a nadir-viewing location on the external truss of the ISS. Following installation on the ISS, the LIS Operations Team will work with the STP-H5 and ISS Operations Teams to power-on LIS and begin instrument checkout and commissioning. Following successful activation, LIS orbital operations will commence, managed from the newly established LIS Payload Operations Control Center (POCC) located at the National Space Science Technology Center (NSSTC) in Huntsville, AL. The well-established and robust processing, archival, and distribution infrastructure used for TRMM was easily adapted to the ISS mission, assuring that lightning

  9. A Very Low Dark Current Temperature-Resistant, Wide Dynamic Range, Complementary Metal Oxide Semiconductor Image Sensor

    Science.gov (United States)

    Mizobuchi, Koichi; Adachi, Satoru; Tejada, Jose; Oshikubo, Hiromichi; Akahane, Nana; Sugawa, Shigetoshi

    2008-07-01

    A very low dark current (VLDC) temperature-resistant approach which best suits a wide dynamic range (WDR) complementary metal oxide semiconductor (CMOS) image sensor with a lateral over-flow integration capacitor (LOFIC) has been developed. By implementing a low electric field photodiode without a trade-off of full well-capacity, reduced plasma damage, re-crystallization, and termination of silicon-silicon dioxide interface states in the front end of line and back end of line (FEOL and BEOL) in a 0.18 µm, two polycrystalline silicon, three metal (2P3M) process, the dark current is reduced to 11 e-/s/pixel (0.35 e-/s/µm2: pixel area normalized) at 60 °C, which is the lowest value ever reported. For further robustness at low and high temperatures, 1/3-in., 5.6-µm pitch, 800×600 pixel sensor chips with low noise readout circuits designed for a signal and noise hold circuit and a programmable gain amplifier (PGA) have also been deposited with an inorganic cap layer on a micro-lens and covered with a metal hermetically sealed package assembly. Image sensing performance results in 2.4 e-rms temporal noise and 100 dB dynamic range (DR) with 237 ke- full well-capacity. The operating temperature range is extended from -40 to 85 °C while retaining good image quality.

  10. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Science.gov (United States)

    Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air

    2008-01-01

    Thanks to their attractive features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, and active/passive range finders. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments was conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field. PMID:27879789
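The simple triangulation underlying such active range finders reduces to one formula: a laser spot imaged at lateral offset x from the optical axis gives range z = f·b/x, where f is the focal length and b the laser-to-camera baseline. A minimal sketch (all parameter values are illustrative assumptions; the paper does not publish its optics):

```python
def triangulation_range(spot_offset_px, pixel_pitch_m, focal_length_m, baseline_m):
    """Active triangulation: range z = f * b / x, with x the laser-spot
    offset on the sensor converted from pixels to metres."""
    x = spot_offset_px * pixel_pitch_m
    return focal_length_m * baseline_m / x

# Example: 5.6 um pixel pitch, 8 mm lens, 10 cm baseline, spot 100 px off-axis
z = triangulation_range(100, 5.6e-6, 8e-3, 0.10)
print(round(z, 3))  # 1.429 (metres)
```

Since z is inversely proportional to the spot offset, a one-pixel offset error translates into a range error that grows roughly as z², which is why the achievable resolution is quoted as a percentage of the measurement range.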

  11. Modeling the dark current histogram induced by gold contamination in complementary-metal-oxide-semiconductor image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Domengie, F., E-mail: florian.domengie@st.com; Morin, P. [STMicroelectronics Crolles 2 (SAS), 850 Rue Jean Monnet, 38926 Crolles Cedex (France); Bauza, D. [CNRS, IMEP-LAHC - Grenoble INP, Minatec: 3, rue Parvis Louis Néel, CS 50257, 38016 Grenoble Cedex 1 (France)

    2015-07-14

    We propose a model for dark current induced by metallic contamination in a CMOS image sensor. Based on Shockley-Read-Hall kinetics, the expression of dark current proposed accounts for the electric field enhanced emission factor due to the Poole-Frenkel barrier lowering and phonon-assisted tunneling mechanisms. To that aim, we considered the distribution of the electric field magnitude and metal atoms in the depth of the pixel. Poisson statistics were used to estimate the random distribution of metal atoms in each pixel for a given contamination dose. Then, we performed a Monte-Carlo-based simulation for each pixel to set the number of metal atoms the pixel contained and the enhancement factor each atom underwent, and obtained a histogram of the number of pixels versus dark current for the full sensor. Excellent agreement with the dark current histogram measured on an ion-implanted gold-contaminated imager has been achieved, in particular, for the description of the distribution tails due to the pixel regions in which the contaminant atoms undergo a large electric field. The agreement remains very good when increasing the temperature by 15 °C. We demonstrated that the amplification of the dark current generated for the typical electric fields encountered in the CMOS image sensors, which depends on the nature of the metal contaminant, may become very large at high electric field. The electron and hole emissions and the resulting enhancement factor are described as a function of the trap characteristics, electric field, and temperature.
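The Monte-Carlo procedure described above can be sketched as: draw a Poisson-distributed number of contaminant atoms per pixel, multiply each atom's base emission by a field-dependent enhancement factor, and histogram the per-pixel sums. A log-normal enhancement distribution is used here purely as a stand-in; the paper derives the enhancement from the Poole-Frenkel and phonon-assisted tunneling physics and the pixel's electric-field profile, and the numeric parameters below are illustrative only.

```python
import numpy as np

def dark_current_histogram(n_pixels, mean_atoms, base_rate, rng):
    """Per-pixel dark current from randomly placed contaminant atoms:
    atoms per pixel ~ Poisson(mean_atoms); each atom contributes
    base_rate times a random enhancement factor (assumed log-normal)."""
    atoms = rng.poisson(mean_atoms, n_pixels)
    dark = np.zeros(n_pixels)
    for i in np.flatnonzero(atoms):
        enhancement = rng.lognormal(0.0, 1.0, atoms[i])  # assumed distribution
        dark[i] = base_rate * enhancement.sum()
    return dark

rng = np.random.default_rng(1)
dark = dark_current_histogram(100_000, mean_atoms=2.0, base_rate=11.0, rng=rng)
# A histogram of `dark` shows the long tail produced by the few atoms that
# land in high-field pixel regions (large enhancement factors).
```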

  12. Characterisation of a smartphone image sensor response to direct solar 305nm irradiation at high air masses.

    Science.gov (United States)

    Igoe, D P; Amar, A; Parisi, A V; Turner, J

    2017-06-01

    This research reports, for the first time, the sensitivity, properties and response of a smartphone image sensor used to characterise the photobiologically important direct UVB solar irradiance at 305 nm in clear-sky conditions at high air masses. Solar images taken from autumn to spring were analysed using a custom Python script, written to develop and apply an adaptive threshold to mitigate the effects of both noise and hot-pixel aberrations in the images. The images were taken in an unobstructed area, observing from a solar zenith angle as high as 84° (air mass = 9.6) down to local solar maximum (a solar zenith angle as low as 23°) to fully develop the calibration model, in temperatures that varied from 2 °C to 24 °C. The mean ozone thickness throughout all observations was 281 ± 18 DU (to 2 standard deviations). A Langley plot was used to confirm that atmospheric conditions were constant throughout the observations. The quadratic calibration model developed shows a strong correlation between the red colour channel of the smartphone and the Microtops measurements of the direct-sun 305 nm UV, with a coefficient of determination of 0.998 and very low standard errors. Validation of the model verified the robustness of the method, with an average discrepancy of only 5% between smartphone-derived and Microtops-observed direct solar irradiances at 305 nm. The results demonstrate the effectiveness of using a smartphone image sensor as a means to measure photobiologically important solar UVB radiation. The use of ubiquitous portable technologies, such as smartphones and laptop computers, to perform data collection and analysis of solar UVB observations is an example of how scientific investigations can be performed by citizen scientists, communities and schools. Copyright © 2017 Elsevier B.V. All rights reserved.
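The quadratic calibration step — regressing the Microtops 305 nm irradiance on the smartphone's red-channel signal — can be sketched as an ordinary least-squares polynomial fit. The paired data below is synthetic (the paper's calibration data is not reproduced here), generated from assumed "true" coefficients that the fit should recover.

```python
import numpy as np

def fit_uv_calibration(red_signal, irradiance):
    """Least-squares quadratic calibration I = a*R**2 + b*R + c mapping the
    red-channel signal extracted from the solar image to the direct 305 nm
    irradiance; returns (a, b, c)."""
    return np.polyfit(red_signal, irradiance, deg=2)

# Synthetic, noiseless example with assumed coefficients:
red = np.linspace(100.0, 4000.0, 20)
irr = 2e-9 * red**2 + 1e-5 * red + 0.01
a, b, c = fit_uv_calibration(red, irr)
```

With the fitted (a, b, c), each new image is reduced to its thresholded red-channel signal and mapped through the quadratic to an irradiance estimate.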

  13. The Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS) on the Landsat Data Continuity Mission (LDCM)

    Science.gov (United States)

    Reuter, Dennis; Irons, James; Lunsford, Allen; Montanaro, Matthew; Pellerano, Fernando; Richardson, Cathleen; Smith, Ramsey; Tesfaye, Zelalem; Thome, Kurtis

    2011-06-01

    The Landsat Data Continuity Mission (LDCM), a partnership between the National Aeronautics and Space Administration (NASA) and the Department of Interior (DOI) / United States Geological Survey (USGS), is scheduled for launch in December, 2012. It will be the eighth mission in the Landsat series. The LDCM instrument payload will consist of the Operational Land Imager (OLI), provided by Ball Aerospace and Technology Corporation (BATC) under contract to NASA and the Thermal Infrared Sensor (TIRS), provided by NASA's Goddard Space Flight Center (GSFC). This paper outlines the present development status of the two instruments.

  14. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2012-09-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our

  15. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Science.gov (United States)

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest
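The best-performing normalization ("Method of 0.1% index scaling") stretches each spectral-index image between extreme pixel values so that thresholds learned on one sensor's images transfer to another's. A sketch of that idea as a percentile stretch (the paper's exact formulation and its "tailored percentages" may differ in detail):

```python
import numpy as np

def index_scale(si, tail_pct=0.1):
    """Linearly rescale a spectral-index (SI) image between its lower and
    upper tail_pct percentile values, clipping the extreme tails, so that
    index values from different sensors share a common [0, 1] scale."""
    lo = np.percentile(si, tail_pct)
    hi = np.percentile(si, 100.0 - tail_pct)
    return np.clip((si - lo) / (hi - lo), 0.0, 1.0)
```

After this stretch, a CT threshold of, say, 0.45 on the normalized index means the same thing regardless of which sensor's radiometry produced the raw SI values, which is what allows the fixed tree structure to be reused across sensors and dates.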

  16. A Feasibility Study of Sea Ice Motion and Deformation Measurements Using Multi-Sensor High-Resolution Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Chang-Uk Hyun

    2017-09-01

    Full Text Available Sea ice motion and deformation have generally been measured using low-resolution passive microwave or mid-resolution radar remote sensing datasets at daily (or few-day) intervals to monitor long-term trends over a wide polar area. This feasibility study presents an application of high-resolution optical images from operational satellites, which have become more available in polar regions, to sea ice motion and deformation measurements. The sea ice motion, i.e., the Lagrangian vector, is measured by using a maximum cross-correlation (MCC) technique and multi-temporal high-resolution images acquired on 14–15 August 2014 from multiple spaceborne sensors on board Korea Multi-Purpose Satellites (KOMPSATs) with short acquisition time intervals. The sea ice motion extracted from the six image pairs, with spatial resolutions resampled to 4 m and 15 m, yields vector length errors of 57.7 m root mean square error (RMSE) with −11.4 m bias and 60.7 m RMSE with −13.5 m bias, respectively, compared with buoy location records. The errors at both resolutions indicate more accurate measurements than those from conventional sea ice motion datasets derived from passive microwave and radar data in mixed ice and water surface conditions. Regarding sea ice deformation caused by the interaction of individual ice floes, free-drift patterns of ice floes were delineated from the 4 m spatial resolution images, whereas the deformation was less apparent in the 15 m spatial resolution image pairs due to the greater discretization uncertainty of the coarser pixel size. The results demonstrate that using multi-temporal high-resolution optical satellite images enables precise image block matching in the melting season; this approach could therefore be used to expand sea ice motion and deformation datasets, with the advantage of frequent image acquisition capability in multiple areas by means of many operational satellites.
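The maximum cross-correlation (MCC) matching step can be sketched as a brute-force search: a template patch from the first image is slid over a search window from the second image, and the offset with the highest normalized correlation gives the ice displacement. The function below is a minimal illustration, not the study's implementation:

```python
import numpy as np

def mcc_displacement(patch, search, step=1):
    """Maximum cross-correlation (MCC) matching: slide a template patch over
    a larger search window and return the (dy, dx) offset maximizing the
    normalized cross-correlation, plus the correlation value itself."""
    ph, pw = patch.shape
    sh, sw = search.shape
    t = patch - patch.mean()
    tn = np.sqrt((t * t).sum())
    best, best_off = -2.0, (0, 0)
    for dy in range(0, sh - ph + 1, step):
        for dx in range(0, sw - pw + 1, step):
            w = search[dy:dy + ph, dx:dx + pw]
            wc = w - w.mean()
            denom = tn * np.sqrt((wc * wc).sum())
            if denom == 0:
                continue  # flat window: correlation undefined
            r = (t * wc).sum() / denom
            if r > best:
                best, best_off = r, (dy, dx)
    return best_off, best
```

Multiplying the pixel offset by the ground sampling distance (4 m or 15 m here) and dividing by the acquisition time interval converts the match into an ice-drift vector.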

  17. Folk Dance Pattern Recognition Over Depth Images Acquired via Kinect Sensor

    Science.gov (United States)

    Protopapadakis, E.; Grammatikopoulou, A.; Doulamis, A.; Grammalidis, N.

    2017-02-01

    The possibility of accurate recognition of folk dance patterns is investigated in this paper. System inputs are raw skeleton data provided by a low-cost sensor. In particular, data were obtained by monitoring three professional dancers using a Kinect II sensor. A set of six traditional Greek dances (without their variations) constitutes the investigated data. A two-step process was adopted. First, the most descriptive skeleton data were selected using a combination of density-based and sparse modelling algorithms. Then, the representative data served as the training set for a variety of classifiers.

  18. FOLK DANCE PATTERN RECOGNITION OVER DEPTH IMAGES ACQUIRED VIA KINECT SENSOR

    Directory of Open Access Journals (Sweden)

    E. Protopapadakis

    2017-02-01

    Full Text Available The possibility of accurate recognition of folk dance patterns is investigated in this paper. System inputs are raw skeleton data provided by a low-cost sensor. In particular, data were obtained by monitoring three professional dancers using a Kinect II sensor. A set of six traditional Greek dances (without their variations) constitutes the investigated data. A two-step process was adopted. First, the most descriptive skeleton data were selected using a combination of density-based and sparse modelling algorithms. Then, the representative data served as the training set for a variety of classifiers.

  19. Multi-Image and Multi-Sensor Change Detection for Long-Term Monitoring of Arid Environments With Landsat Series

    Directory of Open Access Journals (Sweden)

    Emanuele Mandanici

    2015-10-01

    Full Text Available An automated procedure is proposed for monitoring, by multispectral satellite imagery, the expansion of cultivation between 1987 and 2013 in the arid environment of the Fayyum Oasis (Egypt), which is subject to land reclamation. A change detection procedure was applied to the four years investigated (1987, 1998, 2003 and 2013). This long-term analysis is based on images from the Landsat series, adopting a classification strategy relying on vegetation index computations. In particular: (a) the consequences of the radiometric differences of three Landsat sensors on the vegetation index values were analyzed using data simulated from a hyperspectral Hyperion image; (b) the problems resulting from harvesting cycles were minimized using five images per year, after a preliminary analysis of the effects of the number of processed images; (c) an accuracy assessment was carried out on the 2003 and 2013 maps using high resolution images for a portion of the investigated area, with an estimated overall accuracy of 91% for the change detection. The method is implemented in a batch procedure and can be applied to other similar environmental contexts, supporting analyses for the sustainable development and exploitation of soil and water resources.
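A vegetation-index classification of the kind described reduces each Landsat scene to an NDVI map and a thresholded vegetation mask; differencing the masks of two years gives a change layer. The abstract does not state which index or threshold was used, so NDVI and the 0.3 cut-off below are assumptions for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance
    bands; the small epsilon avoids division by zero over dark pixels."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)

def cultivated_mask(red, nir, threshold=0.3):
    """Binary vegetation map; XOR-ing two such maps from different years
    yields a simple change-detection layer (threshold is illustrative,
    not the value used in the study)."""
    return ndvi(red, nir) > threshold
```

Per point (a) in the abstract, the threshold would have to be checked against each sensor's band responses, since ETM+, TM and OLI-class instruments yield slightly different index values for the same surface.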

  20. High-Frequency Fiber-Optic Ultrasonic Sensor Using Air Micro-Bubble for Imaging of Seismic Physical Models

    Directory of Open Access Journals (Sweden)

    Tingting Gang

    2016-12-01

    Full Text Available A micro-fiber-optic Fabry-Perot interferometer (FPI) is proposed and demonstrated experimentally for ultrasonic imaging of seismic physical models. The device consists of a micro-bubble followed by the end of a single-mode fiber (SMF). The micro-structure is formed by the discharging operation on a short segment of hollow-core fiber (HCF) that is spliced to the SMF. This micro FPI is sensitive to ultrasonic waves (UWs), especially to the high-frequency (up to 10 MHz) UW, thanks to its ultra-thin cavity wall and micro-diameter. A side-band filter technology is employed for the UW interrogation, and then the high signal-to-noise ratio (SNR) UW signal is achieved. Eventually the sensor is used for lateral imaging of the physical model by scanning UW detection and two-dimensional signal reconstruction.

  1. Sub-nano tesla magnetic imaging based on room-temperature magnetic flux sensors with vibrating sample magnetometry

    Science.gov (United States)

    Adachi, Yoshiaki; Oyama, Daisuke

    2017-05-01

    We developed a two-dimensional imaging method for weak magnetic charge distribution using a commercially available magnetic impedance sensor whose magnetic field resolution is 10 pT/Hz1/2 at 10 Hz. When we applied the vibrating sample magnetometry, giving a minute mechanical vibration to the sample and detecting magnetic signals modulated by the vibration frequency, the effects of 1/f noise and the environmental low-frequency band noise were suppressed, and a weak magnetic charge distribution was obtained without magnetic shielding. Furthermore, improvement in the spatial resolution was also expected when the signals were demodulated at the second harmonic frequency of the vibration. In this paper, a preliminary magnetic charge imaging using the vibrating sample magnetometry and its results are demonstrated.
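The vibrating-sample scheme shifts the magnetic signal to the vibration frequency (or its second harmonic), where it can be recovered by lock-in style demodulation, away from 1/f and low-frequency environmental noise. The instrument's actual electronics are not described in the abstract; the function below is a generic software sketch of that demodulation:

```python
import numpy as np

def demodulate(signal, fs, f_vib, harmonic=1):
    """Lock-in style demodulation: mix the sensor signal with quadrature
    references at harmonic * f_vib and low-pass by averaging, recovering
    the amplitude modulated at that frequency."""
    n = len(signal)
    t = np.arange(n) / fs
    ref_i = np.cos(2 * np.pi * harmonic * f_vib * t)
    ref_q = np.sin(2 * np.pi * harmonic * f_vib * t)
    i = 2 * np.mean(signal * ref_i)   # in-phase component
    q = 2 * np.mean(signal * ref_q)   # quadrature component
    return np.hypot(i, q)             # modulation amplitude
```

Setting `harmonic=2` extracts the second-harmonic component, which, per the abstract, is expected to improve the spatial resolution of the charge imaging.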

  2. High-Frequency Fiber-Optic Ultrasonic Sensor Using Air Micro-Bubble for Imaging of Seismic Physical Models.

    Science.gov (United States)

    Gang, Tingting; Hu, Manli; Rong, Qiangzhou; Qiao, Xueguang; Liang, Lei; Liu, Nan; Tong, Rongxin; Liu, Xiaobo; Bian, Ce

    2016-12-14

    A micro-fiber-optic Fabry-Perot interferometer (FPI) is proposed and demonstrated experimentally for ultrasonic imaging of seismic physical models. The device consists of a micro-bubble followed by the end of a single-mode fiber (SMF). The micro-structure is formed by the discharging operation on a short segment of hollow-core fiber (HCF) that is spliced to the SMF. This micro FPI is sensitive to ultrasonic waves (UWs), especially to the high-frequency (up to 10 MHz) UW, thanks to its ultra-thin cavity wall and micro-diameter. A side-band filter technology is employed for the UW interrogation, and then the high signal-to-noise ratio (SNR) UW signal is achieved. Eventually the sensor is used for lateral imaging of the physical model by scanning UW detection and two-dimensional signal reconstruction.

  3. High-Frequency Fiber-Optic Ultrasonic Sensor Using Air Micro-Bubble for Imaging of Seismic Physical Models

    Science.gov (United States)

    Gang, Tingting; Hu, Manli; Rong, Qiangzhou; Qiao, Xueguang; Liang, Lei; Liu, Nan; Tong, Rongxin; Liu, Xiaobo; Bian, Ce

    2016-01-01

    A micro-fiber-optic Fabry-Perot interferometer (FPI) is proposed and demonstrated experimentally for ultrasonic imaging of seismic physical models. The device consists of a micro-bubble followed by the end of a single-mode fiber (SMF). The micro-structure is formed by the discharging operation on a short segment of hollow-core fiber (HCF) that is spliced to the SMF. This micro FPI is sensitive to ultrasonic waves (UWs), especially to the high-frequency (up to 10 MHz) UW, thanks to its ultra-thin cavity wall and micro-diameter. A side-band filter technology is employed for the UW interrogation, and then the high signal-to-noise ratio (SNR) UW signal is achieved. Eventually the sensor is used for lateral imaging of the physical model by scanning UW detection and two-dimensional signal reconstruction. PMID:27983639

  4. Learning from concurrent Lightning Imaging Sensor and Lightning Mapping Array observations in preparation for the MTG-LI mission

    Science.gov (United States)

    Defer, Eric; Bovalo, Christophe; Coquillat, Sylvain; Pinty, Jean-Pierre; Farges, Thomas; Krehbiel, Paul; Rison, William

    2016-04-01

    The upcoming decade will see the deployment and operation of French, European and American space-based missions dedicated to the detection and characterization of lightning activity on Earth. For instance, the Tool for the Analysis of Radiation from lightNIng and Sprites (TARANIS) mission, with an expected launch in 2018, is a CNES mission dedicated to the study of impulsive energy transfers between the atmosphere of the Earth and the space environment. It will carry a package of Micro Cameras and Photometers (MCP) to detect and locate lightning flashes and triggered Transient Luminous Events (TLEs). At the European level, the Meteosat Third Generation Imager (MTG-I) satellites will carry from 2019 the Lightning Imager (LI), aimed at detecting and locating lightning activity over almost the full Earth disk usually observed with the Meteosat geostationary infrared/visible imagers. The American community plans to operate a similar instrument on the GOES-R mission, with effective operation expected in early 2016. In addition, NASA will install in 2016 on the International Space Station the spare version of the Lightning Imaging Sensor (LIS), which has proved its capability to optically detect tropical lightning activity from the Tropical Rainfall Measuring Mission (TRMM) spacecraft. We will present concurrent observations recorded by the optical space-borne LIS and the ground-based Very High Frequency (VHF) Lightning Mapping Array (LMA) for different types of lightning flashes. The properties of the cloud environment will also be considered in the analysis thanks to coincident observations from the different TRMM cloud sensors. The characteristics of the optical signal will be discussed according to the nature of the parent flash components and the cloud properties.
This study should provide some insights not only on the expected optical signal that will be recorded by LI, but also on the definition of the validation strategy of LI, and

  5. Real time polarization sensor image processing on an embedded FPGA/multi-core DSP system

    Science.gov (United States)

    Bednara, Marcus; Chuchacz-Kowalczyk, Katarzyna

    2015-05-01

    Most embedded image processing SoCs available on the market are highly optimized for typical consumer applications like video encoding/decoding, motion estimation or several image enhancement processes as used in DSLR or digital video cameras. For non-consumer applications, on the other hand, optimized embedded hardware is rarely available, so often PC based image processing systems are used. We show how a real time capable image processing system for a non-consumer application - namely polarization image data processing - can be efficiently implemented on an FPGA and multi-core DSP based embedded hardware platform.

  6. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for use in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor that measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.
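A sharpness factor of the kind used to reject blurry frames is commonly computed as the variance of a discrete Laplacian: sharper images have stronger local intensity transitions and therefore a higher score. The paper's exact index is not specified in the abstract, so the metric below is a generic stand-in:

```python
import numpy as np

def sharpness(gray):
    """Blur metric in the spirit of the paper's 'sharpness factor':
    variance of a 4-neighbour discrete Laplacian over the interior
    pixels; sharper frames score higher, defocused frames near zero."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()
```

In a frame-selection loop, one would score each candidate frame from the video and keep those above a threshold calibrated on known-sharp branch images.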

  7. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for use in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor that measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  8. Core-shell diode array for high performance particle detectors and imaging sensors: status of the development

    Science.gov (United States)

    Jia, G.; Hübner, U.; Dellith, J.; Dellith, A.; Stolz, R.; Plentz, J.; Andrä, G.

    2017-02-01

    We propose a novel high performance radiation detector and imaging sensor based on a ground-breaking core-shell diode array design. This core-shell diode array is expected to simultaneously offer superior performance with respect to ultrahigh radiation hardness, high sensitivity, low power consumption, fast signal response and high spatial resolution. These properties are highly desired in fundamental research such as high energy physics (HEP) at CERN, astronomy and future x-ray based protein crystallography at x-ray free electron lasers (XFEL). Detectors of this kind will provide solutions for fundamental research fields currently limited by instrumentation. In this work, we report our progress on the development of the core-shell diode array for application as a high performance imaging sensor and particle detector. We mainly present our results on the preparation of high-aspect-ratio regular silicon rods by a metal-assisted wet chemical etching technique. Channels nearly 200 μm deep and 2 μm wide, with a high aspect ratio, have been etched into silicon. This result will open many applications not only for the core-shell diode array, but also for high-density integration of 3D microelectronic devices.

  9. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

    Full Text Available This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk to +Tclk. A linear CMOS image sensor pixel array was designed in a 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. Post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, an improvement of 14.04 dB and 2.4 bit over the SNDR and ENOB obtained without calibration.
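A two-step TDC splits the time measurement into a coarse count of full clock periods and a fine interpolation of the remaining residue, trading one long conversion for two short ones. The behavioral model below is an illustration of that split, not the paper's circuit; the 4-bit fine stage is an assumed parameter:

```python
def two_step_tdc(t, t_clk, fine_bits=4):
    """Two-step time-to-digital conversion: a coarse counter of full clock
    periods, then a fine quantization of the residue in t_clk / 2**fine_bits
    steps. Returns (coarse code, fine code, combined output code)."""
    coarse = int(t // t_clk)              # coarse phase: whole clock periods
    residue = t - coarse * t_clk          # time left over for the fine phase
    lsb = t_clk / 2 ** fine_bits          # fine-phase resolution
    fine = int(residue // lsb)
    code = coarse * 2 ** fine_bits + fine # concatenated digital output
    return coarse, fine, code
```

The calibration scheme in the paper addresses the case where delay skew makes the coarse and fine results disagree at a clock boundary (an off-by-one in `coarse` of up to ±1 Tclk), which this idealized model does not exhibit.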

  10. Wavefront sensorless adaptive optics versus sensor-based adaptive optics for in vivo fluorescence retinal imaging (Conference Presentation)

    Science.gov (United States)

    Wahl, Daniel J.; Zhang, Pengfei; Jian, Yifan; Bonora, Stefano; Sarunic, Marinko V.; Zawadzki, Robert J.

    2017-02-01

    Adaptive optics (AO) is essential for achieving diffraction-limited resolution in large numerical aperture (NA) in-vivo retinal imaging in small animals. Cellular-resolution in-vivo imaging of fluorescently labeled cells is highly desirable for studying pathophysiology in animal models of retinal diseases in pre-clinical vision research. Currently, wavefront sensor-based (WFS-based) AO is widely used for retinal imaging and has demonstrated great success. However, its performance can be limited by several factors, including common path errors, wavefront reconstruction errors and an ill-defined reference plane on the retina. Wavefront sensorless (WFS-less) AO has the advantage of avoiding these issues, at the cost of algorithmic execution time. We have investigated WFS-less AO on a fluorescence scanning laser ophthalmoscopy (fSLO) system that was originally designed for WFS-based AO. The WFS-based AO uses a Shack-Hartmann WFS and a continuous-surface deformable mirror in a closed-loop control system to measure and correct aberrations induced by the mouse eye. The WFS-less AO performs an open-loop modal optimization with an image quality metric. After WFS-less AO aberration correction, the WFS was used as a check on the WFS-less AO operation. We can easily switch between WFS-based and WFS-less control of the deformable mirror multiple times within an imaging session for the same mouse. This allows a direct comparison between these two types of AO correction for fSLO. Our results demonstrate volumetric AO-fSLO imaging of mouse retinal cells labeled with GFP. Most significantly, we have analyzed and compared the aberration correction results for WFS-based and WFS-less AO imaging.
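Sensorless modal optimization is commonly done mode by mode: apply a set of trial amplitudes for one mirror mode, keep the amplitude that maximizes the image-quality metric, then move to the next mode. The abstract does not detail the paper's algorithm or metric, so the grid search below is one common scheme, shown schematically:

```python
import numpy as np

def wfs_less_correct(metric, n_modes=5, amps=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Sensorless AO as sequential modal optimization: for each mirror mode,
    probe trial amplitudes, keep the one maximizing the image-quality
    metric, then proceed to the next mode (one pass shown)."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        scores = []
        for a in amps:
            trial = coeffs.copy()
            trial[m] = a          # perturb only the current mode
            scores.append(metric(trial))
        coeffs[m] = amps[int(np.argmax(scores))]
    return coeffs
```

Each call to `metric` costs one image acquisition, which is the "algorithmic execution time" penalty the abstract mentions relative to a single closed-loop WFS measurement.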

  11. a Semi-Rigorous Sensor Model for Precision Geometric Processing of Mini-Rf Bistatic Radar Images of the Moon

    Science.gov (United States)

    Kirk, R. L.; Barrett, J. M.; Wahl, D. E.; Erteza, I.; Jackowatz, C. V.; Yocky, D. A.; Turner, S.; Bussey, D. B. J.; Paterson, G. W.

    2016-06-01

    The spaceborne synthetic aperture radar (SAR) instruments known as Mini-RF were designed to image shadowed areas of the lunar poles and assay the presence of ice deposits by quantitative polarimetry. We have developed radargrammetric processing techniques to enhance the value of these observations by removing spacecraft ephemeris errors and distortions caused by topographic parallax so the polarimetry can be compared with other data sets. Here we report on the extension of this capability from monostatic imaging (signal transmitted and received on the same spacecraft) to bistatic (transmission from Earth and reception on the spacecraft) which provides a unique opportunity to measure radar scattering at nonzero phase angles. In either case our radargrammetric sensor models first reconstruct the observed range and Doppler frequency from recorded image coordinates, then determine the ground location with a corrected trajectory on a more detailed topographic surface. The essential difference for bistatic radar is that range and Doppler shift depend on the transmitter as well as receiver trajectory. Incidental differences include the preparation of the images in a different (map projected) coordinate system and use of "squint" (i.e., imaging at nonzero rather than zero Doppler shift) to achieve the desired phase angle. Our approach to the problem is to reconstruct the time-of-observation, range, and Doppler shift of the image pixel by pixel in terms of rigorous geometric optics, then fit these functions with low-order polynomials accurate to a small fraction of a pixel. Range and Doppler estimated by using these polynomials can then be georeferenced rigorously on a new surface with an updated trajectory. This "semi-rigorous" approach (based on rigorous physics but involving fitting functions) speeds the calculation and avoids the need to manage both the original and adjusted trajectory data. We demonstrate the improvement in registration of the bistatic images for
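The "semi-rigorous" speed-up described above amounts to evaluating the rigorous geometry at a sample of image coordinates and then replacing it with a low-order polynomial accurate to a small fraction of a pixel. The snippet below illustrates the fitting step only; the trajectory-derived range values are synthetic stand-ins, not Mini-RF data:

```python
import numpy as np

# Semi-rigorous idea: compute range rigorously at a grid of image lines,
# then replace the expensive geometric model with a cheap polynomial fit.
lines = np.linspace(0.0, 1.0, 201)                      # normalized image line
rng_rigorous = 1500.0 + 40.0 * lines - 3.0 * lines**2   # km, synthetic stand-in

coeff = np.polyfit(lines, rng_rigorous, 3)              # low-order fit
rng_fast = np.polyval(coeff, lines)                     # cheap per-pixel eval
max_err = np.abs(rng_fast - rng_rigorous).max()         # fit residual, km
```

The same fit-and-evaluate pattern would be applied to the Doppler function, with the bistatic case summing transmitter-to-ground and ground-to-receiver contributions before fitting.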

  12. The EO-1 hyperion and advanced land imager sensors for use in tundra classification studies within the Upper Kuparuk River Basin, Alaska

    Science.gov (United States)

    Hall-Brown, Mary

    The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price: high resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise in the resolution versus cost trade-off. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), the Landsat Thematic Mapper (TM) and the Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies. This research found that while the Hyperion sensor produced classification accuracies that were

  13. A Survey of Visual Sensor Networks

    National Research Council Canada - National Science Library

    Soro, Stanislava; Heinzelman, Wendi

    2009-01-01

    ..., recent developments in sensor networking and distributed processing have encouraged the use of image sensors in these networks, which has resulted in a new ubiquitous paradigm--visual sensor networks. Visual sensor networks (VSNs) consist of tiny visual sensor nodes called camera nodes, which integrate the image sensor, embedded processor, and wireless tra...

  14. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    Science.gov (United States)

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon-based solid-state optical imager, such as CIS or Charge-Coupled Devices (CCD). This work can also be used when designing an optical imaging instrument, to anticipate the dark current increase or to choose a mitigation technique.

  15. Features extraction of flotation froth images and BP neural network soft-sensor model of concentrate grade optimized by shuffled cuckoo searching algorithm.

    Science.gov (United States)

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting target for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.
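The front end of such a soft-sensor pipeline turns each froth image into a feature vector and then reduces its dimension before the network is trained. The sketch below uses toy features and plain PCA as a simpler stand-in for the paper's richer HSI/GLCM/shape features and its Isomap reduction; all names and feature choices are illustrative:

```python
import numpy as np

def froth_features(rgb):
    """Toy feature vector for a froth image: per-channel means plus a
    horizontal gray-level contrast term (the paper uses richer HSI, GLCM
    and shape features; this is only a schematic stand-in)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gray = rgb.mean(axis=2)
    contrast = np.abs(np.diff(gray, axis=1)).mean()
    return np.concatenate([means, [contrast]])

def pca_reduce(X, k):
    """Linear dimensionality reduction via PCA, a simpler stand-in for the
    isometric mapping (Isomap) step used in the paper."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

The reduced vectors would then feed a BP (multilayer perceptron) regressor whose weights are tuned by the shuffled cuckoo search, a step omitted here.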

  16. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2014-01-01

    Full Text Available To meet the forecasting target for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.

  17. Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring

    Science.gov (United States)

    Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL; Saveliev, Alexei V [Chicago, IL

    2011-03-15

    A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end, and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. Light received by the light receiving end of the imaging fiber optic bundle is focused by a wall disposed between the light receiving end of the fiber optic bundle and a light source, which wall forms a pinhole opening aligned with the light receiving end.

  18. Integrated semiconductor optical sensors for chronic, minimally-invasive imaging of brain function.

    Science.gov (United States)

    Lee, Thomas T; Levi, Ofer; Cang, Jianhua; Kaneko, Megumi; Stryker, Michael P; Smith, Stephen J; Shenoy, Krishna V; Harris, James S

    2006-01-01

    Intrinsic optical signal (IOS) imaging is a widely accepted technique for imaging brain activity. We propose an integrated device consisting of interleaved arrays of gallium arsenide (GaAs) based semiconductor light sources and detectors operating at telecommunications wavelengths in the near-infrared. Such a device will allow for long-term, minimally invasive monitoring of neural activity in freely behaving subjects, and will enable the use of structured illumination patterns to improve system performance. In this work we describe the proposed system and show that near-infrared IOS imaging at wavelengths compatible with semiconductor devices can produce physiologically significant images in mice, even through skull.

  19. Semisynthetic fluorescent pH sensors for imaging exocytosis and endocytosis.

    Science.gov (United States)

    Martineau, Magalie; Somasundaram, Agila; Grimm, Jonathan B; Gruber, Todd D; Choquet, Daniel; Taraska, Justin W; Lavis, Luke D; Perrais, David

    2017-11-10

    The GFP-based superecliptic pHluorin (SEP) enables detection of exocytosis and endocytosis, but its performance has not been duplicated in red fluorescent protein scaffolds. Here we describe "semisynthetic" pH-sensitive protein conjugates with the organic fluorophores carbofluorescein and Virginia Orange that match the properties of SEP. Conjugation to genetically encoded self-labeling tags or antibodies allows visualization of both exocytosis and endocytosis, constituting new bright sensors for these key steps of synaptic transmission.

  20. Semisynthetic fluorescent pH sensors for imaging exocytosis and endocytosis

    OpenAIRE

    Martineau, Magalie; Somasundaram, Agila; Grimm, Jonathan B.; Gruber, Todd D.; Choquet, Daniel; Taraska, Justin W.; Lavis, Luke D.; Perrais, David

    2017-01-01

    The GFP-based superecliptic pHluorin (SEP) enables detection of exocytosis and endocytosis, but its performance has not been duplicated in red fluorescent protein scaffolds. Here we describe “semisynthetic” pH-sensitive protein conjugates with organic fluorophores, carbofluorescein, and Virginia Orange that match the properties of SEP. Conjugation to genetically encoded self-labeling tags or antibodies allows visualization of both exocytosis and endocytosis, constituting new bright sensors fo...