WorldWideScience

Sample records for range image sensor

  1. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud

    2013-01-01

    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable

  2. Introduction to sensors for ranging and imaging

    CERN Document Server

    Brooker, Graham

    2009-01-01

This comprehensive text-reference provides a solid background in active sensing technology. It is concerned with active sensing, starting with the basics of time-of-flight sensors (operational principles, components), and going through the derivation of the radar range equation and the detection of echo signals, both fundamental to the understanding of radar, sonar and lidar imaging. Several chapters cover signal propagation of both electromagnetic and acoustic energy, target characteristics, stealth, and clutter. The remainder of the book introduces the range measurement process, active ima

  3. Characterization of modulated time-of-flight range image sensors

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2009-01-01

A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a picosecond laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
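
The phase-to-range conversion described above is commonly implemented with four synchronous samples per modulation period; the following is a minimal sketch under that assumption (the paper characterizes sensors rather than prescribing a particular sampling scheme):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_range(a0, a1, a2, a3, f_mod):
    """Estimate range from four samples of the correlation waveform
    taken at 0, 90, 180 and 270 degrees of the modulation period.
    The phase shift of the modulation envelope is recovered with
    atan2, then scaled by the modulation wavelength (round trip)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# e.g. a target at 2.5 m with 20 MHz modulation produces a phase
# shift of 4*pi*f*d/c, roughly 2.1 rad.
```

The `atan2` form cancels both the background offset and the modulation amplitude, which is why each pixel can be solved independently of scene reflectivity.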

  4. Increasing Linear Dynamic Range of a CMOS Image Sensor

    Science.gov (United States)

    Pain, Bedabrata

    2007-01-01

A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal-oxide-semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing the dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional, larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would undermine the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain and increases the pixel electron-handling capacity, thereby extending the dynamic range. By use of an array of comparators at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon
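
The comparator decision and the calibrated dual-gain readout described above can be sketched as a per-pixel mapping from raw counts back to electrons. The threshold and calibration constants below are hypothetical placeholders, not values from the design:

```python
def select_gain(v_sample, v_ref=0.5):
    """Column comparator decision: switch the pixel to low-gain mode
    once the sampled photocharge voltage exceeds the reference
    voltage (both values are illustrative)."""
    return v_sample > v_ref  # True means low gain

def electrons(counts, low_gain, k_high=0.4, k_low=4.0):
    """Map raw ADC counts back to photoelectrons.  k_high and k_low
    are hypothetical calibration constants (electrons per count); the
    low-gain path converts roughly 10x more electrons per count
    because the added capacitance C_L lowers the conversion gain."""
    return counts * (k_low if low_gain else k_high)
```

Because both paths are calibrated to the same electron scale, the reconstructed signal stays linear across the mode switch, which is the property the abstract emphasizes.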

  5. Contactless respiratory monitoring system for magnetic resonance imaging applications using a laser range sensor

    Directory of Open Access Journals (Sweden)

    Krug Johannes W.

    2016-09-01

During a magnetic resonance imaging (MRI) exam, a respiratory signal can be required for different purposes, e.g. for patient monitoring, motion compensation or for research studies such as in functional MRI. In addition, respiratory information can be used as biofeedback for the patient in order to control breath holds or shallow breathing. To reduce patient preparation time or distortions of the MR imaging system, we propose the use of a contactless approach for gathering information about respiratory activity. An experimental setup based on a commercially available laser range sensor was used to detect respiratory induced motion of the chest or abdomen. This setup was tested using a motion phantom and different human subjects in an MRI scanner. A nasal airflow sensor served as a reference. For both the phantom and the human subjects, the motion frequency was precisely measured. These results show that a low-cost, contactless, laser-based approach can be used to obtain information about respiratory motion during an MRI exam.

  6. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation- type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
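
The triangulation geometry described above reduces to a one-line formula: with the laser offset a lateral distance d and aimed parallel to the optical axis, a spot seen at angle a lies at range d/tan(a). A minimal sketch (the pixel pitch and focal length used to get the angle are illustrative):

```python
import math

def pixel_to_angle(x_pix, pitch, focal_len):
    """Off-axis angle of the laser-spot centroid located x_pix pixels
    from the optical axis, for a pinhole model with the given pixel
    pitch and focal length (same length units)."""
    return math.atan(x_pix * pitch / focal_len)

def triangulation_range(d, alpha):
    """Range to the spot for a laser mounted at lateral offset d and
    aimed parallel to the camera's optical axis; alpha is the angle
    between the optical axis and the line of sight to the centroid."""
    return d / math.tan(alpha)
```

Note the baseline trade-off implied by the formula: a larger offset d gives better range resolution but shifts near targets out of the field of view.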

  7. High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors

    Science.gov (United States)

    Mattioli Della Rocca, Francescopaolo

    2018-01-01

    This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
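
The multi-exposure combination described above can be sketched, per pixel, as choosing the longest exposure that has not saturated and normalizing by exposure time. The exposure ratios below are assumptions for illustration, not the chip's actual timings:

```python
def hdr_combine(short, mid, long_, t_short=1.0, t_mid=8.0, t_long=64.0,
                full=15):
    """Merge three photon counts (each the in-pixel sum of 15 one-bit
    field images, so values 0..15) into one exposure-normalized
    intensity, preferring the longest exposure that has not
    saturated.  Exposure times are hypothetical ratios."""
    for counts, t in ((long_, t_long), (mid, t_mid), (short, t_short)):
        if counts < full:
            return counts / t
    return short / t_short  # all exposures saturated: best available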

  8. Image Sensor

    OpenAIRE

    Jerram, Paul; Stefanov, Konstantin

    2017-01-01

An image sensor of the type for providing charge multiplication by impact ionisation has a plurality of multiplication elements. Each element is arranged to receive charge from photosensitive elements of an image area and each element comprises a sequence of electrodes to move charge along a transport path. Each of the electrodes has an edge defining a boundary with a first electrode, a maximum width across the charge transport path and a leading edge that defines a boundary with a second elect...

  9. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao

    2016-11-25

For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes a long DoF and image quality invariance across various object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.

  10. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics

    Directory of Open Access Journals (Sweden)

    Sheng-Hsun Hsieh

    2016-11-01

For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known “extended DoF” (EDoF) technique, or “wavefront coding,” by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes a long DoF and image quality invariance across various object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.

  11. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    Directory of Open Access Journals (Sweden)

    M. Hess

    2014-06-01

An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, an area largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control by a quantitative assessment of 3D imaging sensors. It will enable users to specify precisely what spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, and a possible winner will emerge.

  12. Simplified wide dynamic range CMOS image sensor with 3t APS reset-drain actuation

    OpenAIRE

    Carlos Augusto de Moraes Cruz

    2014-01-01

An image sensor is an array of small photosensitive cells called pixel sensors. A pixel, from picture (pix) element (el), is the smallest portion of an image. The pixel sensor is thus the smallest cell of an image sensor, capable of detecting a single point of the image. This point is then used to reconstruct a complete image frame. CMOS image sensors are currently widely used both in professional cameras and in mobile devices in general, such as cell...

  13. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is automatically adjusted by switching different capacitors onto the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk to +Tclk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4 bit improvement over the uncalibrated SNDR and ENOB.
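
The coarse/fine split of a two-step TDC can be sketched as a whole-period count followed by sub-clock interpolation of the residue. The clock period and fine resolution below are illustrative, and the paper's delay-skew calibration step is omitted:

```python
def two_step_tdc(t_event, t_clk=1.0, fine_bits=4):
    """Two-step time-to-digital conversion: a coarse count of whole
    clock periods, then fine quantization of the residue into
    2**fine_bits sub-clock bins.  Returns (coarse code, fine code,
    reconstructed time in the same units as t_clk)."""
    coarse = int(t_event // t_clk)                   # coarse phase
    residue = t_event - coarse * t_clk               # sub-clock remainder
    fine = int(residue / t_clk * (1 << fine_bits))   # fine phase
    return coarse, fine, (coarse + fine / (1 << fine_bits)) * t_clk
```

The two-step structure is what raises the conversion rate: the coarse counter runs at the clock, so only the last fraction of a period needs the slower fine interpolation.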

  14. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Directory of Open Access Journals (Sweden)

    Joe-Air Jiang

    2008-03-01

With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.

  15. Focus on image sensors

    NARCIS (Netherlands)

    Jos Gunsing; Daniël Telgen; Johan van Althuis; Jaap van de Loosdrecht; Mark Stappers; Peter Klijn

    2013-01-01

    Robots need sensors to operate properly. Using a single image sensor, various aspects of a robot operating in its environment can be measured or monitored. Over the past few years, image sensors have improved a lot: frame rate and resolution have increased, while prices have fallen. As a result,

  16. Nanophotonic Image Sensors.

    Science.gov (United States)

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Large area CMOS image sensors

    International Nuclear Information System (INIS)

    Turchetta, R; Guerrini, N; Sedgwick, I

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large format, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum as well as charged particles. This requirement calls for modifications to the substrate to be introduced to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  18. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on coded aperture technique. This scheme can help us obtain HDR images which have extended depth of field. We adopt Sparse coding algorithm to design coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. With the guide of the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use some existing algorithms to fuse and display a HDR image by those LDR images. We build an optical simulation model and get some simulation images to verify the novel system.
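
A generic way to fuse the low dynamic range exposures into an HDR value is a mid-tone-weighted average of exposure-normalized pixel values. This is a standard multi-exposure sketch under simple assumptions (linear response, values in [0, 1]), not the paper's coded-aperture reconstruction pipeline:

```python
def fuse_hdr(samples, exposure_times):
    """Per-pixel fusion of LDR samples (normalized to [0, 1]) taken
    at different exposure times into a relative radiance value, using
    a hat-shaped weight that trusts mid-tones most and discounts
    near-black and near-saturated samples."""
    num = den = 0.0
    for z, t in zip(samples, exposure_times):
        w = max(1e-6, 1.0 - abs(2.0 * z - 1.0))  # down-weight extremes
        num += w * z / t                          # exposure-normalized
        den += w
    return num / den
```

Dividing each sample by its exposure time puts all frames on a common radiance scale, so the weighted average extends the dynamic range beyond any single exposure.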

  19. Optimization of CMOS image sensor utilizing variable temporal multisampling partial transfer technique to achieve full-frame high dynamic range with superior low light and stop motion capability

    Science.gov (United States)

    Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay

    2018-03-01

    Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.

  20. Long range image enhancement

    CSIR Research Space (South Africa)

    Duvenhage, B

    2015-11-01

…the surveillance system performance. This paper discusses an image processing method that tracks the behaviour of the PSF and then de-warps the image to reduce the disruptive effects of turbulence. Optical flow, an average image filter and a simple unsharp mask...

  1. Photon-counting image sensors

    CERN Document Server

    Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo

    2017-01-01

    The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.

  2. Precipitable water and surface humidity over global oceans from special sensor microwave imager and European Center for Medium Range Weather Forecasts

    Science.gov (United States)

    Liu, W. T.; Tang, Wenqing; Wentz, Frank J.

    1992-01-01

Global fields of precipitable water W from the Special Sensor Microwave/Imager (SSM/I) were compared with those from the European Center for Medium Range Weather Forecasts (ECMWF) model. They agree over most ocean areas; both data sets capture the two annual cycles examined and the interannual anomalies during an ENSO episode. They show significant differences in the dry air masses over the eastern tropical-subtropical oceans, particularly in the Southern Hemisphere. In these regions, comparisons with radiosonde data indicate that overestimation by the ECMWF model accounts for a large part of the differences. As a check on the W differences, surface-level specific humidity Q derived from W, using a statistical relation, was compared with Q from the ECMWF model. The differences in Q were found to be consistent with the differences in W, indirectly validating the Q-W relation. In both W and Q, the SSM/I was able to discern clearly the equatorial extension of the tongues of dry air in the eastern tropical ocean, while both ECMWF and climatological fields have reduced spatial gradients and weaker intensity.

  3. CMOS sensors for atmospheric imaging

    Science.gov (United States)

    Pratlong, Jérôme; Burt, David; Jerram, Paul; Mayer, Frédéric; Walker, Andrew; Simpson, Robert; Johnson, Steven; Hubbard, Wendy

    2017-09-01

    Recent European atmospheric imaging missions have seen a move towards the use of CMOS sensors for the visible and NIR parts of the spectrum. These applications have particular challenges that are completely different to those that have driven the development of commercial sensors for applications such as cell-phone or SLR cameras. This paper will cover the design and performance of general-purpose image sensors that are to be used in the MTG (Meteosat Third Generation) and MetImage satellites and the technology challenges that they have presented. We will discuss how CMOS imagers have been designed with 4T pixel sizes of up to 250 μm square achieving good charge transfer efficiency, or low lag, with signal levels up to 2M electrons and with high line rates. In both devices a low noise analogue read-out chain is used with correlated double sampling to suppress the readout noise and give a maximum dynamic range that is significantly larger than in standard commercial devices. Radiation hardness is a particular challenge for CMOS detectors and both of these sensors have been designed to be fully radiation hard with high latch-up and single-event-upset tolerances, which is now silicon proven on MTG. We will also cover the impact of ionising radiation on these devices. Because with such large pixels the photodiodes have a large open area, front illumination technology is sufficient to meet the detection efficiency requirements but with thicker than standard epitaxial silicon to give improved IR response (note that this makes latch up protection even more important). However with narrow band illumination reflections from the front and back of the dielectric stack on the top of the sensor produce Fabry-Perot étalon effects, which have been minimised with process modifications. We will also cover the addition of precision narrow band filters inside the MTG package to provide a complete imaging subsystem. Control of reflected light is also critical in obtaining the

  4. Passive long range acousto-optic sensor

    Science.gov (United States)

    Slater, Dan

    2006-08-01

    Alexander Graham Bell's photophone of 1880 was a simple free space optical communication device that used the sun to illuminate a reflective acoustic diaphragm. A selenium photocell located 213 m (700 ft) away converted the acoustically modulated light beam back into sound. A variation of the photophone is presented here that uses naturally formed free space acousto-optic communications links to provide passive multichannel long range acoustic sensing. This system, called RAS (remote acoustic sensor), functions as a long range microphone with a demonstrated range in excess of 40 km (25 miles).

  5. Thermal infrared panoramic imaging sensor

    Science.gov (United States)

    Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-05-01

Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the

  6. Sensor assembly method using silicon interposer with trenches for three-dimensional binocular range sensors

    Science.gov (United States)

    Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka

    2018-04-01

    To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of the detection performance caused by the alignment accuracy, it was confirmed that the vertical deviation between the corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with the trenches was confirmed.

  7. CMOS foveal image sensor chip

    Science.gov (United States)

    Bandera, Cesar (Inventor); Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  8. High dynamic range vision sensor for automotive applications

    Science.gov (United States)

    Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois

    2005-02-01

A 128 x 128 pixel, 120 dB vision sensor extracting at the pixel level the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image feature representation even with high spatial and temporal inhomogeneities of the illumination. Image features are dispatched off-chip according to their contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. Then, it performs quadratic fits on selected kernels of 3 by 3 pixels to achieve sub-pixel accuracy in the estimation of the lane marking positions. The resulting precision of the estimated lateral vehicle position is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
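The sub-pixel refinement step can be illustrated in one dimension: fit a parabola through three neighbouring contrast samples and take its vertex. A sketch only; the paper's 2-D fit over a 3-by-3 kernel is analogous, and `subpixel_peak` is a hypothetical helper name:

```python
def subpixel_peak(y_left, y_center, y_right):
    """Sub-pixel offset of a peak by fitting a parabola to three samples.

    Returns the offset (in pixels, in [-0.5, 0.5] for a true local maximum)
    of the parabola vertex relative to the centre sample.
    """
    denom = y_left - 2.0 * y_center + y_right
    if denom == 0:
        return 0.0  # flat or degenerate: no refinement possible
    return 0.5 * (y_left - y_right) / denom

# Contrast samples straddling a marking; the larger right-hand sample
# pulls the estimated peak slightly right of the centre pixel.
offset = subpixel_peak(2.0, 5.0, 3.0)  # -> +0.1 px
```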

  9. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process

    Directory of Open Access Journals (Sweden)

    Isao Takayanagi

    2018-01-01

Full Text Available To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.

  10. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    Science.gov (United States)

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
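The merging of the two gain paths described above can be sketched as a simple per-pixel selection rule. This is an assumption-laden illustration: the chip's on-chip linearization is more sophisticated, and the gain ratio and saturation level below are invented values:

```python
def merge_dual_gain(high, low, gain_ratio, high_sat):
    """Combine high-gain and low-gain samples of a single exposure.

    high:       signal read with high pixel/analog gain (low noise, clips early)
    low:        signal read with low gain (noisier, wide range)
    gain_ratio: ratio of high gain to low gain
    high_sat:   level above which the high-gain sample is considered clipped
    Returns a linearised value on the high-gain scale.
    """
    if high < high_sat:
        return high            # trust the low-noise high-gain sample
    return low * gain_ratio    # fall back to the scaled low-gain sample

# Dark pixel: high-gain path unclipped; bright pixel: scaled low-gain path
dark = merge_dual_gain(100.0, 7.0, 16.0, 4000.0)     # -> 100.0
bright = merge_dual_gain(4095.0, 500.0, 16.0, 4000.0)  # -> 8000.0
```

Because both samples come from the same exposure, a moving object cannot appear at two different positions in the two gain paths, which is the artifact the SEHDR scheme avoids relative to MEHDR.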

  11. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min

    2015-01-01

In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. The results showed that 2D image quality did not depend strongly on the number of sampling images, but rather on how well efficient RGI images were extracted. However, the number of RGI images was important for making a range image, because range image quality is proportional to the number of RGI images. Image acquisition in monitoring areas of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems need extra functions, such as active illumination that penetrates the disturbing airborne particles. One of these powerful active vision systems is the range-gated imaging system, which can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant lights up objects by flashing strong light through the airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  12. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. The results showed that 2D image quality did not depend strongly on the number of sampling images, but rather on how well efficient RGI images were extracted. However, the number of RGI images was important for making a range image, because range image quality is proportional to the number of RGI images. Image acquisition in monitoring areas of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems need extra functions, such as active illumination that penetrates the disturbing airborne particles. One of these powerful active vision systems is the range-gated imaging system, which can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant lights up objects by flashing strong light through the airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.
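The principle of recovering range from summed time-sliced images can be sketched per pixel as an intensity-weighted centroid of the gate delays, converted to distance by the round-trip time-of-flight relation r = c·t/2. The gate delays and intensities below are illustrative, and real systems also correct for pulse and gate shape:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_slices(intensities, gate_delays_s):
    """Per-pixel range estimate from a stack of range-gated slices.

    intensities:   intensity of one pixel in each time-sliced image
    gate_delays_s: round-trip gate delay of each slice, in seconds
    The intensity-weighted mean delay is converted to one-way range.
    """
    total = sum(intensities)
    if total == 0:
        return float("nan")  # pixel never illuminated within any gate
    mean_delay = sum(i * t for i, t in zip(intensities, gate_delays_s)) / total
    return C * mean_delay / 2.0

# Three slices gated at 100, 200 and 300 ns round-trip delay; a response
# centred on the 200 ns slice places the surface at roughly 30 m.
r = range_from_slices([1.0, 6.0, 1.0], [100e-9, 200e-9, 300e-9])
```

This also makes the abstract's observation concrete: the centroid (and hence the range) sharpens as more gated slices contribute, while a 2D intensity image is just the sum and needs far fewer slices.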

  13. Visual Image Sensor Organ Replacement

    Science.gov (United States)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
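The image-to-sound translation can be sketched by mapping image rows to audio frequencies and columns to time, with pixel brightness setting each partial's amplitude. This is a simplified stand-in for the VISOR mapping, whose exact parameters the abstract does not give; the frequency range, duration, and sample rate below are assumptions:

```python
import math

def image_to_audio(image, duration_s=1.0, sample_rate=8000,
                   f_min=200.0, f_max=4000.0):
    """Sonify a 2-D brightness map: columns sweep over time, rows map to
    frequencies, and brightness sets each partial's amplitude.

    `image` is a list of rows of floats in [0, 1]; returns audio samples.
    """
    n_rows, n_cols = len(image), len(image[0])
    # one sinusoid per row, spread across the chosen frequency band
    freqs = [f_min + (f_max - f_min) * r / max(n_rows - 1, 1)
             for r in range(n_rows)]
    samples_per_col = int(duration_s * sample_rate / n_cols)
    audio = []
    for c in range(n_cols):
        for n in range(samples_per_col):
            t = (c * samples_per_col + n) / sample_rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            audio.append(s / n_rows)  # normalise to stay within [-1, 1]
    return audio

# A tiny 2x2 "image": bright top-left and bottom-right pixels
tone = image_to_audio([[1.0, 0.0], [0.0, 1.0]], duration_s=0.01)
```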

  14. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
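The sensor integration module's fusion of the three detection signals can be sketched as a weighted vote. The weights and threshold below are illustrative assumptions only, since the description specifies which signals are combined but not how:

```python
def integrate_occupancy(motion, people, face,
                        weights=(0.3, 0.4, 0.3), threshold=0.3):
    """Fuse detector confidences (each in [0, 1]) into vacancy/occupancy.

    motion, people, face: confidence outputs of the three detection modules.
    A hedged sketch: the actual sensor integration module may instead
    debounce, time-average, or logically combine the signals.
    """
    score = sum(w * s for w, s in zip(weights, (motion, people, face)))
    return "occupied" if score >= threshold else "vacant"

# A strong people detection alone is enough to assert occupancy here
state = integrate_occupancy(motion=0.1, people=0.9, face=0.0)  # -> "occupied"
```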

  15. Calibration and control for range imaging in mobile robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Dorum, O.H. [Norges Tekniske Hoegskole, Trondheim (Norway). Div. of Computer Systems and Telematics; Hoover, A. [University of South Florida, Tampa, FL (United States). Dept. of Computer Science and Engineering; Jones, J.P. [Oak Ridge National Lab., TN (United States)

    1994-06-01

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.

  16. Automated Registration Of Images From Multiple Sensors

    Science.gov (United States)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.

  17. Understanding synthesis imaging dynamic range

    Science.gov (United States)

    Braun, R.

    2013-03-01

    We develop a general framework for quantifying the many different contributions to the noise budget of an image made with an array of dishes or aperture array stations. Each noise contribution to the visibility data is associated with a relevant correlation timescale and frequency bandwidth so that the net impact on a complete observation can be assessed when a particular effect is not captured in the instrumental calibration. All quantities are parameterised as function of observing frequency and the visibility baseline length. We apply the resulting noise budget analysis to a wide range of existing and planned telescope systems that will operate between about 100 MHz and 5 GHz to ascertain the magnitude of the calibration challenges that they must overcome to achieve thermal noise limited performance. We conclude that calibration challenges are increased in several respects by small dimensions of the dishes or aperture array stations. It will be more challenging to achieve thermal noise limited performance using 15 m class dishes rather than the 25 m dishes of current arrays. Some of the performance risks are mitigated by the deployment of phased array feeds and more with the choice of an (alt,az,pol) mount, although a larger dish diameter offers the best prospects for risk mitigation. Many improvements to imaging performance can be anticipated at the expense of greater complexity in calibration algorithms. However, a fundamental limitation is ultimately imposed by an insufficient number of data constraints relative to calibration variables. The upcoming aperture array systems will be operating in a regime that has never previously been addressed, where a wide range of effects are expected to exceed the thermal noise by two to three orders of magnitude. Achieving routine thermal noise limited imaging performance with these systems presents an extreme challenge. The magnitude of that challenge is inversely related to the aperture array station diameter.

  18. Fusion of Images from Dissimilar Sensor Systems

    National Research Council Canada - National Science Library

    Chow, Khin

    2004-01-01

    Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...

  19. Thresholded Range Aggregation in Sensor Networks

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Lin, Zhifeng; Mamoulis, Nikos

    2010-01-01

    ' status in each local region. In order to process the (snapshot) TRA query, we develop energy-efficient protocols based on appropriate operators and filters in sensor nodes. The design of these operators and filters is non-trivial, due to the fact that each sensor measurement influences the actual results...

  20. Toward CMOS image sensor based glucose monitoring.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2012-09-07

Complementary metal oxide semiconductor (CMOS) image sensors are a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited for detecting glucose levels with high sensitivity from simple photon count variation. Various concentrations of glucose (100 mg dL(-1) to 1000 mg dL(-1)) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count recognized by the CMOS image sensor depends on the color density, and hence on the glucose concentration, and is converted into digital form. By correlating the obtained digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with good linearity, and the technique therefore promises convenient point-of-care diagnosis.
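The correlation between photon count and glucose concentration amounts to a linear calibration, which can be sketched with an ordinary least-squares fit. The calibration data below are invented for illustration, not measured values from the study:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python, no NumPy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: darker chips (fewer photons) at higher glucose
conc = [100.0, 400.0, 700.0, 1000.0]        # mg/dL
counts = [9000.0, 7500.0, 6000.0, 4500.0]   # photon counts (illustrative)
a, b = fit_line(conc, counts)

def glucose_from_count(count):
    """Invert the calibration line to read concentration from a photon count."""
    return (count - b) / a
```

With a calibration like this, an unknown sample's digitised photon count maps straight back to a concentration on the fitted line.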

  1. Dual-Emitting Fluorescent Metal-Organic Framework Nanocomposites as a Broad-Range pH Sensor for Fluorescence Imaging.

    Science.gov (United States)

    Chen, Haiyong; Wang, Jing; Shan, Duoliang; Chen, Jing; Zhang, Shouting; Lu, Xiaoquan

    2018-05-15

pH plays an important role in understanding physiological/pathologic processes, and abnormal pH is a hallmark of many common diseases such as cancer, stroke, and Alzheimer's disease. In this work, an effective dual-emission fluorescent metal-organic framework nanocomposite probe (denoted as RB-PCN) has been constructed for sensitive and broad-range detection of pH. RB-PCN was prepared by encapsulating the DBI-PEG-NH2-functionalized Fe3O4 into Zr-MOFs and then further reacting it with rhodamine B isothiocyanates (RBITC). In RB-PCN, RBITC is capable of sensing changes in pH in acidic solutions. Zr-MOFs not only enrich the target analyte but also exhibit a fluorescence response to pH changes in alkaline solutions. Based on the above structural and compositional features, RB-PCN could detect a wide range of pH changes. Importantly, such a nanoprobe could "see" the intracellular pH changes by fluorescence confocal imaging as well as "measure" the wider range of pH in actual samples by fluorescence spectroscopy. To the best of our knowledge, this is the first time a MOF-based dual-emitting fluorescent nanoprobe has been used for a wide range of pH detection.

  2. POTENTIALS OF IMAGE BASED ACTIVE RANGING TO CAPTURE DYNAMIC SCENES

    Directory of Open Access Journals (Sweden)

    B. Jutzi

    2012-09-01

Full Text Available Obtaining a 3D description of man-made and natural environments is a basic task in Computer Vision and Remote Sensing. To this end, laser scanning is currently one of the dominating techniques used to gather reliable 3D information. The scanning principle inherently needs a certain time interval to acquire the 3D point cloud. On the other hand, new active sensors provide the possibility of capturing range information by images with a single measurement. With this new technique, image-based active ranging is possible, which allows capturing dynamic scenes, e.g. walking pedestrians in a yard or moving vehicles. Unfortunately, most of these range imaging sensors have strong technical limitations and are not yet sufficient for airborne data acquisition. It can be seen from the recent development of highly specialized (far-range) imaging sensors – so-called flash-light lasers – that most of the limitations could be alleviated soon, so that future systems will be equipped with improved image size and a potentially expanded operating range. The presented work is a first step towards the development of methods capable of applying range images in outdoor environments. To this end, an experimental setup was built for investigating these possibilities. With this setup a measurement campaign was carried out, and first results are presented within this paper.

  3. Range-Free Localization Schemes for Large Scale Sensor Networks

    National Research Council Canada - National Science Library

    He, Tian; Huang, Chengdu; Blum, Brain M; Stankovic, John A; Abdelzaher, Tarek

    2003-01-01

    .... Because coarse accuracy is sufficient for most sensor network applications, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches...

  4. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391 megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal to noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD based aerial sensors.

  5. Temperature Sensors Integrated into a CMOS Image Sensor

    NARCIS (Netherlands)

    Abarca Prouza, A.N.; Xie, S.; Markenhof, Jules; Theuwissen, A.J.P.

    2017-01-01

    In this work, a novel approach is presented for measuring relative temperature variations inside the pixel array of a CMOS image sensor itself. This approach can give important information when compensation for dark (current) fixed pattern noise (FPN) is needed. The test image sensor consists of

  6. Range-Based Localization in Mobile Sensor Networks

    NARCIS (Netherlands)

    Dil, B.J.; Dil, B.; Dulman, S.O.; Havinga, Paul J.M.; Romer, K.; Karl, H.; Mattern, F.

    2006-01-01

    Localization schemes for wireless sensor networks can be classified as range-based or range-free. They differ in the information used for localization. Range-based methods use range measurements, while range-free techniques only use the content of the messages. None of the existing algorithms

  7. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    Science.gov (United States)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rate, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially since the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper first experiences with the Kinect v2 sensor are reported, and its suitability for close range 3D modelling is investigated. For this purpose, error sources on output data as well as a calibration approach are presented.

  8. FIRST EXPERIENCES WITH KINECT V2 SENSOR FOR CLOSE RANGE 3D MODELLING

    Directory of Open Access Journals (Sweden)

    E. Lachat

    2015-02-01

Full Text Available RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rate, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially since the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper first experiences with the Kinect v2 sensor are reported, and its suitability for close range 3D modelling is investigated. For this purpose, error sources on output data as well as a calibration approach are presented.

  9. A Biologically Inspired CMOS Image Sensor

    CERN Document Server

    Sarkar, Mukul

    2013-01-01

Biological systems are a source of inspiration in the development of small autonomous sensor nodes. The two major types of optical vision systems found in nature are the single aperture human eye and the compound eye of insects. The latter are among the most compact and smallest vision sensors. The compound eye is an assembly of individual lenses, each with its own photoreceptor array. The visual system of insects allows them to fly with limited intelligence and brain processing power. A CMOS image sensor replicating the perception of vision in insects is discussed and designed in this book for industrial (machine vision) and medical applications. The CMOS metal layer is used to create an embedded micro-polarizer able to sense polarization information. This polarization information is shown to be useful in applications like real time material classification and autonomous agent navigation. Further, the sensor is equipped with in-pixel analog and digital memories which allow variation of the dynamic range and in-pixel b...

  10. RADIANCE DOMAIN COMPOSITING FOR HIGH DYNAMIC RANGE IMAGING

    Directory of Open Access Journals (Sweden)

    M.R. Renu

    2013-02-01

Full Text Available High dynamic range imaging aims at creating an image with a range of intensity variations larger than the range supported by a camera sensor. Most commonly used methods combine multiple exposure low dynamic range (LDR) images to obtain the high dynamic range (HDR) image. Available methods typically neglect the noise term while finding appropriate weighting functions to estimate the camera response function as well as the radiance map. We look at the HDR imaging problem in a denoising framework and aim at reconstructing a low noise radiance map from noisy low dynamic range images, which is tone mapped to get the LDR equivalent of the HDR image. We propose a maximum a posteriori probability (MAP) based reconstruction of the HDR image using a Gibbs prior to model the radiance map, with total variation (TV) as the prior to avoid unnecessary smoothing of the radiance field. To make the computation with the TV prior efficient, we extend the majorize-minimize method of upper bounding the total variation by a quadratic function to our case, which has a nonlinear term arising from the camera response function. A theoretical justification for doing radiance domain denoising as opposed to image domain denoising is also provided.
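The MAP estimation with a TV prior described above can be summarised in simplified notation (the paper's exact data term depends on its noise model; the weights and symbols here are generic assumptions):

$$\hat{X} = \arg\min_{X} \sum_{k}\sum_{p} w_{k,p}\,\bigl(y_{k,p} - f(t_k X_p)\bigr)^2 \;+\; \lambda\,\mathrm{TV}(X)$$

where $y_{k,p}$ is pixel $p$ of the $k$-th LDR exposure, $t_k$ its exposure time, $f$ the camera response function, $w_{k,p}$ a reliability weight, and $\mathrm{TV}(X)$ the total-variation prior on the radiance map $X$. The majorize-minimize step replaces $\mathrm{TV}(X)$ by a quadratic upper bound at each iteration, turning each update into a tractable least-squares problem despite the nonlinearity of $f$.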

  11. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    Science.gov (United States)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  12. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
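The link between pixel geometry, intrinsic full-well capacity per unit area, and dynamic range can be sketched numerically. The pixel pitch, full-well density, and read noise values below are illustrative assumptions, not figures from the paper:

```python
import math

def sensor_metrics(pixel_pitch_um, full_well_per_cm2, read_noise_e):
    """Derive per-pixel full-well capacity and dynamic range.

    pixel_pitch_um:    pixel pitch in micrometres (square pixel assumed)
    full_well_per_cm2: intrinsic full-well capacity in electrons per cm^2
    read_noise_e:      read noise floor in electrons
    Dynamic range is the ratio of full well to read noise, in dB.
    """
    pixel_area_cm2 = (pixel_pitch_um * 1e-4) ** 2
    full_well_e = full_well_per_cm2 * pixel_area_cm2
    dr_db = 20.0 * math.log10(full_well_e / read_noise_e)
    return full_well_e, dr_db

# e.g. a 5 um pixel, 1.7e11 e-/cm^2 full-well density, 10 e- read noise
fw, dr = sensor_metrics(5.0, 1.7e11, 10.0)  # 42,500 e- full well, ~72.6 dB
```

Shrinking the pixel to gain mega-pixels cuts the full well quadratically with pitch, which is exactly the dynamic-range trade-off the paper argues consumers overlook.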

  13. RSA/Legacy Wind Sensor Comparison. Part 2; Eastern Range

    Science.gov (United States)

    Short, David A.; Wheeler, Mark M.

    2006-01-01

This report describes a comparison of data from ultrasonic and propeller-and-vane anemometers on 5 wind towers at Kennedy Space Center and Cape Canaveral Air Force Station. The ultrasonic sensors are scheduled to replace the Legacy propeller-and-vane sensors under the Range Standardization and Automation (RSA) program. Because previous studies have noted differences between peak wind speeds reported by mechanical and ultrasonic wind sensors, the latter having no moving parts, the 30th and 45th Weather Squadrons wanted to understand possible differences between the two sensor types. The period-of-record was 13-30 May 2005. A total of 357,626 readings of 1-minute average and peak wind speed/direction from each sensor type were used. Statistics of differences in speed and direction were used to identify 15 out of 19 RSA sensors having the most consistent performance, with respect to the Legacy sensors. RSA average wind speed data from these 15 showed a small positive bias of 0.38 kts. A slightly larger positive bias of 0.94 kts was found in the RSA peak wind speed.

  14. RSA/Legacy Wind Sensor Comparison. Part 1; Western Range

    Science.gov (United States)

    Short, David A.; Wheeler, Mark M.

    2006-01-01

This report describes a comparison of data from ultrasonic and cup-and-vane anemometers on 5 wind towers at Vandenberg AFB. The ultrasonic sensors are scheduled to replace the Legacy cup-and-vane sensors under the Range Standardization and Automation (RSA) program. Because previous studies have noted differences between peak wind speeds reported by mechanical and ultrasonic wind sensors, the latter having no moving parts, the 30th and 45th Weather Squadrons wanted to understand possible differences between the two sensor types. The period-of-record was 13-30 May 2005. A total of 153,961 readings of 1-minute average and peak wind speed/direction from each sensor type were used. Statistics of differences in speed and direction were used to identify 18 out of 34 RSA sensors having the most consistent performance, with respect to the Legacy sensors. Data from these 18 were used to form a composite comparison. A small positive bias in the composite RSA average wind speed increased from +0.5 kts at 15 kts, to +1 kt at 25 kts. A slightly larger positive bias in the RSA peak wind speed increased from +1 kt at 15 kts, to +2 kts at 30 kts.

  15. High Dynamic Range Imaging Using Multiple Exposures

    Science.gov (United States)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of the histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method. By comparison, the method outperforms other methods in terms of imaging quality.
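    As a rough illustration of the multi-exposure merge step (not the paper's method, which recovers a nonlinear CRF from exposure ratios), the sketch below assumes a linear camera response and a Debevec-style hat weighting:

```python
def merge_hdr(exposures, times, z_min=0.0, z_max=255.0):
    """Weighted average of per-exposure radiance estimates (pixel / time).
    Hat weighting trusts mid-range pixels and discounts clipped ones."""
    mid = 0.5 * (z_min + z_max)

    def w(z):
        # Triangular (hat) weight, small epsilon to avoid division by zero
        return (z - z_min if z <= mid else z_max - z) + 1e-6

    merged = []
    for i in range(len(exposures[0])):
        num = sum(w(img[i]) * img[i] / t for img, t in zip(exposures, times))
        den = sum(w(img[i]) for img in exposures)
        merged.append(num / den)
    return merged

# Two exposures (1/60 s and 1/15 s) of the same three pixels;
# the brightest pixel saturates in the long exposure.
short = [16.0, 128.0, 250.0]
long_ = [64.0, 255.0, 255.0]
hdr = merge_hdr([short, long_], [1.0 / 60.0, 1.0 / 15.0])
```

    The saturated pixel contributes almost nothing to the merge, so the radiance estimate is dominated by the well-exposed short-exposure sample.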

  16. Visualization of heavy ion-induced charge production in a CMOS image sensor

    CERN Document Server

    Végh, J; Klamra, W; Molnár, J; Norlin, LO; Novák, D; Sánchez-Crespo, A; Van der Marel, J; Fenyvesi, A; Valastyan, I; Sipos, A

    2004-01-01

    A commercial CMOS image sensor was irradiated with heavy ion beams in the several MeV energy range. The image sensor is equipped with a standard video output. The data were collected on-line through frame grabbing and analysed off-line after digitisation. It was shown that the response of the image sensor to the heavy ion bombardment varied with the type and energy of the projectiles. The sensor will be used for the CMS Barrel Muon Alignment system.

  17. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B.; Heidrich, Wolfgang

    2014-01-01

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key of our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity.

  18. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix

    2014-10-17

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key of our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity.

  19. Priority image transmission in wireless sensor networks

    International Nuclear Information System (INIS)

    Nasri, M.; Helali, A.; Sghaier, H.; Maaref, H.

    2011-01-01

    The emerging technology of recent years has allowed the development of new sensors equipped with wireless communication, which can be organized into a cooperative autonomous network. Some application areas for wireless sensor networks (WSNs) are home automation, health care services, the military domain, and environment monitoring. The constraints are limited processing capacity, limited storage capability, and especially limited node energy. In addition, such networks are powered by tiny batteries, so their lifetime is very limited. During image processing and transmission to the destination, the lifetime of the sensor network decreases quickly due to battery and processing power constraints. Therefore, digital image transmission is a significant challenge for image-sensor-based WSNs. Based on wavelet image compression, we propose a novel, robust and energy-efficient scheme, called Priority Image Transmission (PIT) in WSN, that provides various priority levels during image transmissions. Different priorities in the compressed image are considered. The information for the significant wavelet coefficients is transmitted with higher quality assurance, whereas relatively less important coefficients are transmitted with lower overhead. Simulation results show that the proposed scheme prolongs the system lifetime and achieves higher energy efficiency in WSN with an acceptable compromise on the image quality.
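    A minimal sketch of the kind of priority split the PIT scheme describes, assuming a single-level Haar wavelet transform; the function names and priority labels are illustrative:

```python
import math

def haar_1d(signal):
    """One level of an orthonormal Haar transform (even-length input)."""
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) / math.sqrt(2) for a, b in pairs]
    detail = [(a - b) / math.sqrt(2) for a, b in pairs]
    return approx, detail

def prioritize(approx, detail):
    """High priority: approximation coefficients (coarse content, sent
    with more protection). Low priority: detail coefficients, sent with
    lower overhead."""
    high = [("A", i, c) for i, c in enumerate(approx)]
    low = [("D", i, c) for i, c in enumerate(detail)]
    return high, low

approx, detail = haar_1d([2.0, 2.0, 4.0, 0.0])
high, low = prioritize(approx, detail)
```

    Losing a low-priority packet then degrades only fine detail, while the coarse image survives, which is the trade-off the scheme exploits to save energy.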

  20. A Short-Range Distance Sensor with Exceptional Linearity

    Science.gov (United States)

    Simmons, Steven; Youngquist, Robert

    2013-01-01

    A sensor has been demonstrated that can measure distance over a total range of about 300 microns to an accuracy of about 0.1 nm (resolution of about 0.01 nm). This represents an exceptionally large dynamic range of operation - over 1,000,000. The sensor is optical in nature, and requires the attachment of a mirror to the object whose distance is being measured. This work resulted from actively developing a white light interferometric system to be used to measure the depths of defects in the Space Shuttle Orbiter windows. The concept was then applied to measuring distance and later expanded to include spectrometer calibration. In summary, broadband (i.e., white) light is launched into a Michelson interferometer, one mirror of which is fixed and one of which is attached to the object whose distance is to be measured. The light emerging from the interferometer has traveled one of two distances: either the distance to the fixed mirror and back, or the distance to the moving mirror and back. These two light beams mix and produce an interference pattern where some wavelengths interfere constructively and some destructively. Sending this light into a spectrometer allows this interference pattern to be analyzed, yielding the net distance difference between the two paths. The unique feature of this distance sensor is its ability to accurately measure distance over a dynamic range of more than one million, the ratio of its range (about 300 microns) to its accuracy (about 0.1 nanometer). Such a large linear operating range is rare and arises here because both amplitude and phase-matching algorithms contribute to the performance. The sensor is limited by the need to attach a mirror of some kind to the object being tracked, and by the fairly small total range, but the exceptional dynamic range should make it of interest.
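    The spectral analysis can be illustrated with a toy model: two beams with a round-trip path difference produce fringes that are periodic in wavenumber, and the fringe spacing yields the path difference. The sketch below is an assumption-laden simulation, not the sensor's actual amplitude/phase-matching algorithm:

```python
import math

def fringe_spectrum(opd, k_min, k_max, n):
    """Ideal two-beam interference sampled uniformly in wavenumber k:
    I(k) = 1 + cos(k * OPD), where OPD is the round-trip path difference."""
    ks = [k_min + (k_max - k_min) * i / (n - 1) for i in range(n)]
    return ks, [1.0 + math.cos(k * opd) for k in ks]

def estimate_opd(ks, intensity):
    """Adjacent fringe maxima are spaced 2*pi/OPD apart in wavenumber, so
    counting full periods between the first and last peak recovers OPD."""
    peaks = [i for i in range(1, len(intensity) - 1)
             if intensity[i - 1] < intensity[i] >= intensity[i + 1]]
    dk = ks[peaks[-1]] - ks[peaks[0]]
    return 2.0 * math.pi * (len(peaks) - 1) / dk

# 300-micron path difference viewed over a 400-700 nm spectrometer window
k_min, k_max = 2.0 * math.pi / 700e-9, 2.0 * math.pi / 400e-9
ks, spec = fringe_spectrum(300e-6, k_min, k_max, 20000)
opd_est = estimate_opd(ks, spec)
```

    Fringe counting alone gives only a coarse estimate; the sub-nanometer accuracy quoted above requires the additional phase fitting the abstract alludes to.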

  1. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    Science.gov (United States)

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  2. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  3. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional camera and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  4. A video-rate range sensor based on depth from defocus

    OpenAIRE

    Ghita, Ovidiu; Whelan, Paul F.

    2001-01-01

    Recovering the depth information derived from dynamic scenes implies real-time range estimation. This paper addresses the implementation of a bifocal range sensor which estimates the depth by measuring the relative blurring between two images captured with different focal settings. To recover the depth accurately even in cases when the scene is textureless, one possible solution is to project a structured light on the scene. As a consequence, in the scene's spectrum a spatial frequency derive...
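    The bifocal depth-from-defocus principle can be sketched with a textbook thin-lens model (this is an illustration, not the authors' implementation); the `near_side` flag marks the sign ambiguity that the second focal setting resolves:

```python
def blur_diameter(u, f, s, aperture):
    """Thin-lens blur-circle diameter for an object at distance u, a lens
    of focal length f, sensor plane at distance s, and aperture diameter
    `aperture` (all in meters): b = A * s * |1/f - 1/u - 1/s|."""
    return aperture * s * abs(1.0 / f - 1.0 / u - 1.0 / s)

def depth_from_blur(b, f, s, aperture, near_side=True):
    """Invert the blur model for depth. The sign ambiguity (object in
    front of or behind the in-focus plane) is what comparing the blur in
    two images with different focal settings resolves."""
    sign = 1.0 if near_side else -1.0
    inv_u = 1.0 / f - 1.0 / s + sign * b / (aperture * s)
    return 1.0 / inv_u

# 50 mm lens focused at 1 m (sensor at s = 1/19 m); object actually at 0.5 m
b = blur_diameter(0.5, f=0.05, s=1.0 / 19.0, aperture=0.01)
depth = depth_from_blur(b, f=0.05, s=1.0 / 19.0, aperture=0.01)
```

    In practice the blur must be estimated from image content, which is why the authors project structured light onto textureless scenes.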

  5. Parametric Optimization of Lateral NIPIN Phototransistors for Flexible Image Sensors

    Directory of Open Access Journals (Sweden)

    Min Seok Kim

    2017-08-01

    Full Text Available Curved image sensors, which are a key component in bio-inspired imaging systems, have been widely studied because they can improve an imaging system in various aspects, such as low optical aberrations, a small form factor, and a simple optics configuration. Many methods and materials to realize a curvilinear imager have been proposed to address the drawbacks of conventional imaging/optical systems. However, there have been few theoretical studies, in terms of electronics, on the use of a lateral photodetector as a flexible image sensor. In this paper, we demonstrate the applicability of a Si-based lateral phototransistor as the pixel of a high-efficiency curved photodetector by conducting various electrical simulations with technology computer aided design (TCAD). The single phototransistor is analyzed with different device parameters: the thickness of the active cell, the doping concentration, and the structure geometry. This work presents a method to improve the external quantum efficiency (EQE), linear dynamic range (LDR), and mechanical stability of the phototransistor. We also evaluate the dark current in a matrix of phototransistors to estimate the feasibility of the device as a flexible image sensor. Moreover, we fabricated and demonstrated an array of phototransistors based on our study. The theoretical study and design guidelines for a lateral phototransistor create new opportunities in flexible image sensors.

  6. Research on range-gated laser active imaging seeker

    Science.gov (United States)

    You, Mu; Wang, PengHui; Tan, DongJie

    2013-09-01

    Compared with other imaging methods such as millimeter wave imaging, infrared imaging and visible light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, which is the most important data for use in autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system presented here consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulse laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbal, which is designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on diode-pumped solid-state lasers that are passively Q-switched at 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths less than 900 nm. The receiver is an image intensifier tube whose microchannel plate is coupled into a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame frequency can be changed according to the requirements of guidance, with a modifiable range gate. The instantaneous field of view of the system was found to be 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field. Two different kinds of buildings have been chosen as targets, located at ranges from 200 m up to 1000 m. To simulate the dynamic process of range change between missile and target, the seeker system has
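    The timing behind range gating follows directly from the round-trip time of flight: a camera gate delayed by 2R/c images targets at range R, and the gate width sets the depth of the imaged slice. A minimal sketch with illustrative names:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_timing(range_m, slice_depth_m):
    """Trigger delay and gate width for imaging a range slice: light
    travels out and back, so delay = 2R/c, and a gate held open for
    duration dt captures a slice c*dt/2 deep."""
    return 2.0 * range_m / C, 2.0 * slice_depth_m / C

delay, width = gate_timing(1000.0, 15.0)  # 1 km target, 15 m deep slice
```

    For the 1 km target above the gate opens roughly 6.7 microseconds after the laser pulse, which is why light backscattered by intervening fog (arriving earlier) is rejected.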

  7. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview

    Science.gov (United States)

    Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander

    2017-05-01

    The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.

  8. Thermoelectric infrared imaging sensors for automotive applications

    Science.gov (United States)

    Hirota, Masaki; Nakajima, Yasushi; Saito, Masanori; Satou, Fuminori; Uchiyama, Makoto

    2004-07-01

    This paper describes three low-cost thermoelectric infrared imaging sensors, having 1,536-, 2,304-, and 10,800-element thermoelectric focal plane arrays (FPAs) respectively, and two experimental automotive application systems. The FPAs are fabricated with a conventional IC process plus micromachining technologies and have low-cost potential. Among these sensors, the 2,304-element sensor provides a high responsivity of 5,500 V/W in a very small package, adopting a vacuum-sealed package integrated with a wide-angle ZnS lens. One experimental system, incorporated in the Nissan ASV-2, is a blind-spot pedestrian warning system that employs four infrared imaging sensors. This system helps alert the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. The system can also prevent the vehicle from moving in the direction of the pedestrian. The other is a rearview camera system with an infrared detection function. This system consists of a visible camera and infrared sensors, and it helps alert the driver to the presence of a pedestrian in a rear blind spot. Various issues that will need to be addressed in order to expand the automotive applications of IR imaging sensors in the future are also summarized. This performance is suitable for consumer electronics as well as automotive applications.

  9. Hardware test program for evaluation of baseline range/range rate sensor concept

    Science.gov (United States)

    Pernic, E.

    1985-01-01

    The test program Phase II effort provides additional design information in terms of range and range rate (R/R) sensor performance when observing and tracking a typical spacecraft target. The target used in the test program was a one-third scale model of the Hubble Space Telescope (HST) available at the MSFC test site where the tests were performed. A modified Bendix millimeter wave radar served as the R/R sensor test bed for evaluation of range and range rate tracking performance, and generation of radar signature characteristics of the spacecraft target. A summary of program test results and conclusions is presented along with a detailed description of the Bendix test-bed radar with accompanying instrumentation. The MSFC test site and facilities are described. The test procedures used to establish background levels, and the calibration procedures used in the range accuracy tests and RCS (radar cross section) signature measurements, are presented, and a condensed version of the daily log kept during the 5 September through 17 September test period is also presented. The test program results are given starting with the RCS signature measurements, then continuing with the range measurement accuracy test results, and finally the range and range rate tracking accuracy test results.

  10. CMOS image sensor with contour enhancement

    Science.gov (United States)

    Meng, Liya; Lai, Xiaofeng; Chen, Kun; Yuan, Xianghui

    2010-10-01

    Imitating the signal acquisition and processing of the vertebrate retina, a CMOS image sensor with a bionic pre-processing circuit is designed. Integration of the signal-processing circuit on-chip can reduce the bandwidth and precision requirements of the subsequent interface circuit, and simplify the design of the computer-vision system. This signal pre-processing circuit consists of an adaptive photoreceptor, a spatial-filtering resistive network and an Op-Amp calculation circuit. The adaptive photoreceptor unit, with a dynamic range of approximately 100 dB, adapts well to transient changes in light intensity rather than to the intensity level itself. The spatial low-pass-filtering resistive network, used to mimic the function of horizontal cells, is composed of the horizontal resistor (HRES) circuit and an OTA (Operational Transconductance Amplifier) circuit. The HRES circuit, imitating the dendrite of the neuron cell, comprises two series-connected MOS transistors operated in the weak-inversion region. Appending two diode-connected n-channel transistors to a simple transconductance amplifier forms the OTA Op-Amp circuit, which provides a stable bias voltage for the gates of the MOS transistors in the HRES circuit, while serving as an OTA voltage follower to provide input voltage for the network nodes. The Op-Amp calculation circuit, with a simple two-stage Op-Amp, achieves the image contour enhancement. By adjusting the bias voltage of the resistive network, the smoothing can be tuned to change the strength of the contour enhancement. Simulations of the cell circuit and a 16×16 2D circuit array are implemented using the CSMC 0.5 μm DPTM CMOS process.
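    The retina-style center-surround processing can be caricatured in one dimension: smooth with a neighbor-averaging low-pass (standing in for the resistive network), then amplify the difference between the input and its smoothed version. This is only a behavioral sketch of the analog circuit, with illustrative parameters:

```python
def smooth(row, alpha=0.25):
    """Neighbor averaging: a crude 1-D stand-in for the horizontal-cell
    resistive network (a spatial low-pass filter with clamped edges)."""
    out = []
    for i, v in enumerate(row):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        out.append((1.0 - 2.0 * alpha) * v + alpha * (left + right))
    return out

def enhance_contours(row, gain=2.0):
    """Center-surround: add back the amplified difference between the
    input and its smoothed surround, boosting edges."""
    return [v + gain * (v - s) for v, s in zip(row, smooth(row))]

edge = [10.0] * 4 + [100.0] * 4  # a step edge in 1-D
out = enhance_contours(edge)     # undershoot/overshoot appears at the step
```

    Flat regions pass through unchanged while the step acquires an undershoot/overshoot pair, which is the contour enhancement the pixel array produces; `gain` plays the role of the tunable resistive-network bias.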

  11. Three dimensional multi perspective imaging with randomly distributed sensors

    International Nuclear Information System (INIS)

    DaneshPanah, Mehdi; Javidi, Bahram

    2008-01-01

    In this paper, we review a three dimensional (3D) passive imaging system that exploits the visual information captured from the scene from multiple perspectives to reconstruct the scene voxel by voxel in 3D space. The primary contribution of this work is to provide a computational reconstruction scheme based on randomly distributed sensor locations in space. In virtually all multi-perspective techniques (e.g., integral imaging, synthetic aperture integral imaging, etc.), there is an implicit assumption that the sensors lie on a simple, regular pickup grid. Here, we relax this assumption and suggest a computational reconstruction framework that unifies the available methods as its special cases. The importance of this work is that it enables three dimensional imaging technology to be implemented in a multitude of novel application domains, such as 3D aerial imaging, collaborative imaging and long-range 3D imaging, where sustaining a regular pickup grid is not possible and/or the parallax requirements call for an irregular or sparse synthetic aperture mode. Although the sensors can be distributed in any random arrangement, we assume that the pickup position is measured at the time of capture of each elemental image. We demonstrate the feasibility of the methods proposed here by experimental results.
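    The back-projection idea behind reconstruction from arbitrarily placed sensors can be sketched in one dimension: each measured camera position determines where a scene point at a candidate depth projects in that camera, and averaging the contributions focuses the reconstruction at that depth. The pinhole geometry and all names below are illustrative assumptions, not the paper's exact formulation:

```python
def reconstruct_plane(images, positions, focal, pixel_pitch, depth):
    """Back-project 1-D elemental images onto a plane at a given depth.

    images[i] comes from a pinhole camera at measured lateral position
    positions[i]; a scene point at lateral coordinate x projects there to
    pixel u = (focal / depth) * (x - positions[i]) / pixel_pitch.
    Averaging the contributions focuses the reconstruction at `depth`.
    """
    recon = []
    for px in range(len(images[0])):
        x = px * pixel_pitch * depth / focal  # scene point for a camera at 0
        vals = []
        for img, p in zip(images, positions):
            u = round((focal / depth) * (x - p) / pixel_pitch)
            if 0 <= u < len(img):
                vals.append(img[u])
        recon.append(sum(vals) / len(vals) if vals else 0.0)
    return recon

# The same bright scene point seen by two cameras, one 10 units to the left
img0 = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
img1 = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0]
recon = reconstruct_plane([img0, img1], [0.0, -10.0],
                          focal=1.0, pixel_pitch=1.0, depth=10.0)
```

    Because only the measured positions enter the projection, nothing in this scheme requires the cameras to sit on a regular grid, which is the point of the paper.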

  12. System overview and applications of a panoramic imaging perimeter sensor

    International Nuclear Information System (INIS)

    Pritchard, D.A.

    1995-01-01

    This paper presents an overview of the design and potential applications of a 360-degree scanning, multi-spectral intrusion detection sensor. This moderate-resolution, true panoramic imaging sensor is intended for exterior use at ranges from 50 to 1,500 meters. This Advanced Exterior Sensor (AES) simultaneously uses three sensing technologies (infrared, visible, and radar) along with advanced data processing methods to provide low false-alarm intrusion detection, tracking, and immediate visual assessment. The images from the infrared and visible detector sets and the radar range data are updated as the sensors rotate once per second. The radar provides range data with one-meter resolution. This sensor has been designed for easy use and rapid deployment to cover wide areas beyond or in place of typical perimeters, and tactical applications around fixed or temporary high-value assets. AES prototypes are in development. Applications discussed in this paper include replacements, augmentations, or new installations at fixed sites where topological features, atmospheric conditions, environmental restrictions, ecological regulations, and archaeological features limit the use of conventional security components and systems

  13. Imaging using long range dipolar field effects

    International Nuclear Information System (INIS)

    Gutteridge, Sarah

    2002-01-01

    The work in this thesis has been undertaken by the author, except where indicated in reference, within the Magnetic Resonance Centre, at the University of Nottingham during the period from October 1998 to March 2001. This thesis details the different characteristics of the long range dipolar field and its application to magnetic resonance imaging. The long range dipolar field is usually neglected in nuclear magnetic resonance experiments, as molecular tumbling decouples its effect at short distances. However, in highly polarised samples residual long range components have a significant effect on the evolution of the magnetisation, giving rise to multiple spin echoes and unexpected quantum coherences. Three applications utilising these dipolar field effects are documented in this thesis. The first demonstrates the spatial sensitivity of the signal generated via dipolar field effects in structured liquid state samples. The second utilises the signal produced by the dipolar field to create proton spin density maps. These maps directly yield an absolute value for the water content of the sample that is unaffected by relaxation and any RF inhomogeneity or calibration errors in the radio frequency pulses applied. It has also been suggested that the signal generated by dipolar field effects may provide novel contrast in functional magnetic resonance imaging. In the third application, the effects of microscopic susceptibility variation on the signal are studied and the relaxation rate of the signal is compared to that of a conventional spin echo. (author)

  14. Cell phones as imaging sensors

    Science.gov (United States)

    Bhatti, Nina; Baker, Harlyn; Marguier, Joanna; Berclaz, Jérôme; Süsstrunk, Sabine

    2010-04-01

    Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: Suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment - selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions - image-based information services exploiting the ubiquity of camera phones.

  15. Hardware test program for evaluation of baseline range-range rate sensor concept

    Science.gov (United States)

    1985-01-01

    The baseline range/range rate sensor concept was evaluated. Work on the Interrupted CW (ICW) mode of operation continued, with emphasis on establishing why the sensitivity of the video portion of the receiver was 7 dB less than the theoretical value. This departs from test results of previous implementations, in which the achieved sensitivity was within 1.5 to 2 dB of the theoretical value. Several potential causes of this discrepancy in performance were identified and are scheduled for further investigation. Results indicate that cost savings in both per-unit and program costs are realizable by eliminating one of the modes of operation. An acquisition (total program) cost savings of approximately 10% is projected by eliminating the CW mode of operation. The modified R/R sensor would operate in the ICW mode only and would provide coverage from initial acquisition at 12 nmi to within a few hundred feet of the OMV. If the ICW mode only were selected, then an accompanying sensor would be required to provide coverage from a few hundred feet to docking.

  16. Lightning Imaging Sensor (LIS) on TRMM Science Data V4

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lightning Imaging Sensor (LIS) Science Data was collected by the Lightning Imaging Sensor (LIS), which was an instrument on the Tropical Rainfall Measurement...

  17. Performance study of double SOI image sensors

    Science.gov (United States)

    Miyoshi, T.; Arai, Y.; Fujita, Y.; Hamasaki, R.; Hara, K.; Ikegami, Y.; Kurachi, I.; Nishimura, R.; Ono, S.; Tauchi, K.; Tsuboyama, T.; Yamada, M.

    2018-02-01

    Double silicon-on-insulator (DSOI) sensors composed of two thin silicon layers and one thick silicon layer have been developed since 2011. The thick substrate consists of high resistivity silicon with p-n junctions while the thin layers are used as SOI-CMOS circuitry and as shielding to reduce the back-gate effect and crosstalk between the sensor and the circuitry. In 2014, a high-resolution integration-type pixel sensor, INTPIX8, was developed based on the DSOI concept. This device is fabricated using a Czochralski p-type (Cz-p) substrate in contrast to a single SOI (SSOI) device having a single thin silicon layer and a Float Zone p-type (FZ-p) substrate. In the present work, X-ray spectra of both DSOI and SSOI sensors were obtained using an Am-241 radiation source at four gain settings. The gain of the DSOI sensor was found to be approximately three times that of the SSOI device because the coupling capacitance is reduced by the DSOI structure. An X-ray imaging demonstration was also performed and high spatial resolution X-ray images were obtained.

  18. Smart CMOS image sensor for lightning detection and imaging.

    Science.gov (United States)

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potentiality of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed in the frame of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing an efficient localization of a faint lightning pulse on the entire large format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.

  19. Event-Based Color Segmentation With a High Dynamic Range Sensor

    Directory of Open Access Journals (Sweden)

    Alexandre Marcireau

    2018-04-01

    This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process the color output of the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype combines three Asynchronous Time-based Image Sensors that are sensitive to absolute color information, and we devise a color processing algorithm that leverages this information. The algorithm is designed to be computationally cheap, showing how low-level processing benefits from asynchronous acquisition and high-temporal-resolution data. The resulting color segmentation and tracking performance is assessed on an indoor controlled scene and two outdoor uncontrolled scenes. The tracking's mean error with respect to the ground truth for the objects of the outdoor scenes ranges from two to twenty pixels.

  20. A protein-dye hybrid system as a narrow range tunable intracellular pH sensor.

    Science.gov (United States)

    Anees, Palapuravan; Sudheesh, Karivachery V; Jayamurthy, Purushothaman; Chandrika, Arunkumar R; Omkumar, Ramakrishnapillai V; Ajayaghosh, Ayyappanpillai

    2016-11-18

    Accurate monitoring of pH variations inside cells is important for the early diagnosis of diseases such as cancer. Even though a variety of different pH sensors are available, construction of a custom-made sensor array for measuring minute variations in a narrow biological pH window, using easily available constituents, is a challenge. Here we report two-component hybrid sensors derived from a protein and organic dye nanoparticles whose sensitivity range can be tuned by choosing different ratios of the components, to monitor the minute pH variations in a given system. The dye interacts noncovalently with the protein at lower pH and covalently at higher pH, triggering two distinguishable fluorescent signals at 700 and 480 nm, respectively. The pH sensitivity region of the probe can be tuned for every unit of the pH window resulting in custom-made pH sensors. These narrow range tunable pH sensors have been used to monitor pH variations in HeLa cells using the fluorescence imaging technique.

  1. A contribution to laser range imaging technology

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Denney, Bradley S.

    1991-01-01

    The goal of the project was to develop a methodology for the fusion of Laser Range Imaging Device (LRID) and camera data. Our initial work led to the conclusion that none of the available LRIDs was adequate for this purpose, so we devoted the time and effort to developing a new LRID with several novel features that support the desired fusion objectives. In what follows, we describe the device developed and built under contract. The Laser Range Imaging Device (LRID) is an instrument that scans a scene with a laser and returns range and reflection-intensity data. Such a system would be extremely useful for scene analysis in industrial and space applications, and the LRID will eventually be implemented on board a mobile robot. The current system has several advantages over some commercially available systems. One improvement is the use of X-Y galvanometer scanning mirrors instead of the polygonal mirrors present in some systems; the X-Y scanning mirrors can be programmed to provide adjustable scanning regions. For each mirror, two controls are accessible by the computer: the first is the mirror position and the second is a zoom factor that modifies the amplitude of the position parameter. Another advantage of the LRID is the use of a visible low-power laser. Some commercial systems use a higher-intensity invisible laser, which raises safety concerns. With a low-power visible laser, not only can one see the beam and avoid direct eye contact, but the lower intensity also reduces the risk of damage to the eye, and no protective eyewear is required.

  2. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    International Nuclear Information System (INIS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G.V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams, and digital cameras, and they are available across a wide range of resolutions with excellent spectral and chromatic responses. To fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we explored the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, was mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1–6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor successfully detects the interactions of single protons. The sensor point-spread function (PSF) was qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, was mounted on a dedicated board as a high-resolution imager for X-ray imaging experiments with table-top generators. To ease and simplify data transfer and image acquisition, the system is controlled by a dedicated microprocessor board (DM3730 1 GHz SoC ARM Cortex-A8) running a modified LINUX kernel. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  3. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.

    Science.gov (United States)

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-11-01

    A CMOS image sensor-based implantable glucose sensor using an optical sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, the hydrogel, UV light-emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.

  4. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Chen Qu

    2017-09-01

    The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in the traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks based on two clues: both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the restriction over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus solving inadequate detail recovery effectively and alleviating color distortion. Moreover, the locally consistent MRF framework recovers details while maintaining better dehazing results, which effectively improves the quality of images captured by CMOS image sensors. Experimental results verified that the proposed method combines the advantages of detail recovery and color preservation.

  5. CMOS Image Sensors: Electronic Camera On A Chip

    Science.gov (United States)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  6. Retinal fundus imaging with a plenoptic sensor

    Science.gov (United States)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field, which requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in the depth localization of features and the need for frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is overlaid with an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and performance of the plenoptic fundus camera are detailed, and we conclude by showing in vivo data recorded with our device.

  7. SENSOR CORRECTION AND RADIOMETRIC CALIBRATION OF A 6-BAND MULTISPECTRAL IMAGING SENSOR FOR UAV REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    J. Kelcey

    2012-07-01

    The increased availability of unmanned aerial vehicles (UAVs) has resulted in their frequent adoption for a growing range of remote sensing tasks, including precision agriculture, vegetation surveying, and fine-scale topographic mapping. The development and utilisation of UAV platforms require broad technical skills covering the three major facets of remote sensing: data acquisition, data post-processing, and image analysis. In this study, UAV image data acquired by a miniature 6-band multispectral imaging sensor were corrected and calibrated using practical image-based post-processing techniques. Data correction techniques included dark-offset subtraction to reduce sensor noise, flat-field-derived per-pixel look-up tables to correct vignetting, and implementation of the Brown-Conrady model to correct lens distortion. Radiometric calibration was conducted with an image-based empirical line model using pseudo-invariant features (PIFs). Sensor corrections and radiometric calibration improve the quality of the data, aiding quantitative analysis and ensuring consistency with other calibrated datasets.
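    A minimal sketch of that correction chain, using synthetic frames and illustrative pseudo-invariant-feature values (the array shapes, digital numbers, and reflectances below are assumptions, not data from the study):

```python
import numpy as np

def correct_band(raw, dark, flat):
    """Dark-offset subtraction followed by a flat-field-derived
    per-pixel vignetting correction."""
    img = raw.astype(float) - dark  # remove the sensor's dark offset
    gain = flat.mean() / flat       # per-pixel look-up table from a flat-field frame
    return img * gain               # undo the vignetting fall-off

def empirical_line(img, dn_dark, dn_bright, ref_dark, ref_bright):
    """Empirical line model: map digital numbers to reflectance using
    two pseudo-invariant features (PIFs) of known reflectance."""
    slope = (ref_bright - ref_dark) / (dn_bright - dn_dark)
    return slope * (img - dn_dark) + ref_dark

# Synthetic 64x64 band with an assumed vignetting profile.
rng = np.random.default_rng(0)
raw = rng.integers(200, 4000, size=(64, 64))
dark = np.full((64, 64), 100.0)
flat = np.tile(np.linspace(0.7, 1.0, 64), (64, 1))  # brighter toward one edge

corrected = correct_band(raw, dark, flat)
reflectance = empirical_line(corrected, dn_dark=150.0, dn_bright=3500.0,
                             ref_dark=0.05, ref_bright=0.85)
```

    By construction, the empirical line maps the dark PIF's digital number to its known reflectance (0.05) and the bright PIF's to 0.85; everything else is interpolated or extrapolated linearly.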

  8. 3D CAPTURING PERFORMANCES OF LOW-COST RANGE SENSORS FOR MASS-MARKET APPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. Guidi

    2016-06-01

    Since the advent of the first Kinect as a motion controller for the Microsoft XBOX platform (November 2010), several similar active, low-cost range sensing devices have been introduced on the mass market for purposes including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and proximity sensing for automotive applications. Given their capability to generate a real-time stream of range images, however, these devices have also been used in some projects as general-purpose range devices, with performance that may be satisfactory for some applications. This paper explains the working principle of the various devices and analyzes their systematic and random errors to explore their applicability to standard 3D capturing problems. Five actual devices featuring three different technologies have been tested: (i) the Kinect V1 by Microsoft, the Structure Sensor by Occipital, and the Xtion PRO by ASUS, all based on different implementations of the Primesense sensor; (ii) the F200 by Intel/Creative, implementing the Realsense pattern projection technology; and (iii) the Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results first compares the devices and then identifies the range of applications for which such devices could actually work as a viable solution.

  9. Beam imaging sensor and method for using same

    Energy Technology Data Exchange (ETDEWEB)

    McAninch, Michael D.; Root, Jeffrey J.

    2017-01-03

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous, partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  10. Optical Sensor for Diverse Organic Vapors at ppm Concentration Ranges

    Directory of Open Access Journals (Sweden)

    Dora M. Paolucci

    2011-03-01

    A broadly responsive optical organic vapor sensor is described that responds to low concentrations of organic vapors without significant interference from water vapor. Responses to several classes of organic vapors are highlighted, and trends within classes are presented. The relationship between molecular properties (vapor pressure, boiling point, polarizability, and refractive index) and sensor response is discussed.

  11. Real-time image processing of TOF range images using a reconfigurable processor system

    Science.gov (United States)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    Over the last few years, time-of-flight sensors have had a significant impact on research in machine vision. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and of camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets such as consumer electronics, multimedia, digital photography, robotics, and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper, a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
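    As a rough illustration of the 4-phase-shift scheme (a sketch under assumed ideal, noise-free sampling, not the paper's hardware implementation), the phase and range follow from four samples of the correlation waveform and a single arctangent, which is exactly the operation the paper accelerates in hardware:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_range(a0, a1, a2, a3, f_mod):
    """Recover (phase, distance) from four correlation samples taken at
    0, 90, 180, and 270 degrees of the modulation period."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)  # the time-critical arctangent
    distance = C * phase / (4 * math.pi * f_mod)          # halve the round-trip path
    return phase, distance

# Synthetic example: ideal samples for a target at 2.5 m, 20 MHz modulation.
f_mod = 20e6
true_d = 2.5
true_phase = 4 * math.pi * f_mod * true_d / C
samples = [math.cos(k * math.pi / 2 - true_phase) for k in range(4)]
phase, d = tof_range(*samples, f_mod)
```

    With ideal samples the round trip reproduces the input range exactly; note that at 20 MHz the maximum unambiguous range is C / (2 · f_mod) = 7.5 m.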

  12. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. We developed a visual sensor system for robotic applications that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which require a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithm is composed of two parts: one in which the passive stereo vision helps the active vision, and another in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part applies an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system was implemented on a robotic system and the proposed algorithms were applied. A series of experimental tests was performed for a variety of configurations of robot and environment, and the performance of the sensor system is discussed in detail.

  13. Extended-Range Passive RFID and Sensor Tags

    Science.gov (United States)

    Fink, Patrick W.; Kennedy, Timothy F.; Lin, Gregory Y.; Barton, Richard

    2012-01-01

    Extended-range passive radio-frequency identification (RFID) tags and related sensor tags are undergoing development. A tag of this type incorporates a retroreflective antenna array, so that it reflects significantly more signal power back toward an interrogating radio transceiver than does a comparable passive RFID tag of prior design, which does not incorporate a retroreflective antenna array. Therefore, for a given amount of power radiated by the transmitter in the interrogating transceiver, a tag of this type can be interrogated at a distance greater than that of the comparable passive RFID or sensor tag of prior design. The retroreflective antenna array is, more specifically, a Van Atta array, named after its inventor and first published in a patent issued in 1959. In its simplest form, a Van Atta array comprises two antenna elements connected by a transmission line so that the signal received by each antenna element is reradiated by the other antenna element (see Figure 1). The phase relationships among the received and reradiated signals are such as to produce constructive interference of the reradiated signals; that is, to concentrate the reradiated signal power in a direction back toward the source. Hence, an RFID tag equipped with a Van Atta antenna array automatically tracks the interrogating transceiver. The effective gain of a Van Atta array is the same as that of a traditional phased antenna array having the same number of antenna elements. Additional pairs of antenna elements connected by equal-length transmission lines can be incorporated into a Van Atta array to increase its directionality. Like some RFID tags here-to-fore commercially available, an RFID or sensor tag of the present developmental type includes one-port surface-acoustic-wave (SAW) devices. 
In simplified terms, the mode of operation of a basic one-port SAW device as used heretofore in an RFID device is the following: An interrogating radio signal is converted, at an input end, from

  14. Characteristics of different frequency ranges in scanning electron microscope images

    International Nuclear Information System (INIS)

    Sim, K. S.; Nia, M. E.; Tan, T. L.; Tso, C. P.; Ee, C. S.

    2015-01-01

    We demonstrate a new approach to characterize the frequency range in general scanning electron microscope (SEM) images. First, pure frequency images are generated from low frequency to high frequency, and then the magnification of each type of frequency image is implemented. By comparing the edge percentage of the SEM image to the self-generated frequency images, we can define the frequency ranges of the SEM images. Characterization of the frequency ranges of SEM images benefits further processing and analysis of those images, such as noise filtering and contrast enhancement.
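    The comparison step can be sketched as follows; the sinusoidal reference images, gradient threshold, and frequency set below are illustrative assumptions, not the authors' actual procedure:

```python
import numpy as np

def edge_percentage(img, thresh=0.1):
    """Fraction of horizontal-gradient pixels exceeding a threshold."""
    gx = np.abs(np.diff(img, axis=1))
    return float((gx > thresh).mean())

size = 128
x = np.arange(size)
# Self-generated pure-frequency images, low to high spatial frequency.
refs = {f: np.tile(np.sin(2 * np.pi * f * x / size), (size, 1))
        for f in (2, 8, 32)}

sem_img = refs[8]  # stand-in for a real SEM image
ep = edge_percentage(sem_img)
# Assign the image to the frequency whose reference edge percentage matches best.
closest = min(refs, key=lambda f: abs(edge_percentage(refs[f]) - ep))
```

    A higher spatial frequency produces steeper gradients and thus a larger edge percentage, which is what makes the edge percentage usable as a frequency-range signature.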

  16. ISAR imaging using the instantaneous range instantaneous Doppler method

    CSIR Research Space (South Africa)

    Wazna, TM

    2015-10-01

    In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  17. Hydrogen peroxide sensor: Uniformly decorated silver nanoparticles on polypyrrole for wide detection range

    International Nuclear Information System (INIS)

    Nia, Pooria Moozarm; Meng, Woi Pei; Alias, Y.

    2015-01-01

    Graphical abstract: - Highlights: • Electrochemical methods were used to deposit both the silver nanoparticles and the polypyrrole. • Silver nanoparticles (25 nm) were uniformly decorated on electrodeposited polypyrrole. • The Ag(NH3)2OH precursor showed better electrochemical performance than AgNO3. • The sensor showed superior performance toward H2O2. - Abstract: Electrochemically synthesized polypyrrole (PPy) decorated with silver nanoparticles (AgNPs) was prepared and used as a nonenzymatic sensor for hydrogen peroxide (H2O2) detection. The polypyrrole was fabricated through electrodeposition, and the silver nanoparticles were deposited on the polypyrrole by the same technique. Field-emission scanning electron microscopy (FESEM) images showed that the electrodeposited AgNPs were aligned uniformly along the PPy, with a mean particle size of around 25 nm. The electrocatalytic activity of AgNPs-PPy-GCE toward H2O2 was studied using chronoamperometry and cyclic voltammetry. The first linear section was in the range of 0.1–5 mM with a limit of detection of 0.115 μmol l−1, and the second linear section extended to 120 mM with a limit of detection of 0.256 μmol l−1 (S/N = 3). Moreover, the sensor presented excellent stability, selectivity, repeatability, and reproducibility. These performances make AgNPs-PPy/GCE an ideal nonenzymatic H2O2 sensor.

  18. Development of integrated semiconductor optical sensors for functional brain imaging

    Science.gov (United States)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  19. Characterization of the range effect in synthetic aperture radar images of concrete specimens for width estimation

    Science.gov (United States)

    Alzeyadi, Ahmed; Yu, Tzuyang

    2018-03-01

    Nondestructive evaluation (NDE) is an indispensable approach to the sustainability of critical civil infrastructure systems such as bridges and buildings. Recently, microwave/radar sensors have been widely used to assess the condition of concrete structures. Among the imaging techniques available to microwave/radar sensors, synthetic aperture radar (SAR) imaging enables researchers to conduct surface and subsurface inspection of concrete structures in the range/cross-range representation of SAR images. The objective of this paper is to investigate the range effect for concrete specimens in SAR images at various ranges (15 cm, 50 cm, 75 cm, 100 cm, and 200 cm). One 30 cm × 30 cm × 5 cm concrete panel specimen (water-to-cement ratio = 0.45) was manufactured and scanned by a 10 GHz SAR imaging radar sensor inside an anechoic chamber. Scatterers in the SAR images representing two corners of the concrete panel were used to estimate the width of the panel. It was found that the range-dependent pattern of the corner scatterers can be used to predict the width of concrete panels, and that the maximum SAR amplitude decreases as the range increases. An empirical model was also proposed for width estimation of concrete panels.
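    The width-estimation idea can be sketched in a few lines: the two corners appear as the strongest returns along the cross-range axis, and their separation, scaled by the pixel spacing, gives the panel width. The profile and pixel spacing below are synthetic assumptions, not the paper's measurements:

```python
import numpy as np

pixel_m = 0.01                    # assumed cross-range pixel spacing (m)
rng = np.random.default_rng(1)
profile = 0.05 * rng.random(100)  # low-amplitude clutter floor
profile[20] += 1.0                # corner scatterer 1
profile[50] += 0.9                # corner scatterer 2 (30 pixels away -> 30 cm panel)

peaks = np.argsort(profile)[-2:]  # indices of the two strongest returns
width = abs(int(peaks[0]) - int(peaks[1])) * pixel_m
```

    In real SAR images the corner returns broaden and weaken with range, which is why the paper fits an empirical range-dependent model rather than reading the width directly from two peaks.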

  20. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    Science.gov (United States)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  1. Quantitative Analysis of Range Image Patches by NEB Method

    Directory of Open Access Journals (Sweden)

    Wang Wen

    2017-01-01

    In this paper, we analyze sampled high-dimensional data from a range image database using the NEB method. We select a large random sample of log-valued, high-contrast, normalized 8×8 range image patches from the Brown database, construct a density estimator, and build 1-dimensional cell complexes from the range image patch data. We identify topological properties of 8×8 range image patches and show that there exist two types of subsets of 8×8 range image patches that can be modelled as a circle.

  2. Scannerless laser range imaging using loss modulation

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V [Albuquerque, NM

    2011-08-09

    A scannerless 3-D imaging apparatus is disclosed which utilizes an amplitude-modulated cw light source to illuminate a field of view containing a target of interest. Backscattered light from the target is passed through one or more loss modulators which are modulated at the same frequency as the light source, but with a phase delay δ which can be fixed or variable. The backscattered light is demodulated by the loss modulator and detected with a CCD, CMOS or focal plane array (FPA) detector to construct a 3-D image of the target. The scannerless 3-D imaging apparatus, which can operate in the eye-safe wavelength region of 1.4–1.7 μm and which can be constructed as a flash LADAR, has applications in vehicle collision avoidance, autonomous rendezvous and docking, robotic vision, industrial inspection and measurement, 3-D cameras, and facial recognition.

  3. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still places a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
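
The speckle-averaging argument rests on a standard statistical fact: the mean of N uncorrelated estimates has its standard deviation reduced by √N. A small Monte-Carlo sketch (illustrative only; the function name and Gaussian noise model are assumptions, not the paper's speckle model):

```python
import random
import statistics

def centroid_error_std(n_profiles, n_trials=2000, sigma=1.0, seed=0):
    """Std. dev. of the averaged spot-centroid estimate when each of
    n_profiles uncorrelated speckle realizations contributes noise of
    std sigma; statistically this scales as sigma / sqrt(n_profiles)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_profiles))
        for _ in range(n_trials)
    ]
    return statistics.pstdev(means)

e1 = centroid_error_std(1)    # roughly sigma
e16 = centroid_error_std(16)  # roughly sigma / 4
ratio = e1 / e16              # close to sqrt(16) = 4
```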

  4. Passive Wireless Temperature Sensors with Enhanced Sensitivity and Range, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal describes the development of passive surface acoustic wave (SAW) temperature sensors with enhanced sensitivity and detection range for NASA application...

  5. Enhanced Strain Measurement Range of an FBG Sensor Embedded in Seven-Wire Steel Strands.

    Science.gov (United States)

    Kim, Jae-Min; Kim, Chul-Min; Choi, Song-Yi; Lee, Bang Yeon

    2017-07-18

    FBG sensors offer many advantages, such as a lack of sensitivity to electromagnetic waves, small size, high durability, and high sensitivity. However, their maximum strain measurement range is lower than the yield strain (about 1.0%) of steel strands when the sensors are embedded in them. This study proposes a new FBG sensing technique in which an FBG sensor is recoated with polyimide and protected by a polyimide tube in an effort to enhance the maximum strain measurement range of FBG sensors embedded in strands. The validation test results showed that the proposed FBG sensing technique has a maximum strain measurement range of 1.73% on average, which is 1.73 times higher than the yield strain of the strands. It was confirmed that recoating the FBG sensor with polyimide and protecting the FBG sensor using a polyimide tube could effectively enhance the maximum strain measurement range of FBG sensors embedded in strands.
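
For orientation, strain recovery from an FBG follows the usual relation Δλ/λ0 = (1 − p_e)·ε. A minimal sketch with an assumed photo-elastic coefficient p_e ≈ 0.22 for silica; the helper name and numbers are illustrative, not from the paper:

```python
def strain_from_bragg_shift(delta_lambda_nm, lambda0_nm, p_e=0.22):
    """Axial strain (dimensionless) from a Bragg wavelength shift,
    ignoring temperature cross-sensitivity."""
    return delta_lambda_nm / (lambda0_nm * (1.0 - p_e))

# A shift of about 21 nm at a 1550 nm Bragg wavelength corresponds to
# roughly the 1.73% strain range reported above.
eps = strain_from_bragg_shift(20.9, 1550.0)
```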

  6. An ultrasensitive strain sensor with a wide strain range based on graphene armour scales.

    Science.gov (United States)

    Yang, Yi-Fan; Tao, Lu-Qi; Pang, Yu; Tian, He; Ju, Zhen-Yi; Wu, Xiao-Ming; Yang, Yi; Ren, Tian-Ling

    2018-06-12

    An ultrasensitive strain sensor with a wide strain range based on graphene armour scales is demonstrated in this paper. The sensor shows an ultra-high gauge factor (GF, up to 1054) and a wide strain range (ε = 26%), both of which present an advantage compared to most other flexible sensors. Moreover, the sensor is developed by a simple fabrication process. Due to the excellent performance, this strain sensor can meet the demands of subtle, large and complex human motion monitoring, which indicates its tremendous application potential in health monitoring, mechanical control, real-time motion monitoring and so on.
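
The quoted gauge factor follows the standard definition GF = (ΔR/R0)/ε; a trivial sketch (the function name and numbers are illustrative):

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor GF = (dR/R0) / strain: relative resistance change
    per unit applied strain."""
    return (delta_r / r0) / strain

# Illustrative: a 105.4-ohm rise from a 100-ohm baseline at 0.1% strain
# corresponds to GF = 1054, the order of magnitude reported above.
gf = gauge_factor(105.4, 100.0, 0.001)
```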

  7. Automatic Generation of Wide Dynamic Range Image without Pseudo-Edge Using Integration of Multi-Steps Exposure Images

    Science.gov (United States)

    Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi

    Digital cameras are advancing rapidly. However, a photographed image differs from the sight image perceived when the same scene is viewed with the naked eye: a photograph of a scene with a wide dynamic range suffers blown-out highlights and crushed blacks, which are largely absent from the perceived image. These artifacts are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS device, and the human visual system; the dynamic range of the shot image is narrower than that of the sight image. To address this problem, we propose an automatic method that decides an effective exposure range from the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we erase pseudo-edges using a process that blends exposure values. The result is a pseudo wide dynamic range image produced automatically.
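
The exposure-blending idea can be illustrated with a common hat-weighted fusion of differently exposed pixel values. This is a generic sketch, not the authors' edge-superposition method; all names and values are assumed:

```python
def hat_weight(z, z_min=0.0, z_max=255.0):
    """Triangular weight: trust mid-tone pixels, distrust values near
    blown-out highlights or crushed blacks."""
    mid = 0.5 * (z_min + z_max)
    return z - z_min if z <= mid else z_max - z

def fuse_pixel(values, exposures):
    """Fuse one pixel across exposures: each value is normalized by its
    exposure time, then blended with the hat weight."""
    num = sum(hat_weight(z) * (z / t) for z, t in zip(values, exposures))
    den = sum(hat_weight(z) for z in values)
    return num / den if den else 0.0

# The same scene point seen at 1/100 s and 1/25 s (illustrative values);
# a consistent scene yields a consistent relative radiance estimate.
radiance = fuse_pixel([40, 160], [0.01, 0.04])
```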

  8. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for registering the photon count number. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in the pixels. The resulting low photon collection efficiency substantially undermines any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register more than one million photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  9. Passive ranging using a filter-based non-imaging method based on oxygen absorption.

    Science.gov (United States)

    Yu, Hao; Liu, Bingqi; Yan, Zongqun; Zhang, Yu

    2017-10-01

    To solve the problem of poor real-time measurement caused by a hyperspectral imaging system and to simplify the design of passive ranging technology based on the oxygen absorption spectrum, a filter-based non-imaging ranging method is proposed. In this method, three bandpass filters are used to obtain the source radiation intensities located in the oxygen absorption band near 762 nm and in the band's left and right non-absorption shoulders, and a photomultiplier tube is used as the non-imaging sensor of the passive ranging system. Range is estimated by comparing the calculated values of the band-average transmission due to oxygen absorption, τO2, against the predicted curve of τO2 versus range. The method is tested under short-range conditions. An accuracy of 6.5% is achieved with the designed experimental ranging system at a range of 400 m.
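
Comparing a measured band-average transmission against the predicted transmission-versus-range curve amounts to inverting a monotonic table. A minimal sketch with an illustrative (not measured) transmission table; a real curve would come from a radiative-transfer model:

```python
from bisect import bisect_left

# Predicted band-average O2 transmission vs. range (illustrative table).
RANGES_M = [100, 200, 300, 400, 500]
TAU_O2 = [0.95, 0.90, 0.86, 0.82, 0.79]  # monotonically decreasing

def range_from_tau(tau):
    """Linear interpolation of the inverted tau-vs-range curve."""
    taus = list(reversed(TAU_O2))   # ascending order for bisect
    rngs = list(reversed(RANGES_M))
    i = bisect_left(taus, tau)
    i = min(max(i, 1), len(taus) - 1)
    t0, t1 = taus[i - 1], taus[i]
    r0, r1 = rngs[i - 1], rngs[i]
    return r0 + (tau - t0) * (r1 - r0) / (t1 - t0)

est = range_from_tau(0.84)  # falls between the 300 m and 400 m entries
```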

  10. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    OpenAIRE

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques, and conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution.

  11. Micro-digital sun sensor: an imaging sensor for space applications

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.; Büttgen, B.; Hakkesteegt, H.C.; Jasen, H.; Leijtens, J.A.P.

    2010-01-01

    The Micro-Digital Sun Sensor is an attitude sensor which senses the relative position of micro-satellites to the sun in space. It is composed of a solar cell power supply, an RF communication block and an imaging chip which is called APS+. The APS+ integrates a CMOS Active Pixel Sensor (APS) of 512×512 pixels.

  12. A Wildlife Monitoring System Based on Wireless Image Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junguo Zhang

    2014-10-01

    Full Text Available The survival and development of wildlife sustain the balance and stability of the entire ecosystem. Wildlife monitoring can provide a wealth of information, such as wildlife species, quantity, habits, quality of life and habitat conditions, to help researchers grasp the status and dynamics of wildlife resources, and to provide a basis for the effective protection, sustainable use, and scientific management of those resources. Wildlife monitoring is the foundation of wildlife protection and management. Wireless Sensor Network (WSN) technology has become one of the most popular technologies in the information field. With advances in CMOS image sensor technology, wireless sensor networks combined with image sensors, namely Wireless Image Sensor Network (WISN) technology, have emerged as an alternative in monitoring applications, and wildlife monitoring is one of the most promising. In this paper, the system architecture of a wildlife monitoring system based on wireless image sensor networks is presented to overcome the shortcomings of traditional monitoring methods. Specifically, key issues including the design of wireless image sensor nodes and the software process design have been studied and presented. A self-powered, rotatable wireless infrared image sensor node based on ARM and an aggregation node designed for large amounts of data were developed, and their corresponding software was designed. The proposed system is able to monitor wildlife accurately, automatically, and remotely in all weather conditions, which lays the foundation for applications of wireless image sensor networks in wildlife monitoring.

  13. Virtual View Image over Wireless Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    Full Text Available In general, visual sensors are applied to build virtual view images. As the number of visual sensors increases, the quantity and quality of the information improve. However, generating view images is a challenging task in a Wireless Visual Sensor Network environment due to energy restrictions, computational complexity, and bandwidth limitations. Hence this paper presents a new method of generating virtual view images from selected cameras on a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experimental results showed that this method could minimize the number of transmitted images while retaining sufficient information.

  14. Special Sensor Microwave Imager/Sounder (SSMIS) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  15. Hydrogen peroxide sensor: Uniformly decorated silver nanoparticles on polypyrrole for wide detection range

    Science.gov (United States)

    Nia, Pooria Moozarm; Meng, Woi Pei; Alias, Y.

    2015-12-01

    Electrochemically synthesized polypyrrole (PPy) decorated with silver nanoparticles (AgNPs) was prepared and used as a nonenzymatic sensor for hydrogen peroxide (H2O2) detection. Polypyrrole was fabricated through electrodeposition, while silver nanoparticles were deposited on polypyrrole by the same technique. The field emission scanning electron microscopy (FESEM) images showed that the electrodeposited AgNPs were aligned uniformly along the PPy and that the mean particle size of the AgNPs was around 25 nm. The electrocatalytic activity of AgNPs-PPy-GCE toward H2O2 was studied using chronoamperometry and cyclic voltammetry. The first linear section was in the range of 0.1-5 mM with a limit of detection of 0.115 μmol l-1, and the second linear section extended to 120 mM with a correlation factor of 0.256 μmol l-1 (S/N of 3). Moreover, the sensor presented excellent stability, selectivity, repeatability and reproducibility. These excellent performances make AgNPs-PPy/GCE an ideal nonenzymatic H2O2 sensor.

  16. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    Full Text Available In recent years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided with range information obtained from a ToF camera. Furthermore, the self-calibration method determines the appropriate integration time for the range camera so that the depth information is measured precisely.

  17. Ageing effects on image sensors due to terrestrial cosmic radiation

    NARCIS (Netherlands)

    Nampoothiri, G.G.; Horemans, M.L.R.; Theuwissen, A.J.P.

    2011-01-01

    We analyze the “ageing” effect on image sensors introduced by neutrons present in natural (terrestrial) cosmic environment. The results obtained at sea level are corroborated for the first time with accelerated neutron beam tests and for various image sensor operation conditions. The results reveal

  18. Transducer-based fiber Bragg grating high-temperature sensor with enhanced range and stability

    Science.gov (United States)

    Mamidi, Venkata Reddy; Kamineni, Srimannarayana; Ravinuthala, Lakshmi Narayana Sai Prasad; Tumu, Venkatappa Rao

    2017-09-01

    A fiber Bragg grating (FBG)-based high-temperature sensor with an enhanced temperature range and stability has been developed and tested. The sensor consists of an FBG and a mechanical transducer, which furnishes a linear temperature-dependent tensile strain on the FBG by means of the differential linear thermal expansion of two different ceramic materials. The designed sensor is tested over the range 20°C to 1160°C and is expected to measure up to 1500°C.

  19. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct this mismatch in tail behavior. Based on the fact that noise model mismatch results in denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
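
The mixture idea, in which most pixels follow one Poisson rate while a small fraction follow a larger one, lengthening the tail, can be sketched as follows. This is a toy two-component sampler, not the authors' fitted model; all rates and names are assumed:

```python
import math
import random

def poisson_mixture_sample(rng, lam_main, lam_tail, tail_prob=0.05):
    """One draw from a two-component Poisson mixture: most pixels follow
    lam_main, a small fraction follow the larger lam_tail, producing the
    heavy tail that a single Poisson model misses."""
    lam = lam_tail if rng.random() < tail_prob else lam_main
    # The random module has no Poisson sampler; use Knuth's method.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
samples = [poisson_mixture_sample(rng, 10.0, 40.0) for _ in range(5000)]
mean = sum(samples) / len(samples)  # expected near 0.95*10 + 0.05*40 = 11.5
```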

  20. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
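
Sensor-level compressed sensing replaces per-pixel readout with a small number of random linear measurements. A toy sketch with assumed sizes; the decoder here cheats by being told the sparse support, where a real system would run OMP or basis pursuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "image" block: 64 pixels, only 4 nonzero (sparsity k = 4).
n, m, k = 64, 24, 4
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k)

# Sensor-level CS: each of the m measurements is a random +/-1
# combination of pixel values, formed before any ADC/JPEG stage.
phi = rng.choice([-1.0, 1.0], size=(m, n))
y = phi @ x  # only m = 24 values leave the sensor instead of 64

# Toy decoder: with the support known, least squares recovers x exactly.
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(phi[:, support], y, rcond=None)[0]
err = float(np.linalg.norm(x_hat - x))
```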

  1. Wearable Wide-Range Strain Sensors Based on Ionic Liquids and Monitoring of Human Activities

    Directory of Open Access Journals (Sweden)

    Shao-Hui Zhang

    2017-11-01

    Full Text Available Wearable sensors for the detection of human activities have encouraged the development of highly elastic sensors. In particular, to capture both subtle and large-scale body motion, stretchable and wide-range strain sensors are highly desired, but remain a challenge. Herein, a highly stretchable and transparent strain sensor based on ionic liquids and an elastic polymer has been developed. The as-obtained sensor exhibits impressive stretchability with a wide strain range (from 0.1% to 400%, good bending properties and high sensitivity, with a gauge factor reaching 7.9. Importantly, the sensors show excellent biological compatibility and succeed in monitoring diverse human activities, ranging from complex large-scale multidimensional motions to subtle signals, including wrist, finger and elbow joint bending, finger touch, breath, speech, swallowing and the pulse wave.

  2. Hydrogen peroxide sensor: Uniformly decorated silver nanoparticles on polypyrrole for wide detection range

    Energy Technology Data Exchange (ETDEWEB)

    Nia, Pooria Moozarm, E-mail: pooriamn@yahoo.com; Meng, Woi Pei, E-mail: pmwoi@um.edu.my; Alias, Y., E-mail: yatimah70@um.edu.my

    2015-12-01

    Graphical abstract: - Highlights: • An electrochemical method was used for depositing silver nanoparticles and polypyrrole. • Silver nanoparticles (25 nm) were uniformly decorated on electrodeposited polypyrrole. • The Ag(NH3)2OH precursor showed better electrochemical performance than AgNO3. • The sensor showed superior performance toward H2O2. - Abstract: Electrochemically synthesized polypyrrole (PPy) decorated with silver nanoparticles (AgNPs) was prepared and used as a nonenzymatic sensor for hydrogen peroxide (H2O2) detection. Polypyrrole was fabricated through electrodeposition, while silver nanoparticles were deposited on polypyrrole by the same technique. The field emission scanning electron microscopy (FESEM) images showed that the electrodeposited AgNPs were aligned uniformly along the PPy and that the mean particle size of the AgNPs was around 25 nm. The electrocatalytic activity of AgNPs-PPy-GCE toward H2O2 was studied using chronoamperometry and cyclic voltammetry. The first linear section was in the range of 0.1-5 mM with a limit of detection of 0.115 μmol l-1, and the second linear section extended to 120 mM with a correlation factor of 0.256 μmol l-1 (S/N of 3). Moreover, the sensor presented excellent stability, selectivity, repeatability and reproducibility. These excellent performances make AgNPs-PPy/GCE an ideal nonenzymatic H2O2 sensor.

  3. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image detail, and better reflecting the real scene, light, and color information. Currently, methods that synthesize a high dynamic range image from sequences of differently exposed images cannot adapt to dynamic scenes: they fail to handle moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, image sequences with different exposures are captured with the camera array, a derivative optical-flow method based on color gradients is used to obtain the deviation between images, and the images are aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the inter-image deviation, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.

  4. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  5. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
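
The pipeline of storing the fringe-encoded image at reduced resolution and interpolating back up on demand can be sketched with block averaging and nearest-neighbor upscaling. This is a simplification of the paper's interpolation algorithm; the synthetic data and all names are assumed:

```python
import numpy as np

# Smooth synthetic "range image" encoding depth as intensity.
yy, xx = np.mgrid[0:64, 0:64]
depth = np.sin(xx / 20.0) + np.cos(yy / 15.0)

def compress(img):
    """Store at half resolution: 2x2 block average (4:1 data reduction)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decompress(small):
    """Recover the original resolution by nearest-neighbor interpolation;
    a production system would use bilinear or bicubic instead."""
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

recovered = decompress(compress(depth))
rms_err = float(np.sqrt(np.mean((recovered - depth) ** 2)))
ratio = depth.nbytes / compress(depth).nbytes  # 4:1 reduction
```

For smooth range data the reconstruction error stays small relative to the depth variation, which is the regime the paper exploits.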

  6. A time-resolved image sensor for tubeless streak cameras

    Science.gov (United States)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tubeless streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system because of its vacuum-tube structure. The proposed time-resolved imager realizes a streak camera with simple optics and no vacuum tube. The image sensor comprises DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows a short gating clock to be created and delivered to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented in a 0.11 μm CMOS image sensor technology. The array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.

  7. Imaging Sensor Flight and Test Equipment Software

    Science.gov (United States)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes.

  8. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  9. Collaborative Image Coding and Transmission over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Min Wu

    2007-01-01

    Full Text Available Imaging sensors can provide intuitive visual information for quick recognition and decision-making. However, they usually generate vast amounts of data. Therefore, processing and coding the image data collected in a sensor network for energy-efficient transmission poses a significant technical challenge. In particular, multiple sensors may collect similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape-matching method to coarsely register images and find the maximal overlap, exploiting the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the target regions and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images together with their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
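
The transmit-background-once scheme can be sketched as follows; a minimal illustration of background subtraction plus bounding-box transmission, where the threshold and all names are assumptions:

```python
import numpy as np

def detect_target_region(background, frame, thresh=10.0):
    """Lightweight background subtraction: pixels differing from the
    stored background by more than thresh belong to the target; only
    the bounding box of that region need be transmitted."""
    mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1  # y0, y1, x0, x1

def reconstruct(background, target_patch, box):
    """Monitoring-center side: paste the patch onto the shared background."""
    y0, y1, x0, x1 = box
    out = background.copy()
    out[y0:y1, x0:x1] = target_patch
    return out

bg = np.full((48, 64), 50, dtype=np.uint8)
frame = bg.copy()
frame[10:20, 30:40] = 200  # a target appears
box = detect_target_region(bg, frame)
patch = frame[box[0]:box[1], box[2]:box[3]]  # only this patch is sent
restored = reconstruct(bg, patch, box)
```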

  10. Third-generation imaging sensor system concepts

    Science.gov (United States)

    Reago, Donald A.; Horn, Stuart B.; Campbell, James, Jr.; Vollmerhausen, Richard H.

    1999-07-01

    Second generation forward looking infrared sensors, based on either parallel scanning, long wave (8 - 12 um) time delay and integration HgCdTe detectors or mid wave (3 - 5 um), medium format staring (640 X 480 pixels) InSb detectors, are being fielded. The science and technology community is now turning its attention toward the definition of a future third generation of FLIR sensors, based on emerging research and development efforts. Modeled third generation sensor performance demonstrates a significant improvement in performance over second generation, resulting in enhanced lethality and survivability on the future battlefield. In this paper we present the current thinking on what third generation sensors systems will be and the resulting requirements for third generation focal plane array detectors. Three classes of sensors have been identified. The high performance sensor will contain a megapixel or larger array with at least two colors. Higher operating temperatures will also be the goal here so that power and weight can be reduced. A high performance uncooled sensor is also envisioned that will perform somewhere between first and second generation cooled detectors, but at significantly lower cost, weight, and power. The final third generation sensor is a very low cost micro sensor. This sensor can open up a whole new IR market because of its small size, weight, and cost. Future unattended throwaway sensors, micro UAVs, and helmet mounted IR cameras will be the result of this new class.

  11. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available The swipe sensor is one of many biometric authentication sensor types widely applied to embedded devices. The sensor produces an overlap in every pixel block of the image, so the picture requires a reconstruction process before the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing capability. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computation speed can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
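
The speed-up claimed above comes from not testing every possible block shift: the shift predicted from previous blocks seeds a small search window. A minimal sketch of that idea (function names, the SAD error metric, and the search radius are illustrative assumptions, not the paper's exact predictor):

```python
import numpy as np

def block_shift(prev_block, next_block, predicted, search=2):
    """Estimate the row shift between consecutive swipe-sensor blocks.
    Rather than trying every shift (the conventional, expensive approach),
    only shifts near the prediction from earlier blocks are evaluated,
    using mean absolute difference over the overlapping rows."""
    rows = prev_block.shape[0]
    best, best_err = predicted, np.inf
    for s in range(max(1, predicted - search), min(rows, predicted + search + 1)):
        overlap_err = np.abs(prev_block[s:].astype(int)
                             - next_block[:rows - s].astype(int)).mean()
        if overlap_err < best_err:
            best, best_err = s, overlap_err
    return best
```

With 8-row blocks, the search drops from 7 candidate shifts to at most `2 * search + 1`, which is the essence of the computational saving.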

  12. Novel birefringence interrogation for Sagnac loop interferometer sensor with unlimited linear measurement range.

    Science.gov (United States)

    He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan

    2017-03-20

    A novel demodulation method for Sagnac loop interferometer based sensors has been proposed and demonstrated, unwrapping the phase changes through birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer has been used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range have been accomplished, with a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer based sensors, and it has an unlimited linear measurement range, outperforming the conventional demodulation method based on peak/dip tracing. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error is discussed in this work. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and might greatly enhance the availability of this type of sensor in practical applications.
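
The key point of the record above is that tracking the interferometer phase, rather than a single peak/dip wavelength, removes the one-fringe-period measurement limit once the wrapped phase is unwrapped. A minimal numerical illustration of that unwrapping step (not the paper's birefringence algorithm):

```python
import numpy as np

# A monotonic phase drift spanning three full fringe cycles, as the
# measurand (e.g. temperature) ramps up.
true_phase = np.linspace(0.0, 6 * np.pi, 50)

# What a method confined to one spectral period effectively observes:
# the phase folded into (-pi, pi], ambiguous beyond one cycle.
wrapped = np.angle(np.exp(1j * true_phase))

# Unwrapping restores a continuous phase record with no range limit,
# provided successive samples differ by less than pi.
unwrapped = np.unwrap(wrapped)
```

The sampling condition in the last comment is why the wavelength sampling interval matters for the demodulation error, as the abstract notes.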

  13. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  14. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies. After a review of state-of-the-art image fusion techniques,

  15. Expanding the dynamic measurement range for polymeric nanoparticle pH sensors

    DEFF Research Database (Denmark)

    Sun, Honghao; Almdal, Kristoffer; Andresen, Thomas Lars

    2011-01-01

    Conventional optical nanoparticle pH sensors that are designed for ratiometric measurements in cells have been based on utilizing one sensor fluorophore and one reference fluorophore in each nanoparticle, which results in a relatively narrow dynamic measurement range. This results in substantial...

  16. A low-power CMOS integrated sensor for CO2 detection in the percentage range

    NARCIS (Netherlands)

    Humbert, A.; Tuerlings, B.J.; Hoofman, R.J.O.M.; Tan, Z.; Gravesteijn, D.J.; Pertijs, M.A.P.; Bastiaansen, C.W.M.; Soccol, D.

    2013-01-01

    Within the Catrene project PASTEUR, a low-cost, low-power capacitive carbon dioxide sensor has been developed for tracking CO2 concentration in the percentage range. This paper describes this sensor, which operates at room temperature where it exhibits short response times as well as reversible

  17. Fully wireless pressure sensor based on endoscopy images

    Science.gov (United States)

    Maeda, Yusaku; Mori, Hirohito; Nakagawa, Tomoaki; Takao, Hidekuni

    2018-04-01

    In this paper, the result of developing a fully wireless pressure sensor based on endoscopy images for endoscopic surgery is reported for the first time. The sensor device has structural color with a nm-scale narrow gap, and the gap is changed by air pressure. The structural color of the sensor is acquired from camera images, so pressure detection can be realized with existing endoscope configurations only. The inner air pressure of the human body should be measured under flexible-endoscope operation using the sensor. Air pressure monitoring has two important purposes. The first is to quantitatively measure tumor size under a constant air pressure for treatment selection. The second is to prevent the endangerment of a patient due to over-transmission of air. The developed sensor was evaluated, and the detection principle based only on endoscopy images has been successfully demonstrated.

  18. SEGMENTATION AND QUALITY ANALYSIS OF LONG RANGE CAPTURED IRIS IMAGE

    Directory of Open Access Journals (Sweden)

    Anand Deshpande

    2016-05-01

    Full Text Available Iris segmentation plays a major role in an iris recognition system to increase the performance of the system. This paper proposes a novel method for segmentation of iris images to extract the iris part of a long range captured eye image, and an approach to select the best iris frame from the iris polar image sequences by analyzing the quality of the iris polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long range captured iris image sequences. The proposed segmentation method is compared with Hough transform based segmentation, and it has been determined that the proposed method gives higher segmentation accuracy than the Hough transform.
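
A frequency-based frame-quality score of the kind described above can be sketched as the fraction of spectral energy above a radial frequency cutoff: sharp, in-focus polar images retain more high-frequency content than blurred ones. The cutoff value is an assumed parameter, not taken from the paper.

```python
import numpy as np

def hf_energy_ratio(img, cutoff=0.25):
    """Quality score: fraction of the image's spectral power above a
    normalized radial frequency cutoff (higher = sharper frame)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radius from the spectrum center, 0 at DC, ~1 at corners
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    return power[r > cutoff].sum() / power.sum()

def best_frame(frames):
    """Pick the polar-image frame with the highest quality score."""
    return max(frames, key=hf_energy_ratio)
```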

  19. Heterodyne range imaging as an alternative to photogrammetry

    Science.gov (United States)

    Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard

    2007-01-01

    Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.

  20. Sensor Control And Film Annotation For Long Range, Standoff Reconnaissance

    Science.gov (United States)

    Schmidt, Thomas G.; Peters, Owen L.; Post, Lawrence H.

    1984-12-01

    This paper describes a Reconnaissance Data Annotation System that incorporates off-the-shelf technology and system designs providing a high degree of adaptability and interoperability to satisfy future reconnaissance data requirements. The history of data annotation for reconnaissance is reviewed in order to provide the base from which future developments can be assessed and technical risks minimized. The system described will accommodate new developments in recording head assemblies and the incorporation of advanced cameras of both the film and electro-optical type. Use of microprocessor control and a digital bus interface form the central design philosophy. For long range, high altitude, standoff missions, the Data Annotation System computes the projected latitude and longitude of the central target position from aircraft position and attitude. This complements the use of longer ranges and higher altitudes for reconnaissance missions.

  1. A CMOS image sensor with row and column profiling means

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.; Wang, X.; Leijtens, J.A.P.; Hakkesteegt, H.; Jansen, H.

    2008-01-01

    This paper describes the implementation and first measurement results of a new way to obtain row and column profile data from a CMOS Image Sensor, which is developed for a micro-Digital Sun Sensor (μDSS). The basic profiling action is achieved by the pixels with p-type MOS transistors which realize

  2. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    Science.gov (United States)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  3. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal such as visible, infrared or ultraviolet light, an increase of the data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
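
The per-snapshot decoding described in this record (one data frame per image, once the LED toggle rate matches the sensor frame rate) reduces to a simple bit mapping. This sketch assumes the Sync.-data steps, transmitter-area detection and image rectification, have already been done; names and the threshold are illustrative.

```python
import numpy as np

def encode_frame(bits, shape):
    """Drive the LED array for one snapshot: one bit per LED (1 = on)."""
    return np.array(bits, dtype=np.uint8).reshape(shape)

def decode_frame(snapshot, thresh=0.5):
    """Recover the bits from the (already detected and rectified) LED-array
    region of a single image-sensor snapshot by thresholding intensities."""
    return (np.asarray(snapshot) > thresh).astype(np.uint8).ravel().tolist()
```

With an N-LED array and a camera running at F frames per second, this single-wavelength scheme carries N x F bits per second; the multi-spectral LEDs mentioned above multiply that rate per emission band.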

  4. High Resolution and Large Dynamic Range Resonant Pressure Sensor Based on Q-Factor Measurement

    Science.gov (United States)

    Gutierrez, Roman C. (Inventor); Stell, Christopher B. (Inventor); Tang, Tony K. (Inventor); Vorperian, Vatche (Inventor); Wilcox, Jaroslava (Inventor); Shcheglov, Kirill (Inventor); Kaiser, William J. (Inventor)

    2000-01-01

    A pressure sensor has a high degree of accuracy over a wide range of pressures. Using a pressure sensor relying upon resonant oscillations to determine pressure, a driving circuit drives such a pressure sensor at resonance and tracks resonant frequency and amplitude shifts with changes in pressure. Pressure changes affect the Q-factor of the resonating portion of the pressure sensor. Such Q-factor changes are detected by the driving/sensing circuit which in turn tracks the changes in resonant frequency to maintain the pressure sensor at resonance. Changes in the Q-factor are reflected in changes of amplitude of the resonating pressure sensor. In response, upon sensing the changes in the amplitude, the driving circuit changes the force or strength of the electrostatic driving signal to maintain the resonator at constant amplitude. The amplitude of the driving signal becomes a direct measure of the changes in pressure, as the operating characteristics of the resonator give rise to a linear response curve for the amplitude of the driving signal. Pressure change resolution is on the order of 10(exp -6) torr over a range spanning from 7,600 torr to 10(exp -6) torr. No temperature compensation for the pressure sensor of the present invention is foreseen. Power requirements for the pressure sensor are generally minimal due to the low-loss mechanical design of the resonating pressure sensor and the simple control electronics.

  5. Near-IR Two-Photon Fluorescent Sensor for K(+) Imaging in Live Cells.

    Science.gov (United States)

    Sui, Binglin; Yue, Xiling; Kim, Bosung; Belfield, Kevin D

    2015-08-19

    A new two-photon excited fluorescent K(+) sensor is reported. The sensor comprises three moieties, a highly selective K(+) chelator as the K(+) recognition unit, a boron-dipyrromethene (BODIPY) derivative modified with phenylethynyl groups as the fluorophore, and two polyethylene glycol chains to afford water solubility. The sensor displays very high selectivity (>52-fold) in detecting K(+) over other physiological metal cations. Upon binding K(+), the sensor switches from nonfluorescent to highly fluorescent, emitting red to near-IR (NIR) fluorescence. The sensor exhibited a good two-photon absorption cross section, 500 GM at 940 nm. Moreover, it is not sensitive to pH in the physiological pH range. Time-dependent cell imaging studies via both one- and two-photon fluorescence microscopy demonstrate that the sensor is suitable for dynamic K(+) sensing in living cells.

  6. A Wide Spectral Range Reflectance and Luminescence Imaging System

    Directory of Open Access Journals (Sweden)

    Tapani Hirvonen

    2013-10-01

    Full Text Available In this study, we introduce a wide spectral range (200–2500 nm) imaging system with a 250 μm minimum spatial resolution, which can be freely modified for a wide range of resolutions and measurement geometries. The system has been tested for reflectance and luminescence measurements, but can also be customized for transmittance measurements. This study includes the performance results of the developed system, as well as examples of spectral images, and relates the system to existing systems and methods. The developed wide-range spectral imaging system is highly customizable and has great potential in many practical applications.

  7. SAW passive wireless sensor-RFID tags with enhanced range, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal describes the development of passive wireless surface acoustic wave (SAW) RFID sensor-tags with enhanced range for remote monitoring of large groups of...

  8. X-ray detectors based on image sensors

    International Nuclear Information System (INIS)

    Costa, A.P.R.

    1983-01-01

    X-ray detectors based on image sensors are described and a comparison is made between the advantages and the disadvantages of such a kind of detectors with the position sensitive detectors. (L.C.) [pt

  9. Quality Factor Effect on the Wireless Range of Microstrip Patch Antenna Strain Sensors

    Science.gov (United States)

    Daliri, Ali; Galehdar, Amir; Rowe, Wayne S. T.; John, Sabu; Wang, Chun H.; Ghorbani, Kamran

    2014-01-01

    Recently introduced passive wireless strain sensors based on microstrip patch antennas have shown great potential for reliable health and usage monitoring in aerospace and civil industries. However, the wireless interrogation range of these sensors is limited to a few centimeters, which restricts their practical application. This paper presents an investigation on the effect of circular microstrip patch antenna (CMPA) design on the quality factor and the maximum practical wireless reading range of the sensor. The results reveal that by using appropriate substrate materials the interrogation distance of the CMPA sensor can be increased four-fold, from the previously reported 5 to 20 cm, thus considerably improving the viability of this type of wireless sensor for strain measurement and damage detection. PMID:24451457

  10. Quality Factor Effect on the Wireless Range of Microstrip Patch Antenna Strain Sensors

    Directory of Open Access Journals (Sweden)

    Ali Daliri

    2014-01-01

    Full Text Available Recently introduced passive wireless strain sensors based on microstrip patch antennas have shown great potential for reliable health and usage monitoring in aerospace and civil industries. However, the wireless interrogation range of these sensors is limited to a few centimeters, which restricts their practical application. This paper presents an investigation on the effect of circular microstrip patch antenna (CMPA) design on the quality factor and the maximum practical wireless reading range of the sensor. The results reveal that by using appropriate substrate materials the interrogation distance of the CMPA sensor can be increased four-fold, from the previously reported 5 to 20 cm, thus considerably improving the viability of this type of wireless sensor for strain measurement and damage detection.

  11. Integrated imaging sensor systems with CMOS active pixel sensor technology

    Science.gov (United States)

    Yang, G.; Cunningham, T.; Ortiz, M.; Heynssens, J.; Sun, C.; Hancock, B.; Seshadri, S.; Wrigley, C.; McCarty, K.; Pain, B.

    2002-01-01

    This paper discusses common approaches to CMOS APS technology, as well as specific results on the five-wire programmable digital camera-on-a-chip developed at JPL. The paper also reports recent research in the design, operation, and performance of APS imagers for several imager applications.

  12. Active Sensor for Microwave Tissue Imaging with Bias-Switched Arrays.

    Science.gov (United States)

    Foroutan, Farzad; Nikolova, Natalia K

    2018-05-06

    A prototype of a bias-switched active sensor was developed and measured to establish the achievable dynamic range in a new generation of active arrays for microwave tissue imaging. The sensor integrates a printed slot antenna, a low-noise amplifier (LNA) and an active mixer in a single unit, which is sufficiently small to enable inter-sensor separation distance as small as 12 mm. The sensor’s input covers the bandwidth from 3 GHz to 7.5 GHz. Its output intermediate frequency (IF) is 30 MHz. The sensor is controlled by a simple bias-switching circuit, which switches ON and OFF the bias of the LNA and the mixer simultaneously. It was demonstrated experimentally that the dynamic range of the sensor, as determined by its ON and OFF states, is 109 dB and 118 dB at resolution bandwidths of 1 kHz and 100 Hz, respectively.

  13. Extended Special Sensor Microwave Imager (SSM/I) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  14. Short-range/Long-range Integrated Target (SLIT) for Video Guidance Sensor Rendezvous and Docking

    Science.gov (United States)

    Roe, Fred D. (Inventor); Bryan, Thomas C. (Inventor)

    2009-01-01

    A laser target reflector assembly for mounting upon spacecraft having a long-range reflector array formed from a plurality of unfiltered light reflectors embedded in an array pattern upon a hemispherical reflector disposed upon a mounting plate. The reflector assembly also includes a short-range reflector array positioned upon the mounting body proximate to the long-range reflector array. The short-range reflector array includes three filtered light reflectors positioned upon extensions from the mounting body. The three filtered light reflectors retro-reflect substantially all incident light rays that are transmissive by their monochromatic filters and received by the three filtered light reflectors. In one embodiment the short-range reflector array is embedded within the hemispherical reflector.

  15. Unsynchronized scanning with a low-cost laser range finder for real-time range imaging

    Science.gov (United States)

    Hatipoglu, Isa; Nakhmani, Arie

    2017-06-01

    Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, and reverse engineering. One of the most popular range-measuring technologies is the laser scanner, due to its several advantages: long range, high precision, real-time measurement capability, and no dependence on lighting conditions. However, laser scanners are very costly, which prevents their widespread use. Thanks to the latest developments in technology, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF with a scanning mechanism that steers the laser beam across additional dimensions makes it possible to capture a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to decrease the scanning period and reduce the vibrations caused by stop-scan motion in synchronized scanning. Moreover, we developed an algorithm for alignment of the unsynchronized raw data and propose a range image post-processing framework. The proposed technique enables a range imaging system for a fraction of the price of its counterparts. The results prove that the proposed method can fulfill the need for low-cost laser scanning of static environments, since the most significant limitation of the method is its scanning period: about 2 minutes for 55,000 range points (a 250 x 220 image), compared with around 4 minutes for the same image with synchronized scanning. Once faster, longer-range, and narrower-beam LRFs are available, the methods proposed in this work can produce better results.

  16. AN INDUCTION SENSOR FOR MEASURING CURRENTS OF NANOSECOND RANGE

    Directory of Open Access Journals (Sweden)

    S. P. Shalamov

    2016-11-01

    Full Text Available Purpose. A current meter based on the principle of electromagnetic induction is designed to register the current flowing in a lightning rod. The aim of the article is to describe a way of increasing the sensitivity of the converters by means of their series connection. Methodology. The recorded currents are in the nanosecond range. Compared with other methods, meters based on the principle of electromagnetic induction have several advantages, such as simplicity of construction, reliability, low cost, no need for a power source, and relatively high sensitivity. Such a meter is necessary because in some cases a shunt cannot be used. The transient properties of a meter are determined by the number of turns and the integration constant; its sensitivity is determined by the number of turns, the coil cross-sectional area, the core material and the integration constant. For measuring magnetic field pulses with rise times of 5 ns to 50 ns, a meter has from 5 to 15 turns, so its sensitivity is low. When the number of turns is increased, the output signal increases, but so does the rise time. Previously described dependencies, based on a generally accepted and widely known equivalent circuit, were used to select the main parameters of the converter. The experience of previously created pulsed magnetic field meters, for measuring both magnetic fields and large pulsed currents, was also taken into account. Originality. The series connection of converters has the properties of a long transmission line. The level of the transient response of the meter is calculated, and the influence of parasitic parameters on the shape of the transient response is examined. The presented construction has not been described previously. Practical value. The results of the meter's implementation are given, and the design peculiarities of the given measuring instruments are shown.

  17. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is conducted by applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k x k was implemented to support the IQR filter. Each pixel outside the IQR range of the k x k window is treated as a noisy pixel. The estimates of the noisy pixels were obtained by local averaging. The essential...
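
The IQR filter with local averaging described above can be sketched directly: compute the quartiles of each k x k window, flag the center pixel as noisy if it falls outside the IQR fence, and replace it with the mean of the inlier neighbours. The 1.5 x IQR fence is the usual statistical convention, assumed here rather than taken from the paper.

```python
import numpy as np

def iqr_denoise(img, k=3):
    """Replace pixels lying outside the local IQR outlier fence with the
    average of the inlier pixels in their k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if img[i, j] < lo or img[i, j] > hi:          # outlier -> noisy
                inliers = win[(win >= lo) & (win <= hi)]
                out[i, j] = inliers.mean() if inliers.size else win.mean()
    return out
```

Inlier pixels pass through untouched, which is what distinguishes this filter from a plain local mean.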

  18. Imaging system design and image interpolation based on CMOS image sensor

    Science.gov (United States)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, the imaging part and the high speed image data memory unit are designed, and the hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves image edges.
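
The bilinear branch of the interpolation scheme (used for non-edge pixels) can be sketched as below for an RGGB mosaic: each missing colour sample is a distance-weighted average of its recorded neighbours. The edge-oriented branch for edge pixels is omitted, and the RGGB layout is an assumption.

```python
import numpy as np

def _conv2_same(img, k):
    """Same-size 2-D convolution with reflect padding (helper)."""
    p = k.shape[0] // 2
    padded = np.pad(img, p, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for di in range(k.shape[0]):
        for dj in range(k.shape[1]):
            out += k[di, dj] * padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic into an H x W x 3 image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)  # red/blue
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float)   # green

    def interp(mask, k):
        # Normalizing by the convolved mask keeps image borders correct.
        return (_conv2_same(np.where(mask, raw.astype(float), 0.0), k)
                / _conv2_same(mask.astype(float), k))

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g),
                      interp(b_mask, k_rb)])
```

Recorded samples are reproduced exactly (the kernel's center weight dominates at those locations), and only the missing samples are averaged in.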

  19. Energy-Efficient Algorithm for Sensor Networks with Non-Uniform Maximum Transmission Range

    Directory of Open Access Journals (Sweden)

    Yimin Yu

    2011-06-01

    Full Text Available In wireless sensor networks (WSNs), the energy hole problem is a key factor affecting the network lifetime. In a circular multi-hop sensor network (modeled as concentric coronas), optimal transmission ranges for all coronas can effectively improve network lifetime. In this paper, we investigate WSNs with non-uniform maximum transmission ranges, where sensor nodes deployed in different regions may differ in their maximum transmission range. We then propose an Energy-efficient algorithm for Non-uniform Maximum Transmission range (ENMT), which searches for approximately optimal transmission ranges for all coronas in order to prolong network lifetime. The simulation results indicate that ENMT performs better than other algorithms.

  20. Increasing the Dynamic Range of Synthetic Aperture Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2014-01-01

    images. The emissions for the two imaging modes are interleaved in a 1-to-1 ratio, providing a high frame rate equal to the effective pulse repetition frequency of each imaging mode. The direction of the flow is estimated, and the velocity is then determined in that direction. This method works for all angles...... standard deviations are 1.59% and 6.12%, respectively. The presented method can improve the estimates by synthesizing a lower pulse repetition frequency, thereby increasing the dynamic range of the vector velocity imaging.

  1. A High-Dynamic-Range Optical Remote Sensing Imaging Method for Digital TDI CMOS

    Directory of Open Access Journals (Sweden)

    Taiji Lan

    2017-10-01

    Full Text Available The digital time delay integration (digital TDI) technology of the complementary metal-oxide-semiconductor (CMOS) image sensor has been widely adopted and developed in the optical remote sensing field. However, the details of targets that have low illumination or low contrast in high-contrast scenes are often drowned out, because the superposition of multi-stage images in the digital domain multiplies the read noise and the dark noise, thus limiting the imaging dynamic range. Through an in-depth analysis of the information transfer model of digital TDI, this paper explores effective ways to overcome this issue. Based on the evaluation and analysis of multi-stage images, the entropy-maximized adaptive histogram equalization (EMAHE) algorithm is proposed to improve the ability of images to express the details of dark or low-contrast targets. Furthermore, an image fusion method is utilized based on gradient pyramid decomposition and entropy weighting of different TDI stage images, which can improve the detection ability of the digital TDI CMOS for complex high-contrast scenes and obtain images that are suitable for recognition by the human eye. The experimental results show that the proposed methods can effectively improve the high-dynamic-range imaging (HDRI) capability of the digital TDI CMOS, yielding images with greater entropy and average gradients.
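
Entropy is the quantity doing the work in both steps of the record above: the equalization maximizes it, and the fusion weights each TDI-stage image by it. A minimal sketch of histogram entropy and a global entropy-weighted fusion (the paper weights per gradient-pyramid level; a single global weight per image is a simplification assumed here):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) := 0
    return -(p * np.log2(p)).sum()

def entropy_weighted_fusion(images):
    """Fuse TDI-stage images with weights proportional to their entropy,
    so the most informative stages dominate the result."""
    w = np.array([entropy(im) for im in images])
    w = w / w.sum()
    return sum(wi * im.astype(float) for wi, im in zip(w, images))
```

A flat image carries zero entropy and so contributes the least; an image using all 256 grey levels uniformly reaches the 8-bit maximum of 8 bits.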

  2. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong

    2015-05-01

    Full Text Available A new method for the aerial triangulation of close-range images based on dual quaternions is presented. Dual quaternions are used to represent the screw motion of each beam in space: the real part of the dual quaternion represents the angular elements of all the beams in the close-range network, while the real part and the dual part jointly represent the line elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and simulated images with large attitude angles are selected for the aerial triangulation experiments. The experimental results show that the new method for the aerial triangulation of close-range images based on dual quaternions can obtain higher accuracy.

  3. A novel track imaging system as a range counter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z. [National Institute of Radiological Sciences (Japan); Matsufuji, N. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kanayama, S. [Chiba University (Japan); Ishida, A. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kohno, T. [Tokyo Institute of Technology (Japan); Koba, Y.; Sekiguchi, M.; Kitagawa, A.; Murakami, T. [National Institute of Radiological Sciences (Japan)

    2016-05-01

    An image-intensified, camera-based track imaging system has been developed to measure the tracks of ions in a scintillator block. To study the performance of the detector unit in the system, two types of scintillators, a dosimetrically tissue-equivalent plastic scintillator EJ-240 and a CsI(Tl) scintillator, were separately irradiated with carbon ion ({sup 12}C) beams of therapeutic energy from HIMAC at NIRS. The images of individual ion tracks in the scintillators were acquired by the newly developed track imaging system. The ranges reconstructed from the images are reported here. The range resolution of the measurements is 1.8 mm for 290 MeV/u carbon ions, which is considered a significant improvement on the energy resolution of the conventional ΔE/E method. The detector is compact and easy to handle, and it can fit inside treatment rooms for in-situ studies, as well as satisfy clinical quality assurance purposes.

  4. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  5. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  6. Sparse Detector Imaging Sensor with Two-Class Silhouette Classification

    Directory of Open Access Journals (Sweden)

    David Russomanno

    2008-12-01

    Full Text Available This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype of the sensor is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprised of only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than ninety-nine percent (99%) with a subset of the sixteen detectors using a representative dataset consisting of 512 signatures. The prototype also includes a Web service interface such that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional, high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment in vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios.

  7. Recce NG: from Recce sensor to image intelligence (IMINT)

    Science.gov (United States)

    Larroque, Serge

    2001-12-01

    Recce NG (Reconnaissance New Generation) is presented as a complete and optimized Tactical Reconnaissance System. Based on a new generation Pod integrating high resolution Dual Band sensors, the system has been designed with the operational lessons learnt from the last Peace Keeping Operations in Bosnia and Kosovo. The technical solutions retained as component modules of a full IMINT acquisition system take advantage of the state of the art in the following key technologies: an Advanced Mission Planning System for long range stand-off Manned Recce, Aircraft and/or Pod tasking, operating sophisticated back-up software tools, high resolution 3D geo data and improved, combat-proven MMI to reduce planning delays; mature Dual Band sensor technology to achieve the Day and Night Recce Mission, including advanced automatic operational functions such as azimuth and roll tracking capabilities, with low risk in Pod integration and in carrier avionics, controls and displays upgrades, to save time in operational turnover and maintenance; a high rate Imagery Down Link for Real Time or Near Real Time transmission, fully compatible with STANAG 7085 requirements; and an advanced IMINT Exploitation Ground Segment, combat proven, NATO interoperable (STANAG 7023), integrating high value software tools for accurate location, improved radiometric image processing and an open link to C4ISR systems. The choice of an industrial prime contractor mastering all of the key products and technologies listed above across the full system is mandatory for a successful delivery in terms of low cost, risk and time schedule.

  8. Miniature large range multi-axis force-torque sensor for biomechanical applications

    International Nuclear Information System (INIS)

    Brookhuis, R A; Sanders, R G P; Ma, K; Lammerink, T S J; De Boer, M J; Krijnen, G J M; Wiegerink, R J

    2015-01-01

    A miniature force sensor for the measurement of forces and moments at a human fingertip is designed and realized. Thin silicon pillars inside the sensor provide in-plane guidance for shear force measurement and provide the spring constant in normal direction. A corrugated silicon ring around the force sensitive area provides the spring constant in shear direction and seals the interior of the sensor. To detect all load components, capacitive read-out is used. A novel electrode pattern results in a large shear force sensitivity. The fingertip force sensor has a wide force range of up to 60 N in normal direction, ± 30 N in shear direction and a torque range of ± 25 N mm. (paper)

  9. Highly Sensitive Multifilament Fiber Strain Sensors with Ultrabroad Sensing Range for Textile Electronics.

    Science.gov (United States)

    Lee, Jaehong; Shin, Sera; Lee, Sanggeun; Song, Jaekang; Kang, Subin; Han, Heetak; Kim, SeulGee; Kim, Seunghoe; Seo, Jungmok; Kim, DaeEun; Lee, Taeyoon

    2018-05-22

    Highly stretchable fiber strain sensors are one of the most important components for various applications in wearable electronics, electronic textiles, and biomedical electronics. Herein, we present a facile approach for fabricating highly stretchable and sensitive fiber strain sensors by embedding Ag nanoparticles into a stretchable fiber with a multifilament structure. The multifilament structure and Ag-rich shells of the fiber strain sensor enable the sensor to simultaneously achieve both a high sensitivity and largely wide sensing range despite its simple fabrication process and components. The fiber strain sensor simultaneously exhibits ultrahigh gauge factors (∼9.3 × 10 5 and ∼659 in the first stretching and subsequent stretching, respectively), a very broad strain-sensing range (450 and 200% for the first and subsequent stretching, respectively), and high durability for more than 10 000 stretching cycles. The fiber strain sensors can also be readily integrated into a glove to control a hand robot and effectively applied to monitor the large volume expansion of a balloon and a pig bladder for an artificial bladder system, thereby demonstrating the potential of the fiber strain sensors as candidates for electronic textiles, wearable electronics, and biomedical engineering.
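The reported gauge factors follow the standard resistive-sensor definition GF = (ΔR/R0)/ε, where ε is the engineering strain. A minimal helper makes the arithmetic concrete (the resistance values below are hypothetical, not taken from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR / R0) / strain of a resistive strain sensor,
    where strain is the engineering strain dL / L0 (450% strain -> 4.5)."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: resistance rising from 1 kOhm to 10 kOhm at 450% strain.
gf = gauge_factor(1e3, 10e3, 4.5)  # (9.0) / 4.5 = 2.0
```

A gauge factor of ~9.3 x 10^5, as reported, would correspond to an enormous relative resistance change over the strain range, which is why the Ag-rich multifilament shells matter.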

  10. Six-axis force–torque sensor with a large range for biomechanical applications

    International Nuclear Information System (INIS)

    Brookhuis, R A; Droogendijk, H; De Boer, M J; Sanders, R G P; Lammerink, T S J; Wiegerink, R J; Krijnen, G J M (MESA+ Institute for Nanotechnology, University of Twente, Enschede, Netherlands)

    2014-01-01

    A silicon six-axis force–torque sensor is designed and realized to be used for measurement of the power transfer between the human body and the environment. Capacitive read-out is used to detect all axial force components and all torque components simultaneously. Small electrode gaps in combination with mechanical amplification by the sensor structure result in a high sensitivity. The miniature sensor has a wide force range of up to 50 N in normal direction, 10 N in shear direction and 25 N mm of maximum torque around each axis. (paper)

  11. Contact CMOS imaging of gaseous oxygen sensor array.

    Science.gov (United States)

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  12. Short-Range Sensor for Underwater Robot Navigation using Line-lasers and Vision

    DEFF Research Database (Denmark)

    Hansen, Peter Nicholas; Nielsen, Mikkel Cornelius; Christensen, David Johan

    2015-01-01

    This paper investigates a minimalistic laser-based range sensor used for underwater inspection by Autonomous Underwater Vehicles (AUVs). This range detection system comprises two lasers projecting vertical lines, parallel to a camera’s viewing axis, into the environment. Using both lasers...

  13. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    Science.gov (United States)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel by pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observation System Data/Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  14. High-dynamic-range imaging for cloud segmentation

    Science.gov (United States)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
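Multi-exposure fusion of the kind HDRCloudSeg builds on can be sketched with a single "well-exposedness" weight per pixel: each exposure contributes in proportion to how close its pixel value is to mid-gray, so overexposed circumsolar pixels and underexposed horizon pixels are suppressed. This is a simplified stand-in for the paper's method; the Gaussian weighting and the sigma value are assumptions.

```python
import numpy as np

def exposure_fusion(stack, sigma=0.2):
    """Fuse an exposure stack (list of images with float values in [0, 1])
    by a pixel-wise weighted average, weighting each pixel by its
    closeness to mid-gray (well-exposedness)."""
    stack = np.asarray(stack, dtype=float)
    w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2) + 1e-12  # avoid divide-by-zero
    return (w * stack).sum(axis=0) / w.sum(axis=0)
```

For a pixel that is underexposed (0.05) in one frame, well exposed (0.5) in another, and overexposed (0.95) in a third, the fused value stays near the well-exposed reading.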

  15. Research-grade CMOS image sensors for demanding space applications

    Science.gov (United States)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS started to be used successfully for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optic performance for CIS. The developments of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  16. 3D-LSI technology for image sensor

    International Nuclear Information System (INIS)

    Motoyoshi, Makoto; Koyanagi, Mitsumasa

    2009-01-01

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to a stage that might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking, and then reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can potentially be employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  17. Study of x-ray CCD image sensor and application

    Science.gov (United States)

    Wang, Shuyun; Li, Tianze

    2008-12-01

    In this paper, we describe the composition, characteristics, parameters, and working process of charge coupled devices (CCD), together with key techniques and methods for CCD two-value (binarization) processing. The quantification process for the CCD video signal is explained, and the constitution and functions of the X-ray image intensifier, as well as the technique for coupling the X-ray image intensifier to the CCD, are analyzed. We analyzed two effective methods to reduce the harm to human beings when X-rays are used in medical imaging. One is to reduce the X-ray radiation dose and intensify the image penetrated by the X-rays to obtain the same effect. The other is to use the image sensor to transfer the images to a safe area for observation. On this basis, a new method is presented in which the CCD image sensor and the X-ray image intensifier are combined organically. A practical medical X-ray photoelectric system was designed that can record and time the transmission images of the human body. The system is mainly made up of the medical X-ray source, the X-ray image intensifier, a high-resolution CCD camera, an image processor, a display, and so on. Its characteristics are: it converts the invisible X-ray image into a visible-light image, outputs vivid images, and has a short image recording time. We also analyze the main aspects that affect the system's resolution. A medical photoelectric system using an X-ray image sensor can sharply reduce the X-ray harm to humans when used in medical diagnosis. Finally, we discuss and look forward to the system's application in medical engineering and related fields.

  18. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    Science.gov (United States)

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    In determining position and attitude, vision navigation via real-time processing of data collected from imaging sensors can proceed without a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image against the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss lasting 5 min and extending 1500 m.

  19. CSRQ: Communication-Efficient Secure Range Queries in Two-Tiered Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hua Dai

    2016-02-01

    Full Text Available In recent years, we have seen many applications of secure query in two-tiered wireless sensor networks. Storage nodes are responsible for storing data from nearby sensor nodes and answering queries from the Sink. It is critical to protect data security from a compromised storage node. In this paper, the Communication-efficient Secure Range Query (CSRQ), a privacy- and integrity-preserving range query protocol, is proposed to prevent attackers from gaining information about both the data collected by sensor nodes and the queries issued by the Sink. To preserve privacy and integrity, in addition to employing the encoding mechanisms, a novel data structure called the encrypted constraint chain is proposed, which embeds the information needed for integrity verification. The Sink can use this encrypted constraint chain to verify the query result. The performance evaluation shows that CSRQ has lower communication cost than the current range query protocols.

  20. A protein–dye hybrid system as a narrow range tunable intracellular pH sensor (Electronic supplementary information (ESI) available: figures depicting various photophysical properties, cytotoxicity studies and confocal fluorescence images; see DOI: 10.1039/c6sc02659a)

    OpenAIRE

    Anees, Palapuravan; Sudheesh, Karivachery V.; Jayamurthy, Purushothaman; Chandrika, Arunkumar R.; Omkumar, Ramakrishnapillai V.; Ajayaghosh, Ayyappanpillai

    2016-01-01

    Accurate monitoring of pH variations inside cells is important for the early diagnosis of diseases such as cancer. Even though a variety of different pH sensors are available, construction of a custom-made sensor array for measuring minute variations in a narrow biological pH window, using easily available constituents, is a challenge. Here we report two-component hybrid sensors derived from a protein and organic dye nanoparticles whose sensitivity range can be tuned by choosing different rat...

  1. Robust image registration for multiple exposure high dynamic range image synthesis

    Science.gov (United States)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for registering under- and over-exposed images. It has been applied to align LDR images for synthesizing high quality HDR images.
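The phase cross correlation step can be illustrated for pure integer translations. This is a simplified sketch of the standard Fourier-domain technique, not the paper's full pipeline (which additionally uses phase congruency and evolutionary search for sub-pixel accuracy): normalizing the cross-power spectrum to unit modulus keeps only phase, which makes the peak location insensitive to brightness differences between the exposures.

```python
import numpy as np

def phase_correlation(ref, moving):
    """Estimate the integer (dy, dx) shift to apply to `moving` so that it
    realigns with `ref`, via the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # keep phase only: robust to brightness change
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                        # wrap large indices to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For a circularly shifted copy of an image, the recovered shift rolls the moving image exactly back onto the reference.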

  2. Flexible, highly sensitive pressure sensor with a wide range based on graphene-silk network structure

    Science.gov (United States)

    Liu, Ying; Tao, Lu-Qi; Wang, Dan-Yang; Zhang, Tian-Yu; Yang, Yi; Ren, Tian-Ling

    2017-03-01

    In this paper, a flexible, easily prepared, and low-cost graphene-silk pressure sensor based on a soft silk substrate, fabricated through thermal reduction, was demonstrated. Taking silk as the support body, the device forms a three-dimensional structure with an ordered multi-layer architecture. Through a simple and low-cost process technology, the graphene-silk pressure sensor can achieve a sensitivity of 0.4 kPa⁻¹, and the measurement range can be as high as 140 kPa. Besides, the pressure sensor combines well with knitted clothing and textile products. The signal had good reproducibility in response to different pressures. Furthermore, the graphene-silk pressure sensor can not only detect pressures higher than 100 kPa, but can also measure weak body signals. The characteristics of high sensitivity, good repeatability, flexibility, and comfort for the skin make it highly suitable for various wearable electronics.

  3. Advanced pixel architectures for scientific image sensors

    CERN Document Server

    Coath, R; Godbeer, A; Wilson, M; Turchetta, R

    2009-01-01

    We present recent developments from two projects targeting advanced pixel architectures for scientific applications. Results are reported from FORTIS, a sensor demonstrating variants on a 4T pixel architecture. The variants include differences in pixel and diode size, the in-pixel source follower transistor size and the capacitance of the readout node to optimise for low noise and sensitivity to small amounts of charge. Results are also reported from TPAC, a complex pixel architecture with ~160 transistors per pixel. Both sensors were manufactured in the 0.18μm INMAPS process, which includes a special deep p-well layer and fabrication on a high resistivity epitaxial layer for improved charge collection efficiency.

  4. Extended SWIR imaging sensors for hyperspectral imaging applications

    Science.gov (United States)

    Weber, A.; Benecke, M.; Wendler, J.; Sieck, A.; Hübner, D.; Figgemeier, H.; Breiter, R.

    2016-05-01

    AIM has developed SWIR modules including FPAs based on liquid phase epitaxy (LPE) grown MCT usable in a wide range of hyperspectral imaging applications. Silicon read-out integrated circuits (ROIC) provide various integration and readout modes including specific functions for spectral imaging applications. An important advantage of MCT based detectors is the tunable band gap. The spectral sensitivity of MCT detectors can be engineered to cover the extended SWIR spectral region up to 2.5μm without compromising in performance. AIM developed the technology to extend the spectral sensitivity of its SWIR modules also into the VIS. This has been successfully demonstrated for 384x288 and 1024x256 FPAs with 24μm pitch. Results are presented in this paper. The FPAs are integrated into compact dewar cooler configurations using different types of coolers, like rotary coolers, AIM's long life split linear cooler MCC030 or extreme long life SF100 Pulse Tube cooler. The SWIR modules include command and control electronics (CCE) which allow easy interfacing using a digital standard interface. The development status and performance results of AIM's latest MCT SWIR modules suitable for hyperspectral systems and applications will be presented.

  5. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    Science.gov (United States)

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  6. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2014-08-01

This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general-purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high-rate sensing data, its scalability, and its flexible scripting language, which enables mote-side image compression and ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights into, and lessons learned from, the deployment of and compression schemes for these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data to answer their biological questions.

  7. Range Information Characterization of the Hokuyo UST-20LX LIDAR Sensor

    Directory of Open Access Journals (Sweden)

    Matthew A. Cooper

    2018-05-01

This paper presents a study of the measurements produced by the Hokuyo UST-20LX laser rangefinder, which are compiled into an overall characterization of the LiDAR sensor in indoor environments. The range measurements, beam divergence, angular resolution, error due to some common painted and wooden surfaces, and error due to target surface orientation are analyzed. It is shown that using a statistical average of sensor measurements provides a more accurate range measurement. It is also shown that the major source of error for the Hokuyo UST-20LX sensor is what will be referred to as “mixed pixels”. Additional error sources are the target surface material and the range relative to the sensor. The purpose of this paper is twofold: (1) to describe a series of tests that can be performed to characterize various aspects of a LiDAR system from a user perspective, and (2) to present a detailed characterization of the commonly used Hokuyo UST-20LX LiDAR sensor.
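The statistical-averaging observation above follows directly from independent noise: the standard deviation of the mean shrinks as 1/sqrt(N). A minimal sketch with hypothetical noise figures (true_range_m and noise_sigma_m are illustrative, not measured values from the paper):

```python
import random

def averaged_range(samples):
    """Average repeated range readings: for independent noise the standard
    deviation of the mean shrinks as sigma / sqrt(N)."""
    return sum(samples) / len(samples)

# Hypothetical figures for illustration (not measured values from the paper).
random.seed(0)
true_range_m = 5.0
noise_sigma_m = 0.03
readings = [random.gauss(true_range_m, noise_sigma_m) for _ in range(1000)]
estimate = averaged_range(readings)
```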

  8. Range-Gated Laser Stroboscopic Imaging for Night Remote Surveillance

    International Nuclear Information System (INIS)

    Xin-Wei, Wang; Yan, Zhou; Song-Tao, Fan; Jun, He; Yu-Liang, Liu

    2010-01-01

For night remote surveillance, we present a method, range-gated laser stroboscopic imaging (RGLSI), which uses a new kind of time delay integration mode to accumulate target signals so that night remote surveillance can be realized with a low-energy illumination laser. The time delay integration in this method has no influence on the video frame rate. Compared with traditional range-gated laser imaging, RGLSI can reduce scintillation and target speckle effects and significantly improve the image signal-to-noise ratio. Even under low-light-level and low-visibility conditions, the RGLSI system can work effectively. In a preliminary experiment, we detected and recognized a railway bridge one kilometer away under a visibility of six kilometers, with an effective illumination energy of 29.5 μJ.
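Range gating of the kind used above relies on the round-trip time of light: the camera gate opens only after the delay corresponding to the target distance. A short sketch of that arithmetic (not the authors' implementation):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_ns(target_range_m):
    """Round-trip delay for a target at range R: the gate opens at t = 2R/c."""
    return 2.0 * target_range_m / C * 1e9

def range_from_delay(delay_ns):
    """Inverse relation: R = c * t / 2."""
    return C * (delay_ns * 1e-9) / 2.0
```

For the one-kilometer bridge in the experiment above, the gate delay is roughly 6.7 μs.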

  9. Self-Similarity Superresolution for Resource-Constrained Image Sensor Node in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuehai Wang

    2014-01-01

Wireless sensor networks, in combination with image sensors, open up a broad field of sensing applications. It is a challenging problem to recover a high-resolution (HR) image from its low-resolution (LR) counterpart, especially for low-cost, resource-constrained image sensors with limited resolution. Sparse representation-based techniques have been developed recently and are increasingly used to solve this ill-posed inverse problem. Most of these solutions are based on an external dictionary learned from a huge image gallery, and consequently need tremendous iterations and a long time to match. In this paper, we explore the self-similarity inside the image itself and propose a new combined self-similarity superresolution (SR) solution, with low computation cost and high recovery performance. In the self-similarity image super-resolution model (SSIR), a small sparse dictionary is learned from the image itself by methods such as KSVD. The most similar patch is searched for and specially combined during the sparse regularization iteration. Detailed information, such as edge sharpness, is preserved more faithfully and clearly. Experimental results confirm the effectiveness and efficiency of this double self-learning method for image super-resolution.

  10. X-ray imaging characterization of active edge silicon pixel sensors

    International Nuclear Information System (INIS)

    Ponchut, C; Ruat, M; Kalliopuska, J

    2014-01-01

The aim of this work was the experimental characterization of edge effects in active-edge silicon pixel sensors, in the frame of X-ray pixel detector developments for synchrotron experiments. We produced a set of active-edge pixel sensors with 300 to 500 μm thickness, edge widths ranging from 100 μm to 150 μm, and n- or p-type pixel contacts. The sensors, with 256 × 256 pixels and 55 × 55 μm² pixel pitch, were then bump-bonded to Timepix readout chips for X-ray imaging measurements. The reduced edge widths make the edge pixels more sensitive to the electric field distribution at the sensor boundaries. We characterized this effect by mapping the spatial response of the sensor edges with a finely focused synchrotron X-ray beam. One of the samples showed a distortion-free response on all four edges, whereas others showed variable degrees of distortion extending at most 300 μm from the sensor edge. An application of active-edge pixel sensors to coherent diffraction imaging with synchrotron beams is described.

  11. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

The anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the Gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for the automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.

  12. Short-Range Noncontact Sensors for Healthcare and Other Emerging Applications: A Review

    Directory of Open Access Journals (Sweden)

    Changzhan Gu

    2016-07-01

Short-range noncontact sensors are capable of remotely detecting the precise movements of subjects or wirelessly estimating the distance from the sensor to the subject. They find wide applications in our daily lives, such as noncontact vital sign detection of heartbeat and respiration, sleep monitoring, occupancy sensing, and gesture sensing. In recent years, short-range noncontact sensors have attracted more and more effort from both academia and industry due to their vast applications. Compared to other radar architectures such as pulse radar and frequency-modulated continuous-wave (FMCW) radar, Doppler radar is gaining more popularity in terms of system integration and low-power operation. This paper reviews recent technical advances in Doppler radars for healthcare applications, including system hardware improvement, digital signal processing, and chip integration. This paper also discusses hybrid FMCW-interferometry radars, emerging applications, and future trends.
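Doppler radar vital-sign sensing of the kind reviewed above typically recovers chest-wall displacement from the demodulated phase of the received signal. A one-formula sketch of that standard relation (the wavelength value used below is illustrative):

```python
import math

def displacement_m(phase_rad, wavelength_m):
    """Doppler radar interferometry: target displacement is recovered from
    the demodulated phase as x = wavelength * phi / (4 * pi), because the
    round trip doubles the path-length change."""
    return wavelength_m * phase_rad / (4.0 * math.pi)
```

At a 24 GHz carrier (wavelength about 12.5 mm, illustrative), a quarter-cycle phase swing corresponds to roughly 1.6 mm of chest-wall motion.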

  13. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, system-on-chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), have created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking have been proven, and its applications are countless, from security and chemical analysis to industrial monitoring, asset tracking, and visual recognition; vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous, specifically PicoRadios, CMOS imagers, imaging DSP, networking, and overall wireless sensor network (WSN) system concepts. The paradigm shift from large, centralized, and expensive sensor platforms to small, low-cost, distributed sensor networks is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic, and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  14. Wireless Sensor Network Handles Image Data

    Science.gov (United States)

    2008-01-01

To relay data from remote locations for NASA's Earth sciences research, Goddard Space Flight Center contributed to the development of "microservers" (wireless sensor network nodes), which are now used commercially as a quick and affordable means to capture and distribute geographical information, including rich sets of aerial and street-level imagery. NASA began this work out of a necessity for real-time recovery of remote sensor data. These microservers work much like a wireless office network, relaying information between devices. The key difference, however, is that instead of linking workstations within one office, the interconnected microservers operate miles away from one another. This attribute traces back to the technology's original use: the microservers were originally designed for seismology on remote glaciers and ice streams in Alaska, Greenland, and Antarctica, acquiring, storing, and relaying data wirelessly between ground sensors. The microservers boast three key attributes. First, a researcher in the field can establish a "managed network" of microservers and rapidly see the data streams (recovered wirelessly) on a field computer. This rapid feedback permits the researcher to reconfigure the network for different purposes over the course of a field campaign. Second, through careful power management, the microservers can dwell unsupervised in the field for up to 2 years, collecting tremendous amounts of data at a research location. The third attribute is the exciting potential to deploy a microserver network that works in synchrony with robotic explorers (e.g., providing ground-truth validation for satellites, supporting rovers as they traverse the local environment). Managed networks of remote microservers that relay data unsupervised for up to 2 years can drastically reduce the costs of field instrumentation and data recovery.

  15. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs) perform a highly accurate but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information in images with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain full scene coverage, and therefore multiple observations are typically collected from different locations. This can be achieved either by placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated using information extracted from the captured data, and a limited field of view may lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple coupled sensor devices offers the advantages of an extended field of view, which results in a stabilized pose estimation, an improved registration of the recorded point clouds, and an improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potential of such multi-view range imaging systems is presented, consisting of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring at low altitudes and is suitable for capturing dynamic observations which might arise from moving cars or moving pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of the captured point cloud data is presented, which is essential for a high quality of all subsequent tasks. The approach involves using

  16. A new range-free localisation in wireless sensor networks using support vector machine

    Science.gov (United States)

    Wang, Zengfeng; Zhang, Hao; Lu, Tingting; Sun, Yujuan; Liu, Xing

    2018-02-01

Location information of sensor nodes is of vital importance for most applications in wireless sensor networks (WSNs). This paper proposes a new range-free localisation algorithm, LSVM-PCS, using a support vector machine (SVM) and a polar coordinate system (PCS). In LSVM-PCS, two sets of classes are first constructed based on the sensor nodes' polar coordinates. Using the boundaries of the defined classes, the operation region of the WSN field is partitioned into a finite number of polar grids. Each sensor node can be localised into one of the polar grids by executing two localisation algorithms developed on the basis of SVM classification. The centre of the resident polar grid is then taken as the estimated location of the sensor node. In addition, a two-hop mass-spring optimisation (THMSO) is also proposed to further improve the localisation accuracy of LSVM-PCS. In THMSO, both neighbourhood and non-neighbourhood information is used to refine the sensor node location. The results obtained verify that the proposed algorithm provides a significant improvement over existing localisation methods.
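The grid-centre estimation step above is straightforward to sketch. Assuming uniform rings and sectors (ring_width and n_sectors are illustrative parameters, not taken from the paper):

```python
import math

def polar_grid_center(ring_idx, sector_idx, ring_width, n_sectors):
    """Centre (x, y) of a polar grid cell: the mid-radius of the ring and
    the mid-angle of the sector, converted to Cartesian coordinates."""
    r = (ring_idx + 0.5) * ring_width
    theta = (sector_idx + 0.5) * (2.0 * math.pi / n_sectors)
    return r * math.cos(theta), r * math.sin(theta)
```

Once the SVM classifiers place a node in ring i and sector j, this centre is reported as its location estimate.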

  17. Retina-like sensor image coordinates transformation and display

    Science.gov (United States)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

For a new kind of retina-like sensor camera, image acquisition, coordinate transformation, and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ in VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation, and interpolation.
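The polar-to-Cartesian display step described above can be sketched as a resampling loop with sub-pixel bilinear interpolation. This is a generic sketch: the camera's actual pixel layout and the MIL/OpenCV implementation are not reproduced here.

```python
import math

def bilinear(img, x, y):
    """Sub-pixel sample of a 2-D grid (list of lists) at fractional (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    return ((1 - dx) * (1 - dy) * img[y0][x0] + dx * (1 - dy) * img[y0][x1]
            + (1 - dx) * dy * img[y1][x0] + dx * dy * img[y1][x1])

def polar_to_cartesian(polar_img, out_size):
    """Resample an image stored as (ring, sector) samples onto an
    out_size x out_size Cartesian grid centred on the fovea."""
    n_rings, n_sectors = len(polar_img), len(polar_img[0])
    c = (out_size - 1) / 2.0
    out = [[0.0] * out_size for _ in range(out_size)]
    for j in range(out_size):
        for i in range(out_size):
            r = math.hypot(i - c, j - c) / c * (n_rings - 1)
            t = (math.atan2(j - c, i - c) % (2 * math.pi)) / (2 * math.pi) * (n_sectors - 1)
            if r <= n_rings - 1:
                out[j][i] = bilinear(polar_img, t, r)  # x = sector, y = ring
    return out
```

The interpolation weights are computed in the polar domain, as in the paper; pixels outside the outermost ring are left at zero.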

  18. Flexible Ferroelectric Sensors with Ultrahigh Pressure Sensitivity and Linear Response over Exceptionally Broad Pressure Range.

    Science.gov (United States)

    Lee, Youngoh; Park, Jonghwa; Cho, Soowon; Shin, Young-Eun; Lee, Hochan; Kim, Jinyoung; Myoung, Jinyoung; Cho, Seungse; Kang, Saewon; Baig, Chunggi; Ko, Hyunhyub

    2018-04-24

Flexible pressure sensors with high sensitivity over a broad linear range can simplify wearable sensing systems without additional signal processing for linear output, enabling device miniaturization and low power consumption. Here, we demonstrate a flexible ferroelectric sensor with ultrahigh pressure sensitivity and a linear response over an exceptionally broad pressure range, based on the material and structural design of ferroelectric composites with a multilayer interlocked microdome geometry. Due to the stress concentration between interlocked microdome arrays and the increased contact area in the multilayer design, the flexible ferroelectric sensors can perceive static/dynamic pressure with high sensitivity (47.7 kPa⁻¹, 1.3 Pa minimum detection). In addition, efficient stress distribution between the stacked multilayers enables linear sensing over an exceptionally broad pressure range (0.0013-353 kPa) with a fast response time (20 ms) and high reliability over 5000 repetitive cycles, even at an extremely high pressure of 272 kPa. Our sensor can be used to monitor diverse stimuli from low to high pressures, including weak gas flow, acoustic sound, wrist pulse pressure, respiration, and foot pressure, with a single device.

  19. Design and Performance Analysis of an Intrinsically Safe Ultrasonic Ranging Sensor.

    Science.gov (United States)

    Zhang, Hongjuan; Wang, Yu; Zhang, Xu; Wang, Dong; Jin, Baoquan

    2016-06-13

In flammable or explosive environments, an ultrasonic sensor for distance measurement poses an important engineering safety challenge, because the driving circuit uses an intermediate frequency transformer as an impedance transformation element, in which the heat or sparks produced could cause ignition. In this paper, an intrinsically safe ultrasonic ranging sensor is designed and implemented. A waterproof piezoelectric transducer with an integrated transceiver is chosen as the energy transducing element. A novel transducer driving circuit is then designed, based on an impedance matching method that considers safe spark parameters, to replace the intermediate frequency transformer. Next, an energy limiting circuit is developed to achieve dual levels of over-voltage and over-current protection. Detailed calculations and evaluations are carried out, and the electrical characteristics are analyzed to verify the intrinsic safety of the driving circuit. Finally, an experimental platform of the ultrasonic ranging sensor system, which includes short-circuit protection, is constructed. Experimental results show that the proposed ultrasonic ranging sensor is excellent in both ranging performance and intrinsic safety.
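The ranging side of such a sensor reduces to echo-time arithmetic. A minimal sketch; the temperature-dependent speed-of-sound approximation is a standard formula, not taken from the paper:

```python
def ultrasonic_range_m(echo_time_s, temp_c=20.0):
    """Distance from the round-trip echo time. The speed of sound in air
    depends on temperature, approximately 331.3 + 0.606 * T (m/s)."""
    speed_m_s = 331.3 + 0.606 * temp_c
    return speed_m_s * echo_time_s / 2.0
```

A 10 ms echo at 20 °C corresponds to a target roughly 1.72 m away.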

  20. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range.

    Science.gov (United States)

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-02-03

For wireless sensor networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed by which each sensor with a limited sensing range computes its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms for this kind of problem. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm, combined with a localization algorithm in WSNs, demonstrates the distributed nature of the proposed algorithm.
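The limited Voronoi cell described above can be sketched as half-plane clipping: approximate the sensing disc by a polygon, then clip it by the perpendicular bisector toward each neighbor. This is a generic Sutherland-Hodgman-style sketch, not the authors' Boundary Scan implementation:

```python
import math

def clip_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        d1, d2 = a * x1 + b * y1 - c, a * x2 + b * y2 - c
        if d1 <= 0:
            out.append((x1, y1))
        if (d1 < 0 < d2) or (d2 < 0 < d1):
            t = d1 / (d1 - d2)
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def limited_voronoi_cell(sensor, neighbors, radius, n_sides=64):
    """Approximate the sensing disc by an n_sides polygon, then clip it by
    the perpendicular bisector toward each neighbor."""
    sx, sy = sensor
    cell = [(sx + radius * math.cos(2 * math.pi * k / n_sides),
             sy + radius * math.sin(2 * math.pi * k / n_sides))
            for k in range(n_sides)]
    for nx, ny in neighbors:
        # Points closer to `sensor` than to the neighbor satisfy
        # (n - s) . p <= (|n|^2 - |s|^2) / 2.
        a, b = nx - sx, ny - sy
        c = (nx * nx + ny * ny - sx * sx - sy * sy) / 2.0
        cell = clip_halfplane(cell, a, b, c)
    return cell
```

The BS algorithm itself achieves a lower complexity by scanning consecutive bisectors in angular order rather than clipping against every neighbor independently.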

  1. Time-of-flight range imaging for underwater applications

    Science.gov (United States)

    Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias

    2018-02-01

    Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.
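Indirect TOF cameras of the kind used above recover distance from the phase shift of the modulation envelope. A sketch of the standard relation, assuming light propagates at c/n with n ≈ 1.33 for water (the refractive index and modulation frequency below are illustrative):

```python
import math

C_WATER = 299_792_458.0 / 1.33   # light slows in water (n ~ 1.33, assumed)

def tof_range_m(phase_rad, mod_freq_hz, speed=C_WATER):
    """Indirect TOF: the phase shift of the modulation envelope maps to
    distance as d = speed * phi / (4 * pi * f)."""
    return speed * phase_rad / (4.0 * math.pi * mod_freq_hz)

def ambiguity_range_m(mod_freq_hz, speed=C_WATER):
    """Phase wraps at 2*pi, so measured ranges repeat every speed / (2 * f)."""
    return speed / (2.0 * mod_freq_hz)
```

At a 20 MHz modulation frequency, the unambiguous range in water is about 5.6 m, comfortably covering the meter-level object distances targeted above.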

  2. Study of photoconductor-based radiological image sensors

    International Nuclear Information System (INIS)

    Beaumont, Francois

    1989-01-01

Because of the evolution of medical imaging techniques toward digital systems, it is necessary to replace radiological film, which has many drawbacks, with a detector that is just as efficient and quickly provides a digitizable signal. The purpose of this thesis is to find new X-ray digital imaging processes using photoconductor materials such as amorphous selenium. After reviewing the principle of direct radiology and the functions to be served by the X-ray sensor (i.e., detection, memory, assignment, visualization), we explain the specifications. We especially show the constraints due to the object to be radiographed (condition of minimal exposure) and to the readout signal (electronic detection noise associated with a readout frequency). As a result of this study, a first photoconductor sensor could be designed. Its principle is based on photo-carrier trapping at the interface of a dielectric-photoconductor structure. The readout system requires scanning a laser beam across the sensor surface. The dielectric-photoconductor structure enabled us to estimate the possibilities offered by the sensor and to build a complete X-ray imaging system. The originality of the thermo-dielectric sensor, which was studied next, is that it allows a thermally assigned readout. The chosen system consists in varying the capacitance of a ferroelectric polymer whose dielectric permittivity is low at room temperature. The thermo-dielectric material was studied under thermal or Joule-effect stimulation. During our experiments, trapping was found in a sensor made of amorphous selenium between two electrodes. This new effect was characterized and enabled us to propose a first interpretation. Finally, the comparison of these new sensor concepts with radiological film shows the advantage of the proposed solutions. (author)

  3. Information-efficient spectral imaging sensor

    Science.gov (United States)

    Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.

    2003-01-01

    A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.
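The optical split into positive and negative spectral-basis channels described above amounts to computing an inner product with the basis vector: the two detectors measure the positive and negative parts separately, and subtracting them recovers the full dot product. A numerical sketch of that equivalence (values are illustrative):

```python
def split_basis(basis):
    """Split a spectral basis vector into its positive and negative parts,
    one per optical channel."""
    pos = [max(v, 0.0) for v in basis]
    neg = [max(-v, 0.0) for v in basis]
    return pos, neg

def detector_output(spectrum, basis):
    """Attenuate each spectral bin per channel, detect, and subtract:
    equivalent to the inner product <spectrum, basis>."""
    pos, neg = split_basis(basis)
    return (sum(s * p for s, p in zip(spectrum, pos))
            - sum(s * n for s, n in zip(spectrum, neg)))
```

The attenuation and detection happen optically in the instrument; only the final subtraction is done electronically, which is what makes the approach so data-efficient.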

  4. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    Science.gov (United States)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details. Empirical operators can maximize the local detail information of an HDR image, but the realism is not strong. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical operators and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of an objective evaluation prove the effectiveness of the proposed solution.

  5. High speed display algorithm for 3D medical images using Multi Layer Range Image

    International Nuclear Information System (INIS)

    Ban, Hideyuki; Suzuki, Ryuuichi

    1993-01-01

We propose a high-speed algorithm that displays 3D voxel images obtained from medical imaging systems such as MRI. This algorithm converts voxel image data into six Multi-Layer Range Image (MLRI) data sets, an augmentation of range image data. To avoid calculations for invisible voxels, the algorithm selects at most three of the six MLRI data sets in accordance with the view direction. The proposed algorithm displays 256 x 256 x 256 voxel data within 0.6 seconds on a 22 MIPS workstation without special hardware such as a graphics engine. Real-time display will be possible on a 100 MIPS class workstation with our algorithm. (author)
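The view-direction selection of at most three of the six MLRI data sets can be sketched for axis-aligned layers: a face can contain visible voxels only if its outward normal has a positive component toward the viewer. A generic sketch, assuming view_dir points from the volume toward the viewer (not the authors' code):

```python
def visible_mlri_faces(view_dir):
    """Select at most 3 of the 6 axis-aligned MLRI data sets. A face whose
    outward normal points toward the viewer (positive dot product with
    view_dir) may contain visible voxels; the opposite face cannot."""
    names = [("+x", "-x"), ("+y", "-y"), ("+z", "-z")]
    faces = []
    for axis, (pos, neg) in enumerate(names):
        if view_dir[axis] > 0:
            faces.append(pos)
        elif view_dir[axis] < 0:
            faces.append(neg)
    return faces
```

A view direction along a single axis needs only one MLRI data set; a general oblique view needs exactly three, which is why the algorithm never touches more than half the data.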

  6. Spatially digitized tactile pressure sensors with tunable sensitivity and sensing range.

    Science.gov (United States)

    Choi, Eunsuk; Sul, Onejae; Hwang, Soonhyung; Cho, Joonhyung; Chun, Hyunsuk; Kim, Hongjun; Lee, Seung-Beck

    2014-10-24

When developing an electronic skin with touch sensation, an array of tactile pressure sensors covering various ranges of pressure detection needs to be integrated. This requires low-noise, highly reliable sensors with tunable sensing characteristics. We demonstrate the operation of tactile pressure sensors that utilize the spatial distribution of contact electrodes to detect various ranges of tactile pressure. The device consists of a suspended elastomer diaphragm, with a carbon nanotube thin film on the bottom, which makes contact with the electrodes on the substrate under applied pressure. The electrodes, separated by set distances, become connected in sequence with tactile pressure, enabling consecutive electrodes to produce a signal. Thus, the pressure is detected not by how much of a signal is produced but by which of the electrodes registers an output. By modulating the diaphragm diameter and suspension height, it was possible to tune the pressure sensitivity and sensing range. Also, adding a fingerprint ridge structure enabled the sensor to detect the periodicity of sub-millimeter grating patterns on a silicon wafer.
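Reading out such a spatially digitized sensor means decoding which electrodes are in contact rather than measuring an analog signal level. A sketch with hypothetical pressure thresholds (the threshold values below are illustrative, not the device's calibration):

```python
def decode_pressure(electrode_states, thresholds_kpa):
    """The highest-index electrode in contact tells which pressure band is
    applied: the diaphragm reaches electrode i only above thresholds_kpa[i]
    (hypothetical, monotonically increasing thresholds)."""
    highest = -1
    for i, in_contact in enumerate(electrode_states):
        if in_contact:
            highest = i
    if highest < 0:
        return (0.0, thresholds_kpa[0])
    upper = (thresholds_kpa[highest + 1]
             if highest + 1 < len(thresholds_kpa) else float("inf"))
    return (thresholds_kpa[highest], upper)
```

Because the output is which electrode fires rather than how strongly, the readout is inherently quantized and robust to drift in the contact resistance.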

  7. Hierarchical tone mapping for high dynamic range image visualization

    Science.gov (United States)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

In this paper, we present a computationally efficient, practical, and easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operator can be used for the effective enhancement of ordinary images.
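The two HNL steps above (density-based level allocation, then per-interval linear mapping) can be sketched as follows. The blending parameter and interval count are illustrative, not the paper's exact formula:

```python
def hnl_tone_map(hdr, n_intervals=8, display_levels=256, blend=0.5):
    """Hierarchical nonlinear/linear mapping (sketch): split the HDR range
    into intervals, give each interval a share of display levels that blends
    its pixel density with a uniform share (single parameter `blend`),
    then map linearly inside each interval."""
    lo, hi = min(hdr), max(hdr)
    width = (hi - lo) / n_intervals or 1.0
    counts = [0] * n_intervals
    for v in hdr:
        counts[min(int((v - lo) / width), n_intervals - 1)] += 1
    total = float(len(hdr))
    # Step 1: blend density-based and uniform level allocation.
    share = [blend * (c / total) + (1 - blend) / n_intervals for c in counts]
    starts = [0.0]
    for s in share:
        starts.append(starts[-1] + s * (display_levels - 1))
    # Step 2: linear mapping inside each interval.
    out = []
    for v in hdr:
        idx = min(int((v - lo) / width), n_intervals - 1)
        frac = (v - (lo + idx * width)) / width
        out.append(starts[idx] + frac * (starts[idx + 1] - starts[idx]))
    return out
```

With blend = 0 the operator is a plain linear map; with blend = 1 it behaves like histogram equalization, which is the kind of single-parameter control the paper describes.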

  8. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    Science.gov (United States)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

    This paper introduces new high-resolution amorphous Silicon (a-Si) image sensors specifically configured for demonstrating film-quality medical x-ray imaging capabilities. The devices utilizes an x-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 micrometer X 127 micrometer with an approximately page-size imaging area of 244 mm X 195 mm, and array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm X 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely to be the largest image sensor array and highest pixel count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light sensitive area of 57% and optimized quantum efficiency for x-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have managed to reduce extraneous leakage currents on these devices to a few fA per pixel, which allows for very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large sized arrays. At the electronics level, we have adopted a new generation of low noise, charge- sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to demonstrate a large dynamic range (over 4,000:1) for medical imaging applications. 
Through a combination of low data lines capacitance
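
The quoted 4,000:1 dynamic range corresponds to roughly 12 bits, matching the A/D converters mentioned above. A minimal sketch of the full-well/noise relation behind such a figure (the electron counts below are hypothetical, chosen only to reproduce the ratio):

```python
import math

def dynamic_range(full_well_e, noise_e):
    """Pixel dynamic range as a ratio and in dB."""
    ratio = full_well_e / noise_e
    return ratio, 20.0 * math.log10(ratio)

# Hypothetical electron counts chosen only to reproduce the >4,000:1 figure.
ratio, db = dynamic_range(full_well_e=2.0e6, noise_e=500.0)
print(f"{ratio:.0f}:1 ({db:.1f} dB)")   # -> 4000:1 (72.0 dB)
```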

  9. Low-Power Low-Noise CMOS Imager Design : In Micro-Digital Sun Sensor Application

    NARCIS (Netherlands)

    Xie, N.

    2012-01-01

A digital sun sensor is superior to an analog sun sensor in terms of resolution, albedo immunity, and integration. The proposed Micro-Digital Sun Sensor (µDSS) is an autonomous digital sun sensor implemented by means of a CMOS image sensor named APS+. The µDSS is designed

  10. A Compton Imaging Prototype for Range Verification in Particle Therapy

    International Nuclear Information System (INIS)

    Golnik, C.; Hueso Gonzalez, F.; Kormoll, T.; Pausch, G.; Rohling, H.; Fiedler, F.; Heidel, K.; Schoene, S.; Sobiella, M.; Wagner, A.; Enghardt, W.

    2013-06-01

During the 2012 AAPM Annual Meeting, 33 percent of the delegates considered range uncertainty the main obstacle to proton therapy becoming a mainstream treatment modality. Utilizing prompt gamma emission, a side product of particle-tissue interaction, opens the possibility of in-beam dose verification, due to the direct correlation between prompt gamma emission and particle dose deposition. Compton imaging has proven to be a technique for measuring three-dimensional gamma emission profiles and opens the possibility of adaptive dose monitoring and treatment correction. We successfully built a Compton imaging prototype, characterized the detectors, and demonstrated the imaging capability of the complete device. The major advantages of CZT detectors are their high energy resolution and high spatial resolution, which are key parameters for Compton imaging. However, our measurements at the proton beam accelerator facility KVI in Groningen (Netherlands) disclosed a spectrum of prompt gamma rays under proton irradiation up to 4.4 MeV. As CZT detectors of 5 mm thickness do not efficiently absorb photons in this energy range, an additional absorber, based on a Siemens LSO block detector, is added behind CZT1. This setup provides a higher absorption probability for high-energy photons. With a size of 5.2 cm x 5.2 cm x 2.0 cm, this scintillation detector further increases the angular acceptance of Compton-scattered photons due to its geometric size. (authors)
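
A Compton camera reconstructs emission points from cones whose opening angle follows from the energies measured in the scatterer and absorber. A sketch of that kinematic step (the 1.0 MeV energy deposit is a made-up illustration; only the 4.4 MeV prompt-gamma energy comes from the abstract):

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e_deposit_mev, e_total_mev):
    """Opening angle (rad) of the Compton cone from the energy deposited
    in the scatterer and the total incident photon energy."""
    e_scattered = e_total_mev - e_deposit_mev   # photon energy after scattering
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total_mev)
    return math.acos(cos_theta)

# A 4.4 MeV prompt gamma (the abstract's upper energy) depositing a
# hypothetical 1.0 MeV in the CZT scatterer:
theta = compton_cone_angle(1.0, 4.4)
print(f"{math.degrees(theta):.1f} deg")   # -> 15.0 deg
```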

  11. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

Full Text Available In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped using the disparity estimated between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance than the conventional method.
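
The pipeline described (inverse response to radiance, then weighted fusion of the two views) can be caricatured in a few lines. This is a toy model with an assumed gamma-curve response and a simple mid-tone weight map, not the authors' algorithm:

```python
import numpy as np

def to_radiance(ldr, exposure_time, gamma=2.2):
    """Invert an assumed gamma-type camera response to relative radiance."""
    return (ldr.astype(np.float64) / 255.0) ** gamma / exposure_time

def fuse_hdr(main_rad, aux_rad, main_ldr):
    """Blend main- and auxiliary-view radiance with a weight map that
    down-weights under/over-exposed main-view pixels."""
    w = 1.0 - 2.0 * np.abs(main_ldr.astype(np.float64) / 255.0 - 0.5)
    return w * main_rad + (1.0 - w) * aux_rad

main_ldr = np.array([[10, 128, 250]], dtype=np.uint8)   # short exposure (MV)
aux_ldr = np.array([[60, 128, 180]], dtype=np.uint8)    # long exposure (warped AV)
hdr = fuse_hdr(to_radiance(main_ldr, 1 / 60), to_radiance(aux_ldr, 1 / 15), main_ldr)
print(hdr.shape)   # -> (1, 3)
```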

  12. CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    OpenAIRE

    Dutton, Neale Arthur William

    2016-01-01

    The facility to capture the arrival of a single photon, is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications....

  13. Oriented Edge-Based Feature Descriptor for Multi-Sensor Image Alignment and Enhancement

    Directory of Open Access Journals (Sweden)

    Myung-Ho Ju

    2013-10-01

Full Text Available In this paper, we present an efficient image alignment and enhancement method for multi-sensor images. The shape of an object captured in multi-sensor images can be determined by comparing the variability of contrast at corresponding edges across the images. Using this cue, we construct a robust feature descriptor based on the magnitudes of the oriented edges. Our proposed method enables fast image alignment by identifying matching features in multi-sensor images. We enhance the aligned multi-sensor images through the fusion of the salient regions from each image. The results of stitching the multi-sensor images and their enhancement demonstrate that our proposed method can align and enhance multi-sensor images more efficiently than previous methods.
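
A simplified stand-in for such an oriented-edge descriptor, binning gradient magnitudes by edge orientation and normalizing for contrast (the 8-bin layout and unit-norm scaling are our assumptions, not the paper's exact design):

```python
import numpy as np

def oriented_edge_descriptor(patch, n_bins=8):
    """Histogram of gradient magnitudes binned by edge orientation; the
    unit-norm step makes it robust to contrast differences between sensors."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    desc = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.outer(np.arange(8.0), np.ones(8))             # pure vertical ramp
desc = oriented_edge_descriptor(patch)
print(desc.round(2))                                     # all weight in one bin
```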

  14. Displacement damage effects on CMOS APS image sensors induced by neutron irradiation from a nuclear reactor

    International Nuclear Information System (INIS)

    Wang, Zujun; Huang, Shaoyan; Liu, Minbo; Xiao, Zhigang; He, Baoping; Yao, Zhibin; Sheng, Jiangkun

    2014-01-01

The experiments of displacement damage effects on CMOS APS image sensors induced by neutron irradiation from a nuclear reactor are presented. The CMOS APS image sensors are manufactured in a standard 0.35 μm CMOS technology. The flux of the neutron beam was about 1.33 × 10^8 n/(cm^2·s). Three samples were exposed to 1 MeV neutron equivalent fluences of 1 × 10^11, 5 × 10^11, and 1 × 10^12 n/cm^2, respectively. The mean dark signal (K_D), dark signal spike, dark signal non-uniformity (DSNU), noise (V_N), saturation output signal voltage (V_S), and dynamic range (DR) versus neutron fluence are investigated. The degradation mechanisms of CMOS APS image sensors are analyzed. The mean dark signal increase due to neutron displacement damage appears to be proportional to the displacement damage dose. The dark images from CMOS APS image sensors irradiated by neutrons are presented to investigate the generation of dark signal spike
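
The reported proportionality between mean dark signal and displacement damage dose amounts to a linear fit over fluence; a sketch with made-up dark-signal values (not the paper's data):

```python
import numpy as np

# Hypothetical mean dark-signal readings (mV) versus 1 MeV equivalent
# neutron fluence (n/cm^2), illustrating the reported proportionality.
fluence = np.array([0.0, 1.0e11, 5.0e11, 1.0e12])
dark_signal = np.array([2.0, 3.1, 7.5, 13.0])

slope, intercept = np.polyfit(fluence, dark_signal, 1)
print(f"damage factor ~ {slope:.2e} mV per n/cm^2")
```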

  15. Time-reversed lasing in the terahertz range and its preliminary study in sensor applications

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Yun, E-mail: shenyunoptics@gmail.com [Department of Physics, Nanchang University, Nanchang 330031 (China); Liu, Huaqing [Department of Physics, Nanchang University, Nanchang 330031 (China); Deng, Xiaohua [Institute of Space Science and Technology, Nanchang University, Nanchang 330031 (China); Wang, Guoping [Key Laboratory of Artificial Micro- and Nano-Structures of Ministry of Education and School of Physics and Technology, Wuhan University, Wuhan 430072 (China)

    2017-02-05

Time-reversed lasing in a uniform slab and in a grating structure is investigated in the terahertz range. The results show that both the uniform slab and the grating can support terahertz time-reversed lasing. Nevertheless, due to its tunable effective refractive index, the grating structure can not only exhibit time-reversed lasing more effectively and flexibly than a uniform slab, but can also realize significant absorption over a broader operating frequency range. Furthermore, applications of terahertz time-reversed lasing for novel concentration/thickness sensors are preliminarily studied in a single-channel coherent perfect absorber system. - Highlights: • Time-reversed lasing is investigated in the terahertz range. • The grating structure exhibits time-reversed lasing more effectively and flexibly than a uniform slab. • THz time-reversed lasing for novel concentration/thickness sensors is studied.

  16. Sorting method to extend the dynamic range of the Shack-Hartmann wave-front sensor

    International Nuclear Information System (INIS)

    Lee, Junwon; Shack, Roland V.; Descour, Michael R.

    2005-01-01

We propose a simple and powerful algorithm to extend the dynamic range of a Shack-Hartmann wave-front sensor. In a conventional Shack-Hartmann wave-front sensor the dynamic range is limited by the f-number of a lenslet, because the focal spot is required to remain in the area confined by the single lenslet. The sorting method proposed here eliminates this limitation and extends the dynamic range by tagging each spot in a special sequence. Since the sorting method is a simple algorithm that does not change the measurement configuration, there is no requirement for extra hardware, multiple measurements, or complicated algorithms. We not only present the theory and a calculation example of the sorting method but also implement the measurement of a highly aberrated wavefront from non-rotationally symmetric optics
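
A minimal illustration of the sorting idea, assigning spots to lenslets by their order rather than by nearest-subaperture proximity (the grid-completeness assumption and the toy centroids are ours, not the paper's):

```python
import numpy as np

def sort_spots_to_lenslets(centroids, n_cols):
    """Assign focal spots to lenslets by ordering rather than proximity:
    sort by y into rows, then by x within each row, so a spot may wander
    far outside its own subaperture (assumes every lenslet produced a spot)."""
    c = centroids[np.argsort(centroids[:, 1])]           # sort by y
    rows = np.split(c, len(c) // n_cols)                 # group into rows
    return np.vstack([r[np.argsort(r[:, 0])] for r in rows])

# Spots from a 2x2 lenslet array, heavily displaced by aberration:
spots = np.array([[3.2, 0.1], [0.9, 0.2], [1.4, 2.0], [2.1, 2.1]])
ordered = sort_spots_to_lenslets(spots, n_cols=2)
print(ordered[0])   # spot assigned to the top-left lenslet
```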

  17. Comparison and experimental validation of two potential resonant viscosity sensors in the kilohertz range

    International Nuclear Information System (INIS)

    Lemaire, Etienne; Caillard, Benjamin; Dufour, Isabelle; Heinisch, Martin; Jakoby, Bernhard

    2013-01-01

    Oscillating microstructures are well established and find application in many fields. These include force sensors, e.g. AFM micro-cantilevers or accelerometers based on resonant suspended plates. This contribution presents two vibrating mechanical structures acting as force sensors in liquid media in order to measure hydrodynamic interactions. Rectangular cross section microcantilevers as well as circular cross section wires are investigated. Each structure features specific benefits, which are discussed in detail. Furthermore, their mechanical parameters and their deflection in liquids are characterized. Finally, an inverse analytical model is applied to calculate the complex viscosity near the resonant frequency for both types of structures. With this approach it is possible to determine rheological parameters in the kilohertz range in situ within a few seconds. The monitoring of the complex viscosity of yogurt during the fermentation process is used as a proof of concept to qualify at least one of the two sensors in opaque mixtures. (paper)
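
The complex viscosity recovered near resonance is related to the fluid's complex shear modulus by the textbook identity η* = G*/(iω); a sketch with hypothetical moduli (this is not the paper's full inverse cantilever model):

```python
import numpy as np

def complex_viscosity(g_storage, g_loss, omega):
    """eta* = G*/(i*omega): complex viscosity from the storage and loss
    moduli at angular frequency omega (textbook relation)."""
    return (g_storage + 1j * g_loss) / (1j * omega)

omega = 2.0 * np.pi * 2.0e3            # 2 kHz, i.e. the kilohertz range
eta = complex_viscosity(g_storage=5.0, g_loss=40.0, omega=omega)
print(abs(eta))                        # magnitude of eta*, Pa.s
```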

  20. A wide range and highly sensitive optical fiber pH sensor using polyacrylamide hydrogel

    Science.gov (United States)

    Pathak, Akhilesh Kumar; Singh, Vinod Kumar

    2017-12-01

In the present study we report the fabrication and characterization of a no-core fiber sensor (NCFS) using a smart hydrogel coating for pH measurement. The no-core fiber (NCF) is stubbed between two single-mode fibers with SMA connectors before immobilization of the smart hydrogel. The wavelength interrogation technique is used to calculate the sensitivity of the proposed sensor. The result shows a high sensitivity of 1.94 nm/pH over a wide range of pH values, varying from 3 to 10, with a good linear response. In addition to high sensitivity, the fabricated sensor provides a fast response time with good stability, repeatability, and reproducibility.
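
The 1.94 nm/pH sensitivity is the slope of a linear fit of resonance wavelength against pH; a sketch with synthetic readings generated to match the quoted figure (the 1550 nm baseline is a hypothetical choice):

```python
import numpy as np

# Synthetic resonance-wavelength readings over the reported pH 3-10 range,
# generated to match the ~1.94 nm/pH sensitivity quoted in the abstract.
ph = np.arange(3, 11, dtype=float)
wavelength_nm = 1550.0 + 1.94 * (ph - 3.0)

sensitivity, offset = np.polyfit(ph, wavelength_nm, 1)
print(f"sensitivity = {sensitivity:.2f} nm/pH")   # -> 1.94 nm/pH
```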

  1. A high-resolution full-field range imaging system

    Science.gov (United States)

    Carnegie, D. A.; Cree, M. J.; Dorrington, A. A.

    2005-08-01

There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can typically be characterized as either laser scanning systems, where a laser point is sequentially scanned over a scene, or full-field acquisition systems, where the range to every point in the image is obtained simultaneously. The former offer advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and with the range resolution of some point laser scanners. Initial results achieve centimeter range resolution for a 10-second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.
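
Full-field rangers of this kind recover range from the phase shift of a modulated illumination envelope; a sketch of the standard four-sample phase computation (the 10 MHz modulation frequency, sample values, and sign convention are illustrative assumptions):

```python
import math

C = 299792458.0  # speed of light, m/s

def range_from_samples(a0, a1, a2, a3, f_mod):
    """Indirect time of flight: recover the phase of the modulation
    envelope from four samples taken 90 degrees apart, then convert
    phase to distance (one common sign convention)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod)

# Hypothetical 10 MHz modulation (15 m unambiguous range), phase pi/2:
d = range_from_samples(0.5, 0.0, 0.5, 1.0, f_mod=10.0e6)
print(f"{d:.3f} m")   # -> 3.747 m
```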

  2. High-speed Imaging of Global Surface Temperature Distributions on Hypersonic Ballistic-Range Projectiles

    Science.gov (United States)

    Wilder, Michael C.; Reda, Daniel C.

    2004-01-01

The NASA-Ames ballistic range provides a unique capability for aerothermodynamic testing of configurations in hypersonic, real-gas, free-flight environments. The facility can closely simulate conditions at any point along practically any trajectory of interest experienced by a spacecraft entering an atmosphere. Sub-scale models of blunt atmospheric entry vehicles are accelerated by a two-stage light-gas gun to speeds as high as 20 times the speed of sound to fly ballistic trajectories through a 24 m long vacuum-rated test section. The test-section pressure (effective altitude), the launch velocity of the model (flight Mach number), and the test-section working gas (planetary atmosphere) are independently variable. The model travels at hypersonic speeds through a quiescent test gas, creating a strong bow-shock wave and real-gas effects that closely match conditions achieved during actual atmospheric entry. The challenge with ballistic range experiments is to obtain quantitative surface measurements from a model traveling at hypersonic speeds. The models are relatively small (less than 3.8 cm in diameter), which limits the spatial resolution possible with surface-mounted sensors. Furthermore, since the model is in flight, surface-mounted sensors require some form of on-board telemetry, which must survive the massive acceleration loads experienced during launch (up to 500,000 gravities). Finally, the model and any on-board instrumentation are destroyed at the terminal wall of the range. For these reasons, optical measurement techniques are the most practical means of acquiring data. High-speed thermal imaging has been employed in the Ames ballistic range to measure global surface temperature distributions and to visualize the onset of transition to turbulent flow on the forward regions of hypersonic blunt bodies. Both visible-wavelength and infrared high-speed cameras are in use. The visible-wavelength cameras are intensified CCD imagers capable of integration

  3. Nanocomposite-Based Microstructured Piezoresistive Pressure Sensors for Low-Pressure Measurement Range

    Directory of Open Access Journals (Sweden)

    Vasileios Mitrakos

    2018-01-01

Full Text Available Piezoresistive pressure sensors capable of detecting ranges of low compressive stresses have been successfully fabricated and characterised. The 5.5 × 5 × 1.6 mm^3 sensors consist of a planar aluminium top electrode and a microstructured bottom electrode containing a two-by-two array of truncated pyramids, with a piezoresistive composite layer sandwiched in between. The responses of two different piezocomposite materials, a multiwalled carbon nanotube (MWCNT)-elastomer composite and a Quantum Tunneling Composite (QTC), have been characterised as a function of applied pressure and effective contact area. The MWCNT piezoresistive composite-based sensor was able to detect pressures as low as 200 kPa. The QTC-based sensor was capable of detecting pressures as low as 50 kPa depending on the contact area of the bottom electrode. Such sensors could find useful applications requiring the detection of small compressive loads, such as those encountered in haptic sensing or robotics.

  4. Photon detection with CMOS sensors for fast imaging

    International Nuclear Information System (INIS)

    Baudot, J.; Dulinski, W.; Winter, M.; Barbier, R.; Chabanat, E.; Depasse, P.; Estre, N.

    2009-01-01

Pixel detectors employed in high energy physics aim to detect single minimum ionizing particles with micrometric position resolution. Monolithic CMOS sensors succeed in this task thanks to a low equivalent noise charge of around 10 to 15 e^- per pixel and a pixel pitch varying from 10 to a few tens of microns. Additionally, due to the possibility of integrating some data treatment in the sensor itself, readout times of 100 μs have been reached for 100-kilopixel sensors. These aspects of CMOS sensors are attractive for applications in photon imaging. For X-rays of a few keV, the efficiency is limited to a few percent due to the thin sensitive volume. For visible photons, the back-thinned version of the CMOS sensor is sensitive to low-intensity sources of a few hundred photons. When a back-thinned CMOS sensor is combined with a photo-cathode, a new hybrid detector (EBCMOS) results, which operates as a fast single-photon imager. The first EBCMOS was produced in 2007 and demonstrated single-photon counting with low dark current in laboratory conditions. It has been compared, in two different biological laboratories, with existing CCD-based 2D cameras for fluorescence microscopy. The current EBCMOS sensitivity and frame rate are comparable to existing EMCCDs. On-going developments aim at increasing this frame rate by at least an order of magnitude. In conclusion, we report the first test of a new CMOS sensor, LUCY, which reaches 1000 frames per second.

  5. Broadband image sensor array based on graphene-CMOS integration

    Science.gov (United States)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

Integrated circuits based on complementary metal-oxide-semiconductor (CMOS) technology are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  6. Image sensor for testing refractive error of eyes

    Science.gov (United States)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of eyes does not require the cooperation of children and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in a digital form which can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula considering the effects of an extended light source and the size of the lens aperture has been deduced, which is more reliable in practice. Computer simulation of the image sensing is performed to verify the validity of the results.

  7. CMOS image sensors: State-of-the-art

    Science.gov (United States)

    Theuwissen, Albert J. P.

    2008-09-01

This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration at the column level and even at the pixel level. This will make CMOS imagers even smarter than they already are.

  8. BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry

    Science.gov (United States)

    Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)

    1994-01-01

Until recent times, optical remote sensing of ecosystem properties from space has been limited to broadband multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).

  9. Low-power high-accuracy micro-digital sun sensor by means of a CMOS image sensor

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.

    2013-01-01

A micro-digital sun sensor (μDSS) is a sun detector which senses a satellite's instant attitude angle with respect to the sun. The core of this sensor is a system-on-chip imaging chip which is referred to as APS+. The APS+ integrates a CMOS active pixel sensor (APS) array of 368 × 368 pixels, a

  10. Large dynamic range pressure sensor based on two semicircle-holes microstructured fiber.

    Science.gov (United States)

    Liu, Zhengyong; Htein, Lin; Lee, Kang-Kuen; Lau, Kin-Tak; Tam, Hwa-Yaw

    2018-01-08

This paper presents a sensitive and large dynamic range pressure sensor based on a novel birefringent microstructured optical fiber (MOF) deployed in a Sagnac interferometer configuration. The MOF has two large semicircular holes in the cladding and a rectangular strut with a germanium-doped core in the center. The fiber structure permits the surrounding pressure to induce a large effective index difference between the two polarized modes. The calculated and measured group birefringence of the fiber are 1.49 × 10^-4 and 1.23 × 10^-4, respectively, at a wavelength of 1550 nm. Experimental results showed that the pressure sensitivity of the sensor varied from 45,000 pm/MPa to 50,000 pm/MPa, and a minimum detectable pressure of 80 Pa and a dynamic range of better than 116 dB could be achieved with the novel fiber sensor. The proposed sensor could be used in harsh environments and is an ideal candidate for downhole applications where high-pressure measurement at elevated temperatures up to 250 °C is needed.
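
The quoted 80 Pa detection limit follows directly from the sensitivity once an interrogator wavelength resolution is assumed; a sketch (the 4 pm resolution is our assumption, chosen to reproduce the figure):

```python
# Minimum detectable pressure implied by the quoted sensitivity, assuming
# a hypothetical 4 pm wavelength-interrogation resolution.
sensitivity_pm_per_mpa = 50_000.0   # upper sensitivity figure from the abstract
resolution_pm = 4.0                 # assumed interrogator resolution

min_pressure_pa = resolution_pm / sensitivity_pm_per_mpa * 1.0e6  # MPa -> Pa
print(f"{min_pressure_pa:.0f} Pa")  # -> 80 Pa, consistent with the abstract
```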

  11. Multi-image acquisition-based distance sensor using agile laser spot beam.

    Science.gov (United States)

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images of different sizes on a target, with these beam spot sizes different from the minimal spot size possible at this target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution given by the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
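
One way to realize the spot-size-versus-focal-length idea described above: fit the five measured spot sizes, locate the ECVFL power that minimizes the spot, and convert to distance with a thin-lens model (all numbers, and the lenses-in-contact, collimated-input model, are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

# Five measured spot diameters (hypothetical, in arbitrary units) versus
# ECVFL optical power (diopters); the minimum marks best focus on target.
power = np.array([-4.0, -3.5, -3.0, -2.5, -2.0])
spot = np.array([40.0, 18.0, 6.0, 17.0, 41.0])

a, b, c = np.polyfit(power, spot, 2)          # parabola through the spot data
p_best = -b / (2.0 * a)                       # vertex: power of sharpest focus
f_bias_power = 1.0 / 0.20                     # 20 cm bias lens -> +5 diopters
distance_m = 1.0 / (f_bias_power + p_best)    # thin lenses in contact, collimated input
print(f"{distance_m:.2f} m")                  # -> 0.50 m
```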

  12. 77 FR 26787 - Certain CMOS Image Sensors and Products Containing Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-05-07

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...

  13. 77 FR 33488 - Certain CMOS Image Sensors and Products Containing Same; Institution of Investigation Pursuant to...

    Science.gov (United States)

    2012-06-06

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and... image sensors and products containing same by reason of infringement of certain claims of U.S. Patent No... image sensors and products containing same that infringe one or more of claims 1 and 2 of the `126...

  14. Highly integrated image sensors enable low-cost imaging systems

    Science.gov (United States)

    Gallagher, Paul K.; Lake, Don; Chalmers, David; Hurwitz, J. E. D.

    1997-09-01

The highest barrier to wide-scale implementation of vision systems has been cost. This is closely followed by the level of difficulty of putting a complete imaging system together. As anyone who has ever been in the position of creating a vision system knows, the various bits and pieces supplied by the many vendors are not under any type of standardization control. In short, unless you are an expert in imaging, electrical interfacing, computers, digital signal processing, and high-speed storage techniques, you will likely spend more money trying to do it yourself than buying the exceedingly expensive systems available. An alternative is making headway into the imaging market, however. The growing investment in highly integrated CMOS-based imagers is addressing both the cost and the system integration difficulties. This paper discusses the benefits gained from CMOS-based imaging, and how these benefits are already being applied.

  15. Indoor and Outdoor Depth Imaging of Leaves With Time-of-Flight and Stereo Vision Sensors

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Foix, Sergi; Alenya, Guilliem

    2014-01-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver...... poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high resolution depth data but is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves...

  16. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present a SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
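
The extended-range spot map can be sketched as a per-pixel merge of the differently integrated frames, keeping the longest unsaturated exposure and normalizing (a toy model with a hypothetical 12-bit saturation level, not the authors' implementation):

```python
import numpy as np

def extended_range_spots(frames, t_int, saturation=4095):
    """Merge CCD frames taken with different integration times into one
    extended-dynamic-range spot map: each pixel keeps the longest
    non-saturated exposure, scaled to counts per unit time."""
    merged = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in sorted(zip(frames, t_int), key=lambda p: p[1]):
        ok = frame < saturation          # overwrite only unsaturated pixels
        merged[ok] = frame[ok] / t
    return merged / merged.max()         # normalize before centroiding

short = np.array([[100.0, 900.0]])       # short integration time
long_ = np.array([[1100.0, 4095.0]])     # long integration: bright spot saturates
spot_map = extended_range_spots([short, long_], t_int=[1.0, 10.0])
print(spot_map)
```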

  17. Imaging moving objects from multiply scattered waves and multiple sensors

    International Nuclear Information System (INIS)

    Miranda, Analee; Cheney, Margaret

    2013-01-01

    In this paper, we develop a linearized imaging theory that combines the spatial, temporal and spectral components of multiply scattered waves as they scatter from moving objects. In particular, we consider the case of multiple fixed sensors transmitting and receiving information from multiply scattered waves. We use a priori information about the multipath background. We use a simple model for multiple scattering, namely scattering from a fixed, perfectly reflecting (mirror) plane. We base our image reconstruction and velocity estimation technique on a modification of a filtered backprojection method that produces a phase-space image. We plot examples of point-spread functions for different geometries and waveforms, and from these plots, we estimate the resolution in space and velocity. Through this analysis, we are able to identify how the imaging system depends on parameters such as bandwidth and number of sensors. We ultimately show that enhanced phase-space resolution for a distribution of moving and stationary targets in a multipath environment may be achieved using multiple sensors. (paper)

  18. Handbook of ultra-wideband short-range sensing theory, sensors, applications

    CERN Document Server

    Sachs, Jürgen

    2013-01-01

    Ranging from the theoretical basis of UWB sensors via implementation issues to applications, this much-needed book bridges the gap between designers and practitioners working in civil engineering, biotechnology, medical engineering, robotics, mechanical engineering, safety and homeland security. From the contents: * History * Signals and systems in the time and frequency domain * Propagation of electromagnetic waves (in the frequency and time domain) * UWB principles * UWB antennas and applicators * Data processing * Applications.

  19. Image sensors for radiometric measurements in the ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Desa, E.S.; Desa, B.A.E.

    … the sensors at a stabilised, moderately cool temperature of 15 °C and to intelligently control the exposure time of the device, so as to reliably measure flux levels in the range 1 W/m²/nm to 10⁻⁶ W/m²/nm commonly encountered in the ocean…

  20. Sensitivity Range Analysis of Infrared (IR) Transmitter and Receiver Sensor to Detect Sample Position in Automatic Sample Changer

    International Nuclear Information System (INIS)

    Syirrazie Che Soh; Nolida Yussup; Nur Aira Abdul Rahman; Maslina Ibrahim

    2016-01-01

    The sensitivity range of the IR transmitter and receiver sensor influences the effectiveness of the sensor in detecting the position of a sample. The purpose of this analysis is therefore to determine the suitable design and specification of the sensor's electronic driver so as to achieve the sensitivity range required for operation. The activities related to this analysis cover the electronic design concept and specification, calibration of the design specification, and evaluation of the design specification for the required application. (author)

  1. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely used in many fields such as structural measurement, topographic surveying, and architectural and archeological surveying. As a non-contact technique, photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved as parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, a more general and useful approach, namely bundle adjustment, is used. At the end, two real cases, an excavation and a tower, have been reconstructed.
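The core geometric step, recovering a 3D point from its projections in two views given the camera matrices, can be illustrated with linear (DLT) triangulation. This is a generic textbook sketch using synthetic cameras, not the authors' pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given two 3x4 camera matrices and one
    matched image point (x, y) per view, build the homogeneous system
    A X = 0 and take the SVD null vector as the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # de-homogenize
```

In a full pipeline this runs on every SIFT match after the camera matrices have been estimated, and bundle adjustment then refines all points and cameras jointly.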

  2. An interferometric radar sensor for monitoring the vibrations of structures at short ranges

    Directory of Open Access Journals (Sweden)

    Luzi Guido

    2018-01-01

    Full Text Available The Real-Aperture-Radar (RAR) interferometry technique was consolidated in the last decade as an operational tool for the monitoring of large civil engineering structures such as bridges, towers, and buildings. In the literature, experimental campaigns collected with a well-known commercial equipment have been widely documented, while cases where different types of sensors have been tested are few. On the basis of some experimental tests, a new sensor working at high frequency and providing improved performance is discussed here. The core of the proposed system is an off-the-shelf, linear frequency-modulated continuous-wave device. The development of this apparatus is aimed at achieving a proof of concept, tackling operative aspects related to the development of a low-cost and reliable system. The capability to detect the natural frequencies of a light pole has been verified; by comparing the results of the proposed sensor with those obtained through a commercial system based on the same technique, a more detailed description of the vibrating structure has been achieved. The results of this investigation confirm that the development of sensors working at higher frequencies, although deserving deeper study, is very promising and could open new applications demanding higher spatial resolution at close range.
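In radar interferometry the displacement of a target along the line of sight is recovered from the phase change of the echo between acquisitions. A minimal sketch of that conversion (the 24 GHz carrier in the usage note is an assumed example, not necessarily the frequency of the sensor described above):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def los_displacement(delta_phase_rad, freq_hz):
    """Convert an interferometric phase change to line-of-sight
    displacement for a monostatic radar: d = lambda * dphi / (4*pi).
    The sign convention (toward/away from the radar) is
    application-specific."""
    wavelength = C / freq_hz
    return wavelength * delta_phase_rad / (4.0 * math.pi)
```

At 24 GHz (wavelength about 12.5 mm) a phase change of pi/2 corresponds to roughly 1.56 mm of line-of-sight motion, which illustrates why sub-millimetre vibration amplitudes are measurable with this technique.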

  3. Self-Configuring Indoor Localization Based on Low-Cost Ultrasonic Range Sensors

    Directory of Open Access Journals (Sweden)

    Can Basaran

    2014-10-01

    Full Text Available In smart environments, target tracking is an essential service used by numerous applications, from activity recognition to personalized infotainment. Target tracking relies on sensors with known locations to estimate and keep track of the path taken by the target, and hence it is crucial to have an accurate map of such sensors. However, the need to manually enter their locations after deployment, and the expectation that they remain fixed, significantly limits the usability of target tracking. To remedy this drawback, we present a self-configuring and device-free localization protocol based on genetic algorithms that autonomously identifies the geographic topology of a network of ultrasonic range sensors, automatically detects any change in the established network structure in less than a minute, and generates a new map within seconds. The proposed protocol significantly reduces hardware and deployment costs thanks to the use of low-cost off-the-shelf sensors with no manual configuration. Experiments on two real testbeds of different sizes show that the proposed protocol achieves an error of 7.16~17.53 cm in topology mapping, while also tracking a mobile target with an average error of 11.71~18.43 cm and detecting displacements of 1.41~3.16 m in approximately 30 s.
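Once the sensor (anchor) positions are known, a target position can be estimated from simultaneous range measurements by linear least squares. This is a generic multilateration sketch of the tracking step, not the protocol's genetic-algorithm topology mapping:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from ranges to known anchor positions.
    Subtracting the first anchor's range equation from the others
    cancels the quadratic term and linearizes the problem to A p = b."""
    anchors = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With more than three anchors the least-squares solution also averages out part of the per-sensor ranging noise.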

  4. Honeywell's Compact, Wide-angle Uv-visible Imaging Sensor

    Science.gov (United States)

    Pledger, D.; Billing-Ross, J.

    1993-01-01

    Honeywell is currently developing the Earth Reference Attitude Determination System (ERADS). ERADS determines attitude by imaging the entire Earth's limb and a ring of the adjacent star field in the 2800–3000 Å band of the ultraviolet. This is achieved through the use of a highly nonconventional optical system, an intensifier tube, and a mega-element CCD array. The optics image a 30 degree region in the center of the field, and an outer region typically from 128 to 148 degrees, which can be adjusted up to 180 degrees. Because of the design employed, the illumination at the outer edge of the field is only some 15 percent below that at the center, in contrast to the drastic rolloffs encountered in conventional wide-angle sensors. The outer diameter of the sensor is only 3 in; the volume and weight of the entire system, including processor, are 1000 cc and 6 kg, respectively.

  5. Optical Imaging Sensors and Systems for Homeland Security Applications

    CERN Document Server

    Javidi, Bahram

    2006-01-01

    Optical and photonic systems and devices have significant potential for homeland security. Optical Imaging Sensors and Systems for Homeland Security Applications presents original and significant technical contributions from leaders of industry, government, and academia in the field of optical and photonic sensors, systems and devices for detection, identification, prevention, sensing, security, verification and anti-counterfeiting. The chapters have recent and technically significant results, ample illustrations, figures, and key references. This book is intended for engineers and scientists in the relevant fields, graduate students, industry managers, university professors, government managers, and policy makers. Advanced Sciences and Technologies for Security Applications focuses on research monographs in the areas of -Recognition and identification (including optical imaging, biometrics, authentication, verification, and smart surveillance systems) -Biological and chemical threat detection (including bios...

  6. Optical fiber sensors for image formation in radiodiagnostic - preliminary essays

    International Nuclear Information System (INIS)

    Carvalho, Cesar C. de; Werneck, Marcelo M.

    1998-01-01

    This work describes preliminary experiments that will provide the basis for analyzing the feasibility of implementing a system able to capture radiological images with a new sensor system, comprising an FO scanning process and an I-CCD camera. The main objective of these experiments is to analyze the optical response of the FO bundle, with several types of scintillators associated with it, when submitted to medical X-ray exposure. (author)

  7. Multi sensor satellite imagers for commercial remote sensing

    Science.gov (United States)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for multi-sensor satellite imagers with panchromatic, multi-spectral, area and hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as monitoring food supplies, crop yields and disasters in mind. The aim of these imagers is to achieve medium to high resolution (2.5 m to 15 m) spatial sampling, wide swaths (up to 45 km) and noise-equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, and the choice of detectors to achieve these performances is addressed. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25 m GSD) and a catadioptric imager with panchromatic (2.7 m GSD), multi-spectral (6 bands, 4.6 m GSD) and hyperspectral (400 nm to 2.35 μm, 200 bands, 15 m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the front-end electronics and the control electronics, with analogue and digital signal processing. A dedicated analogue front-end is used for correlated double sampling (CDS), black-level correction, variable gain, up to 12-bit digitizing and a high-speed LVDS data link to a mass memory unit.

  8. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    Science.gov (United States)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as a benthic community (seagrasses), were mapped.
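The SAM rule can be sketched in a few lines: each pixel spectrum is compared with every library spectrum via the angle between them as vectors, and the pixel is assigned to the closest match. This is a generic illustration with made-up spectra, not the AVIRIS processing chain:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum; small angles mean similar spectral shape regardless of
    overall brightness."""
    cosang = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(cube, library):
    """Assign each pixel of an (H, W, B) cube the index of the library
    spectrum with the smallest spectral angle."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    angles = np.stack(
        [[spectral_angle(px, ref) for px in flat] for ref in library])
    return np.argmin(angles, axis=0).reshape(H, W)
```

Because the angle ignores vector magnitude, SAM is largely insensitive to illumination differences across the scene, which is one reason it suits spectrally complex coastal imagery.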

  9. Planoconcave optical microresonator sensors for photoacoustic imaging: pushing the limits of sensitivity (Conference Presentation)

    Science.gov (United States)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2016-03-01

    Most photoacoustic scanners use piezoelectric detectors, but these have two key limitations. Firstly, they are optically opaque, inhibiting backward-mode operation. Secondly, it is difficult to achieve adequate detection sensitivity with the small element sizes needed to provide the near-omnidirectional response required for tomographic imaging. Planar Fabry-Perot (FP) ultrasound sensing etalons can overcome both of these limitations and have proved extremely effective for superficial imaging when interrogated with a focused laser beam. However, this has the disadvantage that beam walk-off due to the divergence of the beam fundamentally limits the etalon finesse and thus sensitivity; in essence, the problem is one of insufficient optical confinement. To overcome this, novel planoconcave micro-resonator sensors have been fabricated using precision ink-jet printed polymer domes with curvatures matching that of the laser wavefront. By providing near-perfect beam confinement, we show that it is possible to approach the maximum theoretical limit for finesse (f) imposed by the etalon mirror reflectivities (e.g. f = 400 for R = 99.2%, in contrast to a much lower typical planar sensor value). In addition, because there is no beam walk-off, viable sensors can be made with significantly greater thickness than planar FP sensors. This provides an additional sensitivity gain for deep tissue imaging applications such as breast imaging, where detection bandwidths in the low MHz can be tolerated. For example, for a 250 μm thick planoconcave sensor with a -3 dB bandwidth of 5 MHz, the measured NEP was 4 Pa. This NEP is comparable to that provided by mm-scale piezoelectric detectors used for breast imaging applications, but with more uniform frequency response characteristics and an order-of-magnitude smaller element size. Following previous proof-of-concept work, several important advances towards practical application have been made. A family of sensors with bandwidths ranging from 3 MHz to 20 MHz has been fabricated and characterised. A novel interrogation scheme based on …
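The reflectivity-limited finesse quoted above follows from the standard lossless Fabry-Perot relation F = pi*sqrt(R)/(1-R). A quick consistency check (this is the textbook formula, applied here only to verify the quoted numbers):

```python
import math

def fp_finesse(R):
    """Reflectivity-limited finesse of a lossless Fabry-Perot etalon:
    F = pi * sqrt(R) / (1 - R), where R is the mirror reflectivity."""
    return math.pi * math.sqrt(R) / (1.0 - R)
```

For R = 0.992 this gives a finesse of about 391, consistent with the f = 400 the abstract quotes for R = 99.2%.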

  10. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

    Full Text Available This paper presents a full parallel event driven readout method implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor records and reads out only the effective time and position information, adopting the full parallel event driven readout method with the aim of reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the arrival times of photons, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to assess the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when the centre-of-mass method (CMM) is employed to estimate the lifetimes.
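For a mono-exponential decay, the centre-of-mass method (CMM) used above for lifetime estimation reduces to the mean photon arrival time relative to the excitation pulse; the truncation bias is negligible when the measurement window (100 ns here) is much longer than the lifetime. A minimal sketch with synthetic photon data, not the sensor's on-chip implementation:

```python
import numpy as np

def cmm_lifetime(timestamps):
    """Centre-of-mass method (CMM) estimate of a mono-exponential
    fluorescence lifetime: the mean photon arrival time measured from
    the excitation pulse. Valid when the collection window is much
    longer than the lifetime, otherwise a truncation correction is
    needed."""
    return float(np.mean(timestamps))
```

CMM needs no iterative fitting, which is why it is attractive for on-chip or real-time lifetime estimation from TDC timestamps.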

  11. Positron range in PET imaging: non-conventional isotopes

    International Nuclear Information System (INIS)

    Jødal, L; Le Loirec, C; Champion, C

    2014-01-01

    In addition to conventional short-lived radionuclides, longer-lived isotopes are becoming increasingly important to positron emission tomography (PET). The longer half-life both allows for circumvention of the in-house production of radionuclides and expands the spectrum of physiological processes amenable to PET imaging, including processes with prohibitively slow kinetics for investigation with short-lived radiotracers. However, many of these radionuclides emit 'high-energy' positrons and gamma rays which affect the spatial resolution and quantitative accuracy of PET images. The objective of the present work is to investigate the positron range distribution for some of these long-lived isotopes. Based on existing Monte Carlo simulations of positron interactions in water, the probability distribution of the line-of-response displacement has been empirically described by means of analytic displacement functions. Relevant distributions have been derived for the isotopes 22Na, 52Mn, 89Zr, 45Ti, 51Mn, 94mTc, 52mMn, 38K, 64Cu, 86Y, 124I, and 120I. It was found that the distribution functions previously found for a series of conventional isotopes (Jødal et al 2012 Phys. Med. Biol. 57 3931–43) were also applicable to these non-conventional isotopes, except that for 120I, 124I, 89Zr, 52Mn, and 64Cu the parameters in the formulae were less well predicted by the mean positron energy alone. Both conventional and non-conventional range distributions can be described by relatively simple analytic expressions. The results will be applicable to image-reconstruction software to improve the resolution. (paper)

  12. Fast regional readout CMOS Image Sensor for dynamic MLC tracking

    Science.gov (United States)

    Zin, H.; Harris, E.; Osmond, J.; Evans, P.

    2014-03-01

    Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring of the dose rate. This work explores the feasibility of a prototype complementary metal-oxide-semiconductor image sensor (CIS) for tracking these complex treatments by utilising its fast, region-of-interest (ROI) readout functionality. An automatic edge tracking algorithm was used to locate the MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged at various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion to within an accuracy of 1.0 mm. This demonstrates the feasibility of the sensor for verifying treatment deliveries involving dynamic MLC at up to ~400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as electronic portal imaging devices (EPID). The CIS provides the basis for an essential real-time verification tool, useful in assessing the accurate delivery of complex high-energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.

  13. Fast regional readout CMOS image sensor for dynamic MLC tracking

    International Nuclear Information System (INIS)

    Zin, H; Harris, E; Osmond, J; Evans, P

    2014-01-01

    Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring of the dose rate. This work explores the feasibility of a prototype complementary metal-oxide-semiconductor image sensor (CIS) for tracking these complex treatments by utilising its fast, region-of-interest (ROI) readout functionality. An automatic edge tracking algorithm was used to locate the MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged at various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion to within an accuracy of 1.0 mm. This demonstrates the feasibility of the sensor for verifying treatment deliveries involving dynamic MLC at up to ∼400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as electronic portal imaging devices (EPID). The CIS provides the basis for an essential real-time verification tool, useful in assessing the accurate delivery of complex high-energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.

  14. Radio frequency (RF) time-of-flight ranging for wireless sensor networks

    International Nuclear Information System (INIS)

    Thorbjornsen, B; White, N M; Brown, A D; Reeve, J S

    2010-01-01

    Position information of nodes within wireless sensor networks (WSNs) is often a requirement in order to make use of the data recorded by the sensors themselves. On deployment the nodes normally have no prior knowledge of their position, and thus a locationing mechanism is required to determine their positions. In this paper, we describe a method to determine the point-to-point range between sensor nodes as part of the locationing process. A two-way time-of-flight (TOF) ranging scheme using narrow-band RF is presented. The frequency difference between the transceivers involved in the point-to-point measurement is used to obtain a sub-clock TOF phase offset measurement in order to achieve high-resolution TOF measurements. The ranging algorithm has been developed and prototyped on a TI CC2430 development kit with no additional hardware required. Performance results have been obtained for line-of-sight (LOS), non-line-of-sight (NLOS) and indoor conditions. Accuracy is typically better than 7.0 m RMS for the LOS condition over 250.0 m and 15.8 m RMS for the NLOS condition over 120.0 m using a 100-sample average. Indoor accuracy is measured to 1.7 m RMS using a 1000-sample average over 8.0 m. The ranging error is linear and does not increase with increased transmitter–receiver distance. Our TOF ranging scheme demonstrates a novel system whose resolution and accuracy are time dependent, in comparison with alternative frequency-dependent methods using narrow-band RF.
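The basic two-way TOF computation is simple: the responder's known turnaround delay is subtracted from the measured round-trip time and the remaining propagation time is halved; the sub-clock phase-offset technique described above refines the round-trip measurement itself. A minimal sketch (the timing values in the usage below are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_tof_range(t_round_trip_s, t_turnaround_s):
    """Two-way time-of-flight ranging: subtract the responder's known
    turnaround delay from the measured round-trip time, then split the
    remaining propagation time between the two directions."""
    return C * (t_round_trip_s - t_turnaround_s) / 2.0
```

Because light covers 250 m in under a microsecond while typical turnaround delays are far longer, the accuracy of the calibrated turnaround time dominates the error budget.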

  15. Soybean varieties discrimination using non-imaging hyperspectral sensor

    Science.gov (United States)

    da Silva Junior, Carlos Antonio; Nanni, Marcos Rafael; Shakir, Muhammad; Teodoro, Paulo Eduardo; de Oliveira-Júnior, José Francisco; Cezar, Everson; de Gois, Givanildo; Lima, Mendelson; Wojciechowski, Julio Cesar; Shiratsuchi, Luciano Shozo

    2018-03-01

    The infrared region of the electromagnetic spectrum has remarkable applications in crop studies. The infrared band, along with the red band, has been used to develop several vegetation indices. Indices such as NDVI and EVI provide important information on the physiological stages of any crop. The main objective of this research was to discriminate 4 different soybean varieties (BMX Potência, NA5909, FT Campo Mourão and Don Mario) using a non-imaging hyperspectral sensor. The study was conducted in four agricultural areas in the municipality of Deodápolis (MS), Brazil. For spectral analysis, 2400 field samples were taken from soybean leaves by means of a FieldSpec 3 JR spectroradiometer in the range from 350 to 2500 nm. The data were evaluated through multivariate analysis of the whole set of spectral curves isolated by blue, green, red and near-infrared wavelengths, along with the addition of vegetation indices (Enhanced Vegetation Index, EVI; Normalized Difference Vegetation Index, NDVI; Green Normalized Difference Vegetation Index, GNDVI; Soil-Adjusted Vegetation Index, SAVI; Transformed Vegetation Index, TVI; and Optimized Soil-Adjusted Vegetation Index, OSAVI). The analyses performed were discriminant analysis (on 60 and 80% of the data), simulated discriminant analysis (on 40 and 20% of the data), principal component (PC) analysis and cluster analysis (CA). Discriminant and simulated discriminant analyses presented satisfactory results, with average global hit rates of 99.28 and 98.77%, respectively. The results obtained by PC and CA revealed considerable associations between the evaluated variables and the varieties, which indicated that each variety has a variable that discriminates it more effectively in relation to the others. There was great variation in the sample size (number of leaves) needed for estimating the mean of the variables. However, it was possible to observe that 200 leaves allow a maximum error of 2% in relation to the mean.
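Several of the vegetation indices listed above are simple band combinations of reflectances; their standard definitions (with the usual soil-brightness factor L = 0.5 for SAVI) are:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: the green band replaces the red band."""
    return (nir - green) / (nir + green)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)
```

With a field spectroradiometer, `nir` and `red` would be reflectances averaged over the corresponding wavelength windows of each measured curve.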

  16. Survey on Ranging Sensors and Cooperative Techniques for Relative Positioning of Vehicles

    Directory of Open Access Journals (Sweden)

    Fabian de Ponte Müller

    2017-01-01

    Full Text Available Future driver assistance systems will rely on accurate, reliable and continuous knowledge of the position of other road participants, including pedestrians, bicycles and other vehicles. The usual approach to meet this requirement is to use on-board ranging sensors inside the vehicle. Radar, laser scanners or vision-based systems are able to detect objects in their line of sight. In contrast to these non-cooperative ranging sensors, cooperative approaches follow a strategy in which other road participants actively support the estimation of the relative position. The limitations of on-board ranging sensors regarding their detection range and angle of view, and their susceptibility to blockage, can be addressed by a cooperative approach based on vehicle-to-vehicle communication. The fusion of both cooperative and non-cooperative strategies seems to offer the largest benefits regarding accuracy, availability and robustness. This survey offers the reader a comprehensive review of different techniques for vehicle relative positioning. The reader will learn the important performance indicators for the relative positioning of vehicles, the different technologies that are both commercially available and currently under research, their expected performance and their intrinsic limitations. Moreover, the latest research in the area of vision-based systems for vehicle detection, as well as the latest work on GNSS-based vehicle localization and vehicular communication for relative positioning of vehicles, is reviewed. The survey also covers research work on the fusion of cooperative and non-cooperative approaches to increase reliability and availability.
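When both a cooperative estimate (e.g. derived from V2V-exchanged GNSS positions) and a non-cooperative one (e.g. from radar) of the same relative distance are available, the simplest fusion is inverse-variance weighting. This is a generic sketch of the idea, not a specific method from the survey:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Inverse-variance weighting of two independent estimates of the
    same quantity; the fused variance is never larger than either
    input variance, which is the formal sense in which fusion improves
    accuracy and robustness."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var
```

In practice this weighting appears as the update step of a Kalman filter, which additionally handles the temporal dynamics of the relative motion.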

  17. Imaging intracellular pH in live cells with a genetically encoded red fluorescent protein sensor.

    Science.gov (United States)

    Tantama, Mathew; Hung, Yin Pun; Yellen, Gary

    2011-07-06

    Intracellular pH affects protein structure and function, and proton gradients underlie the function of organelles such as lysosomes and mitochondria. We engineered a genetically encoded pH sensor by mutagenesis of the red fluorescent protein mKeima, providing a new tool to image intracellular pH in live cells. This sensor, named pHRed, is the first ratiometric, single-protein red fluorescent sensor of pH. Fluorescence emission of pHRed peaks at 610 nm while exhibiting dual excitation peaks at 440 and 585 nm that can be used for ratiometric imaging. The intensity ratio responds with an apparent pKa of 6.6 and a >10-fold dynamic range. Furthermore, pHRed has a pH-responsive fluorescence lifetime that changes by ~0.4 ns over physiological pH values and can be monitored with single-wavelength two-photon excitation. After characterizing the sensor, we tested pHRed's ability to monitor intracellular pH by imaging energy-dependent changes in cytosolic and mitochondrial pH.
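Converting a measured excitation ratio back to pH typically uses a one-site sigmoidal calibration. A generic sketch of that inversion: only the apparent pKa of 6.6 comes from the abstract, while the Rmin/Rmax constants below are placeholders and pHRed's actual calibration must be measured:

```python
import math

def ratio_to_ph(R, R_min, R_max, pKa):
    """Invert a one-site sigmoid calibration for a ratiometric pH
    sensor: pH = pKa - log10((R_max - R) / (R - R_min)). Assumes the
    ratio increases monotonically with pH between R_min and R_max."""
    return pKa - math.log10((R_max - R) / (R - R_min))
```

At the midpoint ratio (R_min + R_max)/2 the formula returns exactly the pKa, as expected for a one-site titration curve.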

  18. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    Science.gov (United States)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row for each 8 × 8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.

  19. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    Full Text Available The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; the low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and the processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  20. Laser Doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics normally used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a design tailored for LDBF signals, with a balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  1. Light-Addressable Potentiometric Sensors for Quantitative Spatial Imaging of Chemical Species.

    Science.gov (United States)

    Yoshinobu, Tatsuo; Miyamoto, Ko-Ichiro; Werner, Carl Frederik; Poghossian, Arshak; Wagner, Torsten; Schöning, Michael J

    2017-06-12

    A light-addressable potentiometric sensor (LAPS) is a semiconductor-based chemical sensor, in which a measurement site on the sensing surface is defined by illumination. This light addressability can be applied to visualize the spatial distribution of pH or the concentration of a specific chemical species, with potential applications in the fields of chemistry, materials science, biology, and medicine. In this review, the features of this chemical imaging sensor technology are compared with those of other technologies. Instrumentation, principles of operation, and various measurement modes of chemical imaging sensor systems are described. The review discusses and summarizes state-of-the-art technologies, especially with regard to the spatial resolution and measurement speed; for example, a high spatial resolution in a submicron range and a readout speed in the range of several tens of thousands of pixels per second have been achieved with the LAPS. The possibility of combining this technology with microfluidic devices and other potential future developments are discussed.

  2. Miniature infrared hyperspectral imaging sensor for airborne applications

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are an area array of diffractive optical elements, where each element is tuned to image a different spectral region onto a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each.

  3. Development of wide range charge integration application specified integrated circuit for photo-sensor

    Energy Technology Data Exchange (ETDEWEB)

    Katayose, Yusaku, E-mail: katayose@ynu.ac.jp [Department of Physics, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama, Kanagawa 240-8501 (Japan); Ikeda, Hirokazu [Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Tanaka, Manobu [National Laboratory for High Energy Physics, KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Shibata, Makio [Department of Physics, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama, Kanagawa 240-8501 (Japan)

    2013-01-21

    A front-end application-specific integrated circuit (ASIC) is developed with a wide dynamic range amplifier (WDAMP) to read out signals from a photo-sensor such as a photodiode. The WDAMP ASIC consists of a charge sensitive preamplifier, four wave-shaping circuits with different amplification factors, and a Wilkinson-type analog-to-digital converter (ADC). To realize a wider range, the integrating capacitor in the preamplifier can be changed from 4 pF to 16 pF by a two-bit switch. The output of the preamplifier is shared by the four wave-shaping circuits, with gains of 1, 4, 16 and 64, to match the input range of the ADC. A 0.25-μm CMOS process (of UMC electronics CO., LTD) is used to fabricate the ASIC with four channels. A dynamic range of four orders of magnitude is achieved, with a maximum range over 20 pC and a noise performance of 0.46 fC + 6.4×10^-4 fC/pF. -- Highlights: • A front-end ASIC is developed with a wide dynamic range amplifier. • The ASIC consists of a CSA, four wave-shaping circuits and pulse-height-to-time converters. • The dynamic range of four orders of magnitude is achieved with the maximum range over 20 pC.

  4. Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling

    Directory of Open Access Journals (Sweden)

    Elise Lachat

    2015-10-01

    Full Text Available In the last decade, RGB-D cameras (also called range imaging cameras) have evolved continuously. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics or computer vision. The Kinect v1 (Microsoft), released in November 2010, promoted the use of RGB-D cameras, and a second version of the sensor arrived on the market in July 2014. Since it is possible to obtain point clouds of an observed scene at a high frequency, one could imagine applying this type of sensor to answer the need for 3D acquisition. However, due to the technology involved, some questions have to be considered, such as the suitability and accuracy of RGB-D cameras for close range 3D modeling. The quality of the acquired data therefore represents a major axis of investigation. In this paper, the use of the recent Kinect v2 sensor to reconstruct small objects in three dimensions has been investigated. To achieve this goal, a survey of the sensor characteristics as well as a calibration approach are presented. After an accuracy assessment of the produced models, the benefits and drawbacks of the Kinect v2 compared with the first version of the sensor, and then with photogrammetry, are discussed.
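
Point clouds like those discussed above are obtained by back-projecting each depth pixel through the camera intrinsics. A minimal pinhole-model sketch follows; the intrinsic values are hypothetical Kinect-v2-like numbers for illustration, not calibration results from the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into 3D camera coordinates
    with the pinhole model; fx, fy, cx, cy are calibrated intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)         # (h, w, 3) point cloud

# Hypothetical Kinect-v2-like intrinsics for its 512 x 424 depth map,
# with a flat scene 1.5 m from the camera.
pts = depth_to_points(np.full((424, 512), 1.5),
                      fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```

A calibration procedure such as the one investigated in the paper supplies the intrinsics (and lens distortion terms, omitted here).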

  5. Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor

    Energy Technology Data Exchange (ETDEWEB)

    Yu Junting; Li Binqiao; Yu Pingping; Xu Jiangtao [School of Electronics Information Engineering, Tianjin University, Tianjin 300072 (China); Mou Cun, E-mail: xujiangtao@tju.edu.c [Logistics Management Office, Hebei University of Technology, Tianjin 300130 (China)

    2010-09-15

    Pixel image lag in a 4-T CMOS image sensor is analyzed and simulated in a two-dimensional model. Strategies for reducing image lag are discussed in terms of transfer gate channel threshold voltage doping adjustment, PPD N-type doping dose/implant tilt adjustment, and transfer gate operation voltage adjustment for signal electron transfer. With the computer analysis tool ISE-TCAD, simulation results show that minimum image lag can be obtained at a pinned photodiode n-type doping dose of 7.0 x 10^12 cm^-2, an implant tilt of -2°, a transfer gate channel doping dose of 3.0 x 10^12 cm^-2 and an operation voltage of 3.4 V. The conclusions of this theoretical analysis can be a guideline for pixel design to improve the performance of 4-T CMOS image sensors. (semiconductor devices)

  6. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.

    Science.gov (United States)

    Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma

    2017-07-20

    Various existing target ranging techniques are limited in terms of the dynamic range of operation and measurement resolution. These limitations arise as a result of a particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa. Often, a distance sensor is designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence, which severely affects the sensor measurement range, is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the interpixel pitch of a finite-sized PSD. Through the use of tunable focus lenses (TFLs), we then propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of the beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
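
The basic triangulation geometry referred to above can be sketched as follows. This is the idealized similar-triangles model, not the authors' TFL-based design, and the parameter values are illustrative only; it does, however, show why a one-pixel quantization step on the PSD costs more range accuracy the farther away the target is.

```python
def triangulation_range(spot_pos_m, focal_m, baseline_m):
    """Idealized laser triangulation: by similar triangles, the target
    range follows from the reflected spot position on the PSD."""
    return focal_m * baseline_m / spot_pos_m

def range_resolution(range_m, pixel_pitch_m, focal_m, baseline_m):
    """Range change caused by a one-pixel shift of the spot: it grows
    with the square of the range, so resolution degrades at distance."""
    return range_m ** 2 * pixel_pitch_m / (focal_m * baseline_m)

# Illustrative values: 50 mm focal length, 100 mm baseline, 10 um pitch.
z = triangulation_range(0.001, focal_m=0.05, baseline_m=0.1)   # ~5.0 m
dz = range_resolution(z, 10e-6, focal_m=0.05, baseline_m=0.1)  # ~0.05 m
```

The quadratic growth of `range_resolution` with distance is exactly the range/resolution trade-off the paper's adaptive beam control addresses.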

  7. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
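
The four-measurement arctangent phase retrieval described above can be sketched as follows, using ideal sinusoidal waveforms (so the odd-harmonic nonlinearity the paper cancels does not appear in this toy example):

```python
import numpy as np

def phase_to_range(a0, a1, a2, a3, mod_freq_hz, c=299792458.0):
    """Four-step phase retrieval for AMCW range imaging: samples taken
    at 0, 90, 180 and 270 degrees of the modulation envelope yield the
    envelope phase, and hence the range, via an arctangent."""
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return c * phase / (4 * np.pi * mod_freq_hz)

# Ideal sinusoidal samples for a target at envelope phase pi/4.
true_phase = np.pi / 4
a = [1.0 + 0.5 * np.cos(true_phase - k * np.pi / 2) for k in range(4)]
rng_m = phase_to_range(*a, mod_freq_hz=30e6)
```

With square-wave modulation the four samples pick up odd harmonics, and the recovered phase acquires the periodic nonlinear error that the harmonic-rejection sampling scheme removes.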

  8. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers eliminate the motion artifacts seen with rolling shutter imagers and thereby extend CMOS sensors to miscellaneous applications such as machine vision, 3D imaging, medical imaging and space. A low noise global shutter pixel requires more than one non-light-sensitive memory to reduce the read noise, but a larger memory area reduces the fill-factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill-factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination. Non-light-sensitive memory elements make fabrication with BSI possible. Applications such as fast inspection systems in machine vision and 3D medical or scientific imaging demand high frame rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. On-chip ADCs with dual correlated double sampling (CDS) and a high-rate digital interface reduce the read noise and allow more operation control on chip. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performances after fabrication.

  9. BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Bin Cao

    2017-09-01

    Full Text Available Due to the rapid development of the Internet of Things (IoT), many feasible deployments of sensor monitoring networks have been made to capture events in the physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. Generally, the time interval that results in the highest incidence of a severe event has significance for society. For example, there exists an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC) approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support real-time insert and delete operations. The experimental results show that BMRC outperforms the baseline algorithm in terms of efficiency.
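
The underlying maximum range counting problem can be illustrated with a simple sweep-line over interval endpoints. This is a baseline sketch of the problem, not the paper's bitmap-based index, which targets scalability under high-frequency inserts and deletes:

```python
def max_overlap_interval(intervals):
    """Sweep-line baseline for maximum range counting: return the
    largest number of closed intervals covering any single point,
    together with a point where that maximum occurs."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # interval opens
        events.append((end, -1))    # interval closes
    best, cur, best_t = 0, 0, None
    # Sort by time; at ties, process openings before closings so that
    # touching intervals are counted as overlapping.
    for t, delta in sorted(events, key=lambda e: (e[0], -e[1])):
        cur += delta
        if cur > best:
            best, best_t = cur, t
    return best, best_t

# Three symptom-report intervals; time 4 lies inside all of them.
count, when = max_overlap_interval([(1, 5), (2, 6), (4, 8)])
```

The sweep runs in O(n log n) per query; a bitmap index like BMRC avoids re-sorting the whole event set every time sensor nodes update their data.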

  10. Impedance measurements on a fast transition-edge sensor for optical and near-infrared range

    International Nuclear Information System (INIS)

    Taralli, E; Portesi, C; Lolli, L; Monticone, E; Rajteri, M; Novikov, I; Beyer, J

    2010-01-01

    Impedance measurements of superconducting transition-edge sensors (TESs) are a powerful tool to obtain information about the TES thermal and electrical properties. We apply this technique to a 20 μm x 20 μm Ti/Au TES, suitable for application in the optical and near-infrared range, and extend the measurements up to 250 kHz in order to obtain a complete frequency response in the complex plane. From these measurements we obtain important thermal and electrical device parameters, such as the heat capacity C, the thermal conductance G and the effective thermal time constant τ_eff, which are compared with the corresponding values obtained from noise measurements.

  11. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    Science.gov (United States)

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
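
A photon counting histogram of the kind analysed by the PSW method can be simulated with a simple Poisson-photon plus Gaussian-read-noise pixel model. The parameters below are illustrative, not those of the sensor in the paper; with read noise well below 0.5 e- the single-photon peaks remain separated, which is the DSERN condition.

```python
import numpy as np

def photon_counting_histogram(read_noise_e, n=200000, mean_photons=2.0, seed=1):
    """Simulate a DSERN pixel output (Poisson photon number plus
    Gaussian read noise, unity gain) and return its histogram; the
    peak separation and width encode the read noise."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(mean_photons, n)              # true photon counts
    signal = photons + rng.normal(0.0, read_noise_e, n) # add read noise
    return np.histogram(signal, bins=200, range=(-1.0, 8.0))

# Deep sub-electron read noise: single-photon peaks stay resolved.
hist, edges = photon_counting_histogram(read_noise_e=0.15)
```

Fitting the peak positions and widths in such a histogram is how the PSW technique recovers the cumulative read noise of the analogue integrating pixel.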

  12. ANALYSIS OF SPECTRAL CHARACTERISTICS AMONG DIFFERENT SENSORS BY USE OF SIMULATED RS IMAGES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This research, by use of an RS image-simulating method, simulated apparent reflectance images at sensor level and ground-reflectance images for the corresponding bands of SPOT-HRV, CBERS-CCD, Landsat-TM and NOAA14-AVHRR. These images were used to analyze the differences among sensors caused by spectral sensitivity and atmospheric impacts. The differences were analyzed in terms of the Normalized Difference Vegetation Index (NDVI). The results showed that differences in the sensors' spectral characteristics cause changes in their NDVI and reflectance. When data from multiple sensors are applied to digital analysis, this error should be taken into account. Atmospheric effects make NDVI smaller, and atmospheric correction has the tendency of increasing NDVI values. The reflectance and NDVIs of different sensors can be used to analyze the differences among the sensors' features. The spectral analysis method based on simulated RS images can provide a new way to design the spectral characteristics of new sensors.
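
NDVI, the index used for the comparison above, is computed per pixel from the near-infrared and red reflectance bands:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance; eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR relative to red.
value = ndvi(0.5, 0.1)
```

Because each sensor's red and NIR bands have different spectral response functions, the same scene yields slightly different `nir` and `red` values per sensor, which is the NDVI discrepancy the study quantifies.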

  13. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. To improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. To improve the rotation invariance of the slice image, three new feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.

  14. Infrared Range Sensor Array for 3D Sensing in Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yongtae Do

    2013-04-01

    Full Text Available This paper presents the design and testing of multiple infrared range detectors arranged in a two-dimensional (2D) array. The proposed system can collect sparse three-dimensional (3D) data of objects and surroundings for robotics applications. Three kinds of tasks are considered using the system: detecting obstacles that lie ahead of a mobile robot, sensing the ground profile for the safe navigation of a mobile robot, and sensing the shape and position of an object on a conveyor belt for pickup by a robot manipulator. The developed system is potentially a simple alternative to high-resolution (and expensive) 3D sensing systems, such as stereo cameras or laser scanners. In addition, the system can provide shape information about target objects and surroundings that cannot be obtained using simple ultrasonic sensors. Laboratory prototypes of the system were built with nine infrared range sensors arranged in a 3×3 array, and test results confirmed the validity of the system.

  15. CCD image sensor induced error in PIV applications

    Science.gov (United States)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model of the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  16. CCD image sensor induced error in PIV applications

    International Nuclear Information System (INIS)

    Legrand, M; Nogueira, J; Vargas, A A; Ventas, R; Rodríguez-Hidalgo, M C

    2014-01-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (∼0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model of the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described. (paper)

  17. Laser Doppler perfusion imaging with a complementary metal oxide semiconductor image sensor

    NARCIS (Netherlands)

    Serov, Alexander; Steenbergen, Wiendelt; de Mul, F.F.M.

    2002-01-01

    We utilized a complementary metal oxide semiconductor video camera for fast flow imaging with the laser Doppler technique. A single sensor is used for both observation of the area of interest and measurements of the interference signal caused by dynamic light scattering from moving particles inside

  18. A novel method of range measuring for a mobile robot based on multi-sensor information fusion

    International Nuclear Information System (INIS)

    Zhang Yi; Luo Yuan; Wang Jifeng

    2005-01-01

    Traditional range measurement for a mobile robot is based on a sonar sensor. Because of varying working environments, it is very difficult to obtain high precision using just one single method of range measurement. Therefore, a hybrid method combining a sonar sensor and a laser scanner is put forward to overcome these shortcomings. A novel fusion model is proposed based on the basic theory and methods of information fusion. An optimal measurement result has been obtained with information fusion from the different sensors. After a large number of experiments and performance analysis, it can be concluded that the laser scanner and sonar sensor method with multi-sensor information fusion has higher precision than the sonar-only method, and that it performs consistently across different environments
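
One common information-fusion rule for combining range readings from heterogeneous sensors is inverse-variance weighting. The abstract does not specify the paper's exact fusion model, so the sketch below is a generic illustration with hypothetical sensor variances:

```python
def fuse_ranges(measurements):
    """Inverse-variance weighted fusion of (range, variance) pairs from
    heterogeneous sensors; the fused variance is always smaller than
    that of the best single sensor."""
    den = sum(1.0 / var for _, var in measurements)
    fused = sum(m / var for m, var in measurements) / den
    return fused, 1.0 / den

# Hypothetical readings: sonar 2.0 m (var 0.04), laser 2.1 m (var 0.01).
fused, fused_var = fuse_ranges([(2.0, 0.04), (2.1, 0.01)])
```

The fused estimate is pulled toward the more precise laser reading, while the sonar still contributes, which is the behaviour that makes fusion robust across environments where either sensor alone degrades.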

  19. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We include an adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method that can be applied to register HDR images after they have been built by combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We simplify the segmentation using mean shift combined with cluster averaging and region merging techniques, and compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only from the highlight regions but also from their surrounding areas, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
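
The degree of linear polarization (DoLP) maps used in the classification step are derived from the Stokes parameters. A minimal sketch, assuming intensity images captured through a linear analyser at 0°, 45°, 90° and 135° (the paper's LCTF acquisition is more involved):

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensities behind a linear
    analyser at 0, 45, 90 and 135 degrees, via the Stokes parameters
    S0 = (I0+I45+I90+I135)/2, S1 = I0-I90, S2 = I45-I135."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

# Fully polarized highlight vs. unpolarized diffuse reflection.
specular = dolp(1.0, 0.5, 0.0, 0.5)
diffuse = dolp(0.5, 0.5, 0.5, 0.5)
```

Specular highlights on dielectrics are strongly linearly polarized while those on metals are much less so, which is why DoLP around highlight regions separates the two classes.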

  20. Quantum dots in imaging, drug delivery and sensor applications.

    Science.gov (United States)

    Matea, Cristian T; Mocan, Teodora; Tabaran, Flaviu; Pop, Teodora; Mosteanu, Ofelia; Puia, Cosmin; Iancu, Cornel; Mocan, Lucian

    2017-01-01

    Quantum dots (QDs), also known as nanoscale semiconductor crystals, are nanoparticles with unique optical and electronic properties such as bright and intensive fluorescence. Since most conventional organic label dyes do not offer the near-infrared (>650 nm) emission possibility, QDs, with their tunable optical properties, have gained a lot of interest. They possess characteristics such as good chemical and photo-stability, high quantum yield and size-tunable light emission. Different types of QDs can be excited with the same light wavelength, and their narrow emission bands can be detected simultaneously for multiple assays. There is an increasing interest in the development of nano-theranostics platforms for simultaneous sensing, imaging and therapy. QDs have great potential for such applications, with notable results already published in the fields of sensors, drug delivery and biomedical imaging. This review summarizes the latest developments available in literature regarding the use of QDs for medical applications.

  1. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach. Consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been carried out to identify a suitable software and algorithm to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to introduce an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  2. Enhancing the dynamic range of Ultrasound Imaging Velocimetry using interleaved imaging

    NARCIS (Netherlands)

    Poelma, C.; Fraser, K.H.

    2013-01-01

    In recent years, non-invasive velocity field measurement based on correlation of ultrasound images has been introduced as a promising technique for fundamental research into disease processes, as well as a diagnostic tool. A major drawback of the method is the relatively limited dynamic range when

  3. Efficient demodulation scheme for rolling-shutter-patterning of CMOS image sensor based visible light communications.

    Science.gov (United States)

    Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2017-10-02

Recently, even low-end mobile phones have been equipped with a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling-shutter pattern in image-sensor-based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
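The rolling-shutter effect exposes image rows sequentially, so a modulated LED appears as bright/dark stripes along the rows of a frame. The following is a minimal sketch of the underlying idea only, not the paper's more efficient scheme: average each row across columns to get a 1D stripe profile, apply a simple global threshold, and sample one decision per bit period. The bit timing (`rows_per_bit`) and the synthetic frame are illustrative assumptions.

```python
import numpy as np

def demodulate_rolling_shutter(frame, rows_per_bit, threshold=None):
    """Recover a bit stream from the stripe pattern of a rolling-shutter frame.

    frame: 2D grayscale image (rows x cols); each LED on/off period maps to a
    band of consecutive rows because rows are exposed sequentially.
    """
    profile = frame.mean(axis=1)           # average across columns -> 1D stripe signal
    if threshold is None:
        threshold = profile.mean()         # naive global threshold (baseline scheme)
    levels = profile > threshold           # bright stripe -> True
    # Sample one decision per bit period, at the centre of each band of rows
    centres = np.arange(rows_per_bit // 2, len(levels), rows_per_bit)
    return levels[centres].astype(int)

# Synthetic example: 8 bits, 10 rows per bit period, 64 columns
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
frame = np.repeat(bits, 10)[:, None] * np.ones((1, 64)) * 200 + 20
recovered = demodulate_rolling_shutter(frame, rows_per_bit=10)
print(recovered)  # [1 0 1 1 0 0 1 0]
```

In practice the bit rate must first be synchronized to the row read-out time, which is exactly the synchronization problem the paper addresses.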

  4. Dark current spectroscopy of space and nuclear environment induced displacement damage defects in pinned photodiode based CMOS image sensors

    International Nuclear Information System (INIS)

    Belloir, Jean-Marc

    2016-01-01

CMOS image sensors are envisioned for an increasing number of high-end scientific imaging applications such as space imaging or nuclear experiments. Indeed, the performance of high-end CMOS image sensors has dramatically increased in the past years thanks to the unceasing improvements of microelectronics, and these image sensors have substantial advantages over CCDs which make them great candidates to replace CCDs in future space missions. However, in space and nuclear environments, CMOS image sensors must face harsh radiation which can rapidly degrade their electro-optical performance. In particular, the protons, electrons and ions travelling in space, or the fusion neutrons from nuclear experiments, can displace silicon atoms in the pixels and break the crystalline structure. These displacement damage effects lead to the formation of stable defects and to the introduction of states in the forbidden bandgap of silicon, which can allow the thermal generation of electron-hole pairs. Consequently, non-ionizing radiation leads to a permanent increase of the dark current of the pixels and thus to a decrease of the image sensor's sensitivity and dynamic range. The aim of the present work is to extend the understanding of the effect of displacement damage on the dark current increase of CMOS image sensors. In particular, this work focuses on the shape of the dark current distribution depending on the particle type, energy and fluence, but also on the image sensor's physical parameters. Thanks to the many conditions tested, an empirical model for the prediction of the dark current distribution induced by displacement damage in nuclear or space environments is experimentally validated and physically justified. Another central part of this work consists of using the dark current spectroscopy technique for the first time on irradiated CMOS image sensors to detect and characterize radiation-induced silicon bulk defects. 
Many types of defects are detected, and two of them are identified

  5. Imaging properties of small-pixel spectroscopic x-ray detectors based on cadmium telluride sensors

    International Nuclear Information System (INIS)

    Koenig, Thomas; Schulze, Julia; Zuber, Marcus; Rink, Kristian; Oelfke, Uwe; Butzer, Jochen; Hamann, Elias; Cecilia, Angelica; Zwerger, Andreas; Fauler, Alex; Fiederle, Michael

    2012-01-01

Spectroscopic x-ray imaging by means of photon counting detectors has received growing interest during the past years. Critical to the image quality of such devices are their pixel pitch and the sensor material employed. This paper describes the imaging properties of Medipix2 MXR multi-chip assemblies bump bonded to 1 mm thick CdTe sensors. Two systems were investigated, with pixel pitches of 110 and 165 μm, which are on the order of the mean free path lengths of the characteristic x-rays produced in their sensors. Peak widths were found to be almost constant across the energy range of 10 to 60 keV, with values of 2.3 and 2.2 keV (FWHM) for the two pixel pitches. The average number of pixels responding to a single incoming photon is about 1.85 and 1.45 at 60 keV, respectively, amounting to detective quantum efficiencies of 0.77 and 0.84 at a spatial frequency of zero. Energy selective CT acquisitions are presented, and the two pixel pitches' abilities to discriminate between iodine and gadolinium contrast agents are examined. It is shown that the choice of the pixel pitch translates into a minimum contrast agent concentration for which material discrimination is still possible. We finally investigate saturation effects at high x-ray fluxes and conclude with the finding that higher maximum count rates come at the cost of a reduced energy resolution. (paper)

  6. Wide-Range Highly-Efficient Wireless Power Receivers for Implantable Biomedical Sensors

    KAUST Repository

    Ouda, Mahmoud

    2016-11-01

Wireless power transfer (WPT) is the key enabler for a myriad of applications, from low-power RFIDs and wireless sensors to wirelessly charged electric vehicles, and even massive power transmission from space solar cells. One of the major challenges in designing implantable biomedical devices is the size and lifetime of the battery. Thus, replacing the battery with a miniaturized wireless power receiver (WPRx) facilitates designing sustainable biomedical implants in smaller volumes for sentient medical applications. In the first part of this dissertation, we propose a miniaturized, fully integrated, wirelessly powered implantable sensor with an on-chip antenna, designed and implemented in a standard 0.18 μm CMOS process. As a batteryless device, it can be implanted once inside the body with no need for further invasive surgeries to replace batteries. The proposed single-chip solution is designed for intraocular pressure monitoring (IOPM), and can serve as a sustainable platform for implantable devices or IoT nodes. A custom setup is developed to test the chip in a saline solution with electrical properties similar to those of the aqueous humor of the eye. The proposed chip, in this eye-like setup, is wirelessly charged to 1 V from a 5 W transmitter 3 cm away from the chip. In the second part, we propose a self-biased, differential rectifier with enhanced efficiency over an extended range of input power. A prototype is designed for the medical implant communication service (MICS) band at 433 MHz. It demonstrates an efficiency improvement of more than 40% in the rectifier power conversion efficiency (PCE) and a dynamic range extension of more than 50% relative to the conventional cross-coupled rectifier. A sensitivity of -15.2 dBm input power for a 1 V output voltage and a peak PCE of 65% are achieved for a 50 kΩ load. In the third part, we propose a wide-range, differential RF-to-DC power converter using an adaptive, self-biasing technique. The proposed architecture doubles
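The quoted rectifier figures (-15.2 dBm input for 1 V output, peak PCE of 65%) relate input RF power, DC output voltage, and load through PCE = P_out / P_in. A small sketch of that arithmetic, assuming the "50k load" is a 50 kΩ resistive load:

```python
def dbm_to_watts(p_dbm):
    """Convert RF power in dBm to watts (0 dBm = 1 mW)."""
    return 10 ** (p_dbm / 10) * 1e-3

def pce(v_out, r_load_ohm, p_in_dbm):
    """Power conversion efficiency = DC output power / RF input power."""
    p_out = v_out ** 2 / r_load_ohm        # DC power delivered to the load
    return p_out / dbm_to_watts(p_in_dbm)

# At the quoted sensitivity point: 1 V across an assumed 50 kOhm load
# from -15.2 dBm of input RF power.
print(round(pce(1.0, 50e3, -15.2), 2))  # -> 0.66, consistent with the ~65% peak PCE
```

The consistency of the sensitivity point with the peak PCE suggests the sensitivity was quoted near the efficiency peak.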

  7. Recognition of flow in everyday life using sensor agent robot with laser range finder

    Science.gov (United States)

    Goshima, Misa; Mita, Akira

    2011-04-01

In the present paper, we propose an algorithm for a sensor agent robot with a laser range finder to recognize the flows of residents in living spaces, in order to achieve flow recognition, recognition of the number of people in a space, and classification of the flows. House reform is, or will be, in demand to prolong the lifetime of homes, and adaptation to individuals is needed in our rapidly aging society. Home autonomous mobile robots will become popular in the future to assist aged people in various situations. We therefore have to collect various types of information about people and living spaces, while intrusion into personal privacy must be avoided. Recognizing flows in everyday life is essential for supporting house reform and an aging society in terms of adaptation to individuals. With background subtraction, extra noise removal, and k-means-based clustering, we obtained an average accuracy of more than 90% for the behavior of 1 to 3 persons, and also confirmed the reliability of our system regardless of the position of the sensor. Our system can take advantage of autonomous mobile robots while protecting personal privacy. It hints at a generalization of flow recognition methods in living spaces.
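The pipeline of background subtraction followed by k-means clustering can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (a fixed-pose scan compared against a static background, a known number of people, and synthetic 2D point groups), not the authors' implementation:

```python
import numpy as np

def remove_background(scan, background, tol=0.1):
    """Keep only range readings that differ from the static background scan."""
    return np.where(np.abs(scan - background) > tol, scan, np.nan)

def kmeans_2d(points, k, iters=50, seed=0):
    """Minimal k-means to group foreground points into k person clusters."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centres = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])  # keep old centre if cluster empties
    return labels, centres

# Two well-separated groups of foreground points (x, y in metres)
rng = np.random.default_rng(1)
group_a = rng.normal([1.0, 1.0], 0.05, (20, 2))
group_b = rng.normal([3.0, 2.0], 0.05, (20, 2))
labels, centres = kmeans_2d(np.vstack([group_a, group_b]), k=2)
print(np.round(sorted(centres[:, 0]), 2))  # cluster x-centres near 1.0 and 3.0
```

Tracking the cluster centres over successive scans then yields the flow (trajectory) of each person.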

  8. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    Science.gov (United States)

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
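The key observation above is that even though each pixel's response is nonlinear in the stimulus, the map from one pixel's response to the array-average response is approximately linear when responses are monotonic, so FPN correction reduces to a low-degree polynomial per pixel. A minimal sketch under assumed logarithmic responses (the per-pixel parameters and degree-1 fit here are illustrative, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_cal = 6, 12

# Simulated logarithmic responses: y = a_p + b_p * log(stimulus), with
# per-pixel offset/gain mismatch (the source of fixed pattern noise).
stimulus = np.logspace(0, 4, n_cal)             # calibration stimuli
a = rng.normal(100.0, 5.0, n_pixels)            # per-pixel offsets
b = rng.normal(20.0, 1.0, n_pixels)             # per-pixel gains
responses = a[:, None] + b[:, None] * np.log(stimulus)[None, :]

# Calibration: fit, per pixel, a degree-1 polynomial mapping its response
# onto the array-average response (monotonicity makes this well posed).
target = responses.mean(axis=0)
coeffs = [np.polyfit(responses[p], target, deg=1) for p in range(n_pixels)]

# Correction is then pure per-pixel arithmetic (two multiplies and adds):
corrected = np.array([np.polyval(coeffs[p], responses[p]) for p in range(n_pixels)])
print(np.std(responses, axis=0).max() > 1.0)    # raw FPN across pixels is large
print(np.std(corrected, axis=0).max() < 1e-6)   # corrected responses agree
```

Because the correction is arithmetic only, a fixed-point implementation with a user-specified bit depth follows directly, as the abstract notes.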

  9. Full Waveform Analysis for Long-Range 3D Imaging Laser Radar

    Directory of Open Access Journals (Sweden)

Wallace, Andrew M.

    2010-01-01

The new generation of 3D imaging systems based on laser radar (ladar) offers significant advantages in defense and security applications. In particular, it is possible to retrieve 3D shape information directly from the scene and separate a target from background or foreground clutter by extracting a narrow depth range from the field of view by range gating, either in the sensor or by postprocessing. We discuss and demonstrate the applicability of full-waveform ladar to produce multilayer 3D imagery, in which each pixel produces a complex temporal response that describes the scene structure. Such complexity, caused by multiple and distributed reflection, arises in many relevant scenarios, for example when viewing partially occluded targets, through semitransparent materials (e.g., windows), and through distributed reflective media such as foliage. We demonstrate our methodology on 3D image data acquired by a scanning time-of-flight system, developed in our own laboratories, which uses the time-correlated single-photon counting technique.
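In full-waveform analysis, each pixel's time-correlated photon-count histogram is a superposition of returns, one per reflecting layer (foliage, window, target). A toy sketch of the idea, using simple local-maximum detection on a simulated two-surface waveform; the authors' actual analysis is more sophisticated than this illustration:

```python
import numpy as np

def gaussian_pulse(t, t0, amp, width):
    """Idealized instrumental response: a Gaussian return centred at t0."""
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

# Simulated photon-count histogram: two surfaces in one pixel
# (e.g. foliage in front of a hard target), plus a background level.
t = np.arange(0.0, 100.0, 0.1)                    # time bins (ns, illustrative)
waveform = (gaussian_pulse(t, 30.0, 50.0, 1.5)    # first return (near surface)
            + gaussian_pulse(t, 62.0, 20.0, 1.5)  # second return (far surface)
            + 1.0)                                # constant background counts

def detect_returns(t, w, threshold):
    """Local maxima above threshold -> (time, amplitude) of each return."""
    idx = np.where((w[1:-1] > w[:-2]) & (w[1:-1] >= w[2:])
                   & (w[1:-1] > threshold))[0] + 1
    return [(t[i], w[i]) for i in idx]

returns = detect_returns(t, waveform, threshold=5.0)
print([round(r[0], 1) for r in returns])  # approximately [30.0, 62.0]
```

Each detected return time converts to a range via r = c·t/2, producing the multilayer depth image described in the abstract.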

  10. Nanosecond-laser induced crosstalk of CMOS image sensor

    Science.gov (United States)

    Zhu, Rongzhen; Wang, Yanbin; Chen, Qianrong; Zhou, Xuanfeng; Ren, Guangsen; Cui, Longfei; Li, Hua; Hao, Daoliang

    2018-02-01

The CMOS image sensor (CIS) is an optoelectronic imaging device that integrates the photosensitive array, amplifier, A/D converter, storage, DSP and computer interface circuitry on the same silicon substrate [1]. It offers low power consumption, high integration, low cost, etc. With the progress of large-scale integrated circuit technology, the noise suppression of the CIS has been enhanced unceasingly, and its image quality is getting better and better. It has been widely used in security monitoring, biometrics, detection and imaging, and even military reconnaissance. However, a CIS is easily disturbed and damaged when irradiated by laser light. Studying the effects of laser irradiation is therefore of great significance both for optoelectronic countermeasures and for hardening devices against lasers. Some researchers have studied laser-induced disturbance and damage of the CIS, focusing on saturation and supersaturation effects, and observing distinct regimes such as unsaturation, saturation, supersaturation, full saturation and pixel flip. This paper studies the interference effect of a 1064 nm laser on a typical front-illuminated CMOS sensor, observing saturated crosstalk and crosstalk lines. Starting from the working principle and signal detection method of CMOS devices, the formation mechanism of the crosstalk-line phenomenon is analyzed.

  11. Highly Specific and Wide Range NO2 Sensor with Color Readout.

    Science.gov (United States)

    Fàbrega, Cristian; Fernández, Luis; Monereo, Oriol; Pons-Balagué, Alba; Xuriguera, Elena; Casals, Olga; Waag, Andreas; Prades, Joan Daniel

    2017-11-22

We present a simple and inexpensive method to implement a Griess-Saltzman-type reaction that combines the advantages of the liquid phase method (high specificity and fast response time) with the benefits of a solid implementation (easy to handle). We demonstrate that the measurements can be carried out using conventional RGB sensors, circumventing the limitations of measuring the samples with spectrometers. We also present a method to optimize the measurement protocol and target a specific range of NO2 concentrations. We demonstrate that it is possible to measure the concentration of NO2 from 50 ppb to 300 ppm with high specificity and without modifying the Griess-Saltzman reagent.

  12. Inertial sensors as measurement tools of elbow range of motion in gerontology

    Science.gov (United States)

    Sacco, G; Turpin, JM; Marteu, A; Sakarovitch, C; Teboul, B; Boscher, L; Brocker, P; Robert, P; Guerin, O

    2015-01-01

Background and purpose: Musculoskeletal system deterioration in the elderly is a major reason for loss of autonomy and directly affects the quality of life of aged people. Articular evaluation is part of physiotherapeutic assessment and helps in establishing a precise diagnosis and deciding appropriate therapy. Reference instruments are valid but not easy to use for some joints. The main goal of our study was to determine the reliability and intertester reproducibility of the MP-BV, an inertial sensor (the MotionPod® [MP]) combined with specific software (BioVal [BV]), for elbow passive range-of-motion measurements in geriatrics. Methods: This open, monocentric, randomized study compared the inertial sensor to an inclinometer in patients hospitalized in an acute, post-acute, and long-term-care gerontology unit. Results: Seventy-seven patients (mean age 83.5±6.4 years, sex ratio 1.08 [male/female]) were analyzed. The MP-BV was reliable for all three measurements (flexion, pronation, and supination) in 24.3% (CI 95% 13.9–32.8) of the patients. Separately, the percentages of reliable measures were 59.7% (49.2–70.5) for flexion, 68.8% (58.4–79.5) for pronation, and 62.3% (51.2–73.1) for supination. The intraclass correlation coefficients were 0.15 (0.07–0.73), 0.46 (0.27–0.98), and 0.50 (0.31–0.98) for flexion, pronation, and supination, respectively. Conclusion: This study shows the convenience of the MP-BV in terms of ease of use and export of measured data. However, this instrument seems less reliable and valuable than the reference instruments used to measure elbow range of motion in gerontology. PMID:25759568

  13. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    Directory of Open Access Journals (Sweden)

    Yanzhi Zhao

    2016-08-01

Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and type I and type II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications.

  14. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-08-11

Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and type I and type II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications.
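The least-squares calibration mentioned above can be illustrated generically: apply a set of known wrenches (three forces, three moments), record the six sensor outputs for each, and solve for the matrix that maps outputs back to wrenches. The abstract does not give the matrix form, so the following is a hedged sketch with an invented ground-truth sensitivity matrix `A` and synthetic loads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth sensitivity: sensor outputs = A @ wrench (+ noise).
# The diagonal boost keeps A well conditioned, as a real sensor design would.
A = rng.normal(0.0, 1.0, (6, 6)) + 6 * np.eye(6)

# Loading experiments: 50 known applied wrenches (Fx..Mz) and measured outputs
wrenches = rng.uniform(-100.0, 100.0, (50, 6))
outputs = wrenches @ A.T + rng.normal(0, 0.01, (50, 6))   # small measurement noise

# Least-squares estimate of the calibration matrix mapping outputs -> wrenches
C, *_ = np.linalg.lstsq(outputs, wrenches, rcond=None)

# Predicted wrench for a new (noiseless) measurement:
test_wrench = np.array([10.0, -5.0, 20.0, 1.0, 0.5, -2.0])
pred = (test_wrench @ A.T) @ C
print(np.max(np.abs(pred - test_wrench)) < 0.05)  # True: calibration recovered
```

Overdetermining the fit with many load cases (50 here, versus the 6 minimally required) is what lets least squares average out measurement noise, consistent with its lower calibration error in the paper.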

  15. Fabricating Optical Fiber Imaging Sensors Using Inkjet Printing Technology: a pH Sensor Proof-of-Concept

    Energy Technology Data Exchange (ETDEWEB)

    Carter, J C; Alvis, R M; Brown, S B; Langry, K C; Wilson, T S; McBride, M T; Myrick, M L; Cox, W R; Grove, M E; Colston, B W

    2005-03-01

We demonstrate the feasibility of using Drop-on-Demand microjet printing technology for fabricating imaging sensors by reproducibly printing an array of photopolymerizable sensing elements, containing a pH sensitive indicator, on the surface of an optical fiber image guide. The reproducibility of the microjet printing process is excellent for microdot (i.e., micron-sized polymer) sensor diameter (92.2 ± 2.2 microns), height (35.0 ± 1.0 microns), and roundness (0.00072 ± 0.00023). pH sensors were evaluated in terms of pH sensing ability (≤2% sensor variation), response time, and hysteresis using a custom fluorescence imaging system. In addition, the microjet technique has distinct advantages over other fabrication methods, which are discussed in detail.

  16. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    Science.gov (United States)

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitivity camera. The other was to test the performance of the newly developed sensor as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and the endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescent images under high illumination, in the field of laparoscopic surgery.

  17. Processor for Real-Time Atmospheric Compensation in Long-Range Imaging, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Long-range imaging is a critical component to many NASA applications including range surveillance, launch tracking, and astronomical observation. However,...

  18. Nitrogen-rich functional groups carbon nanoparticles based fluorescent pH sensor with broad-range responding for environmental and live cells applications.

    Science.gov (United States)

    Shi, Bingfang; Su, Yubin; Zhang, Liangliang; Liu, Rongjun; Huang, Mengjiao; Zhao, Shulin

    2016-08-15

A fluorescent pH sensor with broad-range response, based on carbon nanoparticles with nitrogen-rich functional groups (N-CNs), was prepared by one-pot hydrothermal treatment of melamine and triethanolamine. The as-prepared N-CNs exhibited excellent photoluminescence properties, with an absolute quantum yield (QY) of 11.0%. Furthermore, the N-CNs possessed a broad-range pH response. The linear pH response range was 3.0 to 12.0, which is much wider than that of previously reported fluorescent pH sensors. The possible mechanism for the pH-sensitive response of the N-CNs was ascribed to photoinduced electron transfer (PET). Cell toxicity experiments showed that the as-prepared N-CNs exhibited low cytotoxicity and excellent biocompatibility, with cell viabilities of more than 87%. The proposed N-CNs-based pH sensor was used for pH monitoring of environmental water samples and for pH fluorescence imaging of live T24 cells. The N-CNs are promising as a convenient and general fluorescent pH sensor for environmental monitoring and bioimaging applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Recording and Modelling of MONUMENTS' Interior Space Using Range and Optical Sensors

    Science.gov (United States)

    Georgiadis, Charalampos; Patias, Petros; Tsioukas, Vasilios

    2016-06-01

Three-dimensional modelling of artefacts and building interiors is a highly active research field today. Several techniques are utilized to perform such a task, spanning from traditional surveying techniques and photogrammetry to structured light scanners, laser scanners and so on. New technological advancements in both hardware and software create new recording techniques, tools and approaches. In this paper we present a new recording and modelling approach based on the SwissRanger SR4000 range camera coupled with a Canon 400D dSLR camera. The hardware component of our approach consists of a fixed base which encloses the range and SLR cameras. The two sensors are fully calibrated and registered to each other, so that we are able to produce colorized point clouds acquired from the range camera. We present the initial design and calibration of the system, along with experimental data regarding the accuracy of the proposed approach. We also provide results regarding the modelling of interior spaces and artefacts, accompanied by accuracy tests against other modelling approaches based on photogrammetry and laser scanning.
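Once the two sensors are calibrated and registered, colorizing the range camera's point cloud amounts to transforming each 3D point into the colour camera's frame, projecting it with the pinhole model, and sampling the RGB image at that pixel. A minimal sketch with hypothetical intrinsics `K` and extrinsics `R`, `t` (the real values come from the calibration the paper describes):

```python
import numpy as np

# Assumed pinhole intrinsics of the colour camera (fx, fy, cx, cy)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Assumed extrinsics: range-camera frame -> colour-camera frame
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])               # 5 cm baseline along x

def colorize(points, image):
    """Attach an RGB colour to each 3D point by projection into `image`."""
    cam = points @ R.T + t                   # transform into colour-camera frame
    uv = cam @ K.T                           # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    u = np.clip(uv[:, 0].astype(int), 0, image.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, image.shape[0] - 1)
    return np.hstack([points, image[v, u]])  # [x, y, z, r, g, b] per point

# Toy colour image (480x640, right half red) and two points 1 m from the rig
image = np.zeros((480, 640, 3), dtype=float)
image[:, 320:, 0] = 255.0
points = np.array([[0.2, 0.0, 1.0], [-0.3, 0.0, 1.0]])
cloud = colorize(points, image)
print(cloud[:, 3])  # red channel: only the first point projects right of centre
```

A production version would also check depth ordering and occlusion before assigning colours, which this sketch omits.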

  20. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks†

    Science.gov (United States)

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-01-01

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals. PMID:25615731
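One common building block behind such encoding schemes (illustrative only; the paper's own scheme is more elaborate) is bucketization with keyed tags: the sensor groups readings by bucket and labels each group with an HMAC of the bucket ID, so the storage node can match a range query against tags without learning the values or the query endpoints. A minimal stdlib sketch, with a hypothetical shared key and bucket width:

```python
import hmac
import hashlib

KEY = b"shared-sensor-sink-key"   # hypothetical key shared by sensors and sink
BUCKET = 10                       # assumed bucket width for the encoding

def tag(bucket_id):
    """Keyed tag for a bucket; storage nodes see tags, never raw values."""
    return hmac.new(KEY, str(bucket_id).encode(), hashlib.sha256).hexdigest()

def encode_readings(readings):
    """Sensor side: group readings under bucket tags (values would be
    encrypted in a real deployment; kept plain here for brevity)."""
    store = {}
    for r in readings:
        store.setdefault(tag(r // BUCKET), []).append(r)
    return store

def range_query(store, lo, hi):
    """Sink side: send the tags of buckets overlapping [lo, hi]; the storage
    node returns matching groups, and the sink filters to the exact range."""
    wanted = {tag(b) for b in range(lo // BUCKET, hi // BUCKET + 1)}
    candidates = [r for t_, vals in store.items() if t_ in wanted for r in vals]
    return sorted(r for r in candidates if lo <= r <= hi)

store = encode_readings([3, 17, 25, 42, 58, 99])
print(range_query(store, 20, 60))  # [25, 42, 58]
```

Bucketization alone does not provide the integrity verification or collusion resistance the paper targets; those require the additional correlation-based verification scheme described in the abstract.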

  1. Collusion-aware privacy-preserving range query in tiered wireless sensor networks.

    Science.gov (United States)

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-12-11

Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.

  2. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaoying Zhang

    2014-12-01

Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.

  3. CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography

    International Nuclear Information System (INIS)

    Esposito, M.; Waltham, C.; Allinson, N.M.; Anaxagoras, T.; Evans, P.M.; Poludniowski, G.; Green, S.; Parker, D.J.; Price, T.; Manolopoulos, S.; Nieto-Camero, J.

    2015-01-01

Since the first proof of concept in the early 70s, a number of technologies have been proposed to perform proton CT (pCT) as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator-based calorimeters, which measure proton residual energy after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem could be the use of pixelated detectors, where the independent read-out of each pixel allows the residual energy of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) to track individual protons as they go through a number of CMOS layers, forming an energy-range telescope. Measurements performed at the iThemba Laboratories are presented and analysed in terms of correlation, to confirm the proton-tracking capability of CMOS APSs

  4. CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography.

    Science.gov (United States)

    Esposito, M; Anaxagoras, T; Evans, P M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Poludniowski, G; Price, T; Waltham, C; Allinson, N M

    2015-06-03

Since the first proof of concept in the early 70s, a number of technologies have been proposed to perform proton CT (pCT) as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator-based calorimeters, which measure proton residual energy after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem could be the use of pixelated detectors, where the independent read-out of each pixel allows the residual energy of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) to track individual protons as they go through a number of CMOS layers, forming an energy-range telescope. Measurements performed at the iThemba Laboratories are presented and analysed in terms of correlation, to confirm the proton-tracking capability of CMOS APSs.

  5. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    Science.gov (United States)

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must handle aggressive 6 DOF (Degree Of Freedom) motion; (2) it must be robust to intermittent GPS (Global Positioning System), or even GPS-denied, situations; (3) it must work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation, using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
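    As a toy illustration of loosely-coupled fusion (not the paper's stochastic cloning EKF), a 1-D Kalman filter can use IMU acceleration as the prediction input and intermittent absolute position fixes (GPS) as updates; all names and tuning values below are assumptions:

```python
import numpy as np

def ekf_1d(accels, gps, dt=0.1, q=0.01, r=1.0):
    """Toy 1-D loosely-coupled filter: IMU acceleration drives the prediction;
    a GPS position (None when unavailable, e.g. GPS-denied) corrects it."""
    x = np.zeros(2)                    # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1, dt], [0, 1]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])         # GPS observes position only
    for a, z in zip(accels, gps):
        # predict with the IMU input
        x = F @ x + np.array([0.5 * dt**2, dt]) * a
        P = F @ P @ F.T + q * np.eye(2)
        if z is not None:              # absolute update, skipped when GPS drops out
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x
```

During GPS outages the filter simply dead-reckons on the IMU prediction, which is the loosely-coupled behaviour the abstract describes for the relative-measurement branch.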

  6. Low-Voltage 96 dB Snapshot CMOS Image Sensor with 4.5 nW Power Dissipation per Pixel

    Directory of Open Access Journals (Sweden)

    Orly Yadid-Pecht

    2012-07-01

    Full Text Available Modern “smart” CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage “smart” image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 μm CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.

  7. Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.

    Science.gov (United States)

    Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander

    2012-01-01

    Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.
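    For reference, figures such as the 96 dB above follow the usual convention of expressing dynamic range as 20·log10 of the ratio between the largest and smallest resolvable signal, so 96 dB corresponds to a max/min ratio of roughly 63,000:1. A one-line helper (my own illustration, not from the paper) makes the conversion explicit:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range in dB: ratio of the largest to the smallest
    detectable signal (e.g. full-well capacity over the noise floor)."""
    return 20 * math.log10(full_well_e / noise_floor_e)
```

For example, a signal ratio of 1000:1 gives 60 dB, and a ratio of about 63,100:1 gives the quoted 96 dB.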

  8. PROCESSING OF UAV BASED RANGE IMAGING DATA TO GENERATE DETAILED ELEVATION MODELS OF COMPLEX NATURAL STRUCTURES

    Directory of Open Access Journals (Sweden)

    T. K. Kohoutek

    2012-07-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are increasingly used in civil areas like geomatics. Autonomously navigated platforms have great flexibility in flying and manoeuvring in complex environments to collect remote sensing data. In contrast to standard technologies such as aerial manned platforms (airplanes and helicopters), UAVs are able to fly closer to the object and in small-scale areas of high-risk situations such as landslides, volcano and earthquake areas and floodplains. Thus, UAVs are sometimes the only practical alternative in areas where access is difficult and where no manned aircraft is available or even no flight permission is given. Furthermore, compared to terrestrial platforms, UAVs are not limited to specific view directions and can overcome occlusions from trees, houses and terrain structures. Equipped with image sensors and/or laser scanners they are able to provide elevation models, rectified images, textured 3D-models and maps. In this paper we describe a UAV platform, which can carry a range imaging (RIM) camera including power supply and data storage for the detailed mapping and monitoring of complex structures, such as alpine riverbed areas. The UAV platform NEO from Swiss UAV was equipped with the RIM camera CamCube 2.0 by PMD Technologies GmbH to capture the surface structures. Its navigation system includes an autopilot. To validate the UAV trajectory a 360° prism was installed and tracked by a total station. Within the paper a workflow for the processing of UAV-RIM data is proposed, which is based on the processing of differential GNSS data in combination with the acquired range images. Subsequently, the obtained results for the trajectory are compared and verified with a track of a UAV (Falcon 8, Ascending Technologies) carried out with a total station simultaneously with the GNSS data acquisition. The results showed that the UAV's position using differential GNSS could be determined in the centimetre to the decimetre

  9. Thermal effects of an ICL-based mid-infrared CH4 sensor within a wide atmospheric temperature range

    Science.gov (United States)

    Ye, Weilin; Zheng, Chuantao; Sanchez, Nancy P.; Girija, Aswathy V.; He, Qixin; Zheng, Huadan; Griffin, Robert J.; Tittel, Frank K.

    2018-03-01

    The thermal effects of an interband cascade laser (ICL) based mid-infrared methane (CH4) sensor that uses long-path absorption spectroscopy were studied. The sensor performance in the laboratory at a constant temperature of ∼25 °C was measured for 5 h and its Allan deviation was ∼2 ppbv with a 1 s averaging time. A LabVIEW-based simulation program was developed to study thermal effects on infrared absorption and a temperature compensation technique was developed to minimize these effects. An environmental test chamber was employed to investigate the thermal effects that occur in the sensor system with variation of the test chamber temperature between 10 and 30 °C. The thermal response of the sensor in a laboratory setting was observed using a 2.1 ppm CH4 standard gas sample. Indoor/outdoor CH4 measurements were conducted to evaluate the sensor performance within a wide atmospheric temperature range.
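    The ~2 ppbv Allan deviation quoted above is a standard stability metric for gas sensors: the data are averaged over windows of length τ and the deviation of successive window means is computed. A minimal non-overlapping estimator (an illustrative sketch, not the authors' code) can be written as:

```python
import numpy as np

def allan_deviation(y, tau_samples):
    """Non-overlapping Allan deviation of a concentration time series y
    (assumed uniformly sampled) at an averaging window of tau_samples."""
    n = len(y) // tau_samples
    # average the series in consecutive blocks of length tau_samples
    means = y[: n * tau_samples].reshape(n, tau_samples).mean(axis=1)
    # Allan variance is half the mean squared difference of adjacent means
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))
```

Sweeping `tau_samples` and plotting the result on log-log axes gives the usual Allan plot from which the optimum averaging time is read off.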

  10. Ultra-fast Sensor for Single-photon Detection in a Wide Range of the Electromagnetic Spectrum

    Directory of Open Access Journals (Sweden)

    Astghik KUZANYAN

    2016-12-01

    Full Text Available The results of computer simulation of the heat distribution processes taking place after absorption of single photons of 1 eV–1 keV energy in a three-layer sensor of the thermoelectric detector are analyzed. Different geometries of the sensor with a tungsten absorber, a thermoelectric layer of cerium hexaboride and a tungsten heat sink are considered. It is shown that by changing the sizes of the sensor layers it is possible to obtain transducers for registration of photons within a given spectral range with the required energy resolution and count rate. It is concluded that, compared to the single-layer sensor, the three-layer sensor has a number of advantages and demonstrates characteristics that make it possible to consider the thermoelectric detector a real alternative to superconducting single-photon detectors.

  11. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real time MMW imaging. Real time MMW imaging systems are required for varied applications in many fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane array (FPA) sensors. The three cameras differ in the number of detectors, scanning operation, and detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array respectively, both of them for direct detection and limited to fixed imaging. The most recently designed sensor is a multiplexed 16 × 16 GDD FPA. It permits real time video rate imaging of 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported

  12. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    Science.gov (United States)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system that we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.
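    Single-chip color capture depends on interpolating the color filter array, as the abstract notes. The paper's specific CFA and interpolation method are not reproduced here; a generic bilinear fill of the missing green samples on a Bayer-type mosaic illustrates the idea (`pattern_is_green` is my own encoding of which 2 × 2 positions carry green):

```python
import numpy as np

def interp_green(raw, pattern_is_green):
    """Bilinearly fill missing green samples in a Bayer-type mosaic:
    each non-green site takes the average of its 4 direct neighbours."""
    g = raw.astype(float).copy()
    h, w = raw.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not pattern_is_green[y % 2][x % 2]:
                g[y, x] = (raw[y - 1, x] + raw[y + 1, x]
                           + raw[y, x - 1] + raw[y, x + 1]) / 4
    return g
```

For an RGGB pattern the green sites are `[[False, True], [True, False]]`; red and blue planes are filled analogously, and it is refinements of exactly this step that suppress the "pseudo signals" (color aliasing) the abstract mentions.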

  13. Special Sensor Microwave Imager/Sounder (SSMIS) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  14. Evaluation of the AN/SAY-1 Thermal Imaging Sensor System

    National Research Council Canada - National Science Library

    Smith, John G; Middlebrook, Christopher T

    2002-01-01

    The AN/SAY-1 Thermal Imaging Sensor System "TISS" was developed to provide surface ships with a day/night imaging capability to detect low radar reflective, small cross-sectional area targets such as floating mines...

  15. IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels.

    Science.gov (United States)

    Yokogawa, Sozo; Oshiyama, Itaru; Ikeda, Harumi; Ebiko, Yoshiki; Hirano, Tomoyuki; Saito, Suguru; Oinoue, Takashi; Hagimoto, Yoshiya; Iwamoto, Hayato

    2017-06-19

    We report on the IR sensitivity enhancement of a back-illuminated CMOS Image Sensor (BI-CIS) with a 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinitely thick c-Si having 2D IPAs on its surface with pitches over 400 nm show more than 30% improvement of light absorption at λ = 850 nm, and a maximum enhancement of 43% at that wavelength is confirmed for the 540 nm pitch. A prototype BI-CIS sample with a pixel size of 1.2 μm square containing 400 nm pitch IPAs shows 80% sensitivity enhancement at λ = 850 nm compared to a reference sample with a flat surface. This is due to diffraction by the IPA and total reflection at the pixel boundary. NIR images taken by a demo camera equipped with a C-mount lens show 75% sensitivity enhancement in the λ = 700-1200 nm wavelength range with negligible spatial resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears to be applicable to many different image sensor applications including security cameras, personal authentication, and range-finding time-of-flight cameras with IR illumination.

  16. A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.

    Science.gov (United States)

    Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi

    2018-02-01

    Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of a characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.
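    To make the quoted ON/OFF contrast thresholds concrete: a DVS pixel emits an event each time the log intensity moves one threshold away from the reference level stored at the last event. A simplified event-generation model (my own illustration; the threshold values follow the 3.5%/1% figures above) is:

```python
import math

def dvs_events(intensities, theta_on=0.035, theta_off=0.01):
    """Emit (time, polarity) events when log intensity crosses the contrast
    thresholds relative to the reference level of the last event."""
    ref = math.log(intensities[0])
    events = []
    for t, i in enumerate(intensities[1:], start=1):
        d = math.log(i) - ref
        while d >= theta_on:          # brightness increased: ON event(s)
            events.append((t, +1))
            ref += theta_on
            d -= theta_on
        while d <= -theta_off:        # brightness decreased: OFF event(s)
            events.append((t, -1))
            ref -= theta_off
            d += theta_off
    return events
```

Because the thresholds act on log intensity, the event rate depends on contrast rather than absolute brightness, which is what gives event cameras their high intrascene dynamic range.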

  17. Frontside-micromachined planar piezoresistive vibration sensor: Evaluating performance in the low frequency test range

    Directory of Open Access Journals (Sweden)

    Lan Zhang

    2014-01-01

    Full Text Available Using a surface piezoresistor diffusion method and front-side only micromachining process, a planar piezoresistive vibration sensor was successfully developed with a simple structure, lower processing cost and fewer packaging difficulties. The vibration sensor had a large sector proof mass attached to a narrow flexure. Optimization of the boron diffusion piezoresistor placed on the edge of the narrow flexure greatly improved the sensitivity. Planar vibration sensors were fabricated and measured in order to analyze the effects of the sensor dimensions on performance, including the values of flexure width and the included angle of the sector. Sensitivities of fabricated planar sensors of 0.09–0.46 mV/V/g were measured up to a test frequency of 60 Hz. The sensor functioned at low voltages (<3 V) and currents (<1 mA) with a high sensitivity and low drift. At low background noise levels, the sensor had performance comparable to a commercial device.

  18. Frontside-micromachined planar piezoresistive vibration sensor: Evaluating performance in the low frequency test range

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lan; Lu, Jian, E-mail: jian-lu@aist.go.jp; Takagi, Hideki; Maeda, Ryutaro [Research Center for Ubiquitous MEMS and Micro Engineering (UMEMSME), National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, 305-8564 (Japan)

    2014-01-15

    Using a surface piezoresistor diffusion method and front-side only micromachining process, a planar piezoresistive vibration sensor was successfully developed with a simple structure, lower processing cost and fewer packaging difficulties. The vibration sensor had a large sector proof mass attached to a narrow flexure. Optimization of the boron diffusion piezoresistor placed on the edge of the narrow flexure greatly improved the sensitivity. Planar vibration sensors were fabricated and measured in order to analyze the effects of the sensor dimensions on performance, including the values of flexure width and the included angle of the sector. Sensitivities of fabricated planar sensors of 0.09–0.46 mV/V/g were measured up to a test frequency of 60 Hz. The sensor functioned at low voltages (<3 V) and currents (<1 mA) with a high sensitivity and low drift. At low background noise levels, the sensor had performance comparable to a commercial device.

  19. Origin of high photoconductive gain in fully transparent heterojunction nanocrystalline oxide image sensors and interconnects.

    Science.gov (United States)

    Jeon, Sanghun; Song, Ihun; Lee, Sungsik; Ryu, Byungki; Ahn, Seung-Eon; Lee, Eunha; Kim, Young; Nathan, Arokia; Robertson, John; Chung, U-In

    2014-11-05

    A technique for invisible image capture using a photosensor array based on transparent conducting oxide semiconductor thin-film transistors and transparent interconnection technologies is presented. A transparent conducting layer is employed for the sensor electrodes as well as interconnection in the array, providing about 80% transmittance at visible-light wavelengths. The phototransistor is a Hf-In-Zn-O/In-Zn-O heterostructure yielding a high quantum-efficiency in the visible range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Visible Wavelength Color Filters Using Dielectric Subwavelength Gratings for Backside-Illuminated CMOS Image Sensor Technologies.

    Science.gov (United States)

    Horie, Yu; Han, Seunghoon; Lee, Jeong-Yub; Kim, Jaekwan; Kim, Yongsung; Arbabi, Amir; Shin, Changgyun; Shi, Lilong; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Lee, Hong-Seok; Hwang, Sungwoo; Faraon, Andrei

    2017-05-10

    We report transmissive color filters based on subwavelength dielectric gratings that can replace conventional dye-based color filters used in backside-illuminated CMOS image sensor (BSI CIS) technologies. The filters are patterned in an 80 nm-thick poly-silicon film on a 115 nm-thick SiO2 spacer layer. They are optimized for operating at the primary RGB colors, exhibit peak transmittance of 60-80%, and have an almost insensitive response over a ±20° angular range. This technology enables shrinking of the pixel sizes down to near a micrometer.

  1. A new capacitive long-range displacement nanometer sensor with differential sensing structure based on time-grating

    Science.gov (United States)

    Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran

    2018-05-01

    High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with differential sensing structure is proposed for long-range linear displacement measurements based on an approach denoted time grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference will result in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group of induction electrodes by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and the stability of the sensor are substantially improved after adopting the differential sensing structure. Finally, a prototype sensor achieves a measurement accuracy of ±200 nm over the full 200 mm measurement range of the sensor.
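    The half-pitch spatial offset between the two induction electrode groups means their sinusoidal components are in antiphase, while common-mode interference appears identically on both, so subtraction cancels it. A one-line model (hypothetical signal shapes, not the paper's field-coupling model) shows the cancellation:

```python
import math

def differential_signal(x, pitch=1.0, common_mode=0.2):
    """Induced signals on two electrode groups half a pitch apart: the
    half-pitch shift flips the sign of the sinusoid, so the difference
    doubles the useful signal and removes the common-mode term."""
    s1 = math.sin(2 * math.pi * x / pitch) + common_mode
    s2 = math.sin(2 * math.pi * (x + pitch / 2) / pitch) + common_mode
    return s1 - s2   # = 2*sin(2*pi*x/pitch); common mode cancelled
```

In the single-ended case the `common_mode` term would show up directly as the first-harmonic error the analytical models predict.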

  2. The effect of split pixel HDR image sensor technology on MTF measurements

    Science.gov (United States)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split pixel technology introduces artifacts in MTF measurement. To achieve an HDR image, raw images are captured from both large and small sub-pixels, and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures, and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different depending on whether the large or small sub-pixels are used; i.e., a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.
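    The MTF50 figures above come from slanted-edge analysis: the edge spread function (ESF) is differentiated into a line spread function (LSF), whose Fourier magnitude gives the MTF. A simplified 1-D version (ignoring the edge-angle projection and binning of the full ISO 12233 procedure) is:

```python
import numpy as np

def mtf50_from_edge(esf):
    """MTF50 (cycles/pixel) from a 1-D edge spread function: differentiate
    to the line spread function, take the FFT magnitude, normalise, and
    find the first frequency where the MTF drops below 0.5."""
    lsf = np.diff(esf)                      # ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # normalise DC to 1
    freqs = np.fft.rfftfreq(len(lsf))
    below = np.where(mtf < 0.5)[0]
    # a perfectly sharp edge never drops below 0.5 up to Nyquist
    return freqs[below[0]] if below.size else freqs[-1]
```

Running this separately on rising and falling edges is the kind of comparison that exposes the ~50% MTF50 asymmetry reported above.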

  3. A Single-Transistor Active Pixel CMOS Image Sensor Architecture

    International Nuclear Information System (INIS)

    Zhang Guo-An; He Jin; Zhang Dong-Wei; Su Yan-Mei; Wang Cheng; Chen Qin; Liang Hai-Lang; Ye Yun

    2012-01-01

    A single-transistor CMOS active pixel image sensor (1T CMOS APS) architecture is proposed. By switching the photosensing pinned diode, resetting and selection can be achieved by diode pull-up and capacitive coupling pull-down of the source follower. Thus, the reset and select transistors can be removed. In addition, the reset and select signal lines can be shared to reduce the metal signal lines, leading to a very high fill factor. The pixel design and operation principles are discussed in detail. The functionality of the proposed 1T CMOS APS architecture has been experimentally verified using a fabricated chip in a standard 0.35 μm CMOS AMIS technology.

  4. Polymer Optical Fibre Sensors for Endoscopic Opto-Acoustic Imaging

    DEFF Research Database (Denmark)

    Broadway, Christian; Gallego, Daniel; Woyessa, Getinet

    2015-01-01

    … is the physical size of the device, allowing compatibility with current technology, while governing flexibility of the distal end of the endoscope based on the needs of the sensor. Polymer optical fibre (POF) presents a novel approach for endoscopic applications and has been positively discussed and compared in existing publications. A great advantage can be obtained for endoscopy due to a small size and array potential to provide discrete imaging speed improvements. Optical fibre exhibits numerous advantages over conventional piezo-electric transducers, such as immunity from electromagnetic interference and a higher resolution at small sizes. Furthermore, micro structured polymer optical fibres offer over 12 times the sensitivity of silica fibre. We present a polymer fibre Bragg grating ultrasound detector with a core diameter of 125 microns. We discuss the ultrasonic signals received and draw conclusions …

  5. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    Directory of Open Access Journals (Sweden)

    Tadhg S. O’Donovan

    2010-12-01

    Full Text Available The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.

  6. High dynamic velocity range particle image velocimetry using multiple pulse separation imaging.

    Science.gov (United States)

    Persoons, Tim; O'Donovan, Tadhg S

    2011-01-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range, however flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.
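    The core of the MPS approach is the per-location choice of pulse separation: a longer separation lowers the relative displacement uncertainty, but only while the displacement stays resolvable. A minimal selection rule (my own sketch; the paper's criterion also weighs correlation strength) for one interrogation window is:

```python
def composite_velocity(displacements, dts, max_disp=8.0):
    """For one interrogation window, pick the longest pulse separation whose
    measured displacement is still within the resolvable range, then convert
    to velocity. displacements[i] was measured with pulse separation dts[i].
    Returns None if no separation yields a resolvable displacement."""
    best = None
    for disp, dt in sorted(zip(displacements, dts), key=lambda p: p[1]):
        if abs(disp) <= max_disp:
            best = disp / dt   # longer dt -> lower relative uncertainty
    return best
```

Applying this rule at every vector location yields the composite field: slow regions of the flow use the long separations, fast regions (e.g. the jet core) fall back to the short ones.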

  7. A design of an on-orbit radiometric calibration device for high dynamic range infrared remote sensors

    Science.gov (United States)

    Sheng, Yicheng; Jin, Weiqi; Dun, Xiong; Zhou, Feng; Xiao, Si

    2017-10-01

    As the demand for quantitative remote sensing technology grows, high-reliability and high-accuracy radiometric calibration technology, especially on-orbit radiometric calibration devices, has become an essential direction for quantitative remote sensing. In recent years, remote sensing satellites launched worldwide have been equipped with innovative on-orbit radiometric calibration devices. In order to meet the requirements of covering a very wide dynamic range with a non-shielding radiometric calibration system, we designed a projection-type radiometric calibration device for high dynamic range sensors based on the Schmidt telescope system. In this internal radiometric calibration device, we selected the EF-8530 light source as the calibration blackbody. The EF-8530 is a high-emittance Nichrome (Ni-Cr) reference source. It can operate in steady or pulsed mode at a peak temperature of 973 K. The irradiance from the source is projected onto the IRFPA, and must ensure that the IRFPA can obtain different amplitudes of uniform irradiance through the narrow IR passbands and cover the very wide dynamic range. Combining the internal on-orbit radiometric calibration device with specially designed adaptive radiometric calibration algorithms, an on-orbit dynamic non-uniformity correction can be accomplished without blocking the optical beam from outside the telescope. The design optimizes the optics, source design, and power supply electronics for irradiance accuracy and uniformity. The internal on-orbit radiometric calibration device not only satisfies a series of indexes such as stability, accuracy, large dynamic range and uniformity of irradiance, but also has the advantages of short heating and cooling times, small volume, light weight, low power consumption and many other features. It can realize fast and efficient relative radiometric calibration without shielding the field of view. The device can be applied to the design and
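    The non-uniformity correction such a calibration device enables is commonly a per-pixel two-point (gain/offset) correction derived from two uniform calibration irradiance levels. A generic sketch (not the paper's adaptive algorithm) is:

```python
import numpy as np

def two_point_nuc(raw, low_frame, high_frame, low_level, high_level):
    """Per-pixel two-point non-uniformity correction: low_frame/high_frame
    are frames captured while the internal source provides the uniform
    irradiance levels low_level/high_level; raw is a scene frame."""
    # per-pixel gain maps the pixel's raw response back to irradiance units
    gain = (high_level - low_level) / (high_frame - low_frame)
    offset = low_level - gain * low_frame
    return gain * raw + offset
```

After correction, every pixel reports the same value for a uniform input, which is exactly what the two calibration levels from the internal source make measurable on orbit.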

  8. Self-amplified CMOS image sensor using a current-mode readout circuit

    Science.gov (United States)

    Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick

    2014-05-01

    The feature size of CMOS processes has decreased over the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work contributes on both fronts: the possibility of a high signal excursion range using current-mode circuits, together with added functionality through signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as the output. A matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled with the matrix size, which permits a very high degree of freedom in the signal level, observing the current densities inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels and linearity of a matrix of pixels conceived using this new sensor concept.

  9. 77 FR 74513 - Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations...

    Science.gov (United States)

    2012-12-14

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S... United States after importation of certain CMOS image sensors and products containing the same based on...

  10. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    Directory of Open Access Journals (Sweden)

    Ismail Cevik

    2015-03-01

    Full Text Available An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.

  11. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    Science.gov (United States)

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.
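The reported figures invite a simple energy-balance check. The sketch below estimates the imaging duty cycle sustainable from the harvested power; it is a back-of-envelope simplification that ignores storage losses and mode-switching overhead:

```python
# Figures taken from the abstract above.
P_IMG = 6.53e-6    # imaging-mode consumption [W]
P_HARV = 30e-6     # power generated by the EHI pixels in harvesting mode [W]
EFF = 0.5          # PMS conversion efficiency

# Energy balance over one harvest/image cycle with imaging fraction d:
#   d * P_IMG = (1 - d) * P_HARV * EFF   ->   d = usable / (P_IMG + usable)
usable = P_HARV * EFF
duty = usable / (P_IMG + usable)
print(f"sustainable imaging duty cycle: {duty:.1%}")
```

This simple balance gives roughly 70%, in the vicinity of the 72.5% duty cycle reported; the gap presumably reflects charging and conversion details not captured by this model.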

  12. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J., E-mail: tmuldoon@uark.edu [Department of Biomedical Engineering, University of Arkansas, 120 Engineering Hall, Fayetteville, Arkansas 72701 (United States)

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood, collected from healthy volunteers and stained with 0.02% (w/v) proflavine hemisulfate, was imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.
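Matching the translation speed to the line exposure period, as described above, amounts to requiring one effective pixel of object motion per line period. A sketch using the abstract's numbers; the unity-aspect-ratio criterion itself is the assumption of this illustration:

```python
# Figures taken from the abstract above.
PIXEL_PITCH_OBJ = 0.31e-6   # effective object-plane sampling [m/pixel]
LINE_PERIOD = 150e-6        # line exposure period [s]

# For a unity aspect ratio, the object must move one pixel pitch per line period.
speed = PIXEL_PITCH_OBJ / LINE_PERIOD          # [m/s]
print(f"required translation speed: {speed * 1e3:.2f} mm/s")

def aspect_ratio(measured_speed):
    """Ratio of along-scan to cross-scan sampling; 1.0 preserves object shape."""
    return (measured_speed * LINE_PERIOD) / PIXEL_PITCH_OBJ

print(f"aspect ratio at the matched speed: {aspect_ratio(speed):.2f}")
```

An object moving faster than the matched speed is stretched along the scan direction (ratio > 1), one moving slower is compressed, which is what the post-acquisition aspect-ratio check guards against.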

  13. Transition-edge sensor imaging arrays for astrophysics applications

    Science.gov (United States)

    Burney, Jennifer Anne

    Many interesting objects in our universe currently elude observation in the optical band: they are too faint or they vary rapidly and thus any structure in their radiation is lost over the period of an exposure. Conventional photon detectors cannot simultaneously provide energy resolution and time-stamping of individual photons at fast rates. Superconducting detectors have recently made the possibility of simultaneous photon counting, imaging, and energy resolution a reality. Our research group has pioneered the use of one such detector, the Transition-Edge Sensor (TES). TES physics is simple and elegant. A thin superconducting film, biased at its critical temperature, can act as a particle detector: an incident particle deposits energy and drives the film into its superconducting-normal transition. By inductively coupling the detector to a SQUID amplifier circuit, this resistance change can be read out as a current pulse, and its energy deduced by integrating over the pulse. TESs can be used to accurately time-stamp (to 0.1 µs) and energy-resolve (0.15 eV at 1.6 eV) near-IR/visible/near-UV photons at rates of 30 kHz. The first astronomical observations using fiber-coupled detectors were made at the Stanford Student Observatory 0.6 m telescope in 1999. Further observations of the Crab Pulsar from the 107" telescope at the University of Texas McDonald Observatory showed rapid phase variations over the near-IR/visible/near-UV band. These preliminary observations provided a glimpse into a new realm of observations of pulsars, binary systems, and accreting black holes promised by TES arrays. This thesis describes the development, characterization, and preliminary use of the first camera system based on Transition-Edge Sensors. While single-device operation is relatively well-understood, the operation of a full imaging array poses significant challenges. This thesis addresses all aspects related to the creation and characterization of this cryogenic imaging
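The energy-by-pulse-integration idea described above can be sketched numerically: the photon energy is approximately the Joule-heating deficit, the bias voltage times the integrated current pulse. The bias voltage, pulse amplitude, and decay time below are toy values for illustration, not detector parameters from the thesis:

```python
import numpy as np

# Toy parameters (assumed, not from the thesis).
V_BIAS = 0.5e-6        # TES bias voltage [V]
TAU = 50e-6            # pulse decay time constant [s]
AMPLITUDE = 20e-9      # peak current change [A]

# Model the readout pulse as a decaying exponential.
t = np.linspace(0.0, 10 * TAU, 2000)
pulse = AMPLITUDE * np.exp(-t / TAU)           # current deficit ΔI(t) [A]

# Photon energy ~ Joule heating deficit: E ≈ V_bias * ∫ ΔI dt
dt = t[1] - t[0]
energy_j = V_BIAS * pulse.sum() * dt           # simple Riemann-sum integration
energy_ev = energy_j / 1.602176634e-19
print(f"estimated photon energy: {energy_ev:.2f} eV")
```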

  14. Highly Sensitive and Wide-Dynamic-Range Multichannel Optical-Fiber pH Sensor Based on PWM Technique.

    Science.gov (United States)

    Khan, Md Rajibur Rahaman; Kang, Shin-Won

    2016-11-09

    In this study, we propose a highly sensitive multichannel pH sensor that is based on an optical-fiber pulse width modulation (PWM) technique. According to the optical-fiber PWM method, the received sensing signal's pulse width changes when the optical-fiber pH sensing element of the array comes into contact with pH buffer solutions. The proposed optical-fiber PWM pH-sensing system offers a linear sensing response over a wide range of pH values from 2 to 12, with high pH-sensing ability. The sensitivity of the proposed pH sensor is 0.46 µs/pH, and the correlation coefficient R² is approximately 0.997. Additional advantages of the proposed optical-fiber PWM pH sensor include a short response time of about 8 s, good reproducibility with a relative standard deviation (RSD) of about 0.019, easy fabrication, low cost, small size, reusability of the optical-fiber sensing element, and the capability of remote sensing. Finally, the performance of the proposed PWM pH sensor was compared with that of potentiometric, optical-fiber modal interferometer, and optical-fiber Fabry-Perot interferometer pH sensors with respect to dynamic range, linearity, and response and recovery times. We observed that the proposed sensing system has better sensing ability than the above-mentioned pH sensors.
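With the linear response reported above (0.46 µs/pH over pH 2-12), converting a measured pulse-width shift back to a pH value is a one-line inversion of the calibration. A sketch; the reference pulse width at pH 2 is an assumed value for illustration, not a figure from the paper:

```python
SENSITIVITY = 0.46e-6   # pulse-width change per pH unit [s/pH] (from the abstract)
PH_REF = 2.0            # low end of the linear range (from the abstract)
PW_REF = 100e-6         # pulse width measured at PH_REF [s], assumed for illustration

def pulse_width_to_ph(pw):
    """Invert the linear calibration: pH = pH_ref + (pw - pw_ref) / sensitivity."""
    return PH_REF + (pw - PW_REF) / SENSITIVITY

# A pulse width 2.3 µs above the reference corresponds to 5 pH units above pH 2.
print(f"measured pH: {pulse_width_to_ph(102.3e-6):.1f}")
```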

  15. Image enhancement circuit using nonlinear processing curve and constrained histogram range equalization

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.; Panchanathan, S.; Vasudev, B.

    2004-01-01

    For real-time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. The obtained image fidelity results from, amongst others, dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation, because the

  16. Intelligent Data Transfer for Multiple Sensor Networks over a Broad Temperature Range

    Science.gov (United States)

    Krasowski, Michael (Inventor)

    2018-01-01

    A sensor network may be configured to operate in extreme temperature environments. A sensor may be configured to generate a frequency carrier, and transmit the frequency carrier to a node. The node may be configured to amplitude modulate the frequency carrier, and transmit the amplitude modulated frequency carrier to a receiver.

  17. Characterisation of a monolithic active pixel sensor for electron detection in the energy range 10-20 keV

    International Nuclear Information System (INIS)

    Matheson, J.; Moldovan, G.; Clark, A.; Prydderch, M.; Turchetta, R.; Derbyshire, G.; Kirkland, A.; Allinson, N.

    2009-01-01

    As part of a feasibility study into the use of novel electron detectors for X-ray photoelectron emission microscopes (XPEEM), we have characterised the imaging performance of a back-illuminated monolithic active pixel sensor (MAPS) operating under both integrating and counting modes for electrons in the energy range 10-20 keV. For integrating mode, we present the detective quantum efficiency (DQE), which shows marked improvements over conventional indirect detectors based on microchannel plates. We also present the modulation transfer function (MTF) and noise power spectrum (NPS), again demonstrating significantly improved performance. For counting mode, we present the quantum efficiency (QE) as a function of incident electron energy. We have evaluated the charge collection efficiency (CCE) and we thereby demonstrate the presence of a ∼200 nm thick dead layer that is linked with reduced CCE at low electron energies. Based on our findings, we believe that the MAPS technology is well matched to future XPEEM instruments using aberration correction.

  18. Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication

    Science.gov (United States)

    Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun

    2016-04-01

    Intra-body communication technology allows the fabrication of more compact implantable biomedical sensors than RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length and the demonstration of wireless image-data transmission through the brain tissue of a living mouse. The sensor was designed to transmit the output signals of pixel values by pulse width modulation (PWM). The PWM signals from the sensor, transmitted through brain tissue, were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission for micro-sized implantable biomedical sensors.
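The PWM output scheme described above can be sketched as mapping each pixel value to a pulse whose width is proportional to intensity. The clock period and bit depth below are illustrative assumptions, not the sensor's actual parameters:

```python
BIT_DEPTH = 8          # assumed pixel bit depth
T_CLK = 1e-6           # assumed PWM clock period [s]

def encode_pwm(pixel_value):
    """Sensor side: map a pixel value to a pulse width (value * clock period)."""
    assert 0 <= pixel_value < 2 ** BIT_DEPTH
    return pixel_value * T_CLK

def decode_pwm(width):
    """Receiver side: recover the pixel value from the measured pulse width."""
    return round(width / T_CLK)

# Round-trip a small image row through the encode/decode pair.
row = [0, 17, 128, 255]
widths = [encode_pwm(v) for v in row]
print([decode_pwm(w) for w in widths])
```

Because the information is carried in pulse timing rather than amplitude, the scheme tolerates the attenuation a signal suffers while passing through tissue, which is one reason PWM suits intra-body links.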

  19. Comparison of the performance of intraoral X-ray sensors using objective image quality assessment.

    Science.gov (United States)

    Hellén-Halme, Kristina; Johansson, Curt; Nilsson, Mats

    2016-05-01

    The main aim of this study was to evaluate the performance of 10 individual sensors of the same make, using objective measures of key image quality parameters. A further aim was to compare 8 brands of sensors. Ten new sensors of 8 different models from 6 manufacturers (i.e., 80 sensors) were included in the study. All sensors were exposed in a standardized way using an X-ray tube voltage of 60 kVp and different exposure times. Sensor response, noise, low-contrast resolution, spatial resolution and uniformity were measured. Individual differences between sensors of the same brand were surprisingly large in some cases. There were clear differences in the characteristics of the different brands of sensors. The largest variations were found for individual sensor response for some of the brands studied. Also, noise level and low contrast resolution showed large variations between brands. Sensors, even of the same brand, vary significantly in their quality. It is thus valuable to establish action levels for the acceptance of newly delivered sensors and to use objective image quality control for commissioning purposes and periodic checks to ensure high performance of individual digital sensors. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Integrated arrays of air-dielectric graphene transistors as transparent active-matrix pressure sensors for wide pressure ranges.

    Science.gov (United States)

    Shin, Sung-Ho; Ji, Sangyoon; Choi, Seiho; Pyo, Kyoung-Hee; Wan An, Byeong; Park, Jihun; Kim, Joohee; Kim, Ju-Young; Lee, Ki-Suk; Kwon, Soon-Yong; Heo, Jaeyeong; Park, Byong-Guk; Park, Jang-Ung

    2017-03-31

    Integrated electronic circuitries with pressure sensors have been extensively researched as a key component for emerging electronics applications such as electronic skins and health-monitoring devices. Although existing pressure sensors display high sensitivities, they can only be used for specific purposes due to the narrow range of detectable pressure (under tens of kPa) and the difficulty of forming highly integrated arrays. However, it is essential to develop tactile pressure sensors with a wide pressure range in order to use them for diverse application areas including medical diagnosis, robotics or automotive electronics. Here we report an unconventional approach for fabricating fully integrated active-matrix arrays of pressure-sensitive graphene transistors with air-dielectric layers simply formed by folding two opposing panels. Furthermore, this realizes a wide tactile pressure sensing range from 250 Pa to ∼3 MPa. Additionally, fabrication of pressure sensor arrays and transparent pressure sensors are demonstrated, suggesting their substantial promise as next-generation electronics.

  1. Development of a thinned back-illuminated CMOS active pixel sensor for extreme ultraviolet spectroscopy and imaging in space science

    International Nuclear Information System (INIS)

    Waltham, N.R.; Prydderch, M.; Mapson-Menard, H.; Pool, P.; Harris, A.

    2007-01-01

    We describe our programme to develop a large-format, science-grade, monolithic CMOS active pixel sensor for future space science missions, and in particular an extreme ultraviolet (EUV) spectrograph for solar physics studies on ESA's Solar Orbiter. Our route to EUV sensitivity relies on adapting the back-thinning and rear-illumination techniques first developed for CCD sensors. Our first large-format sensor consists of 4k×3k 5 μm pixels fabricated on a 0.25 μm CMOS imager process. Wafer samples of these sensors have been thinned by e2v technologies with the aim of obtaining good sensitivity at EUV wavelengths. We present results from both front- and back-illuminated versions of this sensor. We also present our plans to develop a new sensor of 2k×2k 10 μm pixels, which will be fabricated on a 0.35 μm CMOS process. In progress towards this goal, we have designed a test-structure consisting of six arrays of 512×512 10 μm pixels. Each of the arrays has been given a different pixel design to allow verification of our models, and our progress towards optimizing a design for minimal system readout noise and maximum dynamic range. These sensors will also be back-thinned for characterization at EUV wavelengths

  2. Noise analysis of a novel hybrid active-passive pixel sensor for medical X-ray imaging

    International Nuclear Information System (INIS)

    Safavian, N.; Izadi, M.H.; Sultana, A.; Wu, D.; Karim, K.S.; Nathan, A.; Rowlands, J.A.

    2009-01-01

    Passive pixel sensor (PPS) is one of the most widely used architectures in large area amorphous silicon (a-Si) flat panel imagers. It consists of a detector and a thin film transistor (TFT) acting as a readout switch. While the PPS is advantageous in terms of providing a simple and small architecture suitable for high-resolution imaging, it directly exposes the signal to the noise of the data line and external readout electronics, causing a significant increase in the minimum readable sensor input signal. In this work we present the operation and noise performance of a hybrid 3-TFT current programmed, current output active pixel sensor (APS) suitable for real-time X-ray imaging. The pixel circuit extends the application of the a-Si TFT from a conventional switching element to an on-pixel amplifier for enhanced signal-to-noise ratio and higher imager dynamic range. The capability of operation in both passive and active modes, as well as being able to compensate for inherent instabilities of the TFTs, makes the architecture a good candidate for X-ray imaging modalities with a wide range of incoming X-ray intensities. Measurement and theoretical calculations reveal an input-referred noise below the 1000-electron noise limit for real-time fluoroscopy. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  3. Discrimination between sedimentary rocks from close-range visible and very-near-infrared images

    NARCIS (Netherlands)

    Pozo, Susana Del; Lindenbergh, R.C.; Rodríguez-Gonzálvez, Pablo; Blom, J.C.; González-Aguilera, Diego

    2015-01-01

    Variation in the mineral composition of rocks results in a change of their spectral response capable of being studied by imaging spectroscopy. This paper proposes the use of a low-cost handy sensor, a calibrated visible-very near infrared (VIS-VNIR) multispectral camera for the recognition of

  4. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging

    International Nuclear Information System (INIS)

    Esposito, M; Evans, P M; Wells, K; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Allinson, N M

    2014-01-01

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APS being monolithic, sources of non-uniformity of response and regional variations can persist representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detection quantum efficiency (DQE) at zero-frequency has been performed, resulting in a higher DQE for this

  5. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    Science.gov (United States)

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APS being monolithic, sources of non-uniformity of response and regional variations can persist representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detection quantum efficiency (DQE) at zero-frequency has been performed, resulting in a higher DQE for this

  6. Long-range surface plasmons for high-resolution surface plasmon resonance sensors

    Czech Academy of Sciences Publication Activity Database

    Nenninger, G. G.; Tobiška, Petr; Homola, Jiří; Yee, S. S.

    B74, 1/3 (2001), s. 145-151 ISSN 0925-4005. [European Conference on Optical Chemical Sensors and Biosensors EUROPT(R)ODE /5./. Lyon-Villeurbanne, 16.04.2000-19.04.2000] R&D Projects: GA ČR GA102/99/0549; GA ČR GA102/00/1536 Grant - others:Department of Defense(US) DAAD13-99-C-0032 Institutional research plan: CEZ:AV0Z2067918 Keywords : sensors * surface plasmons * biosensors Subject RIV: JB - Sensors, Measurment, Regulation Impact factor: 1.440, year: 2001

  7. An ultrasensitive method of real time pH monitoring with complementary metal oxide semiconductor image sensor.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2015-02-09

    CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH-indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct color change of red, green and blue (RGB) develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH evaluation was acquired by visual readout. Further, the CMOS image sensor absorbs the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH ranges of solutions. A chromaticity diagram and Euclidean distance represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors in situ. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner. Copyright © 2014 Elsevier B.V. All rights reserved.
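The hue-extraction step described above, collapsing the film's RGB reading into a single value for pH classification, can be sketched with the standard RGB-to-HSV transform from Python's `colorsys` module. The sample colors are illustrative, not calibration data from the paper:

```python
import colorsys

def rgb_to_hue_degrees(r, g, b):
    """Convert an 8-bit RGB reading from the sensor to a hue angle in degrees."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360.0

# Illustrative readings: many indicator dyes shift toward red/yellow in acid and
# toward blue/violet in base (the actual mapping depends on the dye used).
print(rgb_to_hue_degrees(220, 40, 30))   # reddish patch -> small hue angle
print(rgb_to_hue_degrees(60, 60, 200))   # bluish patch  -> hue near 240 degrees
```

A calibration table of hue angles measured for known buffers would then let the device assign an unknown solution to a pH range by nearest hue, which is the role the Euclidean distance plays in the abstract.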

  8. In-Vivo High Dynamic Range Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    … example with a high dynamic velocity range. Velocities an order of magnitude apart are detected in the femoral artery of a 41-year-old healthy individual. Three distinct heart cycles are captured during a 3 s acquisition. The estimated vector velocities are compared against each other within … the heart cycle. The relative standard deviation of the measured velocity magnitude between the three peak systoles was found to be 5.11%, with a standard deviation of the detected angle of 1.06°. In the diastole, it was 1.46% and 6.18°, respectively. The results prove that the method is able to estimate flow

  9. Inertial sensors as measurement tools of elbow range of motion in gerontology

    Directory of Open Access Journals (Sweden)

    Sacco G

    2015-02-01

    Full Text Available G Sacco,1–3,* JM Turpin,3,4,* A Marteu,5 C Sakarovitch,6 B Teboul,2 L Boscher,4,5 P Brocker,4 P Robert,1–3 O Guerin2,3,7 1Memory Center, Claude Pompidou Institut, Department of Geriatrics, University Hospital of Nice, Nice, France; 2Centre d’Innovation et d’Usages en Santé (CIU-S), University Hospital of Nice, Cimiez Hospital, Nice, France; 3CoBTeK Cognition Behaviour Technology EA 7276, Research Center Edmond and Lily Safra, Nice Sophia-Antipolis University, Nice, France; 4Rehabilitation Unit, Department of Geriatrics, University Hospital of Nice, Cimiez Hospital, Nice, France; 5Rehabilitation Unit, Department of Neurosciences, University Hospital of Nice, L’Archet Hospital, Nice, France; 6Department of Clinical Research and Innovation, University Hospital of Nice, Cimiez Hospital, Nice, France; 7Acute Geriatrics Unit, Department of Geriatrics, University Hospital of Nice, Cimiez Hospital, Nice, France *These authors contributed equally to this work Background and purpose: Musculoskeletal system deterioration among the aging is a major reason for loss of autonomy and directly affects the quality of life of the elderly. Articular evaluation is part of physiotherapeutic assessment and helps in establishing a precise diagnosis and deciding appropriate therapy. Reference instruments are valid but not easy to use for some joints. The main goal of our study was to determine the reliability and intertester reproducibility of the MP-BV, an inertial sensor (the MotionPod® [MP]) combined with specific software (BioVal [BV]), for elbow passive range-of-motion measurements in geriatrics. Methods: This open, monocentric, randomized study compared the inertial sensor to an inclinometer in patients hospitalized in an acute, post-acute, and long-term-care gerontology unit. Results: Seventy-seven patients (mean age 83.5±6.4 years, sex ratio 1.08 [male/female]) were analyzed. The MP-BV was reliable for each of the three measurements (flexion, pronation, and

  10. A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing

    Science.gov (United States)

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor in each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the problem to the farmer. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptoms image database and a neuronal network could be added in order to learn from experience and provide an accurate problem diagnosis. PMID:22163948
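A minimal sketch of the kind of in-node image check described above: flag a field of view whose green-channel statistics deviate from a healthy baseline and trigger a notification to the sink. The baseline, threshold, and color model are illustrative assumptions; the paper's actual processing pipeline is not reproduced here:

```python
import numpy as np

# Illustrative baseline for a healthy canopy (assumed, not from the paper).
HEALTHY_GREEN_FRACTION = 0.45   # expected green share of the total RGB intensity
THRESHOLD = 0.10                # deviation that triggers a message to the sink

def leaf_status_alarm(image):
    """image: (h, w, 3) RGB array. True means the node should notify the sink."""
    img = np.asarray(image, dtype=float)
    green_fraction = img[..., 1].sum() / img.sum()
    return abs(green_fraction - HEALTHY_GREEN_FRACTION) > THRESHOLD

# Synthetic patches: a green-dominated (healthy) one and a brownish (suspect) one.
healthy = np.tile([80, 120, 60], (64, 64, 1))
diseased = np.tile([140, 90, 50], (64, 64, 1))
print(leaf_status_alarm(healthy), leaf_status_alarm(diseased))
```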

  11. A wireless sensor network for vineyard monitoring that uses image processing.

    Science.gov (United States)

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor in each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the problem to the farmer. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptoms image database and a neuronal network could be added in order to learn from experience and provide an accurate problem diagnosis.

  12. Frequency-modulated laser ranging sensor with closed-loop control

    Science.gov (United States)

    Müller, Fabian M.; Böttger, Gunnar; Janeczka, Christian; Arndt-Staufenbiel, Norbert; Schröder, Henning; Schneider-Ramelow, Martin

    2018-02-01

    Advances in autonomous driving and robotics are creating high demand for inexpensive and mass-producible distance sensors. A laser ranging system (Lidar) based on the frequency-modulated continuous-wave (FMCW) method is built in this work. The benefits of an FMCW Lidar system are its low-cost components and its competitive performance compared to conventional time-of-flight Lidar systems. The basic system consists of a DFB laser diode (λ = 1308 nm) and an asymmetric fiber-coupled Mach-Zehnder interferometer with a fixed delay line in one arm. Linear tuning of the laser optical frequency via injection current modulation creates a beat signal at the interferometer output. The frequency of the beat signal is proportional to the optical path difference in the interferometer. Since the laser frequency-to-current response is non-linear, a closed-loop feedback system is designed to improve the tuning linearity, and consequently the measurement resolution. For fast active control, an embedded system with an FPGA is used, resulting in nearly linear frequency tuning and a narrow peak in the Fourier spectrum of the beat signal. For free-space measurements, a setup with two distinct interferometers is built. The fully fiber-coupled Mach-Zehnder reference interferometer is part of the feedback loop system, while the other - a Michelson interferometer - has a free-space arm with a collimator lens and a reflective target. A resolution of 2.0 mm at a 560 mm distance is achieved. The results for varying target distances show high consistency and a linear relation to the measured beat frequency.
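The distance recovery described above can be sketched numerically, assuming ideal linear tuning (which the paper's closed-loop control approximates). The sweep bandwidth, sweep time, and sampling rate below are illustrative values, not the paper's parameters.

```python
# Sketch of FMCW ranging: for a free-space target at distance d, the beat
# frequency is f_beat = (BW/T) * 2*d/c, so locating the spectral peak and
# inverting this relation yields the distance. Parameters are illustrative.
import numpy as np

C = 3.0e8          # speed of light, m/s
SWEEP_BW = 1.0e9   # optical frequency sweep bandwidth, Hz (assumed)
SWEEP_T = 1.0e-3   # sweep duration, s (assumed)
FS = 100.0e3       # ADC sampling rate, Hz (assumed)

def simulate_beat(distance_m, n=8192):
    """Ideal beat tone for a target at the given distance."""
    f_beat = (SWEEP_BW / SWEEP_T) * (2.0 * distance_m / C)
    t = np.arange(n) / FS
    return np.cos(2.0 * np.pi * f_beat * t)

def estimate_distance(signal):
    """Find the beat-frequency peak in the spectrum and invert the relation."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    f_beat = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return f_beat * C * SWEEP_T / (2.0 * SWEEP_BW)
```

With these assumed parameters one spectral bin corresponds to roughly 1.8 mm of distance, which is why tuning linearity (a narrow spectral peak) directly determines the achievable resolution.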

  13. Secure Localization for Wireless Sensor Networks using Range-Independent Methods

    National Research Council Canada - National Science Library

    Lazos, Loukas; Poovendran, Radha

    2006-01-01

    Wireless Sensor Networks (WSNs) are envisioned to be integrated into our everyday lives, enabling a wealth of commercial applications such as environmental and habitat monitoring, disaster relief and emergency rescue operations...

  14. CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.

    Science.gov (United States)

    Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V

    2011-04-01

    In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and a pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.

  15. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    Science.gov (United States)

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  16. The enhanced cyan fluorescent protein: a sensitive pH sensor for fluorescence lifetime imaging.

    Science.gov (United States)

    Poëa-Guyon, Sandrine; Pasquier, Hélène; Mérola, Fabienne; Morel, Nicolas; Erard, Marie

    2013-05-01

    pH is an important parameter that affects many functions of live cells, from protein structure or function to several crucial steps of their metabolism. Genetically encoded pH sensors based on pH-sensitive fluorescent proteins have been developed and used to monitor the pH of intracellular compartments. The quantitative analysis of pH variations can be performed either by ratiometric or fluorescence lifetime detection. However, most available genetically encoded pH sensors are based on green and yellow fluorescent proteins and are not compatible with multicolor approaches. Taking advantage of the strong pH sensitivity of enhanced cyan fluorescent protein (ECFP), we demonstrate here its suitability as a sensitive pH sensor using fluorescence lifetime imaging. The intracellular ECFP lifetime undergoes large changes (32 %) in the pH 5 to pH 7 range, which allows accurate pH measurements to better than 0.2 pH units. By fusion of ECFP with the granular chromogranin A, we successfully measured the pH in secretory granules of PC12 cells, and we performed a kinetic analysis of intragranular pH variations in living cells exposed to ammonium chloride.

  17. Target recognition of log-polar ladar range images using moment invariants

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation and scaling invariance of the combined moments becomes invalid; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object position moves from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that in real applications it is better to divide the field of view into a recognition area and a searching area.
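The log-polar representation in question can be sketched as a resampling of a Cartesian range image about the field-of-view center; the grid sizes below are illustrative, and the paper's actual sampling pattern may differ in detail.

```python
# A minimal sketch of log-polar resampling: rows are log-spaced radii
# (dense near the center, sparse at the edge), columns are angles.
# Grid sizes are illustrative assumptions.
import numpy as np

def to_log_polar(img, n_rho=64, n_theta=64):
    """Nearest-neighbour log-polar resampling of a 2-D range image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))      # 1 .. r_max
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]
```

Because the sampling depends on the chosen center, moments computed directly on this representation lose the translation, rotation and scale invariance they have in Cartesian coordinates, consistent with conclusion (i).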

  18. Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor

    Science.gov (United States)

    Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui

    2018-05-01

    At present, both point-source and imaging polarization navigation devices can only output angle information, which means that the velocity of the carrier cannot be extracted directly from the polarization field pattern. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. However, for ordinary optical flow, the pixel-value differences, and with them the calculation accuracy, are reduced in weak light. Polarization imaging technology can improve both the detection accuracy and the recognition probability of a target because it acquires extra multi-dimensional polarization information about the target's radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that the polarized optical flow algorithm adapts well to weak light and can widen the application range of polarization navigation sensors. This research lays the foundation for future day-and-night, all-weather polarization navigation applications.
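The paper's contribution is the pairing of polarization imaging with a traditional optical flow algorithm. As a sketch of the traditional part only, the following is a minimal single-window Lucas-Kanade estimate that could be applied to successive polarization-derived images; the polarization pairing itself is not reproduced here.

```python
# Single-window Lucas-Kanade optical flow (one global velocity): solve the
# brightness-constancy constraint Ix*vx + Iy*vy = -It in least squares.
# This is a generic textbook formulation, not the paper's exact algorithm.
import numpy as np

def lucas_kanade_flow(img1, img2):
    """Estimate a single (vx, vy) displacement, in pixels/frame, between
    two consecutive grayscale (or polarization-derived) images."""
    ix = np.gradient(img1, axis=1)          # spatial gradient, x
    iy = np.gradient(img1, axis=0)          # spatial gradient, y
    it = img2 - img1                        # temporal difference
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(a, -it.ravel(), rcond=None)
    return v                                # (vx, vy)
```

The appeal of substituting polarization-derived images, per the abstract, is that their pixel-value differences degrade less in weak light than ordinary intensity images do.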

  19. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    Directory of Open Access Journals (Sweden)

    Kiyotaka Sasagawa

    2010-12-01

    Full Text Available In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities.

  20. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    Science.gov (United States)

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target-analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.
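The abstract does not give the calibration model, but oxygen luminescence sensors of this kind are conventionally described by Stern-Volmer quenching, so the following sketch assumes that model. The Ksv values and the two-site weighting are illustrative, not the paper's calibration.

```python
# Assumed model: Stern-Volmer quenching, I0/I = 1 + Ksv*[O2], where I0 is
# the luminescence intensity with no oxygen present. A two-luminophore mix
# is sketched with a simplified two-site form. Values are illustrative.
def o2_from_intensity(i0, i, ksv):
    """Invert Stern-Volmer for a single luminophore: [O2] = (I0/I - 1)/Ksv."""
    return (i0 / i - 1.0) / ksv

def intensity_two_site(o2, f1, ksv1, ksv2, i0=1.0):
    """Mix of two luminophores: each contributes its own quenching term,
    so the blend ratio f1 tunes the element's net O2 sensitivity."""
    return i0 * (f1 / (1.0 + ksv1 * o2) + (1.0 - f1) / (1.0 + ksv2 * o2))
```

Varying the blend ratio per element is one plausible reading of how each row of the array is given "a different and specific sensitivity" to oxygen.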

  1. Implementation of large area CMOS image sensor module using the precision align inspection

    International Nuclear Information System (INIS)

    Kim, Byoung Wook; Kim, Toung Ju; Ryu, Cheol Woo; Lee, Kyung Yong; Kim, Jin Soo; Kim, Myung Soo; Cho, Gyu Seong

    2014-01-01

    This paper describes a large-area CMOS image sensor module implementation using a precision align inspection program. This work is needed because the wafer cutting system does not always have high precision. The program checks more than 8 points on the sensor edges and aligns the sensors with a moving table. The size of a 2×1 butted CMOS image sensor module, excluding the PCB, is 170 mm×170 mm. The pixel size is 55 μm×55 μm and the number of pixels is 3,072×3,072. The gap between the two CMOS image sensors was kept to less than one pixel.

  2. Implementation of large area CMOS image sensor module using the precision align inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byoung Wook; Kim, Toung Ju; Ryu, Cheol Woo [Radiation Imaging Technology Center, JBTP, Iksan (Korea, Republic of); Lee, Kyung Yong; Kim, Jin Soo [Nano Sol-Tech INC., Iksan (Korea, Republic of); Kim, Myung Soo; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of)

    2014-12-15

    This paper describes a large-area CMOS image sensor module implementation using a precision align inspection program. This work is needed because the wafer cutting system does not always have high precision. The program checks more than 8 points on the sensor edges and aligns the sensors with a moving table. The size of a 2×1 butted CMOS image sensor module, excluding the PCB, is 170 mm×170 mm. The pixel size is 55 μm×55 μm and the number of pixels is 3,072×3,072. The gap between the two CMOS image sensors was kept to less than one pixel.

  3. Technical guidance for the development of a solid state image sensor for human low vision image warping

    Science.gov (United States)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches for realizing sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies for fabricating image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower-cost solution. By using an architecture with row and column decoders, one has the added flexibility of addressing pixels at random or reading out only part of the image.

  4. The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna

    NARCIS (Netherlands)

    Weimar Acerbi, F.; Clevers, J.G.P.W.; Schaepman, M.E.

    2006-01-01

    Multi-sensor image fusion using the wavelet approach provides a conceptual framework for the improvement of the spatial resolution with minimal distortion of the spectral content of the source image. This paper assesses whether images with a large ratio of spatial resolution can be fused, and

  5. Novel method of optical image registration in wide wavelength range using matrix of piezoelectric crystals

    Science.gov (United States)

    Pigarev, Aleksey V.; Bazarov, Timur O.; Fedorov, Vladimir V.; Ryabushkin, Oleg A.

    2018-02-01

    Most modern systems for optical image registration are based on matrices of photosensitive semiconductor heterostructures. However, measuring radiation intensities up to the several-MW/cm2 level with such detectors is a great challenge because semiconductor elements have a low optical damage threshold. Reflecting or absorbing filters that can be used to attenuate the radiation intensity, as a rule, distort the beam profile. Furthermore, semiconductor-based devices have a relatively narrow measurement wavelength bandwidth. We introduce a novel matrix method of optical image registration. This approach does not require any attenuation when measuring high radiation intensities. The sensitive element is a matrix made of thin transparent piezoelectric crystals that absorb just a small part of the incident optical power. Each crystal element has its own set of intrinsic (acoustic) vibration modes. These modes can be excited through the inverse piezoelectric effect when an external electric field is applied to the crystal sample, provided that the field frequency corresponds to one of the vibration mode frequencies. Such piezoelectric resonances (PR) can be observed by measuring the radiofrequency response spectrum of a crystal placed between capacitor plates. PR frequencies strongly depend on the crystal temperature. Temperature calibration of the PR frequencies is conducted under uniform heating conditions. When a crystal matrix is exposed to laser radiation, the incident power can be obtained separately for each crystal element by measuring its PR frequency kinetics, provided that the optical absorption coefficient is known. The operating wavelength range of such a sensor is restricted by the transmission bandwidth of the crystals used. A planar matrix consisting of LiNbO3 crystals was assembled to demonstrate the proposed approach.
The crystal elements were placed between two electrodes forming a capacitor which

  6. Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Xiaoliang Ge

    2018-02-01

    Full Text Available This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such a readout topology, however, exhibits non-stationary large-signal behavior, and the statistical properties of its temporal noise are functions of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.
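To see why correlated double sampling (the CDS mentioned above) suppresses reset noise, here is a numeric sketch: the reset (kTC) noise is common to the reset and signal samples and cancels in their difference, leaving only the uncorrelated temporal read noise. All noise magnitudes below are illustrative, not taken from the paper.

```python
# Numeric sketch of CDS: sample the pixel once after reset and once after
# integration, then subtract. The reset noise term appears in both samples
# and cancels exactly; only the per-sample read noise remains.
import numpy as np

rng = np.random.default_rng(0)
n = 10000
reset_noise = rng.normal(0.0, 5.0, n)       # kTC noise, frozen at reset
read_noise = rng.normal(0.0, 1.0, (2, n))   # uncorrelated per-sample noise
signal = 100.0                              # photo-generated signal (DN)

sample_reset = reset_noise + read_noise[0]
sample_signal = signal + reset_noise + read_noise[1]
cds_out = sample_signal - sample_reset      # reset noise cancels exactly
```

The raw signal sample carries about sqrt(5^2 + 1^2) ≈ 5.1 DN of noise, while the CDS output retains only the sqrt(2) ≈ 1.4 DN of combined read noise.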

  7. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity.

    Science.gov (United States)

    Zhang, Fan; Niu, Hanben

    2016-06-29

    In this study, a 40 × 48 pixel global-shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation placed a continuous contact ring around each p+/n-well photodiode in the pixel array in order to provide sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/(8.5 × 10⁷) when illuminated by a 405-nm diode laser and 1/(1.4 × 10⁴) when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e⁻ rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) proposed here can be used in ultra-fast framing cameras to observe non-repeatable, fast-evolving phenomena.

  8. Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors.

    Science.gov (United States)

    Malinowski, Pawel E; Georgitzikis, Epimitheas; Maes, Jorick; Vamvaka, Ioanna; Frazzica, Fortunato; Van Olmen, Jan; De Moor, Piet; Heremans, Paul; Hens, Zeger; Cheyns, David

    2017-12-10

    Imaging in the infrared wavelength range has been fundamental in scientific, military and surveillance applications. Currently, it is a crucial enabler of new industries such as autonomous mobility (for obstacle detection), augmented reality (for eye tracking) and biometrics. Ubiquitous deployment of infrared cameras (on a scale similar to visible cameras) is however prevented by high manufacturing cost and low resolution, both related to the need to use image sensors based on flip-chip hybridization. One way to enable monolithic integration is to replace expensive, small-scale III-V-based detector chips with narrow-bandgap thin films compatible with 8- and 12-inch full-wafer processing. This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QDs) with a tunable absorption peak. A photodiode with a 150-nm-thick absorber in an inverted architecture shows a dark current of 10⁻⁶ A/cm² at -2 V reverse bias and EQE above 20% at 1440 nm wavelength. Optical modeling shows that a top-illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling an order-of-magnitude cost reduction for infrared sensors.

  9. Proximity gettering technology for advanced CMOS image sensors using carbon cluster ion-implantation technique. A review

    Energy Technology Data Exchange (ETDEWEB)

    Kurita, Kazunari; Kadono, Takeshi; Okuyama, Ryousuke; Shigemastu, Satoshi; Hirose, Ryo; Onaka-Masada, Ayumi; Koga, Yoshihiro; Okuda, Hidehiko [SUMCO Corporation, Saga (Japan)

    2017-07-15

    A new technique is described for manufacturing advanced silicon wafers with the highest capability yet reported for gettering transition metal, oxygen, and hydrogen impurities in CMOS image sensor fabrication processes. Carbon and hydrogen elements are localized in the projection range of the silicon wafer by implantation of ion clusters from a hydrocarbon molecular gas source. Furthermore, during heat treatment these wafers can getter oxygen impurities that out-diffuse from the Czochralski-grown silicon wafer substrate toward the device active regions, trapping them in the carbon cluster ion projection range. Therefore, they can reduce the formation of transition metal and oxygen-related defects in the device active regions and improve electrical performance characteristics, such as the dark current, white spot defects, pn-junction leakage current, and image lag characteristics. The new technique enables the formation of high-gettering-capability sinks for transition metal, oxygen, and hydrogen impurities under the device active regions of CMOS image sensors. The wafers formed by this technique have the potential to significantly improve electrical device performance characteristics in advanced CMOS image sensors. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  11. Effects of Resolution, Range, and Image Contrast on Target Acquisition Performance.

    Science.gov (United States)

    Hollands, Justin G; Terhaar, Phil; Pavlovic, Nada J

    2018-05-01

    We sought to determine the joint influence of resolution, target range, and image contrast on the detection and identification of targets in simulated naturalistic scenes. Resolution requirements for target acquisition have been developed based on threshold values obtained using imaging systems, when target range was fixed, and image characteristics were determined by the system. Subsequent work has examined the influence of factors like target range and image contrast on target acquisition. We varied the resolution and contrast of static images in two experiments. Participants (soldiers) decided whether a human target was located in the scene (detection task) or whether a target was friendly or hostile (identification task). Target range was also varied (50-400 m). In Experiment 1, 30 participants saw color images with a single target exemplar. In Experiment 2, another 30 participants saw monochrome images containing different target exemplars. The effects of target range and image contrast were qualitatively different above and below 6 pixels per meter of target for both tasks in both experiments. Target detection and identification performance were a joint function of image resolution, range, and contrast for both color and monochrome images. The beneficial effects of increasing resolution for target acquisition performance are greater for closer (larger) targets.

  12. Test-bench for characterization of steady state magnetic sensors parameters in wide temperature range

    International Nuclear Information System (INIS)

    Kovařík, Karel; Ďuran, Ivan; Sentkerestiová, Jana; Šesták, David

    2013-01-01

    Highlights: •Prepared test bench for calibration of steady state magnetic sensors. •Test-bench design optimized for calibration up to 300 °C. •Test-bench is remotely controllable and allows long term measurements. •Construction allows easy manipulation with even irradiated samples. -- Abstract: Magnetic sensors in the ITER tokamak and in other future fusion devices will face an environment with temperatures often elevated well above 200 °C. Dedicated test benches are needed to allow characterization of the performance of magnetic sensors at such elevated temperatures. This contribution describes the realization of a test bench for calibration of steady state magnetic sensors based on the Hall effect. The core of the set-up is a coil providing the DC calibration magnetic field. Optimization of the coil design to ensure its compatibility with elevated temperatures up to 300 °C is described. The optimized coil was manufactured and calibrated both at room temperature and at 250 °C. The measured calibration magnetic field of the coil, biased by a 30 A commercial laboratory power supply, is 224 mT. The coil is supplemented by a PID-regulated air cooling system for fine control of the sensor temperature during measurements. The data acquisition system is composed of PC A/D converter boards with resolution below 1 μV. The key parameters of the test bench are remotely controllable and the system allows long-term continuous measurements, including tests of irradiated samples. The performance of the test bench is demonstrated with recent measurements on metal Hall sensors based on thin copper sensing layers.

  13. Landsat 8 Operational Land Imager (OLI)_Thermal Infared Sensor (TIRS) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Abstract:The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are instruments onboard the Landsat 8 satellite, which was launched in February of...

  14. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sensor Data Records (SDRs), or Level 1b data, from the Visible Infrared Imaging Radiometer Suite (VIIRS) are the calibrated and geolocated radiance and reflectance...

  15. Researchers develop CCD image sensor with 20ns per row parallel readout time

    CERN Multimedia

    Bush, S

    2004-01-01

    "Scientists at the Rutherford Appleton Laboratory (RAL) in Oxfordshire have developed what they claim is the fastest CCD (charge-coupled device) image sensor, with a readout time which is 20ns per row" (1/2 page)

  16. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to detect the...

  17. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Provisional Science Data Vp0

    Data.gov (United States)

    National Aeronautics and Space Administration — The International Space Station (ISS) Lightning Imaging Sensor (LIS) datasets were collected by the LIS instrument on the ISS used to detect the distribution and...

  18. Extended Special Sensor Microwave Imager (SSM/I) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  19. Study of CMOS Image Sensors for the Alignment System of the CMS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Virto, A. L.; Vila, I.; Rodrigo, T.; Matorras, F.; Figueroa, C. F.; Calvo, E.; Calderon, A.; Arce, P.; Oller, J. C.; Molinero, A.; Josa, M. I.; Fuentes, J.; Ferrando, A.; Fernandez, M. G.; Barcala, J. M.

    2002-07-01

    We report on an in-depth study made on commercial CMOS image sensors in order to determine their feasibility for beam light position detection in the CMS multipoint alignment scheme. (Author) 21 refs.

  20. Gimbal Integration to Small Format, Airborne, MWIR and LWIR Imaging Sensors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is for enhanced sensor performance and high resolution imaging for Long Wave InfraRed (LWIR) and Medium Wave IR (MWIR) camera systems used in...

  1. Test–bench for characterization of steady state magnetic sensors parameters in wide temperature range

    Czech Academy of Sciences Publication Activity Database

    Kovařík, Karel; Ďuran, Ivan; Sentkerestiová, J.; Šesták, David

    2013-01-01

    Roč. 88, 6-8 (2013), s. 1319-1322 ISSN 0920-3796. [Symposium on Fusion Technology (SOFT-27)/27./. Liège, 24.09.2012-28.09.2012] R&D Projects: GA MŠk 7G10072; GA ČR GAP205/10/2055; GA MŠk(CZ) LM2011021 Institutional support: RVO:61389021 Keywords : plasma * tokamak * Magnetic sensor testing * Hall sensor * Fusion device Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.149, year: 2013 http://www.sciencedirect.com/science/article/pii/S0920379613002652#

  2. Design and Implementation of a Novel Compatible Encoding Scheme in the Time Domain for Image Sensor Communication

    Directory of Open Access Journals (Sweden)

    Trang Nguyen

    2016-05-01

    Full Text Available This paper presents a modulation scheme in the time domain based on On-Off Keying and proposes various compatible supports for different types of image sensors. The content of this article is a sub-proposal to the IEEE 802.15.7r1 Task Group (TG7r1) aimed at Optical Wireless Communication (OWC) using an image sensor as the receiver. Compatibility support is indispensable for Image Sensor Communications (ISC) because the rolling-shutter image sensors currently available have different frame rates, shutter speeds, sampling rates, and resolutions. Focusing on unidirectional communications (i.e., data broadcasting, beacons), an asynchronous communication prototype is also discussed in the paper. Due to the physical limitations of typical image sensors (including low and varying frame rates, long exposures, and low shutter speeds), the link speed performance is considered critically. Based on practical measurements of camera response to modulated light, an operating frequency range is suggested along with the system architecture, decoding procedure, and algorithms. A significant feature of the proposed data frame structure is that it supports both typical-frame-rate cameras (in an oversampling mode) and very low frame rate cameras (in an error detection mode, for a camera whose frame rate is lower than the transmission packet rate). A high frame rate camera (no less than 20 fps) is supported in the oversampling mode, in which a majority-voting scheme is applied to decode the data. A low frame rate camera (one whose frame rate drops below 20 fps at certain times) is supported by the error detection mode, in which any missing data sub-packet is detected during decoding and later corrected by an external code. Numerical results and analysis are included to indicate the capability of the proposed schemes.
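
The majority-voting decode described for the oversampling mode can be sketched as follows. This is a minimal illustration assuming an ideal binarized camera reading with a fixed integer oversampling factor; the helper name and frame layout are hypothetical, not taken from the TG7r1 proposal:

```python
import numpy as np

def majority_vote_decode(samples, oversample):
    """Decode On-Off-Keying bits from oversampled camera readings.

    samples: flat list of binary readings, `oversample` readings per bit.
    Each bit is recovered as the majority value of its sample group
    (hypothetical helper; the paper's actual frame structure is richer).
    """
    groups = np.asarray(samples).reshape(-1, oversample)
    return (groups.sum(axis=1) > oversample // 2).astype(int).tolist()

# Each transmitted bit sampled 3x; one flipped sample per group is tolerated.
rx = [1, 1, 0,  0, 0, 1,  1, 1, 1]
print(majority_vote_decode(rx, 3))  # [1, 0, 1]
```

The middle sample of the first and second groups is corrupted, yet the majority vote still recovers the transmitted bits.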

  3. Highly Sensitive and Wide-Dynamic-Range Multichannel Optical-Fiber pH Sensor Based on PWM Technique

    Science.gov (United States)

    Khan, Md. Rajibur Rahaman; Kang, Shin-Won

    2016-01-01

    In this study, we propose a highly sensitive multichannel pH sensor based on an optical-fiber pulse width modulation (PWM) technique. In the optical-fiber PWM method, the pulse width of the received sensing signal changes when the optical-fiber pH sensing element of the array comes into contact with pH buffer solutions. The proposed optical-fiber PWM pH-sensing system offers a linear response over a wide range of pH values from 2 to 12, with high pH-sensing ability. The sensitivity of the proposed pH sensor is 0.46 µs/pH, and the correlation coefficient R2 is approximately 0.997. Additional advantages of the proposed optical-fiber PWM pH sensor include a fast response time of about 8 s, good reproducibility with a relative standard deviation (RSD) of about 0.019, easy fabrication, low cost, small size, reusability of the optical-fiber sensing element, and the capability of remote sensing. Finally, the performance of the proposed PWM pH sensor was compared with that of potentiometric, optical-fiber modal interferometer, and optical-fiber Fabry–Perot interferometer pH sensors with respect to dynamic range, linearity, and response and recovery times. We observed that the proposed sensing system has better sensing ability than the above-mentioned pH sensors. PMID:27834865
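
The reported linear response (0.46 µs/pH over pH 2-12) implies that a measured pulse width can be inverted to a pH reading with a one-point calibration. A minimal sketch, assuming a hypothetical calibration pulse width at pH 7:

```python
def pulse_width_to_ph(width_us, width_at_ph7_us, sensitivity_us_per_ph=0.46):
    """Invert the linear PWM response reported for the sensor.

    The paper reports a linear response of 0.46 us/pH over pH 2-12;
    `width_at_ph7_us` is a hypothetical calibration point, not a value
    from the paper.
    """
    return 7.0 + (width_us - width_at_ph7_us) / sensitivity_us_per_ph

# A shift of +0.92 us from the pH 7 calibration point reads as pH 9.
print(round(pulse_width_to_ph(100.92, 100.0), 2))  # 9.0
```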

  4. STUDY ON SHADOW EFFECTS OF VARIOUS FEATURES ON CLOSE RANGE THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    C. L. Liao

    2012-07-01

    Full Text Available Thermal infrared data have become more popular in remote sensing investigations because they can be acquired both day and night. Temperature change has special characteristics in the natural environment, so thermal infrared images can be used in monitoring volcanic landforms, urban development, and disaster prevention. A thermal shadow is formed by the reflected radiance of surrounding objects. Because of the poor spatial resolution of satellite thermal infrared imagery, shadow effects are usually ignored. This research focuses on the shadow effects of various features, including metals and nonmetallic materials. An area-based thermal sensor, a FLIR T360, was used to acquire thermal images. Features with different emissivities were chosen as reflective surfaces to obtain thermal shadows at normal atmospheric temperature. Experiments found that the shadow effects depend on the distance between sensor and feature, the depression angle, the object temperature, and the emissivity of the reflective surface. These causes were varied in the experiments to analyze the resulting variance in the thermal infrared images. The results show quite different shadow effects for metals and for nonmetallic materials. Future work will produce a mathematical model describing the shadow effects of different features.

  5. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Dense range images from sparse point clouds using multi-scale processing

    NARCIS (Netherlands)

    Do, Q.L.; Ma, L.; With, de P.H.N.

    2013-01-01

    Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation, etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such applications.

  7. 110 °C range athermalization of wavefront coding infrared imaging systems

    Science.gov (United States)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    Athermalization over a 110 °C range is significant but difficult in the design of infrared imaging systems. Our wavefront-coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that our wavefront-coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) producing decoded images approximating their corresponding in-focus infrared images, with a mean structural similarity index (MSSIM) value greater than 0.85.

  8. Wireless wearable range-of-motion sensor system for upper and lower extremity joints: a validation study.

    Science.gov (United States)

    Kumar, Yogaprakash; Yen, Shih-Cheng; Tay, Arthur; Lee, Wangwei; Gao, Fan; Zhao, Ziyi; Li, Jingze; Hon, Benjamin; Tian-Ma Xu, Tim; Cheong, Angela; Koh, Karen; Ng, Yee-Sien; Chew, Effie; Koh, Gerald

    2015-02-01

    Range-of-motion (ROM) assessment is a critical tool during the rehabilitation process. The conventional approach uses the goniometer, which remains the most reliable instrument, but it is usually time-consuming and subject to both intra- and inter-therapist measurement error. An automated wireless wearable sensor system for the measurement of ROM was previously developed by the current authors. Presented here are the correlation and accuracy of this sensor system against the goniometer in measuring ROM in the major joints of the upper extremities (UEs) and lower extremities (LEs) in 19 healthy subjects and 20 newly disabled inpatients, through intra-subject comparison of ROM assessments by the sensor system against goniometer measurements by physical therapists. In healthy subjects, ROM measurements using the new sensor system were highly correlated with goniometry; in newly disabled inpatients, ROM measurements using the sensor system were likewise highly correlated with goniometry, with 95% of the differences being < 20° and 25° for most movements in the major joints of the UE and LE, respectively.
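
Agreement of the kind reported ("95% of differences < 20°") is conventionally summarized with Bland-Altman 95% limits of agreement. A sketch of that computation on made-up paired readings (the data and function name are illustrative, not from the study):

```python
import statistics

def limits_of_agreement(sensor_deg, gonio_deg):
    """Bland-Altman 95% limits of agreement between two ROM methods.

    Returns (bias, lower, upper). Under a normality assumption, ~95% of
    method differences fall between the limits bias +/- 1.96 * SD.
    """
    diffs = [s - g for s, g in zip(sensor_deg, gonio_deg)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired knee-flexion readings (degrees), invented data.
sensor = [88, 92, 131, 45, 60]
gonio  = [90, 90, 128, 47, 58]
bias, lo, hi = limits_of_agreement(sensor, gonio)
print(round(bias, 1), round(lo, 1), round(hi, 1))  # 0.6 -4.1 5.3
```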

  9. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    Science.gov (United States)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, while vibration of the sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes using soft-sensor technology for image-motion prediction and focuses on optimizing the prediction algorithm. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a back-propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the prediction model is fast enough for real-time image stabilization in aerial photography.

  10. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT

    Directory of Open Access Journals (Sweden)

    Boris Jutzi

    2011-09-01

    Full Text Available The real and imaginary parts are proposed as an alternative to the usual polar representation of complex-valued images. It is proven that the transformation from polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT) detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: range imaging, laser scanning, and structured light projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect).
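
The polar-to-Cartesian idea can be illustrated by forming a complex-valued image whose magnitude is intensity and whose phase encodes range, then keeping the real and imaginary parts as the two channels. A sketch under assumed scaling (the phase mapping and the `range_scale` parameter are hypothetical, not the paper's exact construction):

```python
import numpy as np

def cartesian_channels(intensity, rng, range_scale):
    """Combine co-registered intensity and range images into a
    complex-valued image and return its real and imaginary parts.

    Polar form: magnitude = intensity, phase = range mapped to [0, 2*pi)
    via the hypothetical `range_scale` (maximum expected range).
    """
    phase = 2 * np.pi * (np.asarray(rng, float) / range_scale)
    z = np.asarray(intensity, float) * np.exp(1j * phase)
    return z.real, z.imag

I = np.array([[1.0, 2.0]])   # intensity channel
R = np.array([[0.0, 2.5]])   # range channel (same units as range_scale)
re, im = cartesian_channels(I, R, range_scale=10.0)
print(re, im)  # phase 0 -> (1, 0); phase pi/2 -> (~0, 2)
```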

  11. Novel Hall sensors developed for magnetic field imaging systems

    International Nuclear Information System (INIS)

    Cambel, Vladimir; Karapetrov, Goran; Novosad, Valentyn; Bartolome, Elena; Gregusova, Dagmar; Fedor, Jan; Kudela, Robert; Soltys, Jan

    2007-01-01

    We report here on the fabrication and application of novel planar Hall sensors based on a shallow InGaP/AlGaAs/GaAs heterostructure with a two-dimensional electron gas (2DEG) as the active layer. The sensors were developed for two kinds of experiments. In the first, magnetic samples are placed directly on the Hall sensor; room-temperature experiments with permalloy objects evaporated onto the sensor are presented. In the second experiment, the sensor scans closely over a multigranular superconducting sample prepared on a YBCO thin film. Large-area and high-resolution scanning experiments were performed at 4.2 K with the Hall probe scanning system in a liquid helium flow cryostat.

  12. High-speed particle tracking in microscopy using SPAD image sensors

    Science.gov (United States)

    Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2018-02-01

    Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays in high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and in intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
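
The external frame summation that recovers bit depth from 1-bit SPAD exposures can be sketched in a few lines (illustrative only; real sensors perform the summation in readout hardware or an FPGA):

```python
import numpy as np

def sum_binary_frames(frames):
    """External frame summation for a binary SPAD array.

    `frames` is a stack of 1-bit exposures (0 = no photon detected,
    1 = at least one photon); summing N of them yields an N-level
    intensity image, trading temporal resolution for bit depth as
    described for binary SPAD sensors.
    """
    return np.sum(np.asarray(frames, dtype=np.uint16), axis=0)

# 100 simulated 1-bit frames of a 4x4 array.
binary_stack = np.random.default_rng(0).integers(0, 2, size=(100, 4, 4))
img = sum_binary_frames(binary_stack)
print(img.shape, img.max() <= 100)  # (4, 4) True
```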

  13. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  14. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    Science.gov (United States)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. Due to the large number of photosites, this provides an image sensor with very high sensitivity but also makes them prone to noise effects such as hot-pixels. Similar to earlier research with older models of smartphone, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median-filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. The method outlined in this research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
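
Using the trained threshold of 9 DN and the reported optimum 7 × 7 median filter, hot-pixel repair can be sketched as below (numpy-only; the function name and the synthetic dark frame are illustrative):

```python
import numpy as np

def remove_hot_pixels(dark, size=7, threshold=9):
    """Flag and repair hot pixels in a dark frame.

    Pixels brighter than `threshold` DN (the paper's trained value) are
    replaced by the median of a size x size neighbourhood (7x7 was the
    reported optimum). Minimal numpy-only median-filter sketch.
    """
    pad = size // 2
    padded = np.pad(dark, pad, mode="reflect")
    out = dark.copy()
    hot = np.argwhere(dark > threshold)
    for r, c in hot:
        out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out, len(hot)

frame = np.full((20, 20), 2, dtype=np.int32)  # synthetic dark frame, 2 DN
frame[5, 5] = 250                             # simulated hot pixel
clean, n_hot = remove_hot_pixels(frame)
print(n_hot, clean[5, 5])  # 1 2
```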

  15. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a ballistic target as a point source above the atmosphere; it then simulates the infrared imaging of the exo-atmospheric ballistic target from two aspects, the space-based sensor's camera parameters and the target's characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.

  16. Nanoimprinted distributed feedback dye laser sensor for real-time imaging of small molecule diffusion

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2014-01-01

    Label-free imaging is a promising tool for the study of biological processes such as cell adhesion and small-molecule signaling. In order to image in two spatial dimensions, current solutions require motorized stages, which results in low imaging frame rates. Here, a highly sensitive distributed feedback (DFB) dye laser sensor for real-time label-free imaging without any moving parts, enabling a frame rate of 12 Hz, is presented. The presence of molecules on the laser surface results in a wavelength shift, which is used as the sensor signal. The unique DFB laser structure comprises several areas...

  17. Particle detection and classification using commercial off the shelf CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Martín [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Lipovetzky, Jose, E-mail: lipo@cab.cnea.gov.ar [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Sofo Haro, Miguel; Sidelnik, Iván; Blostein, Juan Jerónimo; Alcalde Bessia, Fabricio; Berisso, Mariano Gómez [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina)

    2016-08-11

    In this paper we analyse the response of two different commercial off-the-shelf CMOS image sensors as particle detectors. The sensors were irradiated with X-ray photons, gamma photons, beta particles, and alpha particles from various sources. The amount of charge produced by the different particles and the size of the spot registered on the sensor are compared and analysed by an algorithm that classifies them. For a known incident energy spectrum, the employed sensors provide a dose resolution below one microgray, showing their potential in radioprotection, area monitoring, and medical applications.
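
A classifier of the kind described, separating events by deposited charge and spot size, might look like the toy below; the thresholds and class boundaries are invented for illustration, not taken from the paper:

```python
def classify_event(total_charge_dn, spot_pixels):
    """Toy particle classifier from spot charge and size.

    The paper classifies events by deposited charge and spot size; the
    numeric thresholds below are invented for illustration only.
    """
    if spot_pixels >= 30 and total_charge_dn >= 5000:
        return "alpha"   # large, heavily ionizing spot
    if spot_pixels >= 5:
        return "beta"    # extended track
    return "photon"      # X/gamma: small localized deposit

print(classify_event(8000, 40), classify_event(900, 8), classify_event(120, 2))
# alpha beta photon
```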

  18. Crop status sensing system by multi-spectral imaging sensor, 1: Image processing and paddy field sensing

    International Nuclear Information System (INIS)

    Ishii, K.; Sugiura, R.; Fukagawa, T.; Noguchi, N.; Shibata, Y.

    2006-01-01

    The objective of the study was to construct a sensing system for precision farming. A Multi-Spectral Imaging Sensor (MSIS), which can obtain three images (G, R and NIR) simultaneously, was used for detecting the growth status of plants. The sensor was mounted on an unmanned helicopter. An image processing method for acquiring crop-status information with high accuracy was developed. Crop parameters measured included SPAD, leaf height, and stem number. Both a direct-seeding variety and a transplanted variety of paddy rice were used in the research. The result of a field test showed that the crop status of both varieties could be detected with sufficient accuracy for application to precision farming.
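
With co-registered R and NIR bands available from the MSIS, a standard vegetation index such as NDVI can be computed per pixel. This is a generic illustration of what two of the three bands support, not necessarily the paper's crop-status estimator:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from the R and NIR bands.

    NDVI = (NIR - R) / (NIR + R); a small epsilon guards against
    division by zero on dark pixels.
    """
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

# Dense canopy (high NIR, low R) vs sparse canopy, reflectance values.
print(np.round(ndvi([0.5, 0.4], [0.1, 0.3]), 3))  # [0.667 0.143]
```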

  19. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. It is therefore important to consider it during image reconstruction in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET, in different materials, were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates; information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high-positron-energy isotopes, the reconstructed images show a significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared with reconstructions without positron range modeling.
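
Blurring in the forward projection can be sketched in one dimension: the emission image is convolved with a positron-range profile before the geometric projection. The kernel and the identity "scanner" below are toy stand-ins for the PeneloPET-derived, material-dependent profiles:

```python
import numpy as np

def forward_project_with_range(image, system_matrix, range_kernel):
    """Forward projection with positron-range blurring (1-D sketch).

    The image is convolved with a positron-range profile before the
    geometric projection, mirroring the paper's approach of blurring in
    the forward step. A real `range_kernel` would be material- and
    isotope-dependent (e.g. from PeneloPET simulations).
    """
    blurred = np.convolve(image, range_kernel, mode="same")
    return system_matrix @ blurred

img = np.zeros(9)
img[4] = 1.0                          # point source
kernel = np.array([0.25, 0.5, 0.25])  # toy range profile, sums to 1
A = np.eye(9)                         # identity "scanner" for illustration
sino = forward_project_with_range(img, A, kernel)
print(sino[3:6])  # point source spread over its neighbours as 0.25/0.5/0.25
```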

  20. Study of CT-based positron range correction in high resolution 3D PET imaging

    International Nuclear Information System (INIS)

    Cal-Gonzalez, J.; Herraiz, J.L.; Espana, S.; Vicente, E.; Herranz, E.; Desco, M.; Vaquero, J.J.; Udias, J.M.

    2011-01-01

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. It is therefore important to consider it during image reconstruction in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET, in different materials, were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates; information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high-positron-energy isotopes, the reconstructed images show a significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared with reconstructions without positron range modeling.

  1. A Solar Position Sensor Based on Image Vision.

    Science.gov (United States)

    Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Acuña, Alexis; Rosales, Pedro; Suastegui, José

    2017-07-29

    Solar collector technologies perform better when the Sun-beam direction is normal to the capturing surface; for that to happen despite the relative movement of the Sun, solar tracking systems are used, and there are rules and standards that require a minimum accuracy of the tracking systems used in solar collector evaluation. Achieving this accuracy is not easy, hence this document presents the design, construction, and characterization of a sensor based on a vision system that measures the relative azimuth and elevation errors with respect to the solar position of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation, and a vision sensor, and it obtains the angle of incidence of the Sun's rays as well as the tilt and position of the sensor. The sensor's characterization showed that a measurement of focus error or Sun position can be made with an accuracy of 0.0426° and an uncertainty of 0.986%, which can be improved to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the vision-based solar tracking sensor meets the Sun-detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool in photovoltaic installations and solar collectors.
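
Under a simple pinhole-camera model, a vision sensor converts the pixel offset of the Sun's centroid from the optical axis into an angular error via atan(d/f). A sketch with hypothetical numbers chosen to land near the quoted 0.0426° accuracy:

```python
import math

def angular_error_deg(centroid_px, center_px, focal_px):
    """Angular offset of the Sun's image from the optical axis.

    Pinhole model: an offset of d pixels at a focal length of f pixels
    corresponds to atan(d / f). All numbers below are hypothetical.
    """
    dx = centroid_px[0] - center_px[0]
    dy = centroid_px[1] - center_px[1]
    return math.degrees(math.atan2(math.hypot(dx, dy), focal_px))

# A 3-pixel centroid offset at a 4000-px focal length is ~0.043 deg,
# on the order of the accuracy quoted for the sensor.
print(round(angular_error_deg((1283, 960), (1280, 960), 4000), 3))  # 0.043
```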

  2. Methods and apparatuses for detection of radiation with semiconductor image sensors

    Science.gov (United States)

    Cogliati, Joshua Joseph

    2018-04-10

    A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
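
A simplified version of the described pipeline, removing per-pixel noise common to all dark exposures and then thresholding for bright points, can be sketched as follows. The subtraction-of-minimum step is an assumption standing in for the patent's composite-image construction, and the function name is invented:

```python
import numpy as np

def radiation_bright_points(exposures, first_threshold):
    """Find radiation-candidate pixels across dark exposures.

    Simplified scheme: noise common to all frames is removed by
    subtracting the per-pixel minimum, then pixels whose residual
    exceeds `first_threshold` in any frame are flagged as bright points.
    """
    stack = np.asarray(exposures, dtype=np.int32)
    composite = stack - stack.min(axis=0)      # common noise removed
    return np.argwhere(composite.max(axis=0) > first_threshold)

frames = np.full((3, 8, 8), 10)                # uniform dark level of 10 DN
frames[1, 2, 6] = 200                          # one gamma hit in frame 1
hits = radiation_bright_points(frames, first_threshold=50)
print(hits.tolist())  # [[2, 6]]
```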

  3. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    Directory of Open Access Journals (Sweden)

    Victor Lawrence

    2012-07-01

    Full Text Available Electro-optic (EO) image sensors exhibit high resolution and low noise in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. The framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. We adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which combines the modulation transfer function (MTF) of a uniform detector array with the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse-filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available.
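
Steps (2) and (4) of the framework can be sketched with a Sobel edge map of the EO image alpha-blended onto a registered IR image. Registration and the inverse-filter transform of step (1) are assumed already done; this is a numpy-only sketch, not the paper's implementation:

```python
import numpy as np

def blend_eo_edges_into_ir(ir, eo, alpha=0.7):
    """Fuse EO edge detail into a registered IR image.

    A Sobel gradient-magnitude map of the EO image is normalized to the
    IR intensity scale and alpha-blended onto the IR image.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(eo.astype(float), 1, mode="edge")
    h, w = eo.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    edges = np.hypot(gx, gy)
    if edges.max() > 0:
        edges = edges / edges.max() * 255.0
    return alpha * ir.astype(float) + (1 - alpha) * edges

ir = np.full((16, 16), 60.0)                   # featureless synthetic IR frame
eo = np.zeros((16, 16)); eo[:, 8:] = 255.0     # vertical edge in the EO frame
fused = blend_eo_edges_into_ir(ir, eo)
print(fused.shape, fused[8, 8] > fused[8, 2])  # (16, 16) True
```

Pixels along the EO edge are brightened in the fused result, while flat regions keep the IR level.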

  4. CMOS-sensors for energy-resolved X-ray imaging

    International Nuclear Information System (INIS)

    Doering, D.; Amar-Youcef, S.; Deveaux, M.; Linnik, B.; Müntz, C.; Stroth, Joachim; Baudot, J.; Dulinski, W.; Kachel, M.

    2016-01-01

    Due to their low noise, CMOS Monolithic Active Pixel Sensors are suited to sensing X-rays with quantum energies of a few keV, which is of interest for high-resolution X-ray imaging. Moreover, the good energy resolution of the silicon sensors can be used to measure this quantum energy. Combining both features with the good spatial resolution of CMOS sensors opens the potential to build 'color-sensitive' X-ray cameras. Taking such colored images is hampered by the need to operate the CMOS sensors in a single-photon-counting mode, which restricts the photon flux capability of the sensors. More importantly, charge sharing between pixels smears the potentially good energy resolution of the sensors. Based on our experience with CMOS sensors for charged-particle tracking, we studied techniques to overcome the latter by means of offline processing of the data obtained from a CMOS sensor prototype. We found that the energy resolution of the pixels can be recovered at the expense of reduced quantum efficiency. We introduce the results of our study and discuss the feasibility of taking colored X-ray pictures with CMOS sensors.

  5. Accurate dew-point measurement over a wide temperature range using a quartz crystal microbalance dew-point sensor

    Science.gov (United States)

    Kwon, Su-Yong; Kim, Jong-Chul; Choi, Buyng-Il

    2008-11-01

    Quartz crystal microbalance (QCM) dew-point sensors are based on frequency measurement, and so have fast response time, high sensitivity and high accuracy. Recently, we have reported that they have the very convenient attribute of being able to distinguish between supercooled dew and frost from a single scan through the resonant frequency of the quartz resonator as a function of the temperature. In addition to these advantages, by using three different types of heat sinks, we have developed a QCM dew/frost-point sensor with a very wide working temperature range (-90 °C to 15 °C). The temperature of the quartz surface can be obtained effectively by measuring the temperature of the quartz crystal holder and using temperature compensation curves (which showed a high level of repeatability and reproducibility). The measured dew/frost points showed very good agreement with reference values and were within ±0.1 °C over the whole temperature range.

  6. An off-on Fluorescent Sensor for Detecting a Wide Range of Water Content in Organic Solvents

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kanghyeon; Lee, Wanjin; Kim, Jae Nyoung; Kim, Hyung Jin [Chonnam National Univ., Gwangju (Korea, Republic of)]

    2013-08-15

    This paper describes the synthesis and water-sensing properties of a fluorescent photoinduced electron transfer (PET) sensor (5) with an extended operating sensing range. The 1,8-naphthalimide derivative (5), bearing a piperazine group and a carboxylic acid group, was synthesized and applied as a fluorescent water sensor in water-miscible organic solvents. The fluorescence intensity of dye 5 increased with increasing water content up to 80% (v/v), and the fluorescence intensities were enhanced 45-, 67- and 122-fold in aqueous EtOH, DMF and DMSO solutions, respectively. In aqueous acetone solution, the enhancement of the fluorescence intensities was somewhat lower (30-fold) but the response range was wider (0-90%, v/v).

  7. Cloud Classification in Wide-Swath Passive Sensor Images Aided by Narrow-Swath Active Sensor Data

    Directory of Open Access Journals (Sweden)

    Hongxia Wang

    2018-05-01

    Full Text Available It is a challenge to distinguish between different cloud types because of the complexity and diversity of cloud coverage, which is a significant clutter source affecting target detection and identification in images from space-based infrared sensors. In this paper, a novel strategy for cloud classification in wide-swath passive sensor images is developed, aided by narrow-swath active sensor data. The strategy consists of three steps: orbit registration, selection of the most-matching donor pixel, and cloud-type assignment for each recipient pixel. A new criterion for orbit registration is proposed to improve the matching accuracy. The most-matching donor pixel is selected via the Euclidean distance and the square sum of the radiance relative differences between the recipient and the potential donor pixels. Each recipient pixel is then assigned the cloud type of its most-matching donor. Cloud classification of Moderate Resolution Imaging Spectroradiometer (MODIS) images is performed with the aid of data from the Cloud Profiling Radar (CPR). The results are compared with the CloudSat product 2B-CLDCLASS, as well as those obtained using the method of the International Satellite Cloud Climatology Project (ISCCP), demonstrating the superior classification performance of the proposed strategy.
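The donor-selection step can be sketched as follows. The distance gate `max_dist` and the way the two criteria are combined (distance as a gate, relative-difference score as the ranking) are assumptions, since the abstract does not give the exact combination rule:

```python
import numpy as np

def match_donor(recipient_rad, recipient_xy, donor_rads, donor_xys,
                max_dist=10.0):
    """Pick the most-matching donor pixel for one recipient pixel.

    Donor candidates within `max_dist` (Euclidean) are ranked by the
    square sum of radiance relative differences; the winner's cloud type
    would then be assigned to the recipient. Returns the donor index,
    or None if no donor lies within the gate."""
    d = np.linalg.norm(donor_xys - recipient_xy, axis=1)
    candidates = np.where(d <= max_dist)[0]
    if candidates.size == 0:
        return None
    rel = (donor_rads[candidates] - recipient_rad) / np.maximum(
        np.abs(recipient_rad), 1e-12)
    score = np.sum(rel ** 2, axis=1)
    return int(candidates[np.argmin(score)])
```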

  8. The challenge of sCMOS image sensor technology to EMCCD

    Science.gov (United States)

    Chang, Weijing; Dai, Fang; Na, Qiyue

    2018-02-01

    In the field of low-illumination image sensors, the noise of the latest scientific-grade CMOS (sCMOS) image sensors approaches that of EMCCDs, and the industry considers sCMOS to have the potential to compete with and even replace EMCCD. We therefore selected several typical sCMOS and EMCCD image sensors and cameras and compared their performance parameters. The results show that the signal-to-noise ratio of sCMOS is close to that of EMCCD, while the other parameters are superior. However, signal-to-noise ratio is critical for low-illumination imaging, and the actual imaging results of sCMOS are not ideal. EMCCD remains the first choice in high-performance applications.

  9. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

    Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, through his personality, and through his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities in the Berkeley Applied Nuclear Physics program, reflecting some of the breadth of radiation detection technologies and their applications, ranging from fundamental studies in physics to biomedical imaging and nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities, which arose from the events at the Dai-ichi nuclear power plant in Fukushima, Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • Institute for Resilient Communities combines science, education, and communities to minimize the impact of disastrous events.

  10. Characterization of total ionizing dose damage in COTS pinned photodiode CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zujun, E-mail: wangzujun@nint.ac.cn; Ma, Wuying; Huang, Shaoyan; Yao, Zhibin; Liu, Minbo; He, Baoping; Sheng, Jiangkun; Xue, Yuan [State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an, Shaanxi 710024 (China); Liu, Jing [School of Materials Science and Engineering, Xiangtan University, Hunan (China)

    2016-03-15

    The characterization of total ionizing dose (TID) damage in COTS pinned photodiode (PPD) CMOS image sensors (CISs) is investigated. The radiation experiments are carried out at a ⁶⁰Co γ-ray source. The CISs are produced in a 0.18-μm CMOS technology, and the pixel architecture is an 8T global shutter pixel with correlated double sampling (CDS) based on a 4T PPD front end. Temporal-, spatial-, and spectral-domain parameters of the CISs are measured with a CIS test system according to the EMVA 1288 standard before and after irradiation. The dark current, random noise, dark signal non-uniformity (DSNU), photo response non-uniformity (PRNU), overall system gain, saturation output, dynamic range (DR), signal-to-noise ratio (SNR), quantum efficiency (QE), and responsivity versus TID are reported. The tested CISs show remarkable degradation after irradiation. The degradation mechanisms of CISs induced by TID damage are also analyzed.

  11. A GRAPH READER USING A CCD IMAGE SENSOR

    African Journals Online (AJOL)

    2008-01-18

    Jan 18, 2008 ... using a stepper motor controlled by a software program in a ... Keywords: CCD sensor, microcontroller, stepper motor and microcomputer. 1. ... commercial applications (Awcock and ... on-chip amplifier, one pixel at a time.

  12. A Faraday effect position sensor for interventional magnetic resonance imaging.

    Science.gov (United States)

    Bock, M; Umathum, R; Sikora, J; Brenner, S; Aguor, E N; Semmler, W

    2006-02-21

    An optical sensor is presented which determines the position and one degree of orientation within a magnetic resonance tomograph. The sensor utilizes the Faraday effect to measure the local magnetic field, which is modulated by switching additional linear magnetic fields, the gradients. Existing methods for instrument localization during an interventional MR procedure often use electrically conducting structures at the instruments that can heat up excessively during MRI and are thus a significant danger for the patient. The proposed optical Faraday effect position sensor consists of non-magnetic and electrically non-conducting components only, so that heating is avoided and the sensor could be applied safely even within the human body. With a non-magnetic prototype set-up, experiments were performed to demonstrate the possibility of measuring both the localization and the orientation in a magnetic resonance tomograph. In a 30 mT/m gradient field, a localization uncertainty of 1.5 cm could be achieved.

  13. Influence of range-gated intensifiers on underwater imaging system SNR

    Science.gov (United States)

    Wang, Xia; Hu, Ling; Zhi, Qiang; Chen, Zhen-yue; Jin, Wei-qi

    2013-08-01

    Range-gated technology has been a hot research field in recent years because it effectively eliminates backscattering; as a result, it can enhance the contrast between a target and its background and extend the working distance of the imaging system. An underwater imaging system must image in low-light-level conditions and also suppress backscattering, which means the receiver must offer a high-speed external trigger function, high resolution, high sensitivity, low noise, and a wide gain dynamic range. For an intensifier, the noise characteristics directly restrict the observation quality and range of the imaging system: background noise may decrease image contrast and sharpness, even burying the signal so that the target cannot be recognized. It is therefore important to investigate the noise characteristics of intensifiers. SNR is an important parameter reflecting the noise behaviour of a system. Using an underwater laser range-gated imaging prediction model and linear SNR system theory, the gated-imaging noise performance of commercially available super-second-generation and generation-III intensifiers was analyzed theoretically. Based on the active-laser underwater range-gated imaging model, the effect of gated intensifiers on the system and the relationship between system SNR and MTF were studied. Through theoretical and simulation analysis of intensifier background noise and SNR, the different influences of super-second-generation and generation-III ICCDs on system SNR were obtained. A formula for range-gated system SNR was put forward, and the effects of the two kinds of ICCDs on the system were compared, with a detailed analysis carried out in MATLAB simulation. This work lays a theoretical foundation for further eliminating the backscattering effect and improving

  14. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel), high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
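The phase-extraction principle behind such indirect time-of-flight cameras can be illustrated with a minimal sketch: the phase of the fundamental DFT bin of N equally spaced samples of the beat (correlation) waveform gives the round-trip delay, and distance follows as d = c·φ/(4πf). This is a generic illustration of the sampling scheme, not the authors' implementation:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(samples, mod_freq_hz):
    """Estimate range from N equally spaced samples of one beat cycle.

    The phase of the fundamental DFT bin encodes the time of flight;
    with N > 4 samples the estimate is less sensitive to waveform
    harmonics than four-sample quadrature detection."""
    n = len(samples)
    k = np.arange(n)
    fund = np.sum(samples * np.exp(-2j * np.pi * k / n))
    phi = (-np.angle(fund)) % (2 * np.pi)   # delay phase in [0, 2*pi)
    return C * phi / (4 * np.pi * mod_freq_hz)
```

For a 30 MHz modulation frequency, the unambiguous range is c/(2f) ≈ 5 m, reached as φ approaches 2π.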

  15. Multi-exposure high dynamic range image synthesis with camera shake correction

    Science.gov (United States)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image for tasks such as crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene; these assumptions limit their application. Widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high-quality HDR image without the ghost effect, we present an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without the ghost effect by registering and fusing four multi-exposure images.
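The histogram-equalization step used to remove illumination differences between differently exposed LDR frames before feature matching can be sketched in plain numpy (the ORB feature matching itself is omitted here to keep the example dependency-free; this is a generic 8-bit equalization, not the authors' code):

```python
import numpy as np

def hist_equalize(img_u8):
    """Histogram-equalize an 8-bit image so that two LDR exposures of the
    same scene end up with comparable tonal distributions before matching."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first non-empty bin
    denom = max(img_u8.size - cdf_min, 1)
    lut = np.round((cdf - cdf_min) / denom * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img_u8]
```

After equalization, binary descriptors such as ORB match far more reliably across exposures, since their intensity comparisons see similar contrast in both frames.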

  16. An Approach for Unsupervised Change Detection in Multitemporal VHR Images Acquired by Different Multispectral Sensors

    Directory of Open Access Journals (Sweden)

    Yady Tatiana Solano-Correa

    2018-03-01

    Full Text Available This paper proposes an approach for the detection of changes in multitemporal Very High Resolution (VHR) optical images acquired by different multispectral sensors. The proposed approach, which is inspired by a recent framework developed to support the design of change-detection systems for single-sensor VHR remote sensing images, addresses and integrates into the general approach a strategy to effectively deal with multisensor information, i.e., to perform change detection between VHR images acquired by different multispectral sensors on two dates. This is achieved by the definition of procedures for the homogenization of radiometric, spectral and geometric image properties. These procedures map images into a common feature space where the information acquired by different multispectral sensors becomes comparable across time. Although the approach is general, here we optimize it for the detection of changes in vegetation and urban areas by employing features based on linear transformations (Tasseled Caps and Orthogonal Equations), which are shown to be effective for representing the multisensor information in a homogeneous physical way irrespective of the considered sensor. Experiments on multitemporal images acquired by different VHR satellite systems (i.e., QuickBird, WorldView-2 and GeoEye-1) confirm the effectiveness of the proposed approach.

  17. Highly sensitive digital optical sensor with large measurement range based on the dual-microring resonator with waveguide-coupled feedback

    International Nuclear Information System (INIS)

    Xiang Xing-Ye; Wang Kui-Ru; Yuan Jin-Hui; Jin Bo-Yuan; Sang Xin-Zhu; Yu Chong-Xiu

    2014-01-01

    We propose a novel high-performance digital optical sensor based on the Mach-Zehnder interference effect and dual-microring resonators with waveguide-coupled feedback. The simulation results show that the sensitivity of the sensor can be orders of magnitude higher than that of a conventional sensor, and a high quality factor is not critical. Moreover, by setting the length of the feedback waveguide equal to the perimeter of the ring, the measurement range of the proposed sensor is twice that of the conventional sensor in the weak coupling case.

  18. Pesticide residue quantification analysis by hyperspectral imaging sensors

    Science.gov (United States)

    Liao, Yuan-Hsun; Lo, Wei-Sheng; Guo, Horng-Yuh; Kao, Ching-Hua; Chou, Tau-Meu; Chen, Junne-Jih; Wen, Chia-Hsien; Lin, Chinsu; Chen, Hsian-Min; Ouyang, Yen-Chieh; Wu, Chao-Cheng; Chen, Shih-Yu; Chang, Chein-I.

    2015-05-01

    Pesticide residue detection in agricultural crops is a challenging issue, and it is even more difficult to quantify pesticide residues in agricultural produce and fruits. This paper conducts a series of baseline experiments designed for three specific pesticides commonly used in Taiwan. The materials used for the experiments are single leaves of vegetable produce contaminated with various concentrations of pesticides. Two sensors are used to collect data. One is Fourier transform infrared (FTIR) spectroscopy. The other is a hyperspectral sensor, the Geophysical and Environmental Research (GER) 2600 spectroradiometer, a battery-operated field-portable spectroradiometer with full real-time data acquisition from 350 nm to 2500 nm. In order to quantify data with different levels of pesticide residue concentration, several measures for spectral discrimination are developed. More specifically, new measures for calculating relative power between the two sensors are designed to evaluate the effectiveness of each sensor in quantifying the pesticide residues. The experimental results show that the GER is a better sensor than FTIR for pesticide residue quantification.

  19. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images

    NARCIS (Netherlands)

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute

  20. Fusing range and intensity images for generating dense models of three-dimensional environments

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Miró, Jaime Valls; Dissanayake., Gamini

    This paper presents a novel strategy for the construction of dense three-dimensional environment models by combining images from a conventional camera and a range imager. Robust data association is first accomplished by exploiting the Scale Invariant Feature Transformation (SIFT) technique...

  1. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain, utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range, with root mean square errors around 0.3 pixels or better.
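The core of phase-correlation shift estimation via the Fourier shift theorem can be sketched as follows; AROSICS adds sub-pixel refinement, moving windows, and quality filtering on top of this basic idea, so this integer-peak version is only the starting point:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative to `ref`.

    The normalized cross-power spectrum of the two images concentrates into
    a delta at the translation offset (Fourier shift theorem)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    # wrap shifts beyond half the image size to negative values
    for ax, size in zip((0, 1), ref.shape):
        if shifts[ax] > size // 2:
            shifts[ax] -= size
    return shifts   # (dy, dx) such that moved ~= np.roll(ref, shifts)
```

Sub-pixel accuracy (as reported above, around 0.3 pixels RMSE) is typically obtained by locally upsampling or fitting the correlation peak rather than taking the integer argmax.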

  2. RADIOMETRIC NORMALIZATION OF LARGE AIRBORNE IMAGE DATA SETS ACQUIRED BY DIFFERENT SENSOR TYPES

    Directory of Open Access Journals (Sweden)

    S. Gehrke

    2016-06-01

    Full Text Available Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling, with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images, allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins, correcting for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in
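The location-dependent correction model, contrast/brightness values defined at radiometric fix points with bilinear interpolation in between, can be sketched roughly like this. The regular coarse grid used here is an assumption for illustration; the paper's fix-point distribution is adaptive per image:

```python
import numpy as np

def local_gain_offset(img, gains, offsets):
    """Apply location-dependent contrast (gain) and brightness (offset)
    corrections defined on a coarse grid of fix points, bilinearly
    interpolated to every pixel of `img`."""
    gh, gw = gains.shape
    h, w = img.shape
    ys = np.linspace(0, gh - 1, h)
    xs = np.linspace(0, gw - 1, w)
    y0 = np.clip(ys.astype(int), 0, gh - 2)
    x0 = np.clip(xs.astype(int), 0, gw - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]

    def bilerp(grid):
        g00 = grid[y0][:, x0]
        g01 = grid[y0][:, x0 + 1]
        g10 = grid[y0 + 1][:, x0]
        g11 = grid[y0 + 1][:, x0 + 1]
        return (g00 * (1 - fy) * (1 - fx) + g01 * (1 - fy) * fx
                + g10 * fy * (1 - fx) + g11 * fy * fx)

    return img * bilerp(gains) + bilerp(offsets)
```

In the actual adjustment, the gain and offset values at the fix points would be the unknowns solved for in the global least squares system.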

  3. Object-Oriented Hierarchy Radiation Consistency for Different Temporal and Different Sensor Images

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-02-01

    Full Text Available In this paper, we propose a novel object-oriented hierarchical radiation consistency method for dense matching of different-temporal and different-sensor data in 3D reconstruction. For different-temporal images, our illumination consistency method addresses both illumination uniformity for a single image and relative illumination normalization for image pairs. In the relative illumination normalization step in particular, singular value equalization and the linear relationship of invariant pixels are combined, used respectively for the initial global illumination normalization and the refined object-oriented illumination normalization. For different-sensor images, we propose a union group sparse method, which improves on the original group sparse model. The different-sensor images are set to a similar smoothness level by applying the same singular-value threshold from the union group matrix. Our method comprehensively considers the factors influencing dense matching of different-temporal and different-sensor stereoscopic image pairs, simultaneously improving illumination consistency and smoothness consistency. The radiation consistency experiments verify the effectiveness and superiority of the proposed method in comparison with two other methods. Moreover, in the dense matching experiment on mixed stereoscopic image pairs, our method shows clear advantages for objects in urban areas.

  4. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    Science.gov (United States)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optics systems, where they provide a large multiplex and can be positioned by Starbugs. A large wound fibre image bundle provides the flexibility to use more sub-apertures in the wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle, and the impact of its structure on wavefront measurement accuracy statistics, are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector rather than by the bundle structure defects. We demonstrate this both in simulation and experimentally, and provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
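The first-moment (centre-of-mass) centroid estimator used in the study is straightforward to sketch; on a clean, well-sampled Gaussian spot it recovers the sub-pixel spot position, which is exactly what the bundle structure defects and detector read noise then degrade:

```python
import numpy as np

def first_moment_centroid(spot):
    """First-moment centroid (y, x) of a sub-aperture spot image."""
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total
```

In practice a background/threshold step usually precedes this sum, since the first moment is unbiased only when the background integrates to zero.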

  5. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
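A minimal 1D GP-regression sketch shows how a sensor-noise estimate enters the interpolation: the noise variance appears on the diagonal of the kernel matrix and down-weights noisy observations. This is the naive O(N³) solve with an assumed RBF kernel, not the paper's O(N^(3/2)) grid-exploiting algorithm:

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_test,
                   length=1.0, signal=1.0, noise=0.1):
    """Posterior mean of a GP with an RBF kernel.

    `noise` is the (estimated) sensor noise standard deviation; larger
    values trust the observations less and smooth more."""
    def k(a, b):
        return signal ** 2 * np.exp(
            -0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return k(x_test, x_train) @ alpha
```

The paper's speedup comes from the observation that on a regular pixel grid the kernel matrix has Kronecker/Toeplitz structure, so the same posterior can be computed without forming or factoring the full N-by-N matrix.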

  6. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    Science.gov (United States)

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distributions of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE) are used as selective elements to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their prevention capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor, minimizing undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+-diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. The proposed bio-image sensor opens the possibility of customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine...... with a multivariate analysis technique from RGB pictures. The color information is also transformed to hue, saturation and intensity components. Both sets of image features are combined with traditional process measurements to obtain an inferential model by partial least squares (PLS) regression. A dynamic PLS model...... oxides (NOx) emission of cement kilns. On-site tests demonstrate improved performance over soft sensors based on conventional process measurements only....
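The PLS regression at the heart of such a soft sensor can be sketched with a minimal single-response NIPALS implementation (a generic PLS1, not the authors' code; the feature stacking of image features next to process measurements is the assumption here):

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal single-response PLS (NIPALS).

    Columns of X stack image features and process measurements; y is the
    inferred variable (e.g. an emission level). Returns coefficients b
    for mean-centred data: y_hat = (X - X.mean(0)) @ b + y.mean()."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        nw = np.linalg.norm(w)
        if nw < 1e-10:              # y residual already explained
            break
        w /= nw
        t = Xk @ w                  # scores
        tt = t @ t
        p = Xk.T @ t / tt           # X loadings
        q = (yk @ t) / tt           # y loading
        Xk = Xk - np.outer(t, p)    # deflate X
        yk = yk - q * t             # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))
```

Choosing `n_components` by cross-validation is what lets the model cope with the strong collinearity between image features and conventional measurements.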

  8. Luminescence imaging of water during proton-beam irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Nagoya 461-8673 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Nagoya 462-8508 (Japan)

    2015-11-15

    Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.
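
    Once the luminescence image is summed across the beam axis, range read-out reduces to a one-dimensional depth profile. A minimal sketch of that step (the distal-80% range definition, the pixel pitch, and the synthetic profile are illustrative assumptions, not taken from the record):

```python
def range_from_profile(profile, pixel_mm, level=0.8):
    """Estimate beam range as the distal depth where the luminescence
    depth profile falls to `level` (e.g. 80%) of its Bragg-peak maximum,
    using linear interpolation between pixels."""
    peak = max(profile)
    ipk = profile.index(peak)
    target = level * peak
    for i in range(ipk, len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a >= target > b:  # crossing lies between pixels i and i+1
            frac = (a - target) / (a - b)
            return (i + frac) * pixel_mm
    return None  # profile never drops below the target level

depth_counts = [1, 2, 3, 4, 5, 10, 2, 1]  # synthetic depth profile (a.u.)
print(range_from_profile(depth_counts, pixel_mm=1.0))  # 5.25
```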

  9. Luminescence imaging of water during proton-beam irradiation for range estimation

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Okumura, Satoshi; Komori, Masataka; Toshito, Toshiyuki

    2015-01-01

    Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy

  10. Luminescence imaging of water during carbon-ion irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Komori, Masataka; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Higashi-ku, Nagoya, Aichi 461-8673 (Japan); Akagi, Takashi; Yamashita, Tomohiro [Hygo Ion Beam Medical Center, Hyogo 679-5165 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Aichi 462-8508 (Japan)

    2016-05-15

    Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure-water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the measured carbon-ion ranges from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom’s luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator showed ∼15 000 times higher light than that of water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.

  11. Luminescence imaging of water during carbon-ion irradiation for range estimation

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Komori, Masataka; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri; Akagi, Takashi; Yamashita, Tomohiro; Toshito, Toshiyuki

    2016-01-01

    Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure-water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the measured carbon-ion ranges from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom’s luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator showed ∼15 000 times higher light than that of water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.

  12. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli; Dun, Xiong; Fu, Qiang; Heidrich, Wolfgang

    2017-01-01

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor

  13. Sensors

    CERN Document Server

    Pigorsch, Enrico

    1997-01-01

    This is the 5th edition of the Metra Martech Directory "EUROPEAN CENTRES OF EXPERTISE - SENSORS." The entries represent a survey of European sensors development. The new edition contains 425 detailed profiles of companies and research institutions in 22 countries. This is reflected in the diversity of sensors development programmes described, from sensors for physical parameters to biosensors and intelligent sensor systems. We do not claim that all European organisations developing sensors are included, but this is a good cross section from an invited list of participants. If you see gaps or omissions, or would like your organisation to be included, please send details. The data base invites the formation of effective joint ventures by identifying and providing access to specific areas in which organisations offer collaboration. This issue is recognised to be of great importance and most entrants include details of collaboration offered and sought. We hope the directory on Sensors will help you to find the ri...

  14. Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, H. [PBI-Dansensor A/S (Denmark); Toft Soerensen, O. [Risoe National Lab., Materials Research Dept. (Denmark)

    1999-10-01

    A new type of ceramic oxygen sensor based on semiconducting oxides was developed in this project. The advantage of these sensors compared to standard ZrO{sub 2} sensors is that they do not require a reference gas and that they can be produced in small sizes. The sensor design and the techniques developed for production of these sensors are judged suitable by the participating industry for a niche production of a new generation of oxygen sensors. Materials research on new oxygen ion conducting conductors, both for applications in oxygen sensors and in fuel cells, was also performed in this project, and finally a new process was developed for fabrication of ceramic tubes by dip-coating. (EHS)

  15. Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Preprint)

    Science.gov (United States)

    2011-11-01

    … Pages 203-206 (2006). Ripka, P., 1992, “Review of Fluxgate Sensors,” Sensors and Actuators, A. 33, Elsevier Sequoia: 129-141. Primdahl, F., 1979, “The Fluxgate Magnetometer,” J. Phys. E: Sci. Instrum., Vol. 12: 241-253. A. Abedi, J. J. Fellenstein, A. J. Lucas, and J. P. Wikswo, Jr., “A superconducting quantum interference device magnetometer system for quantitative analysis and imaging of hidden corrosion activity in aircraft aluminum

  16. Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Postprint)

    Science.gov (United States)

    2011-08-01

    Primdahl, F., 1979, “The Fluxgate Magnetometer,” J. Phys. E: Sci. Instrum., Vol. 12: 241-253. … Issues 1-2, Pages 203-206 (2006). Ripka, P., 1992, “Review of Fluxgate Sensors,” Sensors and Actuators, A. 33, Elsevier Sequoia: 129-141. A. Abedi, J. J. Fellenstein, A. J. Lucas, and J. P. Wikswo, Jr., “A superconducting quantum interference device magnetometer system for quantitative analysis and imaging of hidden corrosion activity in aircraft aluminum

  17. Highly curved image sensors: a practical approach for improved optical performance.

    Science.gov (United States)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-12

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  18. A back-illuminated megapixel CMOS image sensor

    Science.gov (United States)

    Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce

    2005-01-01

    In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also contains integrated timing, control, and bias-generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 μm triple-metal CMOS-imager-compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.
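
    Off-chip CDS as described here amounts to subtracting, pixel by pixel, a frame sampled at reset from a frame sampled after integration, which cancels each pixel's fixed offset (and, with true correlated sampling, the kTC reset noise). A minimal sketch with hypothetical frame values:

```python
def cds(reset_frame, signal_frame):
    # Correlated double sampling: per-pixel difference between the
    # integrated-signal sample and the reset-level sample.
    return [[s - r for s, r in zip(s_row, r_row)]
            for s_row, r_row in zip(signal_frame, reset_frame)]

reset = [[12, 15], [11, 14]]     # reset levels incl. per-pixel offsets (DN)
signal = [[112, 95], [61, 214]]  # levels after integration (DN)
print(cds(reset, signal))  # per-pixel offsets removed
```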

  19. Simulation and measurement of total ionizing dose radiation induced image lag increase in pinned photodiode CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jing [School of Materials Science and Engineering, Xiangtan University, Hunan (China); State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Chen, Wei, E-mail: chenwei@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Wang, Zujun, E-mail: wangzujun@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China)

    2017-06-01

    This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors, based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a Cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly severe with increasing TID. Combined with the TCAD simulation results, we confirm that the junction between the PPD and the transfer gate (TG) is an important region for image lag formation during irradiation. The simulations demonstrate that TID can generate a potential pocket that leads to incomplete charge transfer.

  20. Toward 1-mm depth precision with a solid state full-field range imaging system

    Science.gov (United States)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
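
    In such a heterodyne system, the per-pixel distance comes from the phase of the sampled beat waveform. A minimal sketch of that step (the sample count, modulation frequency, and synthetic waveform are illustrative assumptions, not the authors' implementation):

```python
import math

C = 3.0e8  # speed of light, m/s

def phase_to_distance(samples, f_mod):
    """Recover the beat phase of one pixel from N uniform samples per
    beat period, then convert phase to round-trip distance."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    phase = math.atan2(-q, i) % (2 * math.pi)
    return phase * C / (4 * math.pi * f_mod)  # d = phi * c / (4*pi*f)

# Synthesize one pixel's beat samples for a target at 3.0 m, 10 MHz modulation.
f = 10e6
true_phase = 4 * math.pi * f * 3.0 / C
sig = [5 + 2 * math.cos(2 * math.pi * k / 8 + true_phase) for k in range(8)]
print(phase_to_distance(sig, f))  # ~3.0
```

    Note that the unambiguous range of such a system is c / (2 f_mod), i.e. 15 m at 10 MHz, which is one reason the operating frequency trades off against range.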

  1. Refractometers for different refractive index range by surface plasmon resonance sensors in multimode optical fibers with different metals

    Science.gov (United States)

    Zuppella, P.; Corso, Alain J.; Pelizzo, Maria G.; Cennamo, N.; Zeni, L.

    2016-09-01

    We have realized a plasmonic sensor based on an Au/Pd metal bilayer in a multimode plastic optical fiber. This bilayer, combining gold with a metal having a high imaginary part of the refractive index, shows interesting sensitivity and performance in different refractive index ranges. The development of highly sensitive platforms for high refractive index detection (higher than 1.38) is interesting for chemical applications based on molecularly imprinted polymers as receptors, while the aqueous medium is the refractive index range of biosensors based on bio-receptors. In this work we present an Au/Pd metal bilayer optimized for the 1.38-1.42 refractive index range.

  2. Study on super-resolution three-dimensional range-gated imaging technology

    Science.gov (United States)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and its ability to simultaneously capture target reflectivity information. Based on the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system uses a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with a small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The paper analyzes the calculation of the 3D point cloud based on the triangle method; a 15 m depth slice of the target 3D point cloud was obtained from two frames of images, with a distance precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
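
    The triangle method recovers sub-slice range from the intensity ratio of two gated images whose (idealized triangular) gate profiles overlap: within the shared slice, one image's response falls linearly with range while the other's rises, so their ratio cancels target reflectivity. A minimal sketch under those idealized assumptions (the gate shapes and numeric values are illustrative):

```python
def range_in_slice(i_a, i_b, z_near, depth):
    """i_a: intensity under the gate that decays with range;
    i_b: intensity under the gate that grows with range.
    The ratio cancels reflectivity and encodes the target's position
    inside the [z_near, z_near + depth] slice."""
    return z_near + depth * i_b / (i_a + i_b)

# A target at 510 m inside a 500-515 m slice, reflectivity factor 90:
print(range_in_slice(30.0, 60.0, 500.0, 15.0))  # ~510.0
```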

  3. Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images

    Science.gov (United States)

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.

    2015-01-01

    Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
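
    The HM step itself is the classic CDF-based remapping: each gray level of the low-SS image is sent to the reference level at which the reference image's cumulative histogram first reaches the same quantile. A minimal sketch on toy 3-bit data (not the authors' implementation):

```python
def histogram_match(src, ref, levels=256):
    """Remap `src` gray levels so its histogram approximates `ref`'s."""
    def cdf(img):
        hist = [0] * levels
        for v in img:
            hist[v] += 1
        out, running = [], 0
        for count in hist:
            running += count
            out.append(running / len(img))
        return out

    c_src, c_ref = cdf(src), cdf(ref)
    lut, j = [], 0
    for level in range(levels):
        # smallest reference level whose CDF reaches the source quantile
        while j < levels - 1 and c_ref[j] < c_src[level]:
            j += 1
        lut.append(j)
    return [lut[v] for v in src]

dim = [0, 0, 1, 1, 2, 2, 3, 3]      # low-signal-strength image
bright = [4, 4, 5, 5, 6, 6, 7, 7]   # high-SS reference
print(histogram_match(dim, bright, levels=8))  # [4, 4, 5, 5, 6, 6, 7, 7]
```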

  4. Decoding mobile-phone image sensor rolling shutter effect for visible light communications

    Science.gov (United States)

    Liu, Yang

    2016-01-01

    Optical wireless communication (OWC) using visible lights, also known as visible light communication (VLC), has attracted significant attention recently. As the traditional OWC and VLC receivers (Rxs) are based on PIN photo-diode or avalanche photo-diode, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive since nowadays nearly every person has a smart phone with embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern. Then we describe their processing algorithms one by one. The experimental results show that both the proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.
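
    With a rolling-shutter sensor, rows are exposed sequentially, so an intensity-modulated light source leaves bright/dark stripes across a single frame; averaging each row and thresholding recovers the bit stream. A minimal decoding sketch (the frame contents, rows-per-bit, and fixed global threshold are illustrative assumptions; the paper's CR-enhancement schemes would precede this step to flatten the intensity fluctuations):

```python
def decode_rolling_shutter(frame, rows_per_bit):
    """Recover OOK bits from the stripe pattern of one rolling-shutter frame."""
    row_means = [sum(row) / len(row) for row in frame]
    threshold = sum(row_means) / len(row_means)  # simple global threshold
    bits = []
    for k in range(0, len(row_means), rows_per_bit):
        chunk = row_means[k:k + rows_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

bright, dark = [200] * 8, [50] * 8
frame = [bright] * 4 + [dark] * 4 + [bright] * 4 + [bright] * 4 + [dark] * 4
print(decode_rolling_shutter(frame, rows_per_bit=4))  # [1, 0, 1, 1, 0]
```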

  5. Highly Sensitive and Wide-Dynamic-Range Multichannel Optical-Fiber pH Sensor Based on PWM Technique

    OpenAIRE

    Md. Rajibur Rahaman Khan; Shin-Won Kang

    2016-01-01

    In this study, we propose a highly sensitive multichannel pH sensor that is based on an optical-fiber pulse width modulation (PWM) technique. In the optical-fiber PWM method, the received sensing signal's pulse width changes when the optical-fiber pH sensing element of the array comes into contact with pH buffer solutions. The proposed optical-fiber PWM pH-sensing system offers a linear sensing response over a wide range of pH values from 2 to 12, with a high pH-sensing ability. The...
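
    A linear PWM response means calibration reduces to fitting pulse width against known buffer pH values and inverting the line. A minimal least-squares sketch (the calibration pairs are invented for illustration; real widths depend on the sensing element):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical calibration: (pulse width in microseconds, buffer pH)
widths = [10.0, 20.0, 30.0]
phs = [2.0, 7.0, 12.0]
a, b = fit_line(widths, phs)
print(a + b * 24.0)  # pH estimate for a 24 us pulse width
```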

  6. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    Science.gov (United States)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation, utilising techniques such as active edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick sensors, with a last-pixel-to-edge distance of either 50 or 100 μm, fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance of the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the 5th pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge-pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm), subject to biasing conditions. Imaging characteristics of all sensor types investigated are stable over time, and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.
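
    The flat-field correction referred to divides each raw pixel by the sensor's measured response to uniform illumination, rescaled by the flat's mean so overall counts are preserved. A minimal sketch (the values are illustrative):

```python
def flat_field_correct(raw, flat):
    """Divide out per-pixel gain; rescale by the flat's mean so the
    overall signal level is preserved."""
    h, w = len(flat), len(flat[0])
    mean_flat = sum(sum(row) for row in flat) / (h * w)
    return [[raw[y][x] * mean_flat / flat[y][x] for x in range(w)]
            for y in range(h)]

flat = [[1.0, 2.0], [1.0, 2.0]]     # uniform-illumination response
raw = [[10.0, 20.0], [30.0, 40.0]]  # image to correct
print(flat_field_correct(raw, flat))
```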

  7. A multimodal image sensor system for identifying water stress in grapevines

    Science.gov (United States)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multimodal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The system was equipped with a 3CCD camera (three channels: R, G, and IR), and can capture and analyze the grape canopy from its reflectance features to identify the different water stress levels. The core technology of this multimodal sensor system could further be used in a decision support system that combines multimodal sensory data to improve plant stress detection and identify the causes of stress. The images taken by the sensor cover the near-infrared, green, and red spectral bands. From the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
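
    From R/G/IR reflectance channels, one common first-cut indicator (an assumption here; the paper computes its own color-space and reflectance features) is a normalized difference index between the infrared and red channels, which tracks canopy vigor and hence water status:

```python
def ndvi_like(ir, red, eps=1e-9):
    """Per-pixel normalized difference (IR - R) / (IR + R)."""
    return [[(i - r) / (i + r + eps) for i, r in zip(irow, rrow)]
            for irow, rrow in zip(ir, red)]

ir = [[0.6, 0.5]]
red = [[0.2, 0.5]]
print(ndvi_like(ir, red))  # vigorous canopy -> values nearer 1
```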

  8. CMOS image sensor-based immunodetection by refractive-index change.

    Science.gov (United States)

    Devadhasan, Jasmine P; Kim, Sanghyo

    2012-01-01

    A complementary metal oxide semiconductor (CMOS) image sensor is an intriguing technology for the development of novel biosensors. However, the mechanism by which a CMOS image sensor detects the antigen-antibody (Ag-Ab) interaction at the nanoscale has so far remained ambiguous, and understanding it is necessary for achieving point-of-care diagnostic devices. This research demonstrates a CMOS image sensor-based analysis of cardiovascular disease markers, such as C-reactive protein (CRP) and troponin I, through Ag-Ab interactions on indium nanoparticle (InNP) substrates, detected by simple photon count variation. The developed sensor can detect proteins even at fg/mL concentrations under ordinary room light. Possible mechanisms, such as dielectric-constant and refractive-index changes, have been studied and proposed; a dramatic change in the refractive index after protein adsorption on the InNP substrate was observed to be the predominant factor in the CMOS image sensor-based immunoassay.

  9. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    Science.gov (United States)

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
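
    The "simple convolution" decoding works because a URA mask's cyclic autocorrelation is a delta function plus a flat pedestal. A minimal 1-D sketch using a length-11 quadratic-residue array (the array length and scene are illustrative; the real device scans a 2-D mask):

```python
# Length-11 quadratic-residue URA: open slits at the quadratic residues mod 11.
P = 11
qr = {(i * i) % P for i in range(1, P)}
mask = [1 if i in qr else 0 for i in range(P)]   # a (11, 5, 2) difference set
decode = [2 * m - 1 for m in mask]               # balanced +1/-1 decoding array

scene = [0, 0, 3, 0, 7, 0, 0, 1, 0, 0, 0]       # unknown 1-D intensity profile

# Scanning: at each lateral shift s, the single photodiode integrates the
# scene through the shifted mask.
meas = [sum(scene[i] * mask[(i + s) % P] for i in range(P)) for s in range(P)]

# Decoding: circular correlation with the decode array. For this mask the
# result is 2(k - lambda)*scene - total, with k = 5 and lambda = 2.
raw = [sum(meas[s] * decode[(j + s) % P] for s in range(P)) for j in range(P)]
total = sum(scene)
recovered = [(r + total) // 6 for r in raw]
print(recovered == scene)  # exact recovery
```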

  10. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    Science.gov (United States)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application, and the development of subsequent satellites, we evaluated the dynamic range by calculating statistics (maximum, minimum, average, and standard deviation) of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. These four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2, to evaluate each camera's dynamic-range consistency, and for each latitudinal overlap of PMS1 and PMS2, to evaluate the dynamic-range consistency between the two cameras. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values, containing rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the consistency within a single camera is better than that between the dual cameras.

  11. Kilovoltage energy imaging with a radiotherapy linac with a continuously variable energy range.

    Science.gov (United States)

    Roberts, D A; Hansen, V N; Thompson, M G; Poludniowski, G; Niven, A; Seco, J; Evans, P M

    2012-03-01

    In this paper, the effect on image quality of significantly reducing the primary electron energy of a radiotherapy accelerator is investigated using a novel waveguide test piece. The waveguide contains a novel variable coupling device (rotovane), allowing for a wide continuously variable energy range of between 1.4 and 9 MeV suitable for both imaging and therapy. Imaging at linac accelerating potentials close to 1 MV was investigated experimentally and via Monte Carlo simulations. An imaging beam line was designed, and planar and cone beam computed tomography images were obtained to enable qualitative and quantitative comparisons with kilovoltage and megavoltage imaging systems. The imaging beam had an electron energy of 1.4 MeV, which was incident on a water cooled electron window consisting of stainless steel, a 5 mm carbon electron absorber and 2.5 mm aluminium filtration. Images were acquired with an amorphous silicon detector sensitive to diagnostic x-ray energies. The x-ray beam had an average energy of 220 keV and half value layer of 5.9 mm of copper. Cone beam CT images with the same contrast to noise ratio as a gantry mounted kilovoltage imaging system were obtained with doses as low as 2 cGy. This dose is equivalent to a single 6 MV portal image. While 12 times higher than a 100 kVp CBCT system (Elekta XVI), this dose is 140 times lower than a 6 MV cone beam imaging system and 6 times lower than previously published LowZ imaging beams operating at higher (4-5 MeV) energies. The novel coupling device provides for a wide range of electron energies that are suitable for kilovoltage quality imaging and therapy. The imaging system provides high contrast images from the therapy portal at low dose, approaching that of gantry mounted kilovoltage x-ray systems. Additionally, the system provides low dose imaging directly from the therapy portal, potentially allowing for target tracking during radiotherapy treatment. There is the scope with such a tuneable system

  12. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    Directory of Open Access Journals (Sweden)

    Keunyeol Park

    2018-02-01

    Full Text Available This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frame/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  13. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    Science.gov (United States)

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frame/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
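    The abstract does not spell out the gate-level arrangement of the XOR edge detection block, so the following is only a plausible reading, sketched in NumPy: binarize each pixel to a single bit, then XOR it with its right and lower neighbours so that binary transitions (edges) come out as 1-bits. The function name and threshold are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def xor_edge_map(img, threshold=128):
        """Binarize an 8-bit image, then XOR each pixel with its right and
        lower neighbours to flag intensity transitions (edges) as 1-bits."""
        b = (np.asarray(img) >= threshold).astype(np.uint8)  # single-bit image
        h = np.zeros_like(b)
        v = np.zeros_like(b)
        h[:, :-1] = b[:, :-1] ^ b[:, 1:]   # horizontal transitions
        v[:-1, :] = b[:-1, :] ^ b[1:, :]   # vertical transitions
        return h | v                       # 1 wherever a binary edge exists

    # A dark square on a bright background yields edges only at the boundary.
    img = np.full((8, 8), 200, np.uint8)
    img[2:6, 2:6] = 20
    edges = xor_edge_map(img)
    ```

    Because the output is one bit per pixel, no multi-bit ADC conversion is needed per sample, which is where the frame-rate and power savings claimed in the abstract come from.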

  14. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  15. A CMOS Image Sensor With In-Pixel Buried-Channel Source Follower and Optimized Row Selector

    NARCIS (Netherlands)

    Chen, Y.; Wang, X.; Mierop, A.J.; Theuwissen, A.J.P.

    2009-01-01

    This paper presents a CMOS imager sensor with pinned-photodiode 4T active pixels which use in-pixel buried-channel source followers (SFs) and optimized row selectors. The test sensor has been fabricated in a 0.18-mum CMOS process. The sensor characterization was carried out successfully, and the

  16. CMOS active pixel sensor type imaging system on a chip

    Science.gov (United States)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  17. Synthetic SAR Image Generation using Sensor, Terrain and Target Models

    DEFF Research Database (Denmark)

    Kusk, Anders; Abulaitijiang, Adili; Dall, Jørgen

    2016-01-01

    A tool to generate synthetic SAR images of objects set on a clutter background is described. The purpose is to generate images for training Automatic Target Recognition and Identification algorithms. The tool employs a commercial electromagnetic simulation program to calculate radar cross section...

  18. Continued development of a portable widefield hyperspectral imaging (HSI) sensor for standoff detection of explosive, chemical, and narcotic residues

    Science.gov (United States)

    Nelson, Matthew P.; Gardner, Charles W.; Klueva, Oksana; Tomas, David

    2014-05-01

    Passive, standoff detection of chemical, explosive and narcotic threats employing widefield, shortwave infrared (SWIR) hyperspectral imaging (HSI) continues to gain acceptance in defense and security fields. A robust and user-friendly portable platform with such capabilities increases the effectiveness of locating and identifying threats while reducing risks to personnel. In 2013 ChemImage Sensor Systems (CISS) introduced Aperio, a handheld sensor, using real-time SWIR HSI for wide area surveillance and standoff detection of explosives, chemical threats, and narcotics. That SWIR HSI system employed a liquid-crystal tunable filter for real-time automated detection and display of threats. In these proceedings, we report on a next generation device called VeroVision™, which incorporates an improved optical design that enhances detection performance at greater standoff distances with increased sensitivity and detection speed. A tripod mounted sensor head unit (SHU) with an optional motorized pan-tilt unit (PTU) is available for precision pointing and sensor stabilization. This option supports longer standoff range applications, such as checkpoint vehicle inspection, where speed and precision are necessary. Basic software has been extended to include advanced algorithms providing multi-target display functionality, automatic threshold determination, and an automated detection recipe capability for expanding the library as new threats emerge. Test data collected during development are presented in support of the targeted applications of VeroVision™: screening residue and bulk levels of explosives and drugs on vehicles and personnel at checkpoints, as well as various applications in other secure areas. Additionally, we highlight a forensic application of the technology for assisting forensic

  19. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also deal with HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
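    As a rough illustration of the backward-compatible idea (not the paper's actual architecture), the sketch below splits an HDR luminance map into a tone-mapped base layer, which a legacy JPEG decoder would display, and a log-ratio residual layer, which an HDR-aware decoder recombines. The simple gamma tone map stands in for a real tone-mapping operator, and no actual JPEG entropy coding is performed here.

    ```python
    import numpy as np

    def encode_layers(hdr, gamma=2.2):
        """Split an HDR luminance map into a backward-compatible base layer
        (tone-mapped LDR, what a legacy JPEG decoder would show) and a
        log-ratio residual layer used by HDR-aware decoders."""
        ldr = np.clip(hdr / hdr.max(), 0, 1) ** (1 / gamma)  # toy tone map
        residual = np.log((hdr + 1e-6) / (ldr + 1e-6))       # HDR/LDR ratio
        return ldr, residual

    def decode_hdr(ldr, residual):
        """HDR-aware decode: recombine the base and residual layers."""
        return (ldr + 1e-6) * np.exp(residual) - 1e-6

    hdr = np.array([[0.01, 0.1], [1.0, 50.0]])  # spans ~4 orders of magnitude
    ldr, res = encode_layers(hdr)
    rec = decode_hdr(ldr, res)
    ```

    A legacy decoder reads only `ldr` (all values in [0, 1]); an HDR-aware decoder recovers the original dynamic range from the residual, which in a real codec would be compressed and tucked into a JPEG application marker.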

  20. Development of a 750x750 pixels CMOS imager sensor for tracking applications

    Science.gov (United States)

    Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali

    2017-11-01

    Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixels CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. This detector, featuring a smart architecture that enables very simple and powerful operation, is built using the AMIS 0.5 μm CMOS technology. It contains 750x750 rectangular pixels with a 20 μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function that makes the device operation easier by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of dark level from masked lines, direct access to the windows of interest… A temperature probe is also integrated within the CMOS chip, allowing a very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed. The major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non Uniformity, quantum efficiency, Modulation Transfer Function, intra-pixel scanning. The characterization tests are detailed in the paper. Co-60 and proton irradiation tests have also been carried out on the image sensor and the results are presented. The specific features of the 750x750 image sensor such as low power CMOS design (3.3 V, power consumption < 100 mW), natural windowing (that allows efficient and robust tracking algorithms), simple proximity electronics (because of the on

  1. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    Directory of Open Access Journals (Sweden)

    Haoting Liu

    2017-02-01

    Full Text Available An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as photosensitive resistance sensor-based or infrared sensor-based systems, the imaging sensor can realize a finer perception of the environmental light; thus it can guide a more precise lighting control. Before this system works, first, a large amount of typical imaging lighting data for the desk application is accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of these objective LEEMs can be obtained. Third, both a single LEEM-based control and a multiple LEEMs-based control are developed to realize optimal luminance tuning. When this system works, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single LEEM-based or the multiple LEEMs-based control can be implemented to obtain an optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to environment luminance changes.

  2. Accurate dew-point measurement over a wide temperature range using a quartz crystal microbalance dew-point sensor

    International Nuclear Information System (INIS)

    Kwon, Su-Yong; Kim, Jong-Chul; Choi, Buyng-Il

    2008-01-01

    Quartz crystal microbalance (QCM) dew-point sensors are based on frequency measurement, and so have fast response time, high sensitivity and high accuracy. Recently, we have reported that they have the very convenient attribute of being able to distinguish between supercooled dew and frost from a single scan through the resonant frequency of the quartz resonator as a function of the temperature. In addition to these advantages, by using three different types of heat sinks, we have developed a QCM dew/frost-point sensor with a very wide working temperature range (−90 °C to 15 °C). The temperature of the quartz surface can be obtained effectively by measuring the temperature of the quartz crystal holder and using temperature compensation curves (which showed a high level of repeatability and reproducibility). The measured dew/frost points showed very good agreement with reference values and were within ±0.1 °C over the whole temperature range.

  3. Wide-range frequency selectivity in an acoustic sensor fabricated using a microbeam array with non-uniform thickness

    International Nuclear Information System (INIS)

    Shintaku, Hirofumi; Kotera, Hidetoshi; Kobayashi, Takayuki; Zusho, Kazuki; Kawano, Satoyuki

    2013-01-01

    In this study, we have demonstrated the fabrication of a microbeam array (MBA) with various thicknesses and investigated its suitability for an acoustic sensor with wide-range frequency selectivity. For this, an MBA composed of 64 beams, with thicknesses varying from 2.99 to 142 µm, was fabricated by using single gray-scale lithography and a thick negative photoresist. The vibration of the beams in air was measured using a laser Doppler vibrometer; the resonant frequencies of the beams were measured to be from 11.5 to 290 kHz. Lastly, the frequency range of the MBA with non-uniform thickness was 10.9 times that of the MBA with uniform thickness. (paper)

  4. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially-adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including those sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
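    The core PCA step can be sketched as follows. This is a simplified, hypothetical version that operates on generic vectorised patches rather than on the interlaced CFA mosaic, and the Wiener-style shrinkage is one common choice, not necessarily the paper's exact rule.

    ```python
    import numpy as np

    def pca_denoise_patches(patches, noise_var):
        """Denoise a stack of vectorised patches (n_patches x dim) by PCA:
        project onto principal components and shrink each coefficient by a
        Wiener-like factor estimated from the signal/noise energy split."""
        mean = patches.mean(axis=0)
        X = patches - mean
        cov = X.T @ X / len(X)
        eigval, eigvec = np.linalg.eigh(cov)            # ascending eigenvalues
        coeffs = X @ eigvec
        signal_var = np.maximum(eigval - noise_var, 0)  # noise-corrected energy
        shrink = signal_var / (signal_var + noise_var)  # Wiener-style factor
        return coeffs * shrink @ eigvec.T + mean

    rng = np.random.default_rng(0)
    clean = np.outer(rng.standard_normal(500), np.ones(16))  # rank-1 "signal"
    noisy = clean + rng.standard_normal((500, 16)) * 0.5
    denoised = pca_denoise_patches(noisy, noise_var=0.25)
    ```

    Components whose eigenvalues barely exceed the noise variance are shrunk towards zero, so noise is suppressed while the dominant (signal) components survive largely intact.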

  5. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the date rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  6. Target recognition of ladar range images using even-order Zernike moments.

    Science.gov (United States)

    Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi

    2012-11-01

    Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target of the range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target of the range image. It is found that the rotation invariance and classification performance of the even-order ZMs are both better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
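    A minimal NumPy sketch of the property that makes Zernike moments attractive for arbitrary-azimuth recognition: rotating the image about its centre multiplies Z_{n,m} by a pure phase, so the magnitude |Z_{n,m}| is rotation invariant. This illustrates only the moment computation on the unit disk, not the authors' serial-parallel BPNN classifier.

    ```python
    import numpy as np
    from math import factorial

    def zernike_moment(img, n, m):
        """Zernike moment Z_{n,m} of a square image mapped onto the unit disk.
        |Z_{n,m}| is invariant to rotation of the image about its centre."""
        N = img.shape[0]
        y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        mask = rho <= 1.0
        # Radial polynomial R_{n,|m|}(rho)
        R = np.zeros_like(rho)
        for k in range((n - abs(m)) // 2 + 1):
            c = ((-1) ** k * factorial(n - k) /
                 (factorial(k) * factorial((n + abs(m)) // 2 - k)
                               * factorial((n - abs(m)) // 2 - k)))
            R += c * rho ** (n - 2 * k)
        basis = R * np.exp(-1j * m * theta) * mask
        return (n + 1) / np.pi * np.sum(img * basis)

    rng = np.random.default_rng(1)
    img = rng.random((65, 65))
    z     = abs(zernike_moment(img, 4, 2))           # even order n = 4
    z_rot = abs(zernike_moment(np.rot90(img), 4, 2))
    ```

    A 90° rotation via `np.rot90` maps the symmetric sampling grid exactly onto itself, so the two magnitudes agree to floating-point precision.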

  7. Improvement of range spatial resolution of medical ultrasound imaging by element-domain signal processing

    Science.gov (United States)

    Hasegawa, Hideyuki

    2017-07-01

    The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).

  8. Positron range in PET imaging: an alternative approach for assessing and correcting the blurring

    DEFF Research Database (Denmark)

    Jødal, Lars; Le Loirec, Cindy; Champion, Christophe

    2012-01-01

    Background: Positron range impairs resolution in PET imaging, especially for high-energy emitters and for small-animal PET. De-blurring in image reconstruction is possible if the blurring distribution is known. Further, the percentage of annihilation events within a given distance from the point...... on allowed-decay isotopes. Methods: It is argued that blurring at the detection level should not be described by positron range r, but instead the 2D-projected distance δ (equal to the closest distance between decay and line-of-response). To determine these 2D distributions, results from a dedicated positron...... is important for improved resolution in PET imaging. Relevant distributions for positron range have been derived for seven isotopes. Distributions for other allowed-decay isotopes may be estimated with the above formulas....
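    The 2D-projected distance δ that the authors argue for is the closest distance between the decay point and the line of response (LOR) through the annihilation point. A small sketch of that geometry (the coordinates in mm are illustrative):

    ```python
    import numpy as np

    def projected_distance(decay, annihilation, lor_dir):
        """2D-projected positron range delta: the closest distance between
        the decay point and the line of response (the line through the
        annihilation point along the back-to-back 511 keV photons)."""
        d = np.asarray(lor_dir, float)
        d /= np.linalg.norm(d)
        r = np.asarray(decay, float) - np.asarray(annihilation, float)
        return np.linalg.norm(r - (r @ d) * d)  # reject the along-LOR component

    # Decay at the origin; the positron travelled 3 mm in x and 4 mm in y
    # before annihilating. The LOR runs along y, so only the 3 mm x-offset
    # blurs the image, even though the full 3D range is 5 mm.
    delta = projected_distance([0, 0, 0], [3.0, 4.0, 0.0], [0, 1, 0])
    ```

    This is why δ, not the 3D range r, is the quantity that matters at the detection level: displacement along the LOR itself does not move the measured line.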

  9. High frame rate multi-resonance imaging refractometry with distributed feedback dye laser sensor

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2015-01-01

    imaging refractometry without moving parts is presented. DFB dye lasers are low-cost and highly sensitive refractive index sensors. The unique multi-wavelength DFB laser structure presented here comprises several areas with different grating periods. Imaging in two dimensions of space is enabled...... by analyzing laser light from all areas in parallel with an imaging spectrometer. With this multi-resonance imaging refractometry method, the spatial position in one direction is identified from the horizontal, i.e., spectral position of the multiple laser lines which is obtained from the spectrometer charged...

  10. White-light full-field OCT resolution improvement by image sensor colour balance adjustment: numerical simulation

    International Nuclear Information System (INIS)

    Kalyanov, A L; Lychagov, V V; Ryabukho, V P; Smirnov, I V

    2012-01-01

    The possibility of improving white-light full-field optical coherence tomography (OCT) resolution by image sensor colour balance tuning is shown numerically. We calculated the full-width at half-maximum (FWHM) of a coherence pulse registered by a silicon colour image sensor under various colour balance settings. The calculations were made for both a halogen lamp and white LED sources. The results show that the interference pulse width can be reduced by the proper choice of colour balance coefficients. The reduction is up to 18%, as compared with a colour image sensor with regular settings, and up to 20%, as compared with a monochrome sensor. (paper)

  11. Optical Inspection In Hostile Industrial Environments: Single-Sensor VS. Imaging Methods

    Science.gov (United States)

    Cielo, P.; Dufour, M.; Sokalski, A.

    1988-11-01

    On-line and unsupervised industrial inspection for quality control and process monitoring is increasingly required in the modern automated factory. Optical techniques are particularly well suited to industrial inspection in hostile environments because of their noncontact nature, fast response time and imaging capabilities. Optical sensors can be used for remote inspection of high temperature products or otherwise inaccessible parts, provided they are in a line-of-sight relation with the sensor. Moreover, optical sensors are much easier to adapt to a variety of part shapes, position or orientation and conveyor speeds as compared to contact-based sensors. This is an important requirement in a flexible automation environment. A number of choices are possible in the design of optical inspection systems. General-purpose two-dimensional (2-D) or three-dimensional (3-D) imaging techniques have advanced very rapidly in the last years thanks to a substantial research effort as well as to the availability of increasingly powerful and affordable hardware and software. Imaging can be realized using 2-D arrays or simpler one-dimensional (1-D) line-array detectors. Alternatively, dedicated single-spot sensors require a smaller amount of data processing and often lead to robust sensors which are particularly appropriate to on-line operation in hostile industrial environments. Many specialists now feel that dedicated sensors or clusters of sensors are often more effective for specific industrial automation and control tasks, at least in the short run. This paper will discuss optomechanical and electro-optical choices with reference to the design of a number of on-line inspection sensors which have been recently developed at our institute. Case studies will include real-time surface roughness evaluation on polymer cables extruded at high speed, surface characterization of hot-rolled or galvanized-steel sheets, temperature evaluation and pinhole detection in aluminum foil, multi

  12. Model-based restoration using light vein for range-gated imaging systems.

    Science.gov (United States)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method especially for range-gated imaging systems. The degradation process is divided into two parts: the static part and the dynamic part. For the static part, we establish the physical model of the imaging system according to laser transmission theory, and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the fuzzy parameter of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with the state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieve better performance in a range-gated imaging system.

  13. High resolution axicon-based endoscopic FD OCT imaging with a large depth range

    Science.gov (United States)

    Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.

    2010-02-01

    Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate the varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images of biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and > 4 mm depth range using a custom designed probe.

  14. Change Detection with GRASS GIS – Comparison of images taken by different sensors

    Directory of Open Access Journals (Sweden)

    Michael Fuchs

    2009-04-01

    Full Text Available Images of American military reconnaissance satellites of the Sixties (CORONA) in combination with modern sensors (SPOT, QuickBird) were used for detection of changes in land use. The pilot area was located about 40 km northwest of Yemen's capital Sana'a and covered approximately 100 km². To produce comparable layers from images of distinctly different sources, the moving window technique was applied, using the diversity parameter. The resulting difference layers reveal plausible and interpretable change patterns, particularly in areas where urban sprawl occurs. The comparison of CORONA images with images taken by modern sensors proved to be an additional tool to visualize and quantify major changes in land use. The results should serve as additional basic data, e.g. in regional planning. The computation sequence was executed in GRASS GIS.
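    The "moving window technique ... using the diversity parameter" corresponds to GRASS's `r.neighbors` with `method=diversity`: each output cell counts the distinct values in the surrounding window. A plain NumPy sketch of that layer (the window size and edge handling are assumptions, not details from the article):

    ```python
    import numpy as np

    def window_diversity(raster, size=3):
        """Moving-window 'diversity' layer as in GRASS r.neighbors: each
        output cell holds the number of distinct values inside the
        size x size window (clipped at the raster edges)."""
        h, w = raster.shape
        out = np.zeros((h, w), int)
        r = size // 2
        for i in range(h):
            for j in range(w):
                win = raster[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                out[i, j] = len(np.unique(win))
        return out

    # Two homogeneous land-use classes: diversity is 1 inside each patch
    # and 2 along the boundary, which is where change and edges show up.
    classes = np.array([[1, 1, 2, 2],
                        [1, 1, 2, 2],
                        [1, 1, 2, 2]])
    div = window_diversity(classes)
    ```

    Because the count of distinct classes is insensitive to the absolute radiometry of each source, layers derived this way from CORONA and from modern sensors become comparable, which is the point of the approach.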

  15. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
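    A sketch of the three coding stages in software, with a 3x3 box filter standing in for the on-chip resistive network and a fixed gain in place of the entropy-driven feedback control described in the abstract:

    ```python
    import numpy as np

    def bio_inspired_encode(intensity, gain=1.0):
        """Three coding stages: logarithmic compression of intensity,
        subtraction of the local average (a 3x3 box filter standing in for
        the resistive network), and an output gain stage."""
        log_img = np.log1p(intensity)                 # logarithmic transform
        padded = np.pad(log_img, 1, mode='edge')
        local_avg = sum(padded[i:i + log_img.shape[0], j:j + log_img.shape[1]]
                        for i in range(3) for j in range(3)) / 9.0
        return gain * (log_img - local_avg)           # local-contrast signal

    # A 100x illumination change shifts the log output by a constant, which
    # the local-average subtraction removes: encoded contrast is unchanged.
    scene = np.array([[1.0, 2.0, 4.0]] * 3)
    a = bio_inspired_encode(1e3 * scene)
    b = bio_inspired_encode(1e5 * scene)
    ```

    Because the logarithm turns a global illumination change into an additive constant, the local-average subtraction cancels it, which is why the same scene encoded under two illuminations comes out nearly identical.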

  16. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches.

  17. [Influence of human body target's spectral characteristics on visual range of low light level image intensifiers].

    Science.gov (United States)

    Zhang, Jun-Ju; Yang, Wen-Bin; Xu, Hui; Liu, Lei; Tao, Yuan-Yaun

    2013-11-01

    To study the effect of different human targets' spectral reflectance characteristics on the visual range of low light level (LLL) image intensifiers, based on the spectral characteristics of the night-sky radiation and the spectral reflectance coefficients of common clothes, we established an equation for the human body target's spectral reflectance distribution, analyzed the spectral reflectance characteristics of human targets wearing clothes of different colors and materials, and, from the actual detection range equation of the LLL image intensifier, discussed the detection capability of the LLL image intensifier for different human targets. The study shows that the effect of a human target's spectral reflectance characteristics on LLL image intensifier range is mainly reflected in the average reflectivity ρ̄ and the initial contrast C0 between the target and the background. The reflectance coefficient and spectral reflection intensity of cotton clothes are higher than those of polyester clothes, and the detection capability of the LLL image intensifier is stronger for human targets wearing cotton clothes. Experimental results show that LLL image intensifiers have longer visual ranges for targets wearing cotton clothes than for targets wearing polyester clothes of the same color, and longer visual ranges for targets wearing light-colored clothes than for targets wearing dark-colored clothes. In full moon illumination conditions, LLL image intensifiers are more sensitive to the clothes' material.

  18. Measuring the Contractile Response of Isolated Tissue Using an Image Sensor

    Directory of Open Access Journals (Sweden)

    David Díaz-Martín

    2015-04-01

    Full Text Available Isometric or isotonic transducers have traditionally been used to study the contractile/relaxation effects of drugs on isolated tissues. However, these mechanical sensors are expensive and delicate, and they are associated with certain disadvantages when performing experiments in the laboratory. In this paper, a method that uses an image sensor to measure the contractile effect of drugs on blood vessel rings and other luminal organs is presented. The new method is based on an image-processing algorithm, and it provides a fast, easy and inexpensive way to analyze the effects of such drugs. In our tests, we obtained dose-response curves from rat aorta rings that are equivalent to those achieved with classical mechanical transducers.
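
    As a rough illustration of the image-based principle (not the authors' actual algorithm, whose details are not given here), the lumen area can be estimated by counting pixels darker than a threshold in each frame, and the contraction expressed relative to a baseline frame; the function names and the threshold value are hypothetical:

    ```python
    def lumen_area(frame, threshold=128):
        """Count pixels darker than `threshold`, taken as the vessel lumen.

        `frame` is a row-major list of lists of 8-bit grayscale values.
        """
        return sum(1 for row in frame for px in row if px < threshold)

    def contraction_curve(frames, threshold=128):
        """Fractional reduction of lumen area relative to the first frame."""
        baseline = lumen_area(frames[0], threshold)
        return [1.0 - lumen_area(f, threshold) / baseline for f in frames]
    ```

    A dose-response curve would then be built by pairing each drug concentration with the contraction value of the corresponding frame.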

  19. Low-Power Smart Imagers for Vision-Enabled Sensor Networks

    CERN Document Server

    Fernández-Berni, Jorge; Rodríguez-Vázquez, Ángel

    2012-01-01

    This book presents a comprehensive, systematic approach to the development of vision system architectures that employ sensory-processing concurrency and parallel processing to meet the autonomy challenges posed by a variety of safety and surveillance applications. Coverage includes a thorough analysis of resistive diffusion networks embedded within an image sensor array. This analysis supports a systematic approach to the design of spatial image filters and their implementation as vision chips in CMOS technology. The book also addresses system-level considerations pertaining to the embedding of these vision chips into vision-enabled wireless sensor networks. Describes a system-level approach for designing vision devices and embedding them into vision-enabled, wireless sensor networks; Surveys state-of-the-art, vision-enabled WSN nodes; Includes details of specifications and challenges of vision-enabled WSNs; Explains architectures for low-energy CMOS vision chips with embedded, programmable spatial f...

  20. Real-time biochemical sensor based on Raman scattering with CMOS contact imaging.

    Science.gov (United States)

    Muyun Cao; Yuhua Li; Yadid-Pecht, Orly

    2015-08-01

    This work presents a biochemical sensor based on Raman scattering with complementary metal-oxide-semiconductor (CMOS) contact imaging, designed to detect the concentration of solutions. The system is built with a laser diode, an optical filter, a sample holder and a commercial CMOS sensor, and its output is analyzed by an image processing program. The system provides instant measurements with a resolution of 0.2 to 0.4 Mol. This low-cost, easy-to-operate, small-scale system is useful in chemical, biomedical and environmental labs for quantitative biochemical concentration detection, with results reported to be comparable to those of a high-cost commercial spectrometer.

  1. Area-efficient readout with 14-bit SAR-ADC for CMOS image sensors

    Directory of Open Access Journals (Sweden)

    Aziza Sassi Ben

    2016-01-01

    Full Text Available This paper proposes a readout design for CMOS image sensors. It has been squeezed into a 7.5 µm column pitch in a 0.28 µm 1P3M technology. The ADC performs one 14-bit conversion in only 1.5 µs and targets a theoretical DNL of about +1.3/-1 LSB at 14-bit accuracy. Correlated Double Sampling (CDS) is performed in both the analog and digital domains to preserve image quality.

  2. Automatic classification of unexploded ordnance applied to Spencer Range live site for 5x5 TEMTADS sensor

    Science.gov (United States)

    Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2013-06-01

    This paper details methods for automatic classification of Unexploded Ordnance (UXO) as applied to sensor data from the Spencer Range live site, a former military weapons range in Spencer, Tennessee. Electromagnetic Induction (EMI) sensing is carried out using the 5x5 Time-domain Electromagnetic Multi-sensor Towed Array Detection System (5x5 TEMTADS), which has 25 receivers and 25 co-located transmitters. Each transmitter is activated sequentially, followed by measurement of the magnetic field in all 25 receivers from 100 microseconds to 25 milliseconds. From these data, extrinsic and intrinsic target parameters are extracted using the Differential Evolution (DE) algorithm and the Ortho-Normalized Volume Magnetic Source (ONVMS) technique, respectively. Namely, the inversion provides x, y, and z locations and a time series of the total ONVMS principal eigenvalues, which are intrinsic properties of the objects. The eigenvalues are fit to a power-decay empirical model, the Pasion-Oldenburg model, providing three coefficients (k, b, and g) for each object. The objects are grouped geometrically into variably sized clusters in the k-b-g space using clustering algorithms. Clusters matching a priori characteristics are identified as Targets of Interest (TOI), and larger clusters are automatically subclustered. Ground truths (GT) at the center of each class are requested, and probability density functions are created for clusters containing centroid TOI using a Gaussian Mixture Model (GMM). The probability functions are then applied to all remaining anomalies, and all objects with UXO probability above a chosen threshold are placed in a ranked dig list. This prioritized list is scored, and the results are demonstrated and analyzed.
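
    The power-decay step can be sketched as follows. Under the simplifying assumption that the exponential term exp(-t/g) of the Pasion-Oldenburg model is negligible over the fitted time gates (large g), log M(t) = log k - b log t is linear in log t, so k and b follow from ordinary least squares on the logs. This is an illustrative two-parameter reduction, not the project's full three-parameter fit:

    ```python
    import math

    def pasion_oldenburg(t, k, b, g):
        """Empirical decay model M(t) = k * t**(-b) * exp(-t / g)."""
        return k * t ** (-b) * math.exp(-t / g)

    def fit_k_b(times, amplitudes):
        """Least-squares fit of log M = log k - b log t (g assumed large)."""
        xs = [math.log(t) for t in times]
        ys = [math.log(m) for m in amplitudes]
        n = len(xs)
        xbar, ybar = sum(xs) / n, sum(ys) / n
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
        return math.exp(ybar - slope * xbar), -slope  # (k, b)
    ```

    Each anomaly's fitted (k, b, g) triple is then a point in the feature space that the clustering and GMM stages operate on.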

  3. Photoacoustic imaging of blood vessels with a double-ring sensor featuring a narrow angular aperture

    NARCIS (Netherlands)

    Kolkman, R.G.M.; Hondebrink, Erwin; Steenbergen, Wiendelt; van Leeuwen, Ton; de Mul, F.F.M.

    2004-01-01

    A photoacoustic double-ring sensor, featuring a narrow angular aperture, is developed for laser-induced photoacoustic imaging of blood vessels. An integrated optical fiber enables reflection-mode detection of ultrasonic waves. By using the cross-correlation between the signals detected by the two

  4. A Portable Colloidal Gold Strip Sensor for Clenbuterol and Ractopamine Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Yi Guo

    2013-01-01

    Full Text Available A portable colloidal gold strip sensor for detecting clenbuterol and ractopamine has been developed using image processing technology, and a novel strip reader has been built around this imaging sensor. Colloidal gold strips for clenbuterol and ractopamine serve as the primary sensing element, based on a specific immunoreaction. Three minutes after the target sample is applied, the color intensity of the test (T) line reflects the content of the analyte, such as clenbuterol. The reader performs automatic acquisition of the colored strip image, quantitative analysis of the control and test lines, and data storage and transfer to a computer. The system integrates image collection, pattern recognition and real-time quantitative colloidal gold measurement. In experiments, clenbuterol and ractopamine standards at concentrations from 0 ppb to 10 ppb were prepared and tested; the results reveal that standard solutions of clenbuterol and ractopamine show a good second-order fit to the color intensity (R2 up to 0.99 and 0.98, respectively). In addition, spiking negative samples with the standards gave good recoveries of up to 98%. Overall, this optical sensor for colloidal strip measurement is capable of determining the content of clenbuterol and ractopamine, and it is likely applicable to quantitative readout of similar colloidal gold strip assays.

  5. A 10-bit column-parallel cyclic ADC for high-speed CMOS image sensors

    International Nuclear Information System (INIS)

    Han Ye; Li Quanliang; Shi Cong; Wu Nanjian

    2013-01-01

    This paper presents a high-speed column-parallel cyclic analog-to-digital converter (ADC) for a CMOS image sensor. A correlated double sampling (CDS) circuit is integrated in the ADC, which avoids a stand-alone CDS circuit block. An offset cancellation technique is also introduced, which effectively reduces column fixed-pattern noise (FPN). A single-channel ADC with an area of less than 0.02 mm² was implemented in a 0.13 μm CMOS image sensor process. The resolution of the proposed ADC is 10 bits, and the conversion rate is 1.6 MS/s. The measured differential nonlinearity and integral nonlinearity, including CDS, are 0.89 LSB and 6.2 LSB, respectively. The power consumption from a 3.3 V supply is only 0.66 mW. An array of 48 10-bit column-parallel cyclic ADCs was integrated with an array of CMOS image sensor pixels. The measured results indicate that the ADC circuit is suitable for high-speed CMOS image sensors. (semiconductor integrated circuits)

  6. High-resolution dynamic pressure sensor array based on piezo-phototronic effect tuned photoluminescence imaging.

    Science.gov (United States)

    Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin

    2015-03-24

    A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulation in applications such as electronic skin, biomechanical imaging/analysis, and personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain-tuned photoluminescence imaging that operates without electricity. Each sensor is a nanopillar consisting of InGaN/GaN multiple quantum wells; its photoluminescence intensity can be modulated dramatically and linearly by small strains (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be fabricated inexpensively over large areas on semiconductor product lines. The proposed dynamic all-optical pressure imaging, with its excellent resolution, high sensitivity, good uniformity, and ultrafast response time, offers a suitable route to smart sensing and micro/nano-opto-electromechanical systems.

  7. VLC-based indoor location awareness using LED light and image sensors

    Science.gov (United States)

    Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    Recently, indoor LED lighting has been considered for building green infrastructure with energy savings while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation. In a large, complex shopping mall, for example, location awareness for navigating to a destination is an important issue, but conventional GPS-based navigation does not work indoors. Alternative WLAN-based location services suffer from low positioning accuracy; in particular, it is difficult to estimate height accurately, and if the height error exceeds the floor-to-floor height, serious problems can result. Conventional navigation is therefore inappropriate for indoor use. A possible solution is a VLC-based location awareness scheme: because indoor LED infrastructure will certainly be installed to provide lighting, it can offer relatively high positioning accuracy when combined with VLC technology. In this paper, we provide a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using more than two image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
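
    The geometry behind such a system can be illustrated with a least-squares ray intersection: each image sensor defines a ray from its lens centre through the LED's image point on the reception plane, and the transmitter position is estimated as the midpoint of the closest approach of two such rays. This is a generic sketch of the principle, not the paper's specific estimator:

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def triangulate(p1, d1, p2, d2):
        """Midpoint of closest approach between rays p1 + t*d1 and p2 + t*d2.

        p1, p2: lens centres; d1, d2: directions through the LED image points.
        """
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        w0 = [p1[i] - p2[i] for i in range(3)]
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b  # zero only for parallel rays
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        q1 = [p1[i] + t1 * d1[i] for i in range(3)]
        q2 = [p2[i] + t2 * d2[i] for i in range(3)]
        return [(q1[i] + q2[i]) / 2.0 for i in range(3)]
    ```

    With more than two sensors, the same idea generalizes to a least-squares intersection of all rays, which is what drives the sub-metre accuracy claim.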

  8. High-dynamic-range coherent diffractive imaging: ptychography using the mixed-mode pixel array detector

    Energy Technology Data Exchange (ETDEWEB)

    Giewekemeyer, Klaus, E-mail: klaus.giewekemeyer@xfel.eu [European XFEL GmbH, Hamburg (Germany); Philipp, Hugh T. [Cornell University, Ithaca, NY (United States); Wilke, Robin N. [Georg-August-Universität Göttingen, Göttingen (Germany); Aquila, Andrew [European XFEL GmbH, Hamburg (Germany); Osterhoff, Markus [Georg-August-Universität Göttingen, Göttingen (Germany); Tate, Mark W.; Shanks, Katherine S. [Cornell University, Ithaca, NY (United States); Zozulya, Alexey V. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Salditt, Tim [Georg-August-Universität Göttingen, Göttingen (Germany); Gruner, Sol M. [Cornell University, Ithaca, NY (United States); Cornell University, Ithaca, NY (United States); Kavli Institute of Cornell for Nanoscience, Ithaca, NY (United States); Mancuso, Adrian P. [European XFEL GmbH, Hamburg (Germany)

    2014-08-07

    The advantages of a novel wide-dynamic-range hard X-ray detector are demonstrated for (ptychographic) coherent X-ray diffractive imaging. Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single-photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with very modest attenuation, while ‘still’ images of the empty-beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.

  9. A simple and low-cost biofilm quantification method using LED and CMOS image sensor.

    Science.gov (United States)

    Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu

    2014-12-01

    A novel biofilm detection platform is designed, consisting of a cost-effective red, green, and blue light-emitting diode (RGB LED) as the light source and a lens-free CMOS image sensor as the detector. The system measures the diffraction patterns of cells from their shadow images and gathers light absorbance information as a function of biofilm concentration through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform provides accurate and reproducible detection of biofilm concentration while remaining simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new detection approach, and the results were compared with those obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified at varying indole concentrations, and the correlation between the results of the two systems was 0.981 (N = 9), supporting the CMOS image-sensor platform as a low-cost alternative. Copyright © 2014 Elsevier B.V. All rights reserved.
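
    The absorbance step can be reproduced with the standard Beer-Lambert relation: the mean shadow-image intensity of a sample region is compared against a blank, and the resulting absorbance scales with concentration. A minimal sketch under that standard assumption (the function names are ours, not from the paper):

    ```python
    import math

    def mean_intensity(pixels):
        """Average gray level over a region of the CMOS shadow image."""
        return sum(pixels) / len(pixels)

    def absorbance(sample_pixels, blank_pixels):
        """Beer-Lambert absorbance A = -log10(I_sample / I_blank)."""
        return -math.log10(mean_intensity(sample_pixels)
                           / mean_intensity(blank_pixels))
    ```

    A calibration curve against known concentrations would then convert absorbance readings into biofilm concentration, analogously to a spectrophotometer's OD measurement.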

  10. Single photon imaging and timing array sensor apparatus and method

    Science.gov (United States)

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source that emits photons toward a target and a photon receiver that receives each photon reflected from the target. The photon receiver determines the reflection time of the photon and its arrival position on the receiver. An analyzer, communicatively coupled to the photon receiver, generates a three-dimensional image of the object based upon the reflection time and the arrival position.
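
    The range recovery implied by these claims reduces to the standard time-of-flight relation r = c·t/2 for a round-trip time t; combined with the direction implied by the arrival position, each detected photon yields one 3D point. A minimal sketch under that standard assumption:

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def range_from_round_trip(t_seconds):
        """Target range from a round-trip photon time of flight."""
        return C * t_seconds / 2.0

    def point_from_detection(t_seconds, direction):
        """3D point: range along the unit direction implied by the
        photon's arrival position on the receiver array."""
        r = range_from_round_trip(t_seconds)
        return [r * d for d in direction]
    ```

    Accumulating such points over many emitted photons builds up the three-dimensional image of the target.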

  11. Three-Dimensional Microwave Imaging for Concealed Weapon Detection Using Range Stacking Technique

    Directory of Open Access Journals (Sweden)

    Weixian Tan

    2017-01-01

    Full Text Available Three-dimensional (3D) microwave imaging has been proven to be well suited for concealed weapon detection. For 3D image reconstruction under a two-dimensional (2D) planar aperture, most current imaging algorithms focus on decomposing the 3D free-space Green function by exploiting the stationary phase; consequently, the accuracy of the final imagery comes at the cost of computational complexity due to the need for interpolation. In this paper, from an alternative viewpoint, we propose a novel interpolation-free imaging algorithm based on wavefront reconstruction theory. The algorithm is an extension of the 2D range stacking algorithm (RSA), with the advantages of low computational cost and high precision. It uses different reference signal spectra at different range bins and then forms the target function at the desired range bin by a concise coherent summation. Several practical issues, such as propagation loss compensation, wavefront reconstruction, and aliasing mitigation, are also considered, and the sampling criterion and achievable resolutions of the proposed algorithm are derived. Finally, the proposed method is validated through extensive computer simulations and real-field experiments. The results show that an accurate 3D image can be generated at very high speed by the proposed algorithm.

  12. CMOS Image Sensor and System for Imaging Hemodynamic Changes in Response to Deep Brain Stimulation.

    Science.gov (United States)

    Zhang, Xiao; Noor, Muhammad S; McCracken, Clinton B; Kiss, Zelma H T; Yadid-Pecht, Orly; Murari, Kartikeya

    2016-06-01

    Deep brain stimulation (DBS) is a therapeutic intervention used for a variety of neurological and psychiatric disorders, but its mechanism of action is not well understood. It is known that DBS modulates neural activity, which changes metabolic demands and thus the cerebral circulation state. However, it is unclear whether there are correlations between electrophysiological, hemodynamic, and behavioral changes, and whether they have any implications for clinical benefit. To investigate these questions, we present a miniaturized system for spectroscopic imaging of brain hemodynamics. The system consists of a 144 × 144, [Formula: see text] pixel pitch, high-sensitivity, analog-output CMOS imager fabricated in a standard 0.35 μm CMOS process, along with a miniaturized imaging system comprising illumination, focusing, analog-to-digital conversion, and μSD card based data storage. This enables stand-alone operation without a computer or electrical or fiber-optic tethers. To achieve high sensitivity, the pixel uses a capacitive transimpedance amplifier (CTIA); the nMOS transistors are in the pixel while the pMOS transistors are column-parallel, resulting in a fill factor (FF) of 26%. Running at 60 fps and exposed to 470 nm light, the CMOS imager has a minimum detectable intensity of 2.3 nW/cm², a maximum signal-to-noise ratio (SNR) of 49 dB at 2.45 μW/cm², and thus a dynamic range (DR) of 61 dB, while consuming 167 μA from a 3.3 V supply. In anesthetized rats, the system was able to detect temporal, spatial, and spectral hemodynamic changes in response to DBS.
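
    The reported figures are internally consistent: dynamic range in dB is 20·log10 of the ratio of the maximum to the minimum detectable intensity. A quick check:

    ```python
    import math

    def dynamic_range_db(max_intensity, min_intensity):
        """Optical dynamic range in dB for an intensity ratio."""
        return 20.0 * math.log10(max_intensity / min_intensity)

    # 2.45 uW/cm^2 (peak-SNR intensity) vs 2.3 nW/cm^2 (minimum detectable)
    dr = dynamic_range_db(2.45e-6, 2.3e-9)  # ~60.5 dB, matching the stated 61 dB
    ```

    The same formula applied to voltage or intensity ratios is the standard convention for imager dynamic range.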

  13. Detection in Urban Scenario Using Combined Airborne Imaging Sensors

    NARCIS (Netherlands)

    Renhorn, I.; Axelsson, M.; Benoist, K.W.; Bourghys, D.; Boucher, Y.; Xavier Briottet, X.; Sergio De CeglieD, S. De; Dekker, R.J.; Dimmeler, A.; Dost, R.; Friman, O.; Kåsen, I.; Maerker, J.; Persie, M. van; Resta, S.; Schwering, P.B.W.; Shimoni, M.; Vegard Haavardsholm, T.

    2012-01-01

    The EDA project “Detection in Urban scenario using Combined Airborne imaging Sensors” (DUCAS) is in progress. The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne imagery for several defense applications in the urban area. The

  15. Range and Image Based Modelling: a way for Frescoed Vault Texturing Optimization

    Science.gov (United States)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-02-01

    In the restoration of frescoed vaults, it is important not only to know the geometric shape of the painted surface but also to document its chromatic characterization and conservation status. The techniques of range-based and image-based modelling, each with its limitations and advantages, offer a wide range of methods to obtain the geometric shape. Several studies document that laser scanning enables three-dimensional models with high morphological precision; however, the color quality obtained with built-in laser scanner cameras is not comparable to that obtained for the shape. It is possible to improve the texture quality by means of a dedicated photographic campaign, but this requires calculating the external orientation of each image by identifying control points on the image and on the model, a costly post-processing step. With image-based modelling techniques it is possible to obtain models that maintain the color quality of the original images, but with variable geometric precision that is locally lower than that of the laser scanning model. This paper presents a methodology that uses the camera external orientation parameters calculated by image-based modelling to project the same images onto the model obtained from the laser scan. The methodology is tested on an Italian frescoed mirror vault (a schifo). The paper presents the different models, the analysis of precision, and the efficiency evaluation of the proposed methodology.
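
    The projection step this methodology relies on is the standard pinhole model: each scan point is translated by the camera centre C, rotated into the camera frame by the exterior-orientation rotation R recovered from image-based modelling, and divided by depth. A generic sketch (the actual pipeline works on full meshes with calibrated lens distortion):

    ```python
    def matvec(R, v):
        """Apply a 3x3 rotation matrix (row-major nested lists) to a vector."""
        return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

    def project_point(X, C, R, f):
        """Pinhole projection of world point X into an image with exterior
        orientation (camera centre C, world-to-camera rotation R, focal f)."""
        x, y, z = matvec(R, [X[i] - C[i] for i in range(3)])
        return f * x / z, f * y / z
    ```

    Texturing then amounts to sampling the photograph at (u, v) for every visible point of the laser-scan model.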

  16. An improved method to estimate reflectance parameters for high dynamic range imaging

    Science.gov (United States)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters (colors, gloss intensity, and surface roughness) over the dynamic range of the camera used to capture the input images. Neither method needs to segment color areas on an image or to reconstruct a high dynamic range (HDR) image. The second method improves on the first by bypassing the requirement for explicit separation of the diffuse and specular reflection components: the diffuse and specular reflectance parameters are estimated separately using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components and are subjected to the least squares method to estimate the diffuse reflectance parameters. The specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and the specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and using the second method on spectral images captured by an imaging spectrograph with a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity, and surface roughness more accurately and faster than the first, so that colors and gloss can be reproduced more efficiently for HDR imaging.
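
    The two-step estimation can be sketched as follows: the diffuse albedo is fitted first assuming pure Lambertian reflection, and the specular residual is fed to the log-transformed Torrance-Sparrow lobe, which is linear in the squared off-specular angle. This is a simplified single-channel sketch under these assumptions, not the paper's full multi-parameter formulation:

    ```python
    import math

    def fit_diffuse(cos_i, intensities):
        """Lambertian fit I_d = kd * cos_i; least-squares kd = sum(I*c)/sum(c*c)."""
        return (sum(I * c for I, c in zip(intensities, cos_i))
                / sum(c * c for c in cos_i))

    def fit_specular(alphas, spec, cos_r):
        """Torrance-Sparrow lobe I_s = (ks / cos_r) * exp(-a^2 / (2 s^2)).

        log(I_s * cos_r) = log ks - a^2 / (2 s^2): linear in a^2, so an
        ordinary least-squares line gives gloss ks and roughness s.
        """
        xs = [a * a for a in alphas]
        ys = [math.log(I * c) for I, c in zip(spec, cos_r)]
        n = len(xs)
        xb, yb = sum(xs) / n, sum(ys) / n
        slope = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) \
            / sum((x - xb) ** 2 for x in xs)
        ks = math.exp(yb - slope * xb)
        sigma = math.sqrt(-1.0 / (2.0 * slope))
        return ks, sigma
    ```

    In the actual method, the specular input is the residual after subtracting the fitted diffuse component from each observed reflection value.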

  17. Quantitative analysis of velopharyngeal movement using a stereoendoscope: accuracy and reliability of range images.

    Science.gov (United States)

    Nakano, Asuka; Mishima, Katsuaki; Shiraishi, Ruriko; Ueyama, Yoshiya

    2015-01-01

    We developed a novel method of producing accurate range images of the velopharynx using a three-dimensional (3D) endoscope to obtain detailed measurements of velopharyngeal movements. The purpose of the present study was to determine the appropriate distance from the endoscope to an object, elucidate the measurement accuracy along the temporal axes, and determine the degree of blurring when a jig is used to fix the endoscope. An endoscopic measuring system was developed in which a pattern projection system was incorporated into a commercially available 3D endoscope. After correcting the distortion of the camera images, range images were produced using pattern projection to achieve stereo matching. Graph paper was used to measure the appropriate distance from the camera to an object, the mesial buccal cusp of the right maxillary first molar was measured to clarify the range image stability, and an electric actuator was used to evaluate the measurement accuracy along the temporal axes. The measurement error was substantial when the distance from the camera to the subject exceeded 6.5 cm. The standard error of the 3D coordinate value produced from 30 frames was within 0.1 mm (range, 0.01-0.08 mm). The measurement error along the temporal axes was 9.16% in the horizontal direction and 9.27% in the vertical direction. The optimal distance from the camera to an object is therefore within 6.5 cm, which is suitable for measuring velopharyngeal movements.

  18. Application range of micro focus radiographic devices associated to image processors

    International Nuclear Information System (INIS)

    Cappabianca, C.; Ferriani, S.; Verre, F.

    1987-01-01

    X-ray devices having a focus area of less than 100 μm are called micro focus X-ray equipment. Here, the range of application and the characteristics of these devices are defined, including the possibility of coupling them with real-time image enhancement computers.

  19. Influence of long-range Coulomb interaction in velocity map imaging.

    Science.gov (United States)

    Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C

    2017-07-07

    The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.

  20. New segmentation-based tone mapping algorithm for high dynamic range image

    Science.gov (United States)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for displaying high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast, and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposed region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray level of the histogram, the under-exposed and normally exposed regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to produce the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
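
    The region-differentiated idea can be illustrated with a much-simplified sketch: each normalized luminance value is labeled under-, normally, or over-exposed by thresholds (the paper instead uses Local Binary Patterns and histogram peak/mean statistics), and a region-specific gamma boosts shadows while compressing highlights. The thresholds and gamma values here are illustrative assumptions:

    ```python
    def segment_exposure(pixels, low, high):
        """Label each normalized luminance value by exposure region."""
        def label(v):
            if v < low:
                return "under"
            if v > high:
                return "over"
            return "normal"
        return [label(v) for v in pixels]

    def tone_map(pixels, low, high, gamma_under=0.5, gamma_over=2.0):
        """Region-specific gamma: boost shadows, compress highlights.

        `pixels` are luminance values normalized to [0, 1].
        """
        out = []
        for v in pixels:
            if v < low:
                out.append(v ** gamma_under)   # brighten under-exposed region
            elif v > high:
                out.append(v ** gamma_over)    # compress over-exposed region
            else:
                out.append(v)                  # leave normal exposure unchanged
        return out
    ```

    A full implementation would also blend region boundaries to avoid visible seams between the differently mapped areas.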