WorldWideScience

Sample records for linear image sensors

  1. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (i.e., the image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
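
    A minimal sketch of the core idea, not the authors' implementation: average N spatially uncorrelated spot profiles before locating the centroid used for triangulation. The 1/√N scaling is an assumption of uncorrelated speckle realizations; the spot model and noise statistics below are purely illustrative.

    ```python
    import numpy as np

    def spot_centroid(profile):
        """Centroid (in pixels) of a 1-D laser spot profile from a linear image sensor."""
        x = np.arange(profile.size)
        return np.sum(x * profile) / np.sum(profile)

    def averaged_centroid(profiles):
        """Average N spatially uncorrelated spot profiles, then locate the centroid.

        With uncorrelated speckle realizations, the centroid error is expected
        to shrink roughly as 1/sqrt(N)."""
        return spot_centroid(np.mean(profiles, axis=0))

    # Synthetic demo: a Gaussian spot corrupted by multiplicative speckle-like noise.
    rng = np.random.default_rng(0)
    x = np.arange(512)
    clean = np.exp(-0.5 * ((x - 260.3) / 6.0) ** 2)
    profiles = [clean * rng.gamma(shape=4.0, scale=0.25, size=x.size) for _ in range(32)]

    print("single-shot centroid :", spot_centroid(profiles[0]))
    print("32-shot centroid     :", averaged_centroid(profiles))   # closer to 260.3
    ```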

  2. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J., E-mail: tmuldoon@uark.edu [Department of Biomedical Engineering, University of Arkansas, 120 Engineering Hall, Fayetteville, Arkansas 72701 (United States)

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match the linear translation speed with the line exposure period and thereby preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of the image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.
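
    A hedged sketch of the line-scan matching condition described above: the object must advance one object-plane pixel per line exposure period, otherwise the aspect ratio is distorted. The numbers below reuse values reported in the abstract purely as illustrative inputs (they come from different experiments in the paper), so the computed speed is for illustration only.

    ```python
    # Line-scan aspect-ratio matching: the object must advance exactly one
    # object-plane pixel per line exposure period, otherwise the image is
    # stretched or compressed along the scan axis.

    pixel_size_um = 0.31      # object-plane sampling quoted in the abstract (um/pixel)
    line_period_s = 150e-6    # line exposure period quoted in the abstract (s)

    # Required translation speed for a 1:1 aspect ratio
    speed_um_per_s = pixel_size_um / line_period_s
    print(f"matched speed: {speed_um_per_s:.0f} um/s ({speed_um_per_s / 1000:.2f} mm/s)")

    def aspect_ratio(actual_speed_um_s, line_period_s, pixel_size_um):
        """Ratio of cross-scan to along-scan sampling: >1 means the object is
        stretched along the scan axis, <1 means it is compressed."""
        return pixel_size_um / (actual_speed_um_s * line_period_s)

    print(aspect_ratio(speed_um_per_s, line_period_s, pixel_size_um))  # 1.0 by construction
    ```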

  3. Increasing Linear Dynamic Range of a CMOS Image Sensor

    Science.gov (United States)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would reduce the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon
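
    A hedged sketch of how such calibrated dual-gain samples can be merged into a single linear value: below the threshold keep the high-gain sample, above it rescale the low-gain sample by the calibrated gain ratio (roughly (CD + CL)/CD). The function and numbers are illustrative, not taken from the article.

    ```python
    def merge_dual_gain(adc_high, adc_low, threshold, gain_ratio,
                        offset_high=0.0, offset_low=0.0):
        """Combine calibrated dual-gain pixel samples into one linear signal.

        adc_high   : sample taken in high-gain mode (small capacitance CD only)
        adc_low    : sample taken in low-gain mode (CD + CL connected)
        threshold  : level above which the pixel switches to low gain
        gain_ratio : calibrated high-gain/low-gain ratio, roughly (CD + CL) / CD
        """
        if adc_high < threshold:
            return adc_high - offset_high              # small signal: keep high-gain sample
        return (adc_low - offset_low) * gain_ratio     # large signal: rescale low-gain sample

    # Illustrative numbers: threshold at 3000 DN, gain ratio 8x
    print(merge_dual_gain(adc_high=1200, adc_low=150, threshold=3000, gain_ratio=8.0))  # 1200
    print(merge_dual_gain(adc_high=3600, adc_low=520, threshold=3000, gain_ratio=8.0))  # 4160
    ```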

  4. Spatial filtering self-velocimeter for vehicle application using a CMOS linear image sensor

    Science.gov (United States)

    He, Xin; Zhou, Jian; Nie, Xiaoming; Long, Xingwu

    2015-03-01

    The idea of using a spatial filtering velocimeter (SFV) to measure the velocity of a vehicle for an inertial navigation system is put forward. The presented SFV is based on a CMOS linear image sensor with a high-speed data rate, large pixel size, and built-in timing generator. These advantages make the image sensor suitable for measuring vehicle velocity. The power spectrum of the output signal is obtained by fast Fourier transform and is corrected by a frequency spectrum correction algorithm. This velocimeter was used to measure the velocity of a conveyor belt driven by a rotary table, and the measurement uncertainty is ~0.54%. Furthermore, it was also installed on a vehicle together with a laser Doppler velocimeter (LDV) to measure self-velocity. The measurement result of the designed SFV is compared with that of the LDV and is shown to be consistent with it. Therefore, the designed SFV is suitable for a vehicle self-contained inertial navigation system.
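
    A minimal sketch of the standard spatial-filtering-velocimetry processing chain (FFT peak plus a simple peak-interpolation step standing in for the "frequency spectrum correction" mentioned above). The relation v = f0·p/M, with p the pixel pitch and M the optical magnification, is the usual SFV formula; the parameters and synthetic signal are assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def sfv_velocity(signal, fs, pixel_pitch_m, magnification):
        """Estimate velocity from a spatial-filtering-velocimeter signal.

        The periodic signal produced by the pixel grating has a dominant
        frequency f0; the in-plane velocity is v = f0 * p / M (standard SFV
        relation), with p the pixel pitch and M the optical magnification."""
        spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
        k = np.argmax(spec[1:]) + 1               # skip the DC bin

        # Three-point parabolic interpolation around the peak: a simple stand-in
        # for the frequency-spectrum correction step described in the abstract.
        if 1 <= k < spec.size - 1:
            a, b, c = spec[k - 1], spec[k], spec[k + 1]
            k = k + 0.5 * (a - c) / (a - 2 * b + c)
        f0 = k * fs / signal.size
        return f0 * pixel_pitch_m / magnification

    # Synthetic check: 2 m/s target, 14 um pitch, 0.2x magnification -> f0 ~ 28.6 kHz
    fs, pitch, M, v_true = 200e3, 14e-6, 0.2, 2.0
    t = np.arange(4096) / fs
    sig = 1.0 + 0.4 * np.sin(2 * np.pi * (v_true * M / pitch) * t)
    print(sfv_velocity(sig, fs, pitch, M))  # ~2.0
    ```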

  5. Linear nanometric tunnel junction sensors with exchange pinned sensing layer

    International Nuclear Information System (INIS)

    Leitao, D. C.; Silva, A. V.; Cardoso, S.; Ferreira, R.; Paz, E.; Deepack, F. L.; Freitas, P. P.

    2014-01-01

    Highly sensitive nanosensors with high spatial resolution provide the features necessary for high-accuracy imaging of isolated magnetic nanoparticles. In this work, we report the fabrication and characterization of MgO-barrier magnetic tunnel junction nanosensors with two exchange-pinned electrodes. The perpendicular magnetization configuration for field sensing is set using a two-step annealing process, where the second annealing temperature was optimized to yield patterned sensor responses with improved linearity. The optimized circular nanosensors show sensitivities up to 0.1%/Oe, larger than previously reported for nanometric sensors and comparable to micrometric spin valves. Our strategy avoids the use of external permanent biasing or demagnetizing fields (large for smaller structures) to achieve a linear response, enabling control of the linear operation range using only the stack and thus providing a small-footprint device.

  6. Linear nanometric tunnel junction sensors with exchange pinned sensing layer

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, D. C., E-mail: dleitao@inesc-mn.pt; Silva, A. V.; Cardoso, S. [INESC-MN and IN, Rua Alves Redol 9, 1000-029 Lisboa (Portugal); Instituto Superior Técnico (IST), Universidade de Lisboa, Av. Rovisco Pais, 1000-029 Lisboa (Portugal); Ferreira, R.; Paz, E.; Deepack, F. L. [INL, Av. Mestre Jose Veiga, 4715-31 Braga (Portugal); Freitas, P. P. [INESC-MN and IN, Rua Alves Redol 9, 1000-029 Lisboa (Portugal); INL, Av. Mestre Jose Veiga, 4715-31 Braga (Portugal)

    2014-05-07

    Highly sensitive nanosensors with high spatial resolution provide the features necessary for high-accuracy imaging of isolated magnetic nanoparticles. In this work, we report the fabrication and characterization of MgO-barrier magnetic tunnel junction nanosensors with two exchange-pinned electrodes. The perpendicular magnetization configuration for field sensing is set using a two-step annealing process, where the second annealing temperature was optimized to yield patterned sensor responses with improved linearity. The optimized circular nanosensors show sensitivities up to 0.1%/Oe, larger than previously reported for nanometric sensors and comparable to micrometric spin valves. Our strategy avoids the use of external permanent biasing or demagnetizing fields (large for smaller structures) to achieve a linear response, enabling control of the linear operation range using only the stack and thus providing a small-footprint device.

  7. Toward CMOS image sensor based glucose monitoring.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2012-09-07

    The complementary metal oxide semiconductor (CMOS) image sensor is a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited to detect glucose levels with high sensitivity from simple variations in photon count. Various concentrations of glucose (100 mg dL(-1) to 1000 mg dL(-1)) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count, which depends on the color density and hence on the glucose concentration, was recognized by the CMOS image sensor and converted into digital form. By correlating the obtained digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with good linearity based on the CMOS image sensor, and this technique will therefore promote convenient point-of-care diagnosis.

  8. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

    This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors onto the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array was designed in a 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and that the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, improvements of 14.04 dB and 2.4 bit over the SNDR and ENOB without calibration.

  9. CCD linear image sensor ILX511 arrangement for a technical spectrometer

    Czech Academy of Sciences Publication Activity Database

    Bartoněk, L.; Keprt, Jiří; Vlček, Martin

    2003-01-01

    Vol. 33, No. 2-3 (2003), pp. 548-553. ISSN 0078-5466. Institutional research plan: CEZ:AV0Z1010921. Keywords: CCD linear sensor ILX511 * enhanced parallel port (EPP, IEEE 1284 capable) * A/D converter AD9280. Subject RIV: BH - Optics, Masers, Lasers. Impact factor: 0.221, year: 2003

  10. Nanophotonic Image Sensors.

    Science.gov (United States)

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative way to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    Science.gov (United States)

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    Vision navigation, which determines position and attitude via real-time processing of images collected from imaging sensors, can proceed without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, deep-space navigation, and multi-sensor integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plan and 1.8 m in height during 5 min of GPS loss over a 1500 m route.
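
    A toy sketch of the "linear index of a road segment" idea: keying each geo-referenced image by its chainage (distance along the segment) reduces retrieval of nearby reference images to a binary search. The class, names and window size are illustrative assumptions; the paper's storage model is more elaborate.

    ```python
    import bisect

    class RoadSegmentIndex:
        """Toy linear index for one road segment of a geo-referenced image database."""

        def __init__(self):
            self._chainages = []   # sorted distances along the road (m)
            self._image_ids = []   # image record keyed at each chainage

        def insert(self, chainage_m, image_id):
            pos = bisect.bisect_left(self._chainages, chainage_m)
            self._chainages.insert(pos, chainage_m)
            self._image_ids.insert(pos, image_id)

        def query(self, chainage_m, window_m=25.0):
            """Return image ids whose chainage lies within +/- window_m of the query."""
            lo = bisect.bisect_left(self._chainages, chainage_m - window_m)
            hi = bisect.bisect_right(self._chainages, chainage_m + window_m)
            return self._image_ids[lo:hi]

    index = RoadSegmentIndex()
    for d in range(0, 1500, 10):                 # one reference image every 10 m
        index.insert(d, f"img_{d:04d}")
    print(index.query(873.0, window_m=15.0))     # candidate images around chainage 873 m
    ```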

  12. Extended Special Sensor Microwave Imager (SSM/I) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  13. Research on geometric rectification of the Large FOV Linear Array Whiskbroom Image

    Science.gov (United States)

    Liu, Dia; Liu, Hui-tong; Dong, Hao; Liu, Xiao-bo

    2015-08-01

    To solve the geometric distortion problem of large-FOV linear array whiskbroom images, a multi-center central projection collinearity equation model was established that accounts for the whiskbroom and linear CCD imaging characteristics, and the origin of the distortion was analyzed. Building on a POS-based rectification method, we introduced the angular position sensor data of the servo system and reconstructed the geometric imaging process exactly. An indirect rectification scheme for linear array imaging with a best-scanline searching method was adopted, and the matrices for calculating the exterior orientation elements were redesigned. We improved two iterative algorithms for this device and compared and analyzed them. Rectification of images from an airborne imaging experiment showed the desired effect.

  14. Photon-counting image sensors

    CERN Document Server

    Teranishi, Nobukazu; Theuwissen, Albert; Stoppa, David; Charbon, Edoardo

    2017-01-01

    The field of photon-counting image sensors is advancing rapidly with the development of various solid-state image sensor technologies including single photon avalanche detectors (SPADs) and deep-sub-electron read noise CMOS image sensor pixels. This foundational platform technology will enable opportunities for new imaging modalities and instrumentation for science and industry, as well as new consumer applications. Papers discussing various photon-counting image sensor technologies and selected new applications are presented in this all-invited Special Issue.

  15. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    Science.gov (United States)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement arises because Shack-Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky, leading to the use of laser guide stars, which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging-based multi-CCD imaging camera. We present the current optical design of the instrument, including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  16. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    Science.gov (United States)

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
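
    One of the classic software linearization schemes this kind of study compares is a lookup table with linear interpolation between breakpoints; the sketch below illustrates that scheme. The table values are placeholders, not the paper's optical distance-sensor data.

    ```python
    # Lookup-table linearization with linear interpolation between breakpoints,
    # a common software scheme for nonlinear sensors on small microcontrollers.
    # Breakpoints and distances below are illustrative placeholders.

    ADC_BREAKPOINTS = [80, 160, 320, 640, 1023]    # raw ADC codes (monotonic)
    DISTANCE_CM     = [150, 70, 30, 12, 5]          # calibrated distance at each code

    def linearize(adc_code):
        """Map a raw ADC reading to engineering units via piecewise-linear interpolation."""
        if adc_code <= ADC_BREAKPOINTS[0]:
            return DISTANCE_CM[0]
        if adc_code >= ADC_BREAKPOINTS[-1]:
            return DISTANCE_CM[-1]
        # find the surrounding breakpoints (a linear scan is fine for short tables)
        for i in range(1, len(ADC_BREAKPOINTS)):
            if adc_code <= ADC_BREAKPOINTS[i]:
                x0, x1 = ADC_BREAKPOINTS[i - 1], ADC_BREAKPOINTS[i]
                y0, y1 = DISTANCE_CM[i - 1], DISTANCE_CM[i]
                return y0 + (y1 - y0) * (adc_code - x0) / (x1 - x0)

    print(linearize(240))   # interpolates between the 160 and 320 breakpoints -> 50.0
    ```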

  17. Parametric Optimization of Lateral NIPIN Phototransistors for Flexible Image Sensors

    Directory of Open Access Journals (Sweden)

    Min Seok Kim

    2017-08-01

    Curved image sensors, which are a key component in bio-inspired imaging systems, have been widely studied because they can improve an imaging system in various respects such as low optical aberrations, small form factor, and simple optics configuration. Many methods and materials to realize a curvilinear imager have been proposed to address the drawbacks of conventional imaging/optical systems. However, there have been few theoretical studies, in terms of electronics, on the use of a lateral photodetector as a flexible image sensor. In this paper, we demonstrate the applicability of a Si-based lateral phototransistor as the pixel of a high-efficiency curved photodetector by conducting various electrical simulations with technology computer-aided design (TCAD). The single phototransistor is analyzed with different device parameters: the thickness of the active cell, doping concentration, and structure geometry. This work presents a method to improve the external quantum efficiency (EQE), linear dynamic range (LDR), and mechanical stability of the phototransistor. We also evaluated the dark current in a matrix of phototransistors to estimate the feasibility of the device as a flexible image sensor. Moreover, we fabricated and demonstrated an array of phototransistors based on our study. The theoretical study and design guidelines of a lateral phototransistor create new opportunities in flexible image sensors.

  18. Analysis on the Effect of Sensor Views in Image Reconstruction Produced by Optical Tomography System Using Charge-Coupled Device.

    Science.gov (United States)

    Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy

    2018-04-01

    Optical tomography (OPT) is a method to capture a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on the measurement of the final light attenuation or absorption of radiation after it crosses the measured objects. The number of sensor views affects the results of image reconstruction, where a higher number of sensor views per projection gives higher image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, are evaluated in this research for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Analysis comparing the simulated and experimental image results shows that 320 image views give a smaller area error than 160 views. This suggests that a higher number of views results in higher-resolution image reconstruction.
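
    A minimal sketch of generic linear back projection (LBP) for a hard-field optical tomography setup: each measurement is smeared back along its beam's sensitivity map and the results are summed. The two-projection parallel-beam geometry below is an illustrative assumption, not the authors' CCD/laser-diode configuration, and no filtering step is included.

    ```python
    import numpy as np

    def lbp_reconstruct(measurements, sensitivity_maps):
        """Generic linear back projection: image = sum_i m_i * S_i, normalised.

        measurements     : 1-D array, one attenuation value per beam/view
        sensitivity_maps : array (n_beams, ny, nx), S_i = 1 where beam i crosses a pixel
        """
        image = np.tensordot(measurements, sensitivity_maps, axes=1)
        coverage = sensitivity_maps.sum(axis=0)
        return image / np.maximum(coverage, 1)       # avoid dividing empty pixels

    # Toy 2-projection parallel-beam setup on an 8x8 grid (rows + columns of beams).
    n = 8
    maps = []
    for r in range(n):                               # horizontal beams
        m = np.zeros((n, n)); m[r, :] = 1; maps.append(m)
    for c in range(n):                               # vertical beams
        m = np.zeros((n, n)); m[:, c] = 1; maps.append(m)
    maps = np.array(maps)

    phantom = np.zeros((n, n)); phantom[3:5, 5:7] = 1          # a small opaque object
    measurements = (maps * phantom).sum(axis=(1, 2))           # ideal attenuation sums
    print(np.round(lbp_reconstruct(measurements, maps), 2))    # blurred but localised object
    ```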

  19. Imaging moving objects from multiply scattered waves and multiple sensors

    International Nuclear Information System (INIS)

    Miranda, Analee; Cheney, Margaret

    2013-01-01

    In this paper, we develop a linearized imaging theory that combines the spatial, temporal and spectral components of multiply scattered waves as they scatter from moving objects. In particular, we consider the case of multiple fixed sensors transmitting and receiving information from multiply scattered waves. We use a priori information about the multipath background. We use a simple model for multiple scattering, namely scattering from a fixed, perfectly reflecting (mirror) plane. We base our image reconstruction and velocity estimation technique on a modification of a filtered backprojection method that produces a phase-space image. We plot examples of point-spread functions for different geometries and waveforms, and from these plots, we estimate the resolution in space and velocity. Through this analysis, we are able to identify how the imaging system depends on parameters such as bandwidth and number of sensors. We ultimately show that enhanced phase-space resolution for a distribution of moving and stationary targets in a multipath environment may be achieved using multiple sensors. (paper)

  20. Focus on image sensors

    NARCIS (Netherlands)

    Jos Gunsing; Daniël Telgen; Johan van Althuis; Jaap van de Loosdrecht; Mark Stappers; Peter Klijn

    2013-01-01

    Robots need sensors to operate properly. Using a single image sensor, various aspects of a robot operating in its environment can be measured or monitored. Over the past few years, image sensors have improved a lot: frame rate and resolution have increased, while prices have fallen. As a result,

  1. Large area CMOS image sensors

    International Nuclear Information System (INIS)

    Turchetta, R; Guerrini, N; Sedgwick, I

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large formats, often with large pixels, as well as other specific performance characteristics, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum, as well as charged particles. This requirement calls for modifications to the substrate to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  2. Distributed transition-edge sensors for linearized position response in a phonon-mediated X-ray imaging spectrometer

    Science.gov (United States)

    Cabrera, Blas; Brink, Paul L.; Leman, Steven W.; Castle, Joseph P.; Tomada, Astrid; Young, Betty A.; Martínez-Galarce, Dennis S.; Stern, Robert A.; Deiker, Steve; Irwin, Kent D.

    2004-03-01

    For future solar X-ray satellite missions, we are developing a phonon-mediated macro-pixel composed of a Ge crystal absorber with four superconducting transition-edge sensors (TES) distributed on the backside. The X-rays are absorbed on the opposite side and the energy is converted into phonons, which are absorbed by the four TES sensors. By connecting parallel elements together into four channels, the fraction of the total energy absorbed between two of the sensors provides x-position information and the other two provide y-position information. We determine the optimal distribution of the TES sub-elements to obtain linear position information while minimizing the degradation of energy resolution.
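
    A hedged sketch of how the fractional energy split can be turned into a position estimate: the left/right split drives x, the bottom/top split drives y. The normalization, sign convention and channel names are illustrative assumptions, not the authors' calibration.

    ```python
    def macro_pixel_position(e_left, e_right, e_bottom, e_top):
        """Estimate (x, y) hit position from the energy split between opposing
        TES channels of a phonon-mediated macro-pixel.

        x follows the left/right split, y the bottom/top split; each coordinate
        lies in [-1, 1] (scaling and sign convention are illustrative only)."""
        total_x = e_left + e_right
        total_y = e_bottom + e_top
        x = (e_right - e_left) / total_x if total_x > 0 else 0.0
        y = (e_top - e_bottom) / total_y if total_y > 0 else 0.0
        energy = e_left + e_right + e_bottom + e_top   # total absorbed energy estimate
        return x, y, energy

    print(macro_pixel_position(0.30, 0.20, 0.27, 0.23))  # event slightly left of and below centre
    ```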

  3. Object-Oriented Hierarchy Radiation Consistency for Different Temporal and Different Sensor Images

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-02-01

    In this paper, we propose a novel object-oriented hierarchical radiation consistency method for dense matching of different-temporal and different-sensor data in 3D reconstruction. For different-temporal images, an illumination consistency method is proposed to address both the illumination uniformity of a single image and the relative illumination normalization of image pairs. In the relative illumination normalization step in particular, singular value equalization and a linear relationship of the invariant pixels are used in combination for the initial global illumination normalization and the object-oriented refined illumination normalization, respectively. For different-sensor images, we propose a union group sparse method based on an improvement of the original group sparse model. The different-sensor images are set to a similar smoothness level by applying the same singular-value threshold from the union group matrix. Our method comprehensively considers the factors influencing dense matching of different-temporal and different-sensor stereoscopic image pairs to simultaneously improve illumination consistency and smoothness consistency. The radiation consistency experiments verify the effectiveness and superiority of the proposed method by comparison with two other methods. Moreover, in the dense matching experiment on mixed stereoscopic image pairs, our method shows clear advantages for objects in urban areas.

  4. Temperature Sensors Integrated into a CMOS Image Sensor

    NARCIS (Netherlands)

    Abarca Prouza, A.N.; Xie, S.; Markenhof, Jules; Theuwissen, A.J.P.

    2017-01-01

    In this work, a novel approach is presented for measuring relative temperature variations inside the pixel array of a CMOS image sensor itself. This approach can give important information when compensation for dark (current) fixed pattern noise (FPN) is needed. The test image sensor consists of

  5. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    Science.gov (United States)

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensors. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented on 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must normally be used to achieve acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was greatly reduced to support one-point calibration, reducing test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhancing circuit is proposed to linearize the curve and produce a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximum inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration over a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy and avoids costly curvature compensation, while being fully synthesizable for future Very Large Scale Integration (VLSI) systems.

  6. High-resolution dynamic pressure sensor array based on piezo-phototronic effect tuned photoluminescence imaging.

    Science.gov (United States)

    Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin

    2015-03-24

    A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulation in applications such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain-tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strains (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable route for smart sensing and micro/nano-opto-electromechanical systems.

  7. Acoustic emission linear pulse holography

    Science.gov (United States)

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-10-25

    This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.

  8. Detection and Classification of Multiple Objects using an RGB-D Sensor and Linear Spatial Pyramid Matching

    DEFF Research Database (Denmark)

    Dimitriou, Michalis; Kounalakis, Tsampikos; Vidakis, Nikolaos

    2013-01-01

    This paper presents a complete system for multiple object detection and classification in a 3D scene using an RGB-D sensor such as the Microsoft Kinect sensor. Successful multiple object detection and classification are crucial features in many 3D computer vision applications. The main goal is making machines see and understand objects like humans do. To this goal, the new RGB-D sensors can be utilized, since they provide a real-time depth map which can be used along with the RGB images for our tasks. In our system we employ effective depth map processing techniques, along with edge detection, connected components detection and filtering approaches, in order to design a complete image processing algorithm for efficient object detection of multiple individual objects in a single scene, even in complex scenes with many objects. Besides, we apply the Linear Spatial Pyramid Matching (LSPM) [1] method ...

  9. Extended Special Sensor Microwave Imager (SSM/I) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager (SSM/I) is a seven-channel linearly polarized passive microwave radiometer that operates at frequencies of 19.36 (vertically and...

  10. The Linearity of Optical Tomography: Sensor Model and Experimental Verification

    Directory of Open Access Journals (Sweden)

    Siti Zarina MOHD. MUJI

    2011-09-01

    The aim of this paper is to show the linearity of an optical sensor. Linearity of the sensor response is a must in optical tomography applications, as it affects the tomogram result. Two types of testing are used, namely testing with a voltage parameter and testing with a time-unit parameter. For the former, the testing is done by measuring the voltage when an obstacle is placed between the transmitter and receiver; the obstacle diameters are between 0.5 and 3 mm. The latter uses the same setup, but the obstacle is larger (59.24 mm), and the purpose of the testing is to measure the time the ball spends crossing the sensing area of the circuit. Both results show a linear relation, which demonstrates that the optical sensors are suitable for process tomography applications.

  11. Automated Registration Of Images From Multiple Sensors

    Science.gov (United States)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors are registered automatically with each other and, where possible, on a geographic coordinate grid. A simulated image of the terrain viewed by a sensor is computed from ancillary data, the viewing geometry, and a mathematical model of the physics of imaging. In the proposed registration algorithm, simulated and actual sensor images are matched by an area-correlation technique.
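
    A brute-force sketch of the area-correlation matching step: slide a template over the image and keep the offset with the highest zero-mean normalized cross-correlation. This is a generic illustration under simplified assumptions (small images, no geometric resampling), not the flight algorithm.

    ```python
    import numpy as np

    def ncc_offset(image, template):
        """Find the (row, col) offset of `template` inside `image` by maximising the
        zero-mean normalised cross-correlation over all valid placements."""
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        best, best_score = (0, 0), -np.inf
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                w = image[r:r + th, c:c + tw]
                w = w - w.mean()
                denom = np.sqrt((w ** 2).sum()) * t_norm
                score = (w * t).sum() / denom if denom > 0 else -np.inf
                if score > best_score:
                    best, best_score = (r, c), score
        return best, best_score

    rng = np.random.default_rng(1)
    scene = rng.normal(size=(64, 64))
    patch = scene[20:36, 30:46] + 0.1 * rng.normal(size=(16, 16))   # noisy sub-scene
    print(ncc_offset(scene, patch))   # ((20, 30), score close to 1)
    ```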

  12. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    Science.gov (United States)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

    Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical, and have a closed optical channel not subject to contamination. The main problem with this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the variation in magnetic field intensity induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: (1) definition of the calibration function; (2) measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for calibrating the temperature dependence, by using points that deviate randomly from uniformly spaced grid points. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller memory needed to store the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.

  13. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.

  14. An Approach for Unsupervised Change Detection in Multitemporal VHR Images Acquired by Different Multispectral Sensors

    Directory of Open Access Journals (Sweden)

    Yady Tatiana Solano-Correa

    2018-03-01

    This paper proposes an approach for the detection of changes in multitemporal Very High Resolution (VHR) optical images acquired by different multispectral sensors. The proposed approach, which is inspired by a recent framework developed to support the design of change-detection systems for single-sensor VHR remote sensing images, addresses and integrates into the general approach a strategy to effectively deal with multisensor information, i.e., to perform change detection between VHR images acquired by different multispectral sensors on two dates. This is achieved by defining procedures for the homogenization of radiometric, spectral and geometric image properties. These procedures map images into a common feature space where the information acquired by different multispectral sensors becomes comparable across time. Although the approach is general, here we optimize it for the detection of changes in vegetation and urban areas by employing features based on linear transformations (Tasseled Caps and Orthogonal Equations), which are shown to be effective for representing the multisensor information in a homogeneous physical way irrespective of the considered sensor. Experiments on multitemporal images acquired by different VHR satellite systems (i.e., QuickBird, WorldView-2 and GeoEye-1) confirm the effectiveness of the proposed approach.

  15. Toward High Altitude Airship Ground-Based Boresight Calibration of Hyperspectral Pushbroom Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Aiwu Zhang

    2015-12-01

    The complexity of single-linear-array hyperspectral pushbroom imaging from a high altitude airship (HAA) without a three-axis stabilized platform is much greater than that of spaceborne and airborne systems. Owing to the effects of air pressure, temperature and airflow, large pitch and roll angles tend to appear frequently, creating pushbroom images with severe geometric distortions. Thus, the in-flight calibration procedure is not appropriate for single linear pushbroom sensors on an HAA without a three-axis stabilized platform. To address this problem, a new ground-based boresight calibration method is proposed. First, a coordinate transformation model is developed for direct georeferencing (DG) of the linear imaging sensor, and the linear error equation is then derived from it using the Taylor expansion formula. Second, the boresight misalignments are worked out using an iterative least squares method with a few ground control points (GCPs) and ground-based side-scanning experiments. The proposed method is demonstrated by three sets of experiments: (i) the stability and reliability of the method are verified through simulation-based experiments; (ii) the boresight calibration is performed using ground-based experiments; and (iii) validation is done by applying it to the orthorectification of real hyperspectral pushbroom images from a HAA Earth observation payload system developed by our research team, “LanTianHao”. The test results show that the proposed boresight calibration approach significantly improves the quality of georeferencing by reducing the geometric distortions caused by boresight misalignments to a minimum.

  16. High Field Linear Magnetoresistance Sensors with Perpendicular Anisotropy L10-FePt Reference Layer

    Directory of Open Access Journals (Sweden)

    X. Liu

    2016-01-01

    High-field linear magnetoresistance is an important feature for magnetic sensors applied in magnetic levitation trains and high-field positioning measurements. Here, we investigate linear magnetoresistance in a Pt/FePt/ZnO/Fe/Pt multilayer magnetic sensor, where the FePt and Fe ferromagnetic layers exhibit out-of-plane and in-plane magnetic anisotropy, respectively. A perpendicular-anisotropy L10-FePt reference layer with large coercivity and high squareness ratio was obtained by in situ substrate heating. Linear magnetoresistance is observed in this sensor over a large range between +5 kOe and −5 kOe with the current parallel to the film plane. This L10-FePt based sensor is significant for extending the linear range and simplifying the preparation of future high-field magnetic sensors.

  17. CMOS foveal image sensor chip

    Science.gov (United States)

    Bandera, Cesar (Inventor); Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  18. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.

    Science.gov (United States)

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-11-01

    An implantable glucose sensor based on a CMOS image sensor and an optical sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, the hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.

  19. Application Of FA Sensor 2

    International Nuclear Information System (INIS)

    Park, Seon Ho

    1993-03-01

    This book introduces FA (factory automation) sensors, from basics to system construction. It covers light sensors such as photodiodes and phototransistors, photoelectric sensors, CCD-type image sensors, MOS-type image sensors, color sensors, CdS cells, and fiber-optic scopes. It also deals with direct position-detection sensors such as proximity switches, differential-motion sensors, photoelectric linear scales, and magnetic scales, as well as rotary sensors with a summary of rotary encoders, rotary encoder types and applications, flow sensors, and sensing technology.

  20. The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results

    Science.gov (United States)

    Crass, Jonathan; King, David; MacKay, Craig

    2014-08-01

    Many adaptive optics (AO) systems in use today require the use of bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear curvature wavefront sensor (nlCWFS), which offers improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on the nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO-corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects, and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high-speed reconstruction algorithm required for the nlCWFS technique. This includes strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads to obtain a prior for rapid convergence of the wavefront reconstruction. Finally, we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon-count strategies.

  1. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    Science.gov (United States)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  2. CMOS Image Sensors: Electronic Camera On A Chip

    Science.gov (United States)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  3. Image Sensor

    OpenAIRE

    Jerram, Paul; Stefanov, Konstantin

    2017-01-01

    An image sensor of the type that provides charge multiplication by impact ionisation has a plurality of multiplication elements. Each element is arranged to receive charge from photosensitive elements of an image area, and each element comprises a sequence of electrodes to move charge along a transport path. Each of the electrodes has an edge defining a boundary with a first electrode, a maximum width across the charge transport path, and a leading edge that defines a boundary with a second electrode ...

  4. Priority image transmission in wireless sensor networks

    International Nuclear Information System (INIS)

    Nasri, M.; Helali, A.; Sghaier, H.; Maaref, H.

    2011-01-01

    The emerging technology of recent years has allowed the development of new sensors equipped with wireless communication, which can be organized into cooperative autonomous networks. Application areas for wireless sensor networks (WSNs) include home automation, health care services, the military domain, and environmental monitoring. These nodes have limited processing capacity and limited storage capability, and, in particular, they are limited in energy. In addition, such nodes are powered by tiny batteries, so their lifetime is very limited. During image processing and transmission to the destination, the lifetime of the sensor network decreases quickly due to battery and processing power constraints. Digital image transmission is therefore a significant challenge for image-sensor-based wireless sensor networks. Based on wavelet image compression, we propose a novel, robust and energy-efficient scheme, called Priority Image Transmission (PIT), which provides various priority levels during image transmission in a WSN. Different priorities in the compressed image are considered: the information for the significant wavelet coefficients is transmitted with higher quality assurance, whereas relatively less important coefficients are transmitted with lower overhead. Simulation results show that the proposed scheme prolongs the system lifetime and achieves higher energy efficiency in a WSN with an acceptable compromise on image quality.
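
    A hedged sketch of the priority idea: split a one-level wavelet decomposition into a high-priority approximation band and lower-priority detail bands, so the network can protect the perceptually important coefficients more strongly. A hand-rolled Haar transform keeps the sketch self-contained; the priority classes and packet structure are illustrative assumptions, not the PIT protocol itself.

    ```python
    import numpy as np

    def haar2d(block):
        """One-level 2-D Haar transform of an even-sized block.
        Returns (LL, (LH, HL, HH)); LL carries most of the image energy."""
        a = (block[:, 0::2] + block[:, 1::2]) / 2.0      # horizontal average
        d = (block[:, 0::2] - block[:, 1::2]) / 2.0      # horizontal detail
        ll = (a[0::2, :] + a[1::2, :]) / 2.0
        lh = (a[0::2, :] - a[1::2, :]) / 2.0
        hl = (d[0::2, :] + d[1::2, :]) / 2.0
        hh = (d[0::2, :] - d[1::2, :]) / 2.0
        return ll, (lh, hl, hh)

    def build_priority_packets(image):
        """Assign wavelet sub-bands to priority classes for transmission:
        class 0 (highest) = LL approximation, class 1 = detail sub-bands.
        A PIT-like scheme would also give class 0 stronger delivery guarantees."""
        ll, details = haar2d(image.astype(float))
        packets = [("priority-0", ll)]
        packets += [("priority-1", band) for band in details]
        return packets

    img = np.arange(64, dtype=float).reshape(8, 8)        # stand-in for a camera frame
    for label, band in build_priority_packets(img):
        print(label, band.shape, f"energy={np.sum(band**2):.1f}")
    ```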

  5. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B.; Heidrich, Wolfgang

    2014-01-01

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images and the derivation of a new physically motivated model for transient images with drastically improved sparsity.

  6. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix

    2014-10-17

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images and the derivation of a new physically motivated model for transient images with drastically improved sparsity.

  7. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    Science.gov (United States)

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
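
    A hedged sketch of one way to follow the idea of a linearized calibration model: fit a power-law MOX response as a straight line in log-log space, then apply the usual 3.3·σ/slope style detection criterion and invert the fitted curve. All data, constants and the exact statistical treatment here are illustrative assumptions; the paper's procedure is more careful than this sketch.

    ```python
    import numpy as np

    # Illustrative calibration data: MOX sensor response vs CO concentration (made up).
    conc = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])     # ppm
    resp = np.array([0.11, 0.20, 0.33, 0.52, 0.95, 1.50])     # normalised response

    # 1) Linearised calibration model: the power law y = a * c^b becomes a
    #    straight line in log-log space, log y = log a + b log c.
    b, log_a = np.polyfit(np.log(conc), np.log(resp), 1)
    a = np.exp(log_a)

    # 2) Blank noise estimated from replicate zero-gas measurements (also made up).
    blank_replicates = np.array([0.004, -0.002, 0.006, 0.001, -0.003, 0.002])
    sigma_blank = blank_replicates.std(ddof=1)

    # 3) Signal-domain detection limit (blank mean ~ 0) inverted through the fit.
    #    The 3.3 factor is the usual IUPAC-style choice.
    y_lod = 3.3 * sigma_blank
    c_lod = (y_lod / a) ** (1.0 / b)
    print(f"fit: y = {a:.3f} * c^{b:.3f}   LOD ~ {c_lod:.2f} ppm")
    ```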

  8. Linear wide angle sun sensor for spinning satellites

    Science.gov (United States)

    Philip, M. P.; Kalakrishnan, B.; Jain, Y. K.

    1983-08-01

    A concept is developed which overcomes the defects of nonlinear response and limited range exhibited by the V-slit, N-slit, and crossed-slit sun sensors normally used for sun elevation angle measurements on spinning spacecraft. Two versions of sensors based on this concept, which give a linear output and have a range of nearly ±90° of elevation angle, are examined. Results are presented for the application of the twin-slit version of the sun sensor in three Indian satellites, Rohini, Apple, and Bhaskara II, where it was successfully used for spin rate control and spin-axis orientation control corrections as well as for sun elevation angle and spin period measurements.

  9. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
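
    A minimal sketch of how constant per-sample time and memory can be combined with a per-segment maximum-error guarantee is shown below. It uses a classic cone-intersection (slope-bound) scheme and is only meant to illustrate the class of streaming PLA algorithms the paper belongs to; it is not the authors' algorithm.

```python
# Online piecewise linear approximation with O(1) work and memory per sample:
# keep a cone of admissible slopes from the current segment's anchor point and
# close the segment when the cone becomes empty.
from typing import List, Tuple

def online_pla(samples: List[Tuple[float, float]], eps: float) -> List[Tuple[float, float, float]]:
    """Return segments as (t_start, t_end, slope); every sample of a segment lies
    within +/- eps (vertically) of the segment's line through its anchor point."""
    segments = []
    it = iter(samples)
    t0, y0 = next(it)                       # anchor of the current segment
    lo, hi = float("-inf"), float("inf")    # admissible slope bounds
    t_prev = t0
    for t, y in it:
        # Slopes that keep this sample within eps of a line through the anchor.
        s_lo = (y - eps - y0) / (t - t0)
        s_hi = (y + eps - y0) / (t - t0)
        new_lo, new_hi = max(lo, s_lo), min(hi, s_hi)
        if new_lo > new_hi:
            # Cone is empty: close the segment at the previous sample, restart here.
            segments.append((t0, t_prev, (lo + hi) / 2.0))
            t0, y0 = t, y
            lo, hi = float("-inf"), float("inf")
        else:
            lo, hi = new_lo, new_hi
        t_prev = t
    slope = 0.0 if lo == float("-inf") else (lo + hi) / 2.0   # degenerate last segment
    segments.append((t0, t_prev, slope))
    return segments

print(online_pla([(0, 0.0), (1, 1.1), (2, 1.9), (3, 5.0), (4, 6.1)], eps=0.2))
```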

  10. Beam imaging sensor and method for using same

    Energy Technology Data Exchange (ETDEWEB)

    McAninch, Michael D.; Root, Jeffrey J.

    2017-01-03

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  11. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    Science.gov (United States)

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  12. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2014-08-01

    Full Text Available This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet’s built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  13. Micro-digital sun sensor: an imaging sensor for space applications

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.; Büttgen, B.; Hakkesteegt, H.C.; Jasen, H.; Leijtens, J.A.P.

    2010-01-01

    Micro-Digital Sun Sensor is an attitude sensor which senses relative position of micro-satellites to the sun in space. It is composed of a solar cell power supply, a RF communication block and an imaging chip which is called APS+. The APS+ integrates a CMOS Active Pixel Sensor (APS) of 512×512

  14. Non-linear effects in transition edge sensors for X-ray detection

    International Nuclear Information System (INIS)

    Bandler, S.R.; Figueroa-Feliciano, E.; Iyomoto, N.; Kelley, R.L.; Kilbourne, C.A.; Murphy, K.D.; Porter, F.S.; Saab, T.; Sadleir, J.

    2006-01-01

    In a microcalorimeter that uses a transition-edge sensor to detect energy depositions, the small signal energy resolution improves with decreasing heat capacity. This improvement remains true up to the point where non-linear and saturation effects become significant. This happens when the energy deposition causes a significant change in the sensor resistance. Not only does the signal size become a non-linear function of the energy deposited, but also the noise becomes non-stationary over the duration of the pulse. Algorithms have been developed that can calculate the optimal performance given this non-linear behavior, but they typically require significant processing and calibration work, both of which are impractical for space missions. We have investigated the relative importance of the various non-linear effects, with the hope that a computationally simple transformation can overcome the largest of the non-linear and non-stationary effects, producing a highly linear 'gain' for pulse-height versus energy, and close to the best energy resolution at all energies when using a Wiener filter.

  15. Establishing imaging sensor specifications for digital still cameras

    Science.gov (United States)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  16. Fusion of Images from Dissimilar Sensor Systems

    National Research Council Canada - National Science Library

    Chow, Khin

    2004-01-01

    Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...

  17. Collaborative Image Coding and Transmission over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Min Wu

    2007-01-01

    Full Text Available The imaging sensors are able to provide intuitive visual information for quick recognition and decision. However, imaging sensors usually generate vast amounts of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape matching method to coarsely register images to find the maximal overlap, exploiting the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the regions of the targets and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
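
    The following toy sketch illustrates only the background-subtraction and region-transmission step described above: the sender ships the background once, then only a target's bounding box and pixels, and the receiver fuses them back. The threshold, array sizes, and synthetic data are assumptions for illustration; the paper's registration and coding stages are omitted.

```python
import numpy as np

def target_region(background: np.ndarray, frame: np.ndarray, thresh: float = 25.0):
    """Detect the target by background subtraction and return only its bounding
    box location and pixel data (what would actually be transmitted)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None                      # nothing to transmit for this frame
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return (y0, x0), frame[y0:y1, x0:x1]

def reconstruct(background: np.ndarray, region):
    """Monitoring-center side: fuse the stored background with the received patch."""
    img = background.copy()
    if region is not None:
        (y0, x0), patch = region
        img[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]] = patch
    return img

# Tiny synthetic example: a flat background and a frame with a bright square target.
bg = np.full((64, 64), 50, dtype=np.uint8)
fr = bg.copy(); fr[20:30, 40:50] = 200
rec = reconstruct(bg, target_region(bg, fr))
print(np.array_equal(rec, fr))  # True: the frame is recovered from background + patch
```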

  18. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    Science.gov (United States)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  19. Novel birefringence interrogation for Sagnac loop interferometer sensor with unlimited linear measurement range.

    Science.gov (United States)

    He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan

    2017-03-20

    A novel demodulation method for Sagnac loop interferometer-based sensors has been proposed and demonstrated, based on unwrapping the phase changes through birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer has been used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range have been accomplished with a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer-based sensors and has an unlimited linear measurement range, outperforming the conventional demodulation method based on peak/dip tracing. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error has been discussed in this work. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and might greatly enhance the practical availability of this type of sensor.

  20. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    International Nuclear Information System (INIS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G.V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9v034, featuring 752 × 480 pixels, 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1–6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread-function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, MT9M032, featuring 1472 × 1096 pixels, 2.2 × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  1. Virtual View Image over Wireless Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Gamantyo Hendrantoro

    2011-12-01

    Full Text Available In general, visual sensors are applied to build virtual view images. When the number of visual sensors increases, the quantity and quality of the information improve. However, virtual view image generation is a challenging task in a Wireless Visual Sensor Network environment due to energy restrictions, computational complexity, and bandwidth limitations. Hence, this paper presents a new method of virtual view image generation from selected cameras in a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experimental results showed that this method could minimize the number of transmitted images while preserving sufficient information.

  2. Imaging system design and image interpolation based on CMOS image sensor

    Science.gov (United States)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
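
    As a concrete reference point for the non-edge case mentioned above, the sketch below performs plain bilinear demosaicing of an RGGB Bayer mosaic with small convolution kernels. The RGGB layout and the kernels are standard textbook choices, and the edge-oriented adaptive branch of the paper's method is deliberately left out.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """Bilinear interpolation of an RGGB Bayer mosaic (the non-edge-pixel case)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue kernel

    rgb = np.empty((h, w, 3))
    rgb[..., 0] = convolve(raw * r_mask, k_rb, mode="mirror")
    rgb[..., 1] = convolve(raw * g_mask, k_g,  mode="mirror")
    rgb[..., 2] = convolve(raw * b_mask, k_rb, mode="mirror")
    return rgb

demo = bilinear_demosaic(np.random.randint(0, 256, (8, 8)).astype(float))
print(demo.shape)  # (8, 8, 3)
```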

  3. Smart CMOS image sensor for lightning detection and imaging.

    Science.gov (United States)

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potentiality of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed in the frame of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing an efficient localization of a faint lightning pulse on the entire large format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.

  4. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Chen Qu

    2017-09-01

    Full Text Available The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a new type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation fields, and so on. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion problems, and so on. In view of this, we propose a novel dehazing approach based on a local consistent Markov random field (MRF) framework. The neighboring clique in traditional MRF is extended to the non-neighboring clique, which is defined on local consistent blocks based on two clues, where both the atmospheric light and transmission map satisfy the character of local consistency. In this framework, our model can strengthen the restriction of the whole image while incorporating more sophisticated statistical priors, resulting in more expressive power of modeling, thus solving inadequate detail recovery effectively and alleviating color distortion. Moreover, the local consistent MRF framework can obtain details while maintaining better results for dehazing, which effectively improves the image quality captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.

  5. Oriented Edge-Based Feature Descriptor for Multi-Sensor Image Alignment and Enhancement

    Directory of Open Access Journals (Sweden)

    Myung-Ho Ju

    2013-10-01

    Full Text Available In this paper, we present an efficient image alignment and enhancement method for multi-sensor images. The shape of an object captured in multi-sensor images can be determined by comparing the variability of contrast at corresponding edges across the multi-sensor images. Using this cue, we construct a robust feature descriptor based on the magnitudes of the oriented edges. Our proposed method enables fast image alignment by identifying matching features in multi-sensor images. We enhance the aligned multi-sensor images through the fusion of the salient regions from each image. The results of stitching the multi-sensor images and their enhancement demonstrate that our proposed method can align and enhance multi-sensor images more efficiently than previous methods.

  6. Contact CMOS imaging of gaseous oxygen sensor array.

    Science.gov (United States)

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  7. Multi-sensor image fusion and its applications

    CERN Document Server

    Blum, Rick S

    2005-01-01

    Taking another lesson from nature, the latest advances in image processing technology seek to combine image data from several diverse types of sensors in order to obtain a more accurate view of the scene: very much the same as we rely on our five senses. Multi-Sensor Image Fusion and Its Applications is the first text dedicated to the theory and practice of the registration and fusion of image data, covering such approaches as statistical methods, color-related techniques, model-based methods, and visual information display strategies.After a review of state-of-the-art image fusion techniques,

  8. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
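
    For reference, the sketch below computes the NCD with zlib as the compressor and compares two simple scan-order linearizations of the same image pair. The row-major and column-major scans are illustrative choices only; they are not necessarily among the four linearization types evaluated in the paper.

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance with compressor C:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two example linearizations: row-major and column-major scans of the 2D image.
def row_major(img: np.ndarray) -> bytes:
    return img.astype(np.uint8).tobytes(order="C")

def col_major(img: np.ndarray) -> bytes:
    return img.astype(np.uint8).tobytes(order="F")

a = np.random.randint(0, 256, (64, 64))
b = np.roll(a, 5, axis=1)                   # a spatially transformed copy of a

print("row-major NCD:", ncd(row_major(a), row_major(b)))
print("col-major NCD:", ncd(col_major(a), col_major(b)))
```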

  9. Fully wireless pressure sensor based on endoscopy images

    Science.gov (United States)

    Maeda, Yusaku; Mori, Hirohito; Nakagawa, Tomoaki; Takao, Hidekuni

    2018-04-01

    In this paper, the result of developing a fully wireless pressure sensor based on endoscopy images for endoscopic surgery is reported for the first time. The sensor device has a structural color with a nm-scale narrow gap, and the gap is changed by air pressure. The structural color of the sensor is acquired from camera images, so pressure detection can be realized using only the existing endoscope configuration. The inner air pressure of the human body should be measured with the sensor during flexible-endoscope operation. Air pressure monitoring has two important purposes. The first is to quantitatively measure tumor size under a constant air pressure for treatment selection. The second purpose is to prevent the endangerment of a patient due to over-transmission of air. The developed sensor was evaluated, and the detection principle based on only endoscopy images has been successfully demonstrated.

  10. 3D-LSI technology for image sensor

    International Nuclear Information System (INIS)

    Motoyoshi, Makoto; Koyanagi, Mitsumasa

    2009-01-01

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  11. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.

  12. CMOS sensors for atmospheric imaging

    Science.gov (United States)

    Pratlong, Jérôme; Burt, David; Jerram, Paul; Mayer, Frédéric; Walker, Andrew; Simpson, Robert; Johnson, Steven; Hubbard, Wendy

    2017-09-01

    Recent European atmospheric imaging missions have seen a move towards the use of CMOS sensors for the visible and NIR parts of the spectrum. These applications have particular challenges that are completely different to those that have driven the development of commercial sensors for applications such as cell-phone or SLR cameras. This paper will cover the design and performance of general-purpose image sensors that are to be used in the MTG (Meteosat Third Generation) and MetImage satellites and the technology challenges that they have presented. We will discuss how CMOS imagers have been designed with 4T pixel sizes of up to 250 μm square achieving good charge transfer efficiency, or low lag, with signal levels up to 2M electrons and with high line rates. In both devices a low noise analogue read-out chain is used with correlated double sampling to suppress the readout noise and give a maximum dynamic range that is significantly larger than in standard commercial devices. Radiation hardness is a particular challenge for CMOS detectors and both of these sensors have been designed to be fully radiation hard with high latch-up and single-event-upset tolerances, which is now silicon proven on MTG. We will also cover the impact of ionising radiation on these devices. Because with such large pixels the photodiodes have a large open area, front illumination technology is sufficient to meet the detection efficiency requirements but with thicker than standard epitaxial silicon to give improved IR response (note that this makes latch up protection even more important). However with narrow band illumination reflections from the front and back of the dielectric stack on the top of the sensor produce Fabry-Perot étalon effects, which have been minimised with process modifications. We will also cover the addition of precision narrow band filters inside the MTG package to provide a complete imaging subsystem. Control of reflected light is also critical in obtaining the

  13. A Wildlife Monitoring System Based on Wireless Image Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junguo Zhang

    2014-10-01

    Full Text Available Survival and development of wildlife sustains the balance and stability of the entire ecosystem. Wildlife monitoring can provide lots of information such as wildlife species, quantity, habits, quality of life and habitat conditions, to help researchers grasp the status and dynamics of wildlife resources, and to provide a basis for the effective protection, sustainable use, and scientific management of wildlife resources. Wildlife monitoring is the foundation of wildlife protection and management. Wireless Sensor Networks (WSN) technology has become the most popular technology in the field of information. With the advance of CMOS image sensor technology, wireless sensor networks combined with image sensors, namely Wireless Image Sensor Networks (WISN) technology, have emerged as an alternative in monitoring applications. Monitoring wildlife is one of its most promising applications. In this paper, the system architecture of a wildlife monitoring system based on wireless image sensor networks is presented to overcome the shortcomings of traditional monitoring methods. Specifically, some key issues including the design of wireless image sensor nodes and software process design have been studied and presented. A self-powered rotatable wireless infrared image sensor node based on ARM and an aggregation node designed for large amounts of data were developed. In addition, their corresponding software was designed. The proposed system is able to monitor wildlife accurately, automatically, and remotely in all-weather conditions, which lays the foundation for applications of wireless image sensor networks in wildlife monitoring.

  14. A linearization time-domain CMOS smart temperature sensor using a curvature compensation oscillator.

    Science.gov (United States)

    Chen, Chun-Chi; Chen, Hao-Wen

    2013-08-28

    This paper presents an area-efficient time-domain CMOS smart temperature sensor using a curvature compensation oscillator for linearity enhancement, with operability over a -40 to 120 °C temperature range. Inverter-based smart temperature sensors can substantially reduce the cost and circuit complexity of integrated temperature sensors. However, a large curvature exists in the temperature-to-time transfer curve of the inverter-based delay line and results in poor linearity of the sensor output. For cost reduction and error improvement, a temperature-to-pulse generator composed of a ring oscillator and a time amplifier was used to generate a thermal sensing pulse with a sufficient width proportional to the absolute temperature (PTAT). Then, a simple but effective on-chip curvature compensation oscillator is proposed to simultaneously count and compensate the PTAT pulse with curvature for linearization. With such a simple structure, the proposed sensor occupies an extremely small area of 0.07 mm2 in a TSMC 0.35-μm CMOS 2P4M digital process. By using an oscillator-based scheme, the proposed sensor achieves a fine resolution of 0.045 °C without significantly increasing the circuit area. With the curvature compensation, an inaccuracy of -1.2 to 0.2 °C is achieved over an operation range of -40 to 120 °C after two-point calibration for 14 packaged chips. The power consumption is measured as 23 mW at a sample rate of 10 samples/s.

  15. A time-resolved image sensor for tubeless streak cameras

    Science.gov (United States)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 μm.

  16. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    Science.gov (United States)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a large 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits. The dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights will be presented and compared with other CCD-based aerial sensors.

  17. Network compensation for missing sensors

    Science.gov (United States)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1991-01-01

    A network learning translation invariance algorithm to compute interpolation functions is presented. This algorithm with one fixed receptive field can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.

  18. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveal that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edge and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
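
    As a rough illustration of the tail-behavior argument above, the sketch below simulates a single Poisson model against a simple two-component Poisson mixture and compares their upper-tail quantiles. The mixture weight and inflation factor are arbitrary illustrative values, not the parameters fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mean_signal = 200_000, 20.0

# Single Poisson model of the pixel response at a fixed mean signal level.
poisson = rng.poisson(mean_signal, n)

# A two-component Poisson mixture (illustrative parameters): most samples follow
# the nominal rate, a small fraction follows an inflated rate, which lengthens
# the upper tail relative to the single-Poisson model.
w, inflate = 0.05, 2.5
comp = rng.random(n) < w
mixture = np.where(comp, rng.poisson(mean_signal * inflate, n), rng.poisson(mean_signal, n))

for q in (0.99, 0.999, 0.9999):
    print(f"q={q}: Poisson={np.quantile(poisson, q):.0f}  mixture={np.quantile(mixture, q):.0f}")
```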

  19. Rectification of aerial images using piecewise linear transformation

    International Nuclear Information System (INIS)

    Liew, L H; Lee, B Y; Wang, Y C; Cheah, W S

    2014-01-01

    Aerial images are widely used in various activities by providing visual records. This type of remotely sensed image is helpful in generating digital maps, managing ecology, monitoring crop growth and region surveying. Such images could provide insight into areas of interest that have lower altitude, particularly in regions where optical satellite imaging is prevented due to cloudiness. Aerial images captured using non-metric cameras contain real details of the scene as well as unexpected distortions. Distortions affect the actual length, direction and shape of objects in the images. There are many sources that could cause distortions, such as the lens, earth curvature, topographic relief and the attitude of the aircraft that carries the camera. These distortions occur differently, collectively and irregularly across the entire image. Image rectification is an essential image pre-processing step to eliminate, or at least reduce, the effect of distortions. In this paper, a non-parametric approach with piecewise linear transformation is investigated for rectifying distorted aerial images. The non-parametric approach requires a set of corresponding control points obtained from a reference image and a distorted image. The corresponding control points are then applied with piecewise linear transformation as the geometric transformation. Piecewise linear transformation divides the image into regions by triangulation. Different linear transformations are employed separately to the triangular regions instead of using a single transformation as the rectification model for the entire image. The result of rectification is evaluated using total root mean square error (RMSE). Experiments show that piecewise linear transformation could assist in improving the limitation of using a global transformation to rectify images.
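
    A minimal sketch of this triangulation-based rectification is given below using scikit-image's PiecewiseAffineTransform, which fits one affine (linear) transform per Delaunay triangle. The control-point coordinates and image sizes are hypothetical placeholders, and the library call is simply a convenient stand-in for the transformation described in the abstract.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

# Corresponding control points in (x, y) order: "ref" lies in the undistorted
# reference frame, "obs" is measured in the distorted aerial image. All values
# here are hypothetical placeholders.
ref = np.array([[0, 0], [190, 0], [190, 180], [0, 180], [95, 92]], dtype=float)
obs = np.array([[10, 12], [200, 15], [205, 190], [8, 195], [100, 100]], dtype=float)

# PiecewiseAffineTransform triangulates the control points (Delaunay) and fits a
# separate affine transform per triangle, i.e. a piecewise linear mapping.
# skimage's warp() expects a map from output (reference) coordinates to input
# (distorted) coordinates, so the transform is estimated in that direction.
tform = PiecewiseAffineTransform()
tform.estimate(ref, obs)

distorted = np.random.rand(220, 220)                 # stand-in for the aerial image
rectified = warp(distorted, tform, output_shape=(200, 200))

# In practice the rectification would be evaluated with the total RMSE over an
# independent set of check points, as described in the abstract above.
print(rectified.shape)
```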

  20. Self-Similarity Superresolution for Resource-Constrained Image Sensor Node in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuehai Wang

    2014-01-01

    Full Text Available Wireless sensor networks, in combination with image sensors, open up a grand sensing application field. It is a challenging problem to recover a high resolution (HR) image from its low resolution (LR) counterpart, especially for low-cost resource-constrained image sensors with limited resolution. Sparse representation-based techniques have been developed recently and increasingly to solve this ill-posed inverse problem. Most of these solutions are based on an external dictionary learned from a huge image gallery, consequently needing tremendous iteration and long time to match. In this paper, we explore the self-similarity inside the image itself, and propose a new combined self-similarity superresolution (SR) solution, with low computation cost and high recovery performance. In the self-similarity image super resolution model (SSIR), a small sparse dictionary is learned from the image itself by methods such as KSVD. The most similar patch is searched and specially combined during the sparse regulation iteration. Detailed information, such as edge sharpness, is preserved more faithfully and clearly. Experimental results confirm the effectiveness and efficiency of this double self-learning method in image super resolution.

  1. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    Science.gov (United States)

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa⁻¹ in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  2. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    Science.gov (United States)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix the amount of exposure by rows for each 8x8 pixel block. This CMOS sensor is not fully controllable via the pixels, and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.

  3. Thermoelectric infrared imaging sensors for automotive applications

    Science.gov (United States)

    Hirota, Masaki; Nakajima, Yasushi; Saito, Masanori; Satou, Fuminori; Uchiyama, Makoto

    2004-07-01

    This paper describes three low-cost thermoelectric infrared imaging sensors having 1,536, 2,304, and 10,800 element thermoelectric focal plane arrays (FPAs), respectively, and two experimental automotive application systems. The FPAs are basically fabricated with a conventional IC process and micromachining technologies and have a low cost potential. Among these sensors, the sensor having 2,304 elements provides a high responsivity of 5,500 V/W and a very small size by adopting a vacuum-sealed package integrated with a wide-angle ZnS lens. One experimental system incorporated in the Nissan ASV-2 is a blind spot pedestrian warning system that employs four infrared imaging sensors. This system helps alert the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. The system can also prevent the vehicle from moving in the direction of the pedestrian. The other is a rearview camera system with an infrared detection function. This system consists of a visible camera and infrared sensors, and it helps alert the driver to the presence of a pedestrian in a rear blind spot. Various issues that will need to be addressed in order to expand the automotive applications of IR imaging sensors in the future are also summarized. This performance is suitable for consumer electronics as well as automotive applications.

  4. Static Hyperspectral Fluorescence Imaging of Viscous Materials Based on a Linear Variable Filter Spectrometer

    Directory of Open Access Journals (Sweden)

    Alexander W. Koch

    2013-09-01

    Full Text Available This paper presents a low-cost hyperspectral measurement setup in a new application based on fluorescence detection in the visible (Vis) wavelength range. The aim of the setup is to take hyperspectral fluorescence images of viscous materials. Based on these images, fluorescent and non-fluorescent impurities in the viscous materials can be detected. For the illumination of the measurement object, a narrow-band high-power light-emitting diode (LED) with a center wavelength of 370 nm was used. The low-cost acquisition unit for the imaging consists of a linear variable filter (LVF) and a complementary metal oxide semiconductor (CMOS) 2D sensor array. The translucent wavelength range of the LVF is from 400 nm to 700 nm. For the confirmation of the concept, static measurements of fluorescent viscous materials with a non-fluorescent impurity have been performed and analyzed. With the presented setup, measurement surfaces in the micrometer range can be provided. The measurable minimum particle size of the impurities is in the nanometer range. The recording rate for the measurements depends on the exposure time of the used CMOS 2D sensor array and has been found to be in the microsecond range.

  5. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Science.gov (United States)

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

    A sweeping fingerprint sensor converts fingerprints on a row by row basis through image reconstruction techniques. However, a built fingerprint image might appear to be truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low time computational complexity was developed for detecting truncated fingerprints in a real time manner. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo truncated fingerprints containing similar characteristics to truncated ones. The experimental result has shown that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates. PMID:25835186

  6. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Directory of Open Access Journals (Sweden)

    Chi-Jim Chen

    2015-03-01

    Full Text Available A sweeping fingerprint sensor converts fingerprints on a row by row basis through image reconstruction techniques. However, a built fingerprint image might appear to be truncated and distorted when the finger was swept across the fingerprint sensor at a non-linear speed. If the truncated fingerprint images were enrolled as reference targets and collected by any automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would be decreased significantly. In this paper, a novel and effective methodology with low time computational complexity was developed for detecting truncated fingerprints in a real time manner. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo truncated fingerprints containing similar characteristics to truncated ones. The experimental result has shown that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates.
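
    The sketch below shows the general shape of such an SVM-based accept/reject step using scikit-learn. The three per-image features, the synthetic training data, and the decision threshold are invented for illustration and do not correspond to the filtering rules or descriptors used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-image feature vectors (e.g. aspect ratio, ridge continuity,
# foreground area) with labels: 1 = truncated, 0 = well-formed. Neither the
# features nor the data correspond to the paper's actual descriptors.
rng = np.random.default_rng(1)
X_ok  = rng.normal([1.4, 0.9, 0.7], 0.05, size=(200, 3))
X_bad = rng.normal([0.8, 0.5, 0.4], 0.10, size=(200, 3))
X = np.vstack([X_ok, X_bad])
y = np.array([0] * 200 + [1] * 200)

# An SVM follows the structural-risk-minimization principle referred to above:
# it maximizes the margin between the two classes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)

candidate = np.array([[0.85, 0.52, 0.45]])      # features of an incoming fingerprint
print("reject as truncated" if clf.predict(candidate)[0] == 1 else "accept for enrollment")
```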

  7. CMOS image sensor-based immunodetection by refractive-index change.

    Science.gov (United States)

    Devadhasan, Jasmine P; Kim, Sanghyo

    2012-01-01

    A complementary metal oxide semiconductor (CMOS) image sensor is an intriguing technology for the development of a novel biosensor. Indeed, the mechanism by which a CMOS image sensor detects the antigen-antibody (Ag-Ab) interaction at the nanoscale has been ambiguous so far, and more extensive research has been necessary to achieve point-of-care diagnostic devices. This research has demonstrated a CMOS image sensor-based analysis of cardiovascular disease markers, such as C-reactive protein (CRP) and troponin I, through Ag-Ab interactions on indium nanoparticle (InNP) substrates by simple photon count variation. The developed sensor is able to detect proteins even at fg/mL concentrations under ordinary room light. Possible mechanisms, such as dielectric constant and refractive-index changes, have been studied and proposed. A dramatic change in the refractive index after protein adsorption on an InNP substrate was observed to be the predominant factor involved in CMOS image sensor-based immunoassay.

  8. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics and the linear subpixel feature; the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  9. Linear all-fiber temperature sensor based on macro-bent erbium doped fiber

    International Nuclear Information System (INIS)

    Hajireza, P; Cham, C L; Kumar, D; Abdul-Rashid, H A; Emami, S D; Harun, S W

    2010-01-01

    A new all-fiber temperature sensor is proposed and demonstrated based on a pair of 1-meter erbium-doped fibers (EDFs), one macro-bent and the other straight. The sensor has a linear normalized loss (dB) response to temperature at a 6.5 mm bending radius and 1580 nm input wavelength. The main advantage of this sensor is its high temperature resolution (less than 1 °C) and sensitivity (0.03 dB/°C), due to the combination of the temperature dependence of the EDF and the bending loss. The proposed silica-based sensor has the potential for wide-range and high-temperature applications in harsh environments.

  10. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression.

    Science.gov (United States)

    Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B

    2017-01-01

    An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
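
    The sketch below applies the two corrections described above: the generalized linear function quoted in the abstract, and an individualized correction fitted by ordinary least squares against a reference thermometer. The per-sensor readings used to fit the individualized correction are hypothetical; only the generalized-function coefficients come from the abstract.

```python
import numpy as np

def generalized_correction(t_sensor):
    """Generalized linear correction quoted in the abstract above:
    corrected (degC) = 1.00375 * sensor (degC) - 0.205549."""
    return 1.00375 * np.asarray(t_sensor) - 0.205549

def individualized_correction(t_sensor, t_reference):
    """Fit a per-sensor linear correction against a certified reference
    thermometer (water-bath calibration) and return the correction function."""
    slope, intercept = np.polyfit(t_sensor, t_reference, 1)
    return lambda t: slope * np.asarray(t) + intercept

# Hypothetical water-bath readings for one sensor paired with the five mean
# reference temperatures listed in the abstract.
sensor_readings    = np.array([35.20, 37.41, 39.55, 41.70, 43.58])
reference_readings = np.array([35.12, 37.33, 39.48, 41.58, 43.47])

correct = individualized_correction(sensor_readings, reference_readings)
print(generalized_correction(38.0), correct(38.0))
```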

  11. Combined Simulation of a Micro Permanent Magnetic Linear Contactless Displacement Sensor

    Directory of Open Access Journals (Sweden)

    Jing Gao

    2010-09-01

    Full Text Available The permanent magnetic linear contactless displacement (PLCD) sensor is a new type of displacement sensor operating on the magnetic inductive principle. It has many excellent properties and has already been used for many applications. In this article a Micro-PLCD sensor which can be used for microelectromechanical system (MEMS) measurements is designed and simulated with the CST EM STUDIO® software, including building a virtual model, magnetostatic calculations, low frequency calculations, steady current calculations and thermal calculations. The influence of some important parameters such as air gap dimension, working frequency, coil current and eddy currents etc. is studied in depth.

  12. Position sensor for linear synchronous motors employing halbach arrays

    Science.gov (United States)

    Post, Richard Freeman

    2014-12-23

    A position sensor suitable for use in linear synchronous motor (LSM) drive systems employing Halbach arrays to create their magnetic fields is described. The system has several advantages over previously employed ones, especially in its simplicity and its freedom from being affected by weather conditions, accumulated dirt, or electrical interference from the LSM system itself.

  13. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP is interesting and unexpected to both students and many faculty. (Contains 2 tables and 11 figures.)

  14. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
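
    For readers unfamiliar with the LIP formalism referenced above, the classical operations can be sketched as follows. This is a minimal sketch of the standard LIP model only, not the paper's generalized model or its energy-preserving tone-mapping algorithm; M is the assumed gray-tone range and the image is taken to be a float array in [0, M).

        import numpy as np

        M = 256.0  # assumed gray-tone range of the classical LIP model

        def lip_add(f, g):
            """Classical LIP addition: f (+) g = f + g - f*g/M."""
            return f + g - f * g / M

        def lip_scalar_mul(lam, f):
            """Classical LIP scalar multiplication: lam (x) f = M - M*(1 - f/M)**lam."""
            return M - M * np.power(1.0 - f / M, lam)

        # Toy use of the scalar multiplication with lam = 0.5 on a hypothetical image in [0, M).
        image = np.random.uniform(0.0, M, size=(4, 4))
        toned = lip_scalar_mul(0.5, image)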

  15. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression

    Directory of Open Access Journals (Sweden)

    Andrew P. Hunt

    2017-04-01

    Full Text Available An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of −0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) of 0.00–0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) − 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).

  16. Flexible Ferroelectric Sensors with Ultrahigh Pressure Sensitivity and Linear Response over Exceptionally Broad Pressure Range.

    Science.gov (United States)

    Lee, Youngoh; Park, Jonghwa; Cho, Soowon; Shin, Young-Eun; Lee, Hochan; Kim, Jinyoung; Myoung, Jinyoung; Cho, Seungse; Kang, Saewon; Baig, Chunggi; Ko, Hyunhyub

    2018-04-24

    Flexible pressure sensors with a high sensitivity over a broad linear range can simplify wearable sensing systems without additional signal processing for the linear output, enabling device miniaturization and low power consumption. Here, we demonstrate a flexible ferroelectric sensor with ultrahigh pressure sensitivity and linear response over an exceptionally broad pressure range based on the material and structural design of ferroelectric composites with a multilayer interlocked microdome geometry. Due to the stress concentration between interlocked microdome arrays and increased contact area in the multilayer design, the flexible ferroelectric sensors could perceive static/dynamic pressure with high sensitivity (47.7 kPa⁻¹, 1.3 Pa minimum detection). In addition, efficient stress distribution between stacked multilayers enables linear sensing over exceptionally broad pressure range (0.0013-353 kPa) with fast response time (20 ms) and high reliability over 5000 repetitive cycles even at an extremely high pressure of 272 kPa. Our sensor can be used to monitor diverse stimuli from a low to a high pressure range including weak gas flow, acoustic sound, wrist pulse pressure, respiration, and foot pressure with a single device.

  17. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device which includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and an optical wireless communication scheme. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit multi-spectral optical signals such as visible, infrared and ultraviolet light, an increase in data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By making the optical rate of the LED array the same as the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments based on a practical test bed system, we confirm the feasibility of the proposed vision communications based on an LED array and an image sensor.

  18. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    Science.gov (United States)

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  19. CMOS-sensors for energy-resolved X-ray imaging

    International Nuclear Information System (INIS)

    Doering, D.; Amar-Youcef, S.; Deveaux, M.; Linnik, B.; Müntz, C.; Stroth, Joachim; Baudot, J.; Dulinski, W.; Kachel, M.

    2016-01-01

    Due to their low noise, CMOS Monolithic Active Pixel Sensors are suited to sense X-rays with a few keV quantum energy, which is of interest for high resolution X-ray imaging. Moreover, the good energy resolution of the silicon sensors might be used to measure this quantum energy. Combining both features with the good spatial resolution of CMOS sensors opens the potential to build "color sensitive" X-ray cameras. Taking such colored images is hampered by the need to operate the CMOS sensors in a single photon counting mode, which restricts the photon flux capability of the sensors. More importantly, the charge sharing between the pixels smears the potentially good energy resolution of the sensors. Based on our experience with CMOS sensors for charged particle tracking, we studied techniques to overcome the latter by means of an offline processing of the data obtained from a CMOS sensor prototype. We found that the energy resolution of the pixels can be recovered at the expense of reduced quantum efficiency. We will introduce the results of our study and discuss the feasibility of taking colored X-ray pictures with CMOS sensors.

  20. A Monolithic CMOS Magnetic Hall Sensor with High Sensitivity and Linearity Characteristics.

    Science.gov (United States)

    Huang, Haiyun; Wang, Dejun; Xu, Yue

    2015-10-27

    This paper presents a fully integrated linear Hall sensor by means of 0.8 μm high voltage complementary metal-oxide semiconductor (CMOS) technology. This monolithic Hall sensor chip features a highly sensitive horizontal switched Hall plate and an efficient signal conditioner using a dynamic offset cancellation technique. An improved cross-like Hall plate achieves high magnetic sensitivity and low offset. A new spinning current modulator stabilizes the quiescent output voltage and improves the reliability of the signal conditioner. The test results show that at a 5 V supply voltage, the maximum Hall output voltage of the monolithic Hall sensor microsystem is up to ±2.1 V and the linearity of the Hall output voltage is higher than 99% in the magnetic flux density range from ±5 mT to ±175 mT. The output equivalent residual offset is 0.48 mT and the static power consumption is 20 mW.

  1. A Monolithic CMOS Magnetic Hall Sensor with High Sensitivity and Linearity Characteristics

    Directory of Open Access Journals (Sweden)

    Haiyun Huang

    2015-10-01

    Full Text Available This paper presents a fully integrated linear Hall sensor by means of 0.8 μm high voltage complementary metal-oxide semiconductor (CMOS) technology. This monolithic Hall sensor chip features a highly sensitive horizontal switched Hall plate and an efficient signal conditioner using a dynamic offset cancellation technique. An improved cross-like Hall plate achieves high magnetic sensitivity and low offset. A new spinning current modulator stabilizes the quiescent output voltage and improves the reliability of the signal conditioner. The test results show that at a 5 V supply voltage, the maximum Hall output voltage of the monolithic Hall sensor microsystem is up to ±2.1 V and the linearity of the Hall output voltage is higher than 99% in the magnetic flux density range from ±5 mT to ±175 mT. The output equivalent residual offset is 0.48 mT and the static power consumption is 20 mW.

  2. Non-linear Ultrasound Imaging

    DEFF Research Database (Denmark)

    Du, Yigang

    .3% relative to the measurement from a 1 inch diameter transducer. A preliminary study for harmonic imaging using synthetic aperture sequential beamforming (SASB) has been demonstrated. A wire phantom underwater measurement is made by an experimental synthetic aperture real-time ultrasound scanner (SARUS) with a linear array transducer. The second harmonic imaging is obtained by a pulse inversion technique. The received data is beamformed by the SASB using a Beamformation Toolbox. In the measurements the lateral resolution at -6 dB is improved by 66% compared to the conventional imaging algorithm. There is also a 35% improvement for the lateral resolution at -6 dB compared with the sole harmonic imaging and a 46% improvement compared with merely using the SASB.

  3. Study of photoconductor-based radiological image sensors

    International Nuclear Information System (INIS)

    Beaumont, Francois

    1989-01-01

    Because of the evolution of medical imaging techniques towards digital systems, it is necessary to replace radiological film, which has many drawbacks, with a detector that is just as efficient and quickly delivers a digitizable signal. The purpose of this thesis is to find new X-ray digital imaging processes using photoconductor materials such as amorphous selenium. After reviewing the principle of direct radiology and the functions to be served by the X-ray sensor (i.e. detection, memory, assignment, visualization), we explain the specifications. We especially show the constraints due to the object to be radiographed (condition of minimal exposure) and to the readout signal (electronic detection noise associated with a readout frequency). As a result of this study, a first photoconductor sensor could be designed. Its principle is based on photo-carrier trapping at the dielectric-photoconductor interface. The readout system requires scanning a laser beam over the sensor surface. The dielectric-photoconductor structure enabled us to estimate the possibilities offered by the sensor and to build a complete X-ray imaging system. The originality of the thermo-dielectric sensor, which was studied next, is that it allows a thermally assigned readout. The chosen system consists of varying the capacitance of a ferroelectric polymer whose dielectric permittivity is low at room temperature. The thermo-dielectric material was studied by thermal or Joule-effect stimulation. During our experiments, trapping was found in a sensor made of amorphous selenium between two electrodes. This new effect was characterized and enabled us to propose a first interpretation. Finally, the comparison of these new sensor concepts with radiological film shows the advantages of the proposed solution. (author) [fr

  4. Fabricating Optical Fiber Imaging Sensors Using Inkjet Printing Technology: a pH Sensor Proof-of-Concept

    Energy Technology Data Exchange (ETDEWEB)

    Carter, J C; Alvis, R M; Brown, S B; Langry, K C; Wilson, T S; McBride, M T; Myrick, M L; Cox, W R; Grove, M E; Colston, B W

    2005-03-01

    We demonstrate the feasibility of using Drop-on-Demand microjet printing technology for fabricating imaging sensors by reproducibly printing an array of photopolymerizable sensing elements, containing a pH sensitive indicator, on the surface of an optical fiber image guide. The reproducibility of the microjet printing process is excellent for microdot (i.e. micron-sized polymer) sensor diameter (92.2 ± 2.2 microns), height (35.0 ± 1.0 microns), and roundness (0.00072 ± 0.00023). pH sensors were evaluated in terms of pH sensing ability (≤2% sensor variation), response time, and hysteresis using a custom fluorescence imaging system. In addition, the microjet technique has distinct advantages over other fabrication methods, which are discussed in detail.

  5. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    Science.gov (United States)

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and pixel address/digital control circuit; the image data is read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.

  6. Lightning Imaging Sensor (LIS) on TRMM Science Data V4

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lightning Imaging Sensor (LIS) Science Data was collected by the Lightning Imaging Sensor (LIS), which was an instrument on the Tropical Rainfall Measurement...

  7. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available The swipe sensor is one of many biometric authentication sensor types that are widely applied in embedded devices. The sensor produces an overlap on every pixel block of the image, so the picture requires a reconstruction process before heading to the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices that have limited computing resources. In this paper, image reconstruction is proposed using the predictive overlap method, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 × 8 pixels, where each image has an overlap in each block. The results reveal that computation performance can improve by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.

  8. The challenge of sCMOS image sensor technology to EMCCD

    Science.gov (United States)

    Chang, Weijing; Dai, Fang; Na, Qiyue

    2018-02-01

    In the field of low illumination image sensors, the noise of the latest scientific-grade CMOS image sensor is close to that of the EMCCD, and the industry thinks it has the potential to compete with and even replace the EMCCD. Therefore, we selected several typical sCMOS and EMCCD image sensors and cameras to compare their performance parameters. The results show that the signal-to-noise ratio of sCMOS is close to that of EMCCD, and the other parameters are superior. However, the signal-to-noise ratio is very important for low illumination imaging, and the actual imaging results of sCMOS are not ideal. EMCCD is still the first choice in the high-performance application field.

  9. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    Science.gov (United States)

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuit. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their prevention capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor to minimize the undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor, the possibility exists for customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Simultaneous live cell imaging using dual FRET sensors with a single excitation light.

    Directory of Open Access Journals (Sweden)

    Yusuke Niino

    Full Text Available Fluorescence resonance energy transfer (FRET) between fluorescent proteins is a powerful tool for visualization of signal transduction in living cells, and recently, some strategies for imaging of dual FRET pairs in a single cell have been reported. However, these necessitate alternating the excitation light between two different wavelengths to avoid the spectral overlap, resulting in sequential detection with a lag time. Thus, to follow fast signal dynamics or signal changes in highly motile cells, a single-excitation dual-FRET method is required. Here we achieved this by using four-color imaging with a single excitation light and subsequent linear unmixing to distinguish fluorescent proteins. We constructed new FRET sensors with Sapphire/RFP to combine with CFP/YFP, and accomplished simultaneous imaging of cAMP and cGMP in single cells. We confirmed that the signal amplitude of our dual FRET measurement is comparable to that of a conventional single FRET measurement. Finally, we demonstrated monitoring of both intracellular Ca(2+) and cAMP in highly motile cardiac myocytes. By cancelling out artifacts caused by the movement of the cell, this method expands the applicability of the combined use of dual FRET sensors to cell samples with high motility.
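
    The linear unmixing step mentioned above can be viewed as an ordinary least-squares problem: each pixel's channel intensities are modeled as a linear mixture of reference emission spectra. The sketch below uses a hypothetical 4 × 4 reference matrix and made-up counts; it does not reproduce the paper's actual spectra or channel definitions.

        import numpy as np

        # Hypothetical reference spectra: rows = detection channels, columns = fluorophores
        # (e.g. CFP, YFP, Sapphire, RFP); each column is one fluorophore's relative
        # emission in the four detection channels.
        S = np.array([
            [0.80, 0.15, 0.30, 0.02],
            [0.15, 0.75, 0.10, 0.05],
            [0.04, 0.05, 0.55, 0.10],
            [0.01, 0.05, 0.05, 0.83],
        ])

        def unmix(pixel_counts):
            """Least-squares estimate of per-fluorophore abundances from channel counts."""
            abundances, *_ = np.linalg.lstsq(S, pixel_counts, rcond=None)
            return abundances

        measured = np.array([120.0, 340.0, 80.0, 210.0])  # hypothetical counts in 4 channels
        print(unmix(measured))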

  11. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    Science.gov (United States)

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the rate of imaging. In terms of the slight variations of quantization codes among different pixel exposures towards the same object, the pixel array is divided into two groups: one is for coarse quantization of high bits only, and the other one is for fine quantization of low bits. Then, the complete quantization codes are composed from both results of the coarse-and-fine quantization. The equivalent operation comparably reduces the total required bit numbers of the quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption of slices of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearity of the two versions is 99.74% and 99.99%, respectively.
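
    The code composition described above can be sketched as a simple bit-recombination step. The 5/5 split below is an illustrative assumption for a 10-bit quantizer, not necessarily the allocation used in the paper.

        # Minimal sketch of coarse-and-fine code composition for a 10-bit quantizer.
        TOTAL_BITS = 10
        LOW_BITS = 5  # bits resolved by the "fine" pixel group (assumed split)

        def compose_code(coarse_high, fine_low):
            """Combine coarse high bits and fine low bits into a complete quantization code."""
            assert 0 <= coarse_high < (1 << (TOTAL_BITS - LOW_BITS))
            assert 0 <= fine_low < (1 << LOW_BITS)
            return (coarse_high << LOW_BITS) | fine_low

        # Example: coarse group resolves 0b10110, fine group resolves 0b01101.
        print(bin(compose_code(0b10110, 0b01101)))  # -> 0b1011001101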

  12. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    Science.gov (United States)

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke⁻. Readout noise under the highest pixel gain condition is 1 e⁻ with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
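
    Merging the two gain paths into one linear signal can be sketched as below: take the high-gain sample where it is unsaturated and otherwise use the low-gain sample scaled by the gain ratio. The threshold and gain ratio are illustrative assumptions, not the sensor's actual on-chip linearization parameters.

        import numpy as np

        def merge_sehdr(high_gain, low_gain, gain_ratio=16.0, sat_level=4000):
            """Combine dual-gain readouts into a single linear (HDR) signal.

            high_gain, low_gain: raw codes from the two readout paths.
            gain_ratio: assumed ratio between the high-gain and low-gain paths.
            sat_level: code above which the high-gain path is treated as saturated.
            """
            high_gain = np.asarray(high_gain, dtype=np.float64)
            low_gain = np.asarray(low_gain, dtype=np.float64)
            return np.where(high_gain < sat_level, high_gain, low_gain * gain_ratio)

        # Hypothetical pixel codes: the second pixel saturates the high-gain path.
        print(merge_sehdr([1200, 4095], [80, 900]))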

  13. A Dew Point Meter Comprising a Nanoporous Thin Film Alumina Humidity Sensor with a Linearizing Capacitance Measuring Electronics

    Directory of Open Access Journals (Sweden)

    Dilip Kumar Ghara

    2008-02-01

    Full Text Available A novel trace moisture analyzer is presented, comprising a capacitive nanoporous metal-oxide thin-film sensor and its measuring electronics. The change in capacitance of the sensor is due to absorption of water vapor by the pores. A simple capacitance-measuring circuit is developed which can detect any change in capacitance and correlate it to ambient humidity. The circuit can minimize the parasitic earth capacitance. The non-linear response of the sensor is linearized with a microcontroller-based linearizing circuit. The experimental results show a resolution of -4°C DP and accuracy within 2%.

  14. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud

    2013-01-01

    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable of achieving high dynamic range imaging.

  15. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    Science.gov (United States)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe the experimental image acquisition system we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.

  16. Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing.

    Science.gov (United States)

    Yan, Leyang; Zhang, Hui; Ye, Peiqing

    2017-04-06

    Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method.

  17. Multimodal Image Alignment via Linear Mapping between Feature Modalities.

    Science.gov (United States)

    Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James

    2017-01-01

    We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
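
    When landmark correspondences are already known, the linear mapping between feature modalities reduces to a least-squares problem, as sketched below. The paper solves the mapping and the correspondences jointly, which this toy example does not attempt; all data here are synthetic.

        import numpy as np

        def fit_linear_feature_map(F_src, F_dst):
            """Least-squares linear map W such that F_src @ W approximates F_dst."""
            W, *_ = np.linalg.lstsq(F_src, F_dst, rcond=None)
            return W

        # Synthetic features of 50 corresponding landmarks in two modalities.
        rng = np.random.default_rng(0)
        F_src = rng.normal(size=(50, 8))
        W_true = rng.normal(size=(8, 6))
        F_dst = F_src @ W_true + 0.01 * rng.normal(size=(50, 6))

        W = fit_linear_feature_map(F_src, F_dst)
        residual = np.linalg.norm(F_src @ W - F_dst)  # small residual = similar images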

  18. Performance study of double SOI image sensors

    Science.gov (United States)

    Miyoshi, T.; Arai, Y.; Fujita, Y.; Hamasaki, R.; Hara, K.; Ikegami, Y.; Kurachi, I.; Nishimura, R.; Ono, S.; Tauchi, K.; Tsuboyama, T.; Yamada, M.

    2018-02-01

    Double silicon-on-insulator (DSOI) sensors composed of two thin silicon layers and one thick silicon layer have been developed since 2011. The thick substrate consists of high resistivity silicon with p-n junctions while the thin layers are used as SOI-CMOS circuitry and as shielding to reduce the back-gate effect and crosstalk between the sensor and the circuitry. In 2014, a high-resolution integration-type pixel sensor, INTPIX8, was developed based on the DSOI concept. This device is fabricated using a Czochralski p-type (Cz-p) substrate in contrast to a single SOI (SSOI) device having a single thin silicon layer and a Float Zone p-type (FZ-p) substrate. In the present work, X-ray spectra of both DSOI and SSOI sensors were obtained using an Am-241 radiation source at four gain settings. The gain of the DSOI sensor was found to be approximately three times that of the SSOI device because the coupling capacitance is reduced by the DSOI structure. An X-ray imaging demonstration was also performed and high spatial resolution X-ray images were obtained.

  19. Special Sensor Microwave Imager/Sounder (SSMIS) Sensor Data Record (SDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  20. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.

  1. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.

  2. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  3. Research-grade CMOS image sensors for demanding space applications

    Science.gov (United States)

    Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre

    2017-11-01

    Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market has been dominated by CCD technology for a long time. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). All along the 90s, and thanks to their steadily improving performance, CIS started to be successfully used for more and more demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments of CIS prototypes built using an imaging CMOS process and of devices based on improved designs will be presented.

  4. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), has created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking has been proven and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  5. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    Science.gov (United States)

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
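
    The per-pixel, approximately-linear FPN calibration can be sketched as below: each pixel's raw response across a set of uniform stimuli is fit with a degree-1 polynomial against the array-mean response, and correction is then pure arithmetic. This is a simplified illustration of the idea, not the paper's fixed-point implementation or its spline-based photometric calibration.

        import numpy as np

        def calibrate_fpn(stack):
            """Per-pixel linear FPN calibration against the array-mean response.

            stack: (n_exposures, H, W) raw responses to a set of uniform stimuli.
            Returns per-pixel gain and offset mapping each pixel onto the mean response.
            """
            n, h, w = stack.shape
            reference = stack.reshape(n, -1).mean(axis=1)  # mean response per exposure
            gains = np.empty((h, w))
            offsets = np.empty((h, w))
            for i in range(h):
                for j in range(w):
                    g, o = np.polyfit(stack[:, i, j], reference, deg=1)
                    gains[i, j], offsets[i, j] = g, o
            return gains, offsets

        def correct_fpn(frame, gains, offsets):
            """Apply the per-pixel linear correction to a raw frame (arithmetic only)."""
            return gains * frame + offsets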

  6. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  7. Visualization of heavy ion-induced charge production in a CMOS image sensor

    CERN Document Server

    Végh, J; Klamra, W; Molnár, J; Norlin, LO; Novák, D; Sánchez-Crespo, A; Van der Marel, J; Fenyvesi, A; Valastyan, I; Sipos, A

    2004-01-01

    A commercial CMOS image sensor was irradiated with heavy ion beams in the several MeV energy range. The image sensor is equipped with a standard video output. The data were collected on-line through frame grabbing and analysed off-line after digitisation. It was shown that the response of the image sensor to the heavy ion bombardment varied with the type and energy of the projectiles. The sensor will be used for the CMS Barrel Muon Alignment system.

  8. Non-linear imaging condition to image fractures as non-welded interfaces

    NARCIS (Netherlands)

    Minato, S.; Ghose, R.

    2014-01-01

    Hydraulic properties of a fractured reservoir are often controlled by large fractures. In order to seismically detect and characterize them, a high-resolution imaging method is necessary. We apply a non-linear imaging condition to image fractures, considered as non-welded interfaces. We derive the

  9. Ageing effects on image sensors due to terrestrial cosmic radiation

    NARCIS (Netherlands)

    Nampoothiri, G.G.; Horemans, M.L.R.; Theuwissen, A.J.P.

    2011-01-01

    We analyze the “ageing” effect on image sensors introduced by neutrons present in natural (terrestrial) cosmic environment. The results obtained at sea level are corroborated for the first time with accelerated neutron beam tests and for various image sensor operation conditions. The results reveal

  10. A Fiber Bragg Grating Sensor Interrogation System Based on a Linearly Wavelength-Swept Thermo-Optic Laser Chip

    Science.gov (United States)

    Lee, Hyung-Seok; Lee, Hwi Don; Kim, Hyo Jin; Cho, Jae Du; Jeong, Myung Yung; Kim, Chang-Seok

    2014-01-01

    A linearized wavelength-swept thermo-optic laser chip was applied to demonstrate a fiber Bragg grating (FBG) sensor interrogation system. A broad tuning range of 11.8 nm was periodically obtained from the laser chip for a sweep rate of 16 Hz. To measure the linear time response of the reflection signal from the FBG sensor, a programmed driving signal was directly applied to the wavelength-swept laser chip. The linear wavelength response of the applied strain was clearly extracted with an R-squared value of 0.99994. To test the feasibility of the system for dynamic measurements, the dynamic strain was successfully interrogated with a repetition rate of 0.2 Hz by using this FBG sensor interrogation system. PMID:25177803

  11. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process

    Directory of Open Access Journals (Sweden)

    Isao Takayanagi

    2018-01-01

    Full Text Available To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.

  12. CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.

    Science.gov (United States)

    Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V

    2011-04-01

    In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target analyte responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2 sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.

  13. Characterization of modulated time-of-flight range image sensors

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2009-01-01

    A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10-100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
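
    A common way to recover the phase shift from such homodyne sampling is the four-bucket estimate sketched below; the exact sampling convention and modulation frequency vary between sensors, so the values here are illustrative assumptions rather than the characterization setup described in the paper.

        import numpy as np

        C = 299_792_458.0  # speed of light in m/s

        def tof_range(a0, a1, a2, a3, f_mod=30e6):
            """Range from four samples taken at 0, 90, 180 and 270 degree phase offsets."""
            phase = np.arctan2(a3 - a1, a0 - a2) % (2.0 * np.pi)  # envelope phase shift
            return C * phase / (4.0 * np.pi * f_mod)              # one-way distance

        # Hypothetical samples for one pixel with an assumed 30 MHz modulation frequency.
        print(tof_range(1450.0, 980.0, 550.0, 1020.0))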

  14. ANALYSIS OF SPECTRAL CHARACTERISTICS AMONG DIFFERENT SENSORS BY USE OF SIMULATED RS IMAGES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This research, by use of an RS image-simulating method, simulated apparent reflectance images at sensor level and ground-reflectance images for the corresponding bands of SPOT-HRV, CBERS-CCD, Landsat-TM and NOAA14-AVHRR. These images were used to analyze the differences among sensors caused by spectral sensitivity and atmospheric effects. The differences were analyzed using the Normalized Difference Vegetation Index (NDVI). The results showed that the differences in the sensors' spectral characteristics cause changes in their NDVI and reflectance values. When data from multiple sensors are applied to digital analysis, this error should be taken into account. Atmospheric effects make NDVI smaller, and atmospheric correction tends to increase NDVI values. The reflectances and NDVIs of different sensors can be used to analyze the differences among sensor characteristics. The spectral analysis method based on simulated RS images can provide a new way to design the spectral characteristics of new sensors.
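
    NDVI, used above to compare the simulated sensors, is computed per pixel from the near-infrared and red bands; a minimal sketch (with made-up reflectances) follows.

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
            nir = np.asarray(nir, dtype=np.float64)
            red = np.asarray(red, dtype=np.float64)
            return (nir - red) / (nir + red + eps)

        # Hypothetical band reflectances for two pixels of a simulated image.
        print(ndvi([0.42, 0.30], [0.08, 0.12]))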

  15. Laser-engraved carbon nanotube paper for instilling high sensitivity, high stretchability, and high linearity in strain sensors

    KAUST Repository

    Xin, Yangyang

    2017-06-29

    There is an increasing demand for strain sensors with high sensitivity and high stretchability for new applications such as robotics or wearable electronics. However, for the available technologies, the sensitivity of the sensors varies widely. These sensors are also highly nonlinear, making reliable measurement challenging. Here we introduce a new family of sensors composed of a laser-engraved carbon nanotube paper embedded in an elastomer. A roll-to-roll pressing of these sensors activates a pre-defined fragmentation process, which results in a well-controlled, fragmented microstructure. Such sensors are reproducible and durable and can attain ultrahigh sensitivity and high stretchability (with a gauge factor of over 4.2 × 10⁴ at 150% strain). Moreover, they can attain high linearity from 0% to 15% and from 22% to 150% strain. They are good candidates for stretchable electronic applications that require high sensitivity and linearity at large strains.

  16. Linear variable differential transformer sensor using glass-covered amorphous wires as active core

    International Nuclear Information System (INIS)

    Chiriac, H.; Hristoforou, E.; Neagu, Maria; Pieptanariu, M.

    2000-01-01

    Results are presented concerning a linear variable differential transformer (LVDT) displacement sensor that uses glass-covered amorphous wires as its movable core. The LVDT response is linear for a displacement of the movable core up to about 14 mm, with an accuracy of 1 μm. The LVDT with a glass-covered amorphous wire as the active core presents high sensitivity and good mechanical and corrosion resistance.

  17. Three dimensional multi perspective imaging with randomly distributed sensors

    International Nuclear Information System (INIS)

    DaneshPanah, Mehdi; Javidi, Bahrain

    2008-01-01

    In this paper, we review a three dimensional (3D) passive imaging system that exploits the visual information captured from the scene from multiple perspectives to reconstruct the scene voxel by voxel in 3D space. The primary contribution of this work is to provide a computational reconstruction scheme based on randomly distributed sensor locations in space. In virtually all multi-perspective techniques (e.g. integral imaging, synthetic aperture integral imaging, etc.), there is an implicit assumption that the sensors lie on a simple, regular pickup grid. Here, we relax this assumption and suggest a computational reconstruction framework that unifies the available methods as its special cases. The importance of this work is that it enables three dimensional imaging technology to be implemented in a multitude of novel application domains such as 3D aerial imaging, collaborative imaging, and long-range 3D imaging, where sustaining a regular pickup grid is not possible and/or the parallax requirements call for an irregular or sparse synthetic aperture mode. Although the sensors can be distributed in any random arrangement, we assume that the pickup position is measured at the time of capture of each elemental image. We demonstrate the feasibility of the methods proposed here by experimental results.

  18. SENSOR CORRECTION AND RADIOMETRIC CALIBRATION OF A 6-BAND MULTISPECTRAL IMAGING SENSOR FOR UAV REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    J. Kelcey

    2012-07-01

    Full Text Available The increased availability of unmanned aerial vehicles (UAVs) has resulted in their frequent adoption for a growing range of remote sensing tasks which include precision agriculture, vegetation surveying and fine-scale topographic mapping. The development and utilisation of UAV platforms requires broad technical skills covering the three major facets of remote sensing: data acquisition, data post-processing, and image analysis. In this study, UAV image data acquired by a miniature 6-band multispectral imaging sensor was corrected and calibrated using practical image-based data post-processing techniques. Data correction techniques included dark offset subtraction to reduce sensor noise, flat-field derived per-pixel look-up-tables to correct vignetting, and implementation of the Brown-Conrady model to correct lens distortion. Radiometric calibration was conducted with an image-based empirical line model using pseudo-invariant features (PIFs). Sensor corrections and radiometric calibration improve the quality of the data, aiding quantitative analysis and generating consistency with other calibrated datasets.
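
    The per-band correction and calibration chain described above (dark offset subtraction, flat-field vignetting correction, and empirical line calibration against pseudo-invariant features) can be sketched as below. Lens distortion correction via the Brown-Conrady model is omitted for brevity, and all arrays and values here are assumptions rather than the study's data.

        import numpy as np

        def correct_band(raw, dark, flat, pif_dn, pif_reflectance):
            """Minimal per-band sketch of the correction/calibration chain from the abstract.

            raw:  raw digital numbers (H, W) for one band.
            dark: dark-offset frame (lens capped) used to reduce sensor noise.
            flat: flat-field frame used to derive a per-pixel vignetting look-up table.
            pif_dn, pif_reflectance: corrected DNs and known reflectances of
                pseudo-invariant features for the empirical line model.
            """
            corrected = raw.astype(np.float64) - dark          # 1. dark offset subtraction
            flat_corr = flat.astype(np.float64) - dark
            lut = flat_corr.mean() / flat_corr                 # 2. per-pixel vignetting factors
            corrected *= lut
            gain, offset = np.polyfit(pif_dn, pif_reflectance, deg=1)
            return gain * corrected + offset                   # 3. empirical line calibration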

  19. Study of x-ray CCD image sensor and application

    Science.gov (United States)

    Wang, Shuyun; Li, Tianze

    2008-12-01

    In this paper, we describe the composition, characteristics, parameters and working process of charge coupled devices (CCD), together with the key techniques and methods for CCD two-value (binary) processing. The processing chain for CCD video signal quantification is explained, and the constitution of the X-ray image intensifier, the function of its components, and the coupling technique between the X-ray image intensifier and the CCD are analyzed. We analyzed two effective methods for reducing the harm to human beings when X-rays are used in medical imaging. One is to reduce the X-ray radiation and intensify the image formed by the transmitted X-rays so as to obtain the same diagnostic effect. The other is to use an image sensor to transfer the images to a safe area for observation. On this basis, a new method is presented in which a CCD image sensor and an X-ray image intensifier are combined organically. A practical medical X-ray photoelectric system was designed which can record and time a patient's transmission images. The system is mainly made up of the medical X-ray source, an X-ray image intensifier, a high-resolution CCD camera, an image processor, a display and so on. Its characteristics are: it converts the invisible X-ray image into a visible light image, outputs vivid images, and has a short image recording time. We also analyzed the main aspects that affect the system's resolution. A medical photoelectric system using an X-ray image sensor can sharply reduce the X-ray harm to humans when it is used in medical diagnosis. Finally, we analyze and look forward to the system's applications in medical engineering and related fields.

  20. Non-linear Imaging using an Experimental Synthetic Aperture Real Time Ultrasound Scanner

    DEFF Research Database (Denmark)

    Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt

    2011-01-01

    This paper presents the first non-linear B-mode image of a wire phantom using pulse inversion attained via an experimental synthetic aperture real-time ultrasound scanner (SARUS). The purpose of this study is to implement and validate non-linear imaging on SARUS for the further development of new...... non-linear techniques. This study presents non-linear and linear B-mode images attained via SARUS and an existing ultrasound system as well as a Field II simulation. The non-linear image shows an improved spatial resolution and lower full width half max and -20 dB resolution values compared to linear...
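
    The pulse inversion principle used above can be illustrated with a toy simulation: two firings with opposite polarity are summed, which cancels the linear (fundamental) echo and keeps the even-harmonic component. The quadratic "medium" below is only a stand-in for real nonlinear propagation, not the SARUS processing chain.

```python
import numpy as np

fs, f0, n = 100e6, 5e6, 200                 # sample rate, transmit frequency, samples
t = np.arange(n) / fs
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(n)

def echo(p):
    # Toy nonlinear medium: a small quadratic term generates even harmonics.
    return p + 0.05 * p ** 2

def band_energy(sig, f_lo, f_hi):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

single = echo(pulse)                        # conventional single firing
pi_sum = echo(pulse) + echo(-pulse)         # pulse inversion: sum of inverted firings

for name, sig in [("single firing", single), ("pulse-inversion sum", pi_sum)]:
    print(name, " fundamental:", band_energy(sig, 4e6, 6e6),
          " 2nd harmonic:", band_energy(sig, 9e6, 11e6))
```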

  1. A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing

    Science.gov (United States)

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor in each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the problem to the farmer. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptom image database and a neural network could be added in order to learn from experience and provide an accurate problem diagnosis. PMID:22163948

  2. A wireless sensor network for vineyard monitoring that uses image processing.

    Science.gov (United States)

    Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo

    2011-01-01

    The first step to detect when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor in each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the problem to the farmer. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiency, pest, disease or other harmful agents, a symptom image database and a neural network could be added in order to learn from experience and provide an accurate problem diagnosis.
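
    The exact on-node processing is not spelled out in this record. Purely as an illustration of the idea, the toy check below uses an excess-green vegetation index to flag frames in which too few pixels look like healthy leaf, which would trigger a notification to the sink node; the index choice and thresholds are assumptions, not the authors' method.

```python
import numpy as np

def leaf_alarm(rgb, exg_threshold=0.05, max_bad_fraction=0.2):
    """Toy on-node check: flag an image if too few pixels look like healthy green leaf.

    rgb : H x W x 3 float array in [0, 1] captured by the sensor node's camera.
    The excess-green index ExG = 2g - r - b (on chromaticity-normalised channels)
    is a common way to separate vegetation from soil and background.
    """
    s = rgb.sum(axis=2) + 1e-6
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2 * g - r - b
    bad_fraction = np.mean(exg < exg_threshold)     # pixels that do not look green
    return bad_fraction > max_bad_fraction          # True -> notify the sink node
```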

  3. A fax-machine amorphous silicon sensor for X-ray detection

    Energy Technology Data Exchange (ETDEWEB)

    Alberdi, J. [Association EURATOM/CIEMAT, Madrid (Spain); Barcala, J.M. [Association EURATOM/CIEMAT, Madrid (Spain); Chvatchkine, V. [Association EURATOM/CIEMAT, Madrid (Spain); Ioudine, I. [Association EURATOM/CIEMAT, Madrid (Spain); Molinero, A. [Association EURATOM/CIEMAT, Madrid (Spain); Navarrete, J.J. [Association EURATOM/CIEMAT, Madrid (Spain); Yuste, C. [Association EURATOM/CIEMAT, Madrid (Spain)

    1996-10-01

    Amorphous silicon detectors have been used mainly as solar cells for energy applications. As light detectors, linear amorphous silicon sensors are used in fax and photocopier machines because they can be built in large sizes, at low price, and with high radiation hardness. Owing to this performance, amorphous silicon detectors have also been used as radiation detectors, and some groups are presently developing matrix amorphous silicon detectors with built-in electronics for medical X-ray applications. Our group has been working on the design and development of an X-ray imaging system based on a commercial fax linear amorphous silicon detector. The sensor scans the selected area and detects the light produced by the X-rays in a scintillator placed on the sensor. Image-processing software produces a final image with improved resolution and definition. (orig.)

  4. On the Optimal Location of Sensors for Parametric Identification of Linear Structural Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Brincker, Rune

    A survey of the field of optimal location of sensors for parametric identification of linear structural systems is presented. The survey shows that few papers are devoted to the case of optimal location of sensors in which the measurements are modelled by a random field with non-trivial covariance...... function. Most often it is assumed that the results of the measurements are statistically independent variables. In an example the importance of considering the measurements as statistically dependent random variables is shown. The example is concerned with optimal location of sensors for parametric...... identification of modal parameters for a vibrating beam under random loading. The covariance of the modal parameters expected to be obtained is investigated with respect to variations in the number and location of sensors. Further, the influence of the noise on the optimal location of the sensors is investigated....

  5. Low-power high-accuracy micro-digital sun sensor by means of a CMOS image sensor

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.

    2013-01-01

    A micro-digital sun sensor (µDSS) is a sun detector which senses a satellite's instantaneous attitude angle with respect to the sun. The core of this sensor is a system-on-chip imaging chip which is referred to as APS+. The APS+ integrates a CMOS active pixel sensor (APS) array of 368×368 pixels, a

  6. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    Science.gov (United States)

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  7. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles

    Science.gov (United States)

    Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang

    2018-01-01

    Detection and tracking of objects in the side-near-field has attracted much attention for the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to have two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and the UKF gave more precise position tracking and smaller RMSE (root mean square error) than a traditional triangular positioning method. The effectiveness also encourages the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
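
    As a simplified stand-in for the EKF/UKF trackers described above, the sketch below runs a plain linear Kalman filter with a constant-velocity model on position fixes such as those triangulated from an ultrasonic array; the noise covariances, update rate and measurement model are assumed for the example.

```python
import numpy as np

dt = 0.05                       # update interval (s), e.g. one firing cycle of the array
F = np.array([[1, 0, dt, 0],    # constant-velocity motion model, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],     # measurement = object position from the sensor array
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-3            # process noise (assumed)
R = np.eye(2) * 4e-2            # noise of the triangulated position fix (assumed)

x, P = np.zeros(4), np.eye(4)   # initial state and covariance

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the ultrasonic position fix z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([0.50, 2.00]), np.array([0.52, 1.95]), np.array([0.55, 1.91])]:
    x, P = kf_step(x, P, z)
print("estimated position:", x[:2], "velocity:", x[2:])
```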

  8. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model of ballistic targets against the planet's atmosphere for space-based infrared sensors; it then simulates the infrared imaging of atmospheric ballistic targets from two aspects, the space-based sensor camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise and different wavebands on the imaging of the target.

  9. A Cost-effective Method for Resolution Increase of the Twostage Piecewise Linear ADC Used for Sensor Linearization

    Directory of Open Access Journals (Sweden)

    Jovanović Jelena

    2016-02-01

    Full Text Available A cost-effective method for increasing the resolution of a two-stage piecewise linear analog-to-digital converter used for sensor linearization is proposed in this paper. Flash analog-to-digital converters are employed in both conversion stages. The resolution is increased by one bit per conversion stage by introducing one additional comparator in front of each of the two flash analog-to-digital converters, while the converters' resolutions remain the same. As a result, the number of comparators employed, and hence the circuit complexity and the power consumption attributable to the comparators, is almost 50 % lower than for a conventional linearization circuit of the same resolution. Since the number of comparators is significantly reduced by the proposed method, special modifications of the linearization circuit are needed in order to properly adjust the reference voltages of the comparators.
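
    The underlying two-stage piecewise linear conversion can be sketched as follows: a coarse flash stage selects the linear segment and a fine stage digitizes the residue within it. The breakpoints and bit width below are illustrative, and the sketch does not model the comparator-saving modification proposed in the paper.

```python
import numpy as np

# Segment breakpoints of the piecewise-linear characteristic used for linearization
# (illustrative values; in practice they follow the sensor's inverse transfer curve).
breakpoints = np.array([0.0, 0.3, 0.7, 1.2, 2.0])   # volts
FINE_BITS = 4                                        # resolution of the second stage

def two_stage_pwl_adc(v):
    """First (coarse) flash stage selects the segment; second stage digitizes the residue."""
    seg = int(np.clip(np.searchsorted(breakpoints, v) - 1, 0, len(breakpoints) - 2))
    lo, hi = breakpoints[seg], breakpoints[seg + 1]
    fine = int(np.clip((v - lo) / (hi - lo) * 2 ** FINE_BITS, 0, 2 ** FINE_BITS - 1))
    return seg, fine

print(two_stage_pwl_adc(0.95))   # -> segment index and fine code within that segment
```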

  10. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    Science.gov (United States)

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643

  11. Malvar-He-Cutler Linear Image Demosaicking

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-08-01

    Full Text Available Image demosaicking (or demosaicing is the interpolation problem of estimating complete color information for an image that has been captured through a color filter array (CFA, particularly on the Bayer pattern. In this paper we review a simple linear method using 5 x 5 filters, proposed by Malvar, He, and Cutler in 2004, that shows surprisingly good results.
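
    For reference, one of the Malvar-He-Cutler 5 x 5 kernels (green estimated at red/blue sites, scaled by 1/8) is shown below applied to an RGGB mosaic. The full method uses analogous gradient-corrected kernels for the red and blue channels, which are omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import convolve

# Malvar-He-Cutler 5x5 kernel for estimating green at red/blue sites (scaled by 1/8).
K_G = np.array([[ 0,  0, -1,  0,  0],
                [ 0,  0,  2,  0,  0],
                [-1,  2,  4,  2, -1],
                [ 0,  0,  2,  0,  0],
                [ 0,  0, -1,  0,  0]], float) / 8.0

def interpolate_green(cfa):
    """Estimate the full green channel from an RGGB Bayer mosaic (2-D array).

    Green samples are kept where they exist; at red/blue sites the gradient-corrected
    5x5 linear filter above is applied to the raw CFA data.
    """
    h, w = cfa.shape
    yy, xx = np.mgrid[0:h, 0:w]
    green_mask = (yy % 2) != (xx % 2)          # G sites of an RGGB pattern
    estimate = convolve(cfa.astype(float), K_G, mode="mirror")
    return np.where(green_mask, cfa, estimate)
```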

  12. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    Science.gov (United States)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for earth remote sensors, while the vibration of the remote sensing platforms is a major factor restricting high resolution imaging. Image-motion prediction and real-time compensation are key technologies to solve this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes the use of soft-sensor technology for image-motion prediction and focuses on optimizing the image-motion prediction algorithm. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  13. On the Optimal Location of Sensors for Parametric Identification of Linear Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Brincker, Rune

    1994-01-01

    An outline of the field of optimal location of sensors for parametric identification of linear structural systems is presented. There are few papers devoted to the case of optimal location of sensors in which the measurements are modeled by a random field with non-trivial covariance function...... It is assumed most often that the results of the measurements are statistically independent random variables. In an example the importance of considering the measurements as statistically dependent random variables is shown. The covariance of the model parameters expected to be obtained is investigated......

  14. Best linear decoding of random mask images

    International Nuclear Information System (INIS)

    Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.

    1975-01-01

    In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)

  15. A Novel Linear Programming Formulation of Maximum Lifetime Routing Problem in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee

    2011-01-01

    In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing...... protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming...
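
    A common flow-based linear program for maximum-lifetime routing maximizes the lifetime T subject to per-node flow conservation and energy budgets. The three-node example below is only illustrative and is not claimed to reproduce the paper's specific formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny example: nodes 0 and 1 sense data at unit rate; node 2 is the sink.
# Decision variables x = [q02, q01, q12, T], where q_ij is the total traffic
# sent over link i->j during the network lifetime T.
links = ["0->2", "0->1", "1->2"]
e_tx = np.array([4.0, 1.0, 1.0])    # transmit energy per unit traffic on each link
e_rx = 0.5                          # receive energy per unit traffic
E = np.array([100.0, 100.0])        # initial battery energy of nodes 0 and 1
r = np.array([1.0, 1.0])            # data generation rates of nodes 0 and 1

c = np.array([0, 0, 0, -1.0])       # maximize T  <=>  minimize -T

# Flow conservation: traffic out - traffic in = r_i * T at each sensing node.
A_eq = np.array([[1,  1, 0, -r[0]],         # node 0
                 [0, -1, 1, -r[1]]])        # node 1
b_eq = np.zeros(2)

# Energy budgets: transmit + receive energy spent over the lifetime <= E_i.
A_ub = np.array([[e_tx[0], e_tx[1], 0,       0],    # node 0
                 [0,       e_rx,    e_tx[2], 0]])   # node 1
b_ub = E

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("maximum lifetime:", res.x[-1])
print("link traffic:", dict(zip(links, res.x[:3])))
```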

  16. Effect of Inductive Coil Shape on Sensing Performance of Linear Displacement Sensor Using Thin Inductive Coil and Pattern Guide

    Directory of Open Access Journals (Sweden)

    Hiroyuki Wakiwaka

    2011-11-01

    Full Text Available This paper discusses the effect of the inductive coil shape on the sensing performance of a linear displacement sensor. The linear displacement sensor consists of a thin inductive coil with a thin pattern guide, and is thus suitable for tiny-space applications. The position can be detected by measuring the inductance of the inductive coil. At each position, the value of the inductance is different due to the change in the inductive coil area facing the pattern guide. Therefore, the objective of this research is to study various inductive coil pattern shapes and to propose the pattern that can achieve good sensing performance. Various coil shapes (meander, triangular meander, square and circle) with different numbers of turns are examined in this study. The inductance is measured, with the sensor sensitivity and linearity as the performance evaluation parameters of the sensor. In conclusion, each inductive coil shape has its own advantages and disadvantages. For instance, the circle-shaped inductive coil produces high sensitivity with a low linearity response, while the square-shaped inductive coil has medium sensitivity with higher linearity.

  17. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    Full Text Available The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  18. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  19. A Linear Birefringence Measurement Method for an Optical Fiber Current Sensor.

    Science.gov (United States)

    Xu, Shaoyi; Shao, Haiming; Li, Chuansheng; Xing, Fangfang; Wang, Yuqiao; Li, Wei

    2017-07-03

    In this work, a linear birefringence measurement method is proposed for an optical fiber current sensor (OFCS). First, the optical configuration of the measurement system is presented. Then, the method for eliminating the effect of the azimuth angles between the sensing fiber and the two polarizers is demonstrated. Moreover, the relationship between the linear birefringence, the Faraday rotation angle and the final output is determined. On this basis, the multi-valued problem of the linear birefringence is simulated and its solution is illustrated for the case in which the linear birefringence is unknown. Finally, experiments are conducted to prove the feasibility of the proposed method. When the numbers of turns of the sensing fiber in the OFCS are about 15, 19, 23, 27, 31, 35, and 39, the linear birefringence values measured by the proposed method are about 1.3577, 1.8425, 2.0983, 2.5914, 2.7891, 3.2003 and 3.5198 rad. Two typical methods provide references for the proposed method. The proposed method is proven to be suitable for linear birefringence measurement over the full range, without the limitation that the linear birefringence must be smaller than π/2.

  20. Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication

    Science.gov (United States)

    Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun

    2016-04-01

    Intra body communication technology allows the fabrication of compact implantable biomedical sensors compared with RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length and the demonstration of wireless image-data transmission through a brain tissue of a living mouse. The sensor was designed to transmit output signals of pixel values by pulse width modulation (PWM). The PWM signals from the sensor transmitted through a brain tissue were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission using micro sized implantable biomedical sensors.

  1. CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    OpenAIRE

    Dutton, Neale Arthur William

    2016-01-01

    The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications....

  2. DC and AC linear magnetic field sensor based on glass coated amorphous microwires with Giant Magnetoimpedance

    International Nuclear Information System (INIS)

    García-Chocano, Víctor Manuel; García-Miquel, Héctor

    2015-01-01

    The Giant Magnetoimpedance (GMI) effect has been studied in amorphous glass-coated microwires of composition (Fe6Co94)72.5Si12.5B15. The impedance of a 1.5 cm long sample has been characterized using constant AC currents in the range of 400 µA–4 mA at frequencies from 7 to 15 MHz and DC magnetic fields from −900 to 900 A/m. Double-peak responses have been obtained, showing GMI ratios of up to 107%. A linear magnetic field sensor for DC and AC fields has been designed using two microwires connected in series with a magnetic bias of 400 A/m applied in opposite directions in each microwire, in order to obtain a linear response of ±70 (A/m) rms for AC magnetic fields and ±100 A/m for DC magnetic fields. A closed-loop feedback circuit has been implemented to extend the linear range to ±1 kA/m for DC magnetic fields. - Highlights: • The Giant Magnetoimpedance phenomenon has been studied in amorphous microwires. • A combination of two microwires with a bias field has been developed to obtain a linear response. • An electronic circuit has been developed to obtain a sensor with a linear response. • A feedback coil has been added to increase the measurable range of the sensor

  3. A Short-Range Distance Sensor with Exceptional Linearity

    Science.gov (United States)

    Simmons, Steven; Youngquist, Robert

    2013-01-01

    A sensor has been demonstrated that can measure distance over a total range of about 300 microns to an accuracy of about 0.1 nm (resolution of about 0.01 nm). This represents an exceptionally large dynamic range of operation - over 1,000,000. The sensor is optical in nature, and requires the attachment of a mirror to the object whose distance is being measured. This work resulted from actively developing a white light interferometric system to be used to measure the depths of defects in the Space Shuttle Orbiter windows. The concept was then applied to measuring distance. The concept later expanded to include spectrometer calibration. In summary, broadband (i.e., white) light is launched into a Michelson interferometer, one mirror of which is fixed and one of which is attached to the object whose distance is to be measured. The light emerging from the interferometer has traveled one of two distances: either the distance to the fixed mirror and back, or the distance to the moving mirror and back. These two light beams mix and produce an interference pattern where some wavelengths interfere constructively and some destructively. Sending this light into a spectrometer allows this interference pattern to be analyzed, yielding the net distance difference between the two paths. The unique feature of this distance sensor is its ability to measure accurately distance over a dynamic range of more than one million, the ratio of its range (about 300 microns) to its accuracy (about 0.1 nanometer). Such a large linear operating range is rare and arises here because both amplitude and phase-matching algorithms contribute to the performance. The sensor is limited by the need to attach a mirror of some kind to the object being tracked, and by the fairly small total range, but the exceptional dynamic range should make it of interest.
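
    The core idea, that the spectral fringe period encodes the path difference between the two interferometer arms, can be illustrated with a toy spectrum. The coarse FFT estimate below recovers the optical path difference only to roughly a micron; the sub-nanometre accuracy quoted above additionally relies on the amplitude- and phase-matching algorithms mentioned in the record, which are not shown here.

```python
import numpy as np

opd = 150e-6                                       # optical path difference to recover (m)
k = np.linspace(2 * np.pi / 700e-9,                # wavenumber range of the spectrometer
                2 * np.pi / 400e-9, 4096)
spectrum = 1 + np.cos(k * opd)                     # idealized two-beam interference spectrum

# The fringe period along the wavenumber axis encodes the path difference:
# cos(k * d) oscillates at d / (2*pi) cycles per unit wavenumber.
sig = spectrum - spectrum.mean()
amp = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(k.size, d=k[1] - k[0])     # cycles per unit wavenumber
opd_est = 2 * np.pi * freqs[amp.argmax()]
print("recovered path difference: %.2f um" % (opd_est * 1e6))
```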

  4. A Biologically Inspired CMOS Image Sensor

    CERN Document Server

    Sarkar, Mukul

    2013-01-01

    Biological systems are a source of inspiration in the development of small autonomous sensor nodes. The two major types of optical vision systems found in nature are the single-aperture human eye and the compound eye of insects. The latter are among the most compact and smallest vision sensors. The compound eye consists of individual lenses, each with its own photoreceptor array. The visual system of insects allows them to fly with limited intelligence and brain processing power. A CMOS image sensor replicating the perception of vision in insects is discussed and designed in this book for industrial (machine vision) and medical applications. The CMOS metal layer is used to create an embedded micro-polarizer able to sense polarization information. This polarization information is shown to be useful in applications like real-time material classification and autonomous agent navigation. Further, the sensor is equipped with in-pixel analog and digital memories which allow variation of the dynamic range and in-pixel b...

  5. Linear-constraint wavefront control for exoplanet coronagraphic imaging systems

    Science.gov (United States)

    Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean

    2017-01-01

    A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which can lead to significant increases in the speed of the wavefront correction and also creates more uniform dark holes. As a follow-up to this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically, and the lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.
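
    For context, the regularized least-squares update commonly used in EFC can be written in a few lines. G, E and the regularization weight below are placeholders (random numbers in the usage example), not the HCIL testbed model, and the linear-programming per-pixel constraints discussed above are not shown.

```python
import numpy as np

def efc_update(G, E, alpha=1e-6):
    """One electric field conjugation (EFC) step.

    G     : Jacobian mapping DM actuator commands to the focal-plane electric field
            (complex, shape n_pixels x n_actuators), obtained from a model or calibration.
    E     : current estimated focal-plane electric field in the dark hole (complex).
    alpha : Tikhonov regularization weight limiting actuator strokes.

    Returns the real DM command update minimizing |E + G*du|^2 + alpha*|du|^2.
    """
    # Stack real and imaginary parts so the least-squares problem is real-valued.
    Gr = np.vstack([G.real, G.imag])
    Er = np.concatenate([E.real, E.imag])
    lhs = Gr.T @ Gr + alpha * np.eye(G.shape[1])
    return -np.linalg.solve(lhs, Gr.T @ Er)

# Toy usage with random numbers standing in for a real optical model.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 10)) + 1j * rng.normal(size=(50, 10))
E = rng.normal(size=50) + 1j * rng.normal(size=50)
du = efc_update(G, E)
print("residual energy before/after:",
      np.sum(np.abs(E) ** 2), np.sum(np.abs(E + G @ du) ** 2))
```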

  6. Technical guidance for the development of a solid state image sensor for human low vision image warping

    Science.gov (United States)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that, among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower cost solution. By using an architecture with row and column decoders, one has the added flexibility of addressing the pixels at random, or of reading out only part of the image.

  7. Sparse Detector Imaging Sensor with Two-Class Silhouette Classification

    Directory of Open Access Journals (Sweden)

    David Russomanno

    2008-12-01

    Full Text Available This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype of the sensor is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprised of only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than ninety-nine percent (99%) with a subset of the sixteen detectors using a representative dataset consisting of 512 signatures. The prototype also includes a Web service interface such that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional, high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment in vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios.

  8. Highly curved image sensors: a practical approach for improved optical performance.

    Science.gov (United States)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-12

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  9. X-ray imaging characterization of active edge silicon pixel sensors

    International Nuclear Information System (INIS)

    Ponchut, C; Ruat, M; Kalliopuska, J

    2014-01-01

    The aim of this work was the experimental characterization of edge effects in active-edge silicon pixel sensors, in the frame of X-ray pixel detector developments for synchrotron experiments. We produced a set of active edge pixel sensors with 300 to 500 μm thickness, edge widths ranging from 100 μm to 150 μm, and n or p pixel contact types. The sensors with 256 × 256 pixels and 55 × 55 μm² pixel pitch were then bump-bonded to Timepix readout chips for X-ray imaging measurements. The reduced edge widths make the edge pixels more sensitive to the electrical field distribution at the sensor boundaries. We characterized this effect by mapping the spatial response of the sensor edges with a finely focused X-ray synchrotron beam. One of the samples showed a distortion-free response on all four edges, whereas others showed variable degrees of distortions extending at maximum to 300 μm from the sensor edge. An application of active edge pixel sensors to coherent diffraction imaging with synchrotron beams is described

  10. Moving-Article X-Ray Imaging System and Method for 3-D Image Generation

    Science.gov (United States)

    Fernandez, Kenneth R. (Inventor)

    2012-01-01

    An x-ray imaging system and method for a moving article are provided for an article moved along a linear direction of travel while the article is exposed to non-overlapping x-ray beams. A plurality of parallel linear sensor arrays are disposed in the x-ray beams after they pass through the article. More specifically, a first half of the plurality are disposed in a first of the x-ray beams while a second half of the plurality are disposed in a second of the x-ray beams. Each of the parallel linear sensor arrays is oriented perpendicular to the linear direction of travel. Each of the parallel linear sensor arrays in the first half is matched to a corresponding one of the parallel linear sensor arrays in the second half in terms of an angular position in the first of the x-ray beams and the second of the x-ray beams, respectively.

  11. Approach for Self-Calibrating CO₂ Measurements with Linear Membrane-Based Gas Sensors.

    Science.gov (United States)

    Lazik, Detlef; Sood, Pramit

    2016-11-17

    Linear membrane-based gas sensors that can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO₂ in the subsurface, can be designed according to the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on the operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome this multi-criterial slope dependency. It results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was proven on the example of CO₂ analysis in dry air with tubular PDMS membranes for various CO₂ concentrations of the internal standard. Negligible temperature dependence was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor shows an unexpectedly low bias in the sensor response and a comparable statistical uncertainty.

  12. RADIOMETRIC NORMALIZATION OF LARGE AIRBORNE IMAGE DATA SETS ACQUIRED BY DIFFERENT SENSOR TYPES

    Directory of Open Access Journals (Sweden)

    S. Gehrke

    2016-06-01

    Full Text Available Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling – with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images – allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as of the BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in
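
    A heavily simplified version of such a global least-squares normalization, with a single gain/offset pair per image estimated from radiometric tie points and absolute control points, is sketched below; the hierarchical, location-dependent model with bilinear interpolation described above is not reproduced.

```python
import numpy as np

def normalize_gains_offsets(n_images, ties, anchors):
    """Solve for per-image gain/offset corrections in a global least-squares sense.

    ties    : list of (i, vi, j, vj) radiometric tie points, where vi and vj are the
              values observed for the same ground point in overlapping images i and j.
    anchors : list of (i, v, target) absolute radiometric control points.
    Unknowns per image: gain a_i and offset b_i, so a_i*v + b_i is the corrected value.
    """
    rows, rhs = [], []
    for i, vi, j, vj in ties:                 # a_i*vi + b_i - a_j*vj - b_j = 0
        r = np.zeros(2 * n_images)
        r[2 * i], r[2 * i + 1] = vi, 1.0
        r[2 * j], r[2 * j + 1] = -vj, -1.0
        rows.append(r)
        rhs.append(0.0)
    for i, v, target in anchors:              # a_i*v + b_i = target (absolute control)
        r = np.zeros(2 * n_images)
        r[2 * i], r[2 * i + 1] = v, 1.0
        rows.append(r)
        rhs.append(target)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params.reshape(n_images, 2)        # rows of (gain, offset)

# Toy usage: image 1 is roughly 10% brighter than image 0, which carries the control points.
print(normalize_gains_offsets(
    2, ties=[(0, 100.0, 1, 110.0), (0, 50.0, 1, 56.0)],
    anchors=[(0, 100.0, 100.0), (0, 50.0, 50.0)]))
```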

  13. Photon detection with CMOS sensors for fast imaging

    International Nuclear Information System (INIS)

    Baudot, J.; Dulinski, W.; Winter, M.; Barbier, R.; Chabanat, E.; Depasse, P.; Estre, N.

    2009-01-01

    Pixel detectors employed in high energy physics aim to detect single minimum ionizing particles with micrometric position resolution. Monolithic CMOS sensors succeed in this task thanks to a low equivalent noise charge per pixel of around 10 to 15 e⁻, and a pixel pitch varying from 10 to a few tens of microns. Additionally, thanks to the possibility of integrating some data treatment in the sensor itself, readout times of 100 μs have been reached for 100-kilopixel sensors. These aspects of CMOS sensors make them attractive for applications in photon imaging. For X-rays of a few keV, the efficiency is limited to a few % due to the thin sensitive volume. For visible photons, the back-thinned version of the CMOS sensor is sensitive to low intensity sources of a few hundred photons. When a back-thinned CMOS sensor is combined with a photo-cathode, a new hybrid detector results (EBCMOS) which operates as a fast single photon imager. The first EBCMOS was produced in 2007 and demonstrated single photon counting with low dark current in laboratory conditions. It has been compared, in two different biological laboratories, with existing CCD-based 2D cameras for fluorescence microscopy. The current EBCMOS sensitivity and frame rate are comparable to existing EMCCDs. On-going developments aim at increasing this frame rate by at least an order of magnitude. In conclusion, we report the first test of a new CMOS sensor, LUCY, which reaches 1000 frames per second.

  14. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  15. Implementation of large area CMOS image sensor module using the precision align inspection

    International Nuclear Information System (INIS)

    Kim, Byoung Wook; Kim, Toung Ju; Ryu, Cheol Woo; Lee, Kyung Yong; Kim, Jin Soo; Kim, Myung Soo; Cho, Gyu Seong

    2014-01-01

    This paper describes the implementation of a large-area CMOS image sensor module using a precision alignment inspection program. This work is needed because the wafer cutting system does not always have high precision. The program checks more than eight points along the sensor edges and aligns the sensors with a moving table. The size of a 2×1 butted CMOS image sensor module, excluding the PCB, is 170 mm×170 mm. The pixel size is 55 μm×55 μm and the number of pixels is 3,072×3,072. The gap between the two CMOS image sensors in the module was kept to less than one pixel size.

  16. Implementation of large area CMOS image sensor module using the precision align inspection

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byoung Wook; Kim, Toung Ju; Ryu, Cheol Woo [Radiation Imaging Technology Center, JBTP, Iksan (Korea, Republic of); Lee, Kyung Yong; Kim, Jin Soo [Nano Sol-Tech INC., Iksan (Korea, Republic of); Kim, Myung Soo; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of)

    2014-12-15

    This paper describes the implementation of a large-area CMOS image sensor module using a precision alignment inspection program. This work is needed because the wafer cutting system does not always have high precision. The program checks more than eight points along the sensor edges and aligns the sensors with a moving table. The size of a 2×1 butted CMOS image sensor module, excluding the PCB, is 170 mm×170 mm. The pixel size is 55 μm×55 μm and the number of pixels is 3,072×3,072. The gap between the two CMOS image sensors in the module was kept to less than one pixel size.

  17. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    Directory of Open Access Journals (Sweden)

    Kiyotaka Sasagawa

    2010-12-01

    Full Text Available In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities.

  18. X-ray detectors based on image sensors

    International Nuclear Information System (INIS)

    Costa, A.P.R.

    1983-01-01

    X-ray detectors based on image sensors are described and a comparison is made between the advantages and the disadvantages of such a kind of detectors with the position sensitive detectors. (L.C.) [pt

  19. Multi sensor satellite imagers for commercial remote sensing

    Science.gov (United States)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for multi-sensor satellite imagers with panchromatic, multi-spectral, area and hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as monitoring food supplies, crop yield and disasters in mind. The aim of these imagers is to achieve medium to high resolution (2.5 m to 15 m) spatial sampling, wide swaths (up to 45 km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, addressing the choice of detectors needed to achieve this performance. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25 m GSD) and a catadioptric imager with panchromatic (2.7 m GSD), multi-spectral (6 bands, 4.6 m GSD) and hyperspectral (400 nm to 2.35 μm, 200 bands, 15 m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the front-end electronics and the control electronics with analogue and digital signal processing. A dedicated analogue front-end is used for correlated double sampling (CDS), black level correction, variable gain, up to 12-bit digitizing and a high-speed LVDS data link to a mass memory unit.

  20. Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Xiaoliang Ge

    2018-02-01

    Full Text Available This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the readout path. Such a readout topology, however, operates with non-stationary large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.

  1. Miniature infrared hyperspectral imaging sensor for airborne applications

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, in both the MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics and will be explained in this paper. The micro-optics consist of an area array of diffractive optical elements where each element is tuned to image a different spectral region onto a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper will present our opto-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel element focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each

  2. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Multi-image acquisition-based distance sensor using agile laser spot beam.

    Science.gov (United States)

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture different laser spot size images on a target, with these beam spot sizes differing from the minimum spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution given by the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.

  4. A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL

    Science.gov (United States)

    Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto

    2017-10-01

Hyperspectral imaging systems provide images in which single pixels contain information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices such as Field Programmable Gate Arrays (FPGAs) are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications on multiple and different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
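For reference, the per-pixel operation that a linear unmixing algorithm performs can be sketched as a non-negative least-squares fit of abundances to known endmember spectra; the code below is a generic NumPy/SciPy illustration, not the FUN algorithm or its OpenCL kernel.

```python
# Minimal sketch of per-pixel linear spectral unmixing: each pixel spectrum
# is modelled as a non-negative combination of known endmember spectra.
import numpy as np
from scipy.optimize import nnls

def unmix_cube(cube, endmembers):
    """cube: (rows, cols, bands); endmembers: (bands, n_endmembers).
    Returns abundance maps of shape (rows, cols, n_endmembers)."""
    rows, cols, bands = cube.shape
    abundances = np.zeros((rows, cols, endmembers.shape[1]))
    for r in range(rows):
        for c in range(cols):
            abundances[r, c], _ = nnls(endmembers, cube[r, c])
    return abundances

# Toy example with 5 bands and 2 hypothetical endmembers.
E = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [0.5, 0.5]], dtype=float)
cube = np.tile(E @ np.array([0.7, 0.3]), (4, 4, 1))
print(unmix_cube(cube, E)[0, 0])  # approximately [0.7, 0.3]
```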

  5. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method of registering photon counts. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low-flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register as many as one million (or more) photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  6. Precise shape reconstruction by active pattern in total-internal-reflection-based tactile sensor.

    Science.gov (United States)

    Saga, Satoshi; Taira, Ryosuke; Deguchi, Koichiro

    2014-03-01

    We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.

  7. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    Directory of Open Access Journals (Sweden)

    Haoting Liu

    2017-02-01

Full Text Available An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and thus can guide a more precise lighting control. Before this system works, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets. Then the cluster benchmarks of these objective LEEMs can be obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize a kind of optimal luminance tuning. When this system works, it first captures the lighting image using a wearable camera. Then it computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be implemented to obtain a kind of optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to environment luminance changes.

  8. Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process

    Science.gov (United States)

    Cui, Le; Marchand, Éric

    2015-04-01

A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article. A part of this article has been published in the IEEE International Conference on Robotics and Automation (ICRA), 2014. Both the intrinsic and extrinsic parameter estimates are obtained simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope is different from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared, together with distortion models. Experiments are carried out by varying the position and the orientation of a multi-scale chessboard calibration pattern at magnifications from 300× to 10,000×. The experimental results show the efficiency and the accuracy of this approach.
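The core idea, joint estimation of projection and pose parameters by minimizing a registration (reprojection) error, can be sketched in a few lines; the toy below assumes a simple parallel-projection model with an in-plane rotation and is not the authors' implementation.

```python
# Hedged sketch of calibration by non-linear minimization of reprojection error.
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    s, theta, tx, ty = params          # magnification, in-plane rotation, image shift
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    # parallel projection: depth (z) does not change the image position
    return s * (pts3d[:, :2] @ R.T) + np.array([tx, ty])

def residuals(params, pts3d, pts2d):
    return (project(params, pts3d) - pts2d).ravel()

# Hypothetical calibration-pattern corners and their observed image positions
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
true_params = np.array([500.0, 0.1, 20.0, -5.0])
pts2d = project(true_params, pts3d)

fit = least_squares(residuals, x0=[400.0, 0.0, 0.0, 0.0], args=(pts3d, pts2d))
print(fit.x)   # recovers approximately [500, 0.1, 20, -5]
```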

  9. Retinal fundus imaging with a plenoptic sensor

    Science.gov (United States)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.

  10. Programmable Solution for Solving Non-linearity Characteristics of Smart Sensor Applications

    Directory of Open Access Journals (Sweden)

    S. Khan

    2007-10-01

Full Text Available This paper presents a simple but programmable technique to solve the problem of non-linear characteristics of sensors used in sensitive applications. The nonlinearity of the output response becomes a very sensitive issue in cases where a proportional increase in the physical quantity fails to bring about a proportional increase in the measured signal. The nonlinearity is addressed by using interpolation on the characteristics of a given sensor, approximating it by a set of tangent lines whose tangent points are recognized in the processor code by IF-THEN statements. The method suggested here eliminates the use of external interfacing circuits and eases the programming burden on the processor at the cost of proportionally reduced memory requirements. The mathematically worked-out results are compared with simulation and experimental results for an IR sensor selected for the purpose and used for level measurement. This work will be of particular importance and significance in applications where the controlled signal is required to follow the input signal precisely, particularly in sensitive robotic applications.
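A minimal sketch of the segment-based correction idea is shown below, with a hypothetical calibration table and piecewise-linear interpolation standing in for the explicit IF-THEN segment selection described above.

```python
# Sketch: invert a non-linear sensor characteristic with straight-line
# segments between calibration points. The calibration pairs are hypothetical.
import numpy as np

# (raw sensor output, true level) calibration pairs for an IR level sensor
raw_points   = np.array([0.10, 0.35, 0.62, 0.80, 0.95])
level_points = np.array([0.0,  10.0, 25.0, 50.0, 100.0])

def linearize(raw):
    """Piecewise-linear inversion of the sensor characteristic."""
    return np.interp(raw, raw_points, level_points)

print(linearize(0.70))  # level estimated on the 0.62-0.80 segment
```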

  11. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
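To make the "linear parametric modeling" step concrete, the sketch below fits a Patlak-type linear kinetic model to a single time-activity curve with ordinary least squares; it illustrates only the inner linear fit, not the authors' nested direct-reconstruction algorithm, and all numbers are hypothetical.

```python
# Illustrative linear kinetic fit for one voxel (synthetic, noiseless data).
import numpy as np

t = np.linspace(0.5, 60, 12)                 # frame mid-times (min)
Cp = np.exp(-0.1 * t) + 0.2                  # assumed plasma input function
intCp = np.cumsum(Cp) * (t[1] - t[0])        # crude running integral of Cp

true_Ki, true_Vb = 0.05, 0.3
tac = true_Ki * intCp + true_Vb * Cp         # tissue time-activity curve

# Linear model: tac = Ki * intCp + Vb * Cp  ->  solve for (Ki, Vb)
A = np.column_stack([intCp, Cp])
Ki, Vb = np.linalg.lstsq(A, tac, rcond=None)[0]
print(Ki, Vb)                                # recovers approximately 0.05 and 0.3
```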

  12. Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor

    Energy Technology Data Exchange (ETDEWEB)

    Yu Junting; Li Binqiao; Yu Pingping; Xu Jiangtao [School of Electronics Information Engineering, Tianjin University, Tianjin 300072 (China); Mou Cun, E-mail: xujiangtao@tju.edu.c [Logistics Management Office, Hebei University of Technology, Tianjin 300130 (China)

    2010-09-15

Pixel image lag in a 4-T CMOS image sensor is analyzed and simulated in a two-dimensional model. Strategies for reducing image lag are discussed, including adjustment of the transfer gate channel threshold-voltage doping, the pinned photodiode (PPD) n-type doping dose and implant tilt, and the transfer gate operating voltage for signal electron transfer. With the computer analysis tool ISE-TCAD, simulation results show that minimum image lag can be obtained at a pinned photodiode n-type doping dose of 7.0 x 10^12 cm^-2, an implant tilt of -2°, a transfer gate channel doping dose of 3.0 x 10^12 cm^-2, and an operating voltage of 3.4 V. The conclusions of this theoretical analysis can serve as a guideline for pixel design to improve the performance of 4-T CMOS image sensors. (semiconductor devices)

  13. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors; TOPICAL

    International Nuclear Information System (INIS)

    CAMERON, STEWART M.

    2001-01-01

Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior that mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems including geometric

  14. Research on Linear Wireless Sensor Networks Used for Online Monitoring of Rolling Bearing in Freight Train

    International Nuclear Information System (INIS)

    Wang Nan; Meng Qingfeng; Zheng Bin; Li Tong; Ma Qinghai

    2011-01-01

This paper presents a Wireless Sensor Network (WSN) technique for on-line monitoring of rolling bearings in freight trains. A new technical scheme covering the arrangement of sensors, the design of sensor nodes and the base station, routing protocols, and signal acquisition, processing and transmission is described, and an on-line monitoring system is established. Considering the approximately linear arrangement of the cars and the running state of a freight train, a linear WSN topology is adopted, and five linear routing protocols are discussed in detail in order to obtain the desired minimum energy consumption of the WSN. By analysing the simulation results, an optimal multi-hop routing protocol, named the equal-distance sub-section routing protocol, is adopted, in which all sensor nodes are divided into groups according to equal transmission distance; the optimal transmission distance and the number of hops of the routing protocol are also studied. Since communication consumes significant power in WSNs, a data compression and coding scheme based on the lifting integer wavelet transform and the embedded zerotree wavelet (EZW) algorithm is studied to reduce the amount of data transmitted and thereby conserve the limited power supply of the WSN. Experimental results on rolling bearings are given to verify the effectiveness of the data compression algorithm. The on-line monitoring system for rolling bearings in freight trains will be put into practical use in the near future.

  15. Research on Linear Wireless Sensor Networks Used for Online Monitoring of Rolling Bearing in Freight Train

    Energy Technology Data Exchange (ETDEWEB)

    Wang Nan; Meng Qingfeng; Zheng Bin [Theory of Lubrication and Bearing Institute, Xi' an Jiaotong University Xi' an, 710049 (China); Li Tong; Ma Qinghai, E-mail: heroyoyu.2009@stu.xjtu.edu.cn [Xi' an Rail Bureau, Xi' an, 710054 (China)

    2011-07-19

This paper presents a Wireless Sensor Network (WSN) technique for on-line monitoring of rolling bearings in freight trains. A new technical scheme covering the arrangement of sensors, the design of sensor nodes and the base station, routing protocols, and signal acquisition, processing and transmission is described, and an on-line monitoring system is established. Considering the approximately linear arrangement of the cars and the running state of a freight train, a linear WSN topology is adopted, and five linear routing protocols are discussed in detail in order to obtain the desired minimum energy consumption of the WSN. By analysing the simulation results, an optimal multi-hop routing protocol, named the equal-distance sub-section routing protocol, is adopted, in which all sensor nodes are divided into groups according to equal transmission distance; the optimal transmission distance and the number of hops of the routing protocol are also studied. Since communication consumes significant power in WSNs, a data compression and coding scheme based on the lifting integer wavelet transform and the embedded zerotree wavelet (EZW) algorithm is studied to reduce the amount of data transmitted and thereby conserve the limited power supply of the WSN. Experimental results on rolling bearings are given to verify the effectiveness of the data compression algorithm. The on-line monitoring system for rolling bearings in freight trains will be put into practical use in the near future.

  16. Data Retrieval Algorithms for Validating the Optical Transient Detector and the Lightning Imaging Sensor

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    2000-01-01

A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no earth curvature) are provided that implement all of these measurements. The accuracy of the retrieval method is tested using computer-simulated datasets, and the relative influence of bearing and arrival time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector and Lightning Imaging Sensor. A quadratic planar solution that is useful when only three arrival time measurements are available is also introduced. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary noncollinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated datasets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg.
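As a rough illustration of arrival-time-only retrieval in the plane, the sketch below solves for source position and emission time by generic non-linear least squares; it is not the algebraic ALDF solution described above, and the network geometry and source are hypothetical.

```python
# Planar source retrieval from arrival times at four stations.
import numpy as np
from scipy.optimize import least_squares

c = 300.0                                              # propagation speed (km/ms)
sensors = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], dtype=float)  # station positions (km)
src, t0 = np.array([12.0, 30.0]), 1.0                  # hypothetical source (km) and emission time (ms)
t_arrival = t0 + np.linalg.norm(sensors - src, axis=1) / c

def residuals(p):
    x, y, t = p
    return t + np.linalg.norm(sensors - np.array([x, y]), axis=1) / c - t_arrival

fit = least_squares(residuals, x0=[25.0, 25.0, 0.0])
print(fit.x)    # approximately [12, 30, 1.0]
```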

  17. A novel sensor for two-degree-of-freedom motion measurement of linear nanopositioning stage using knife edge displacement sensing technique

    Science.gov (United States)

    Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum

    2017-06-01

    This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.

  18. Image denoising using non linear diffusion tensors

    International Nuclear Information System (INIS)

    Benzarti, F.; Amiri, H.

    2011-01-01

Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering useful structure in the image such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two non-linear diffusion tensors. One allows diffusion along the orientation of greatest coherence, while the other allows diffusion along the orthogonal direction. The idea is to track the local geometry of the degraded image and apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effective performance of our model, we present some experimental results on test and real photographic color images.
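As a simplified point of comparison, the scalar Perona-Malik scheme below shows the basic mechanism of edge-aware non-linear diffusion (a single scalar diffusivity rather than the two diffusion tensors combined in the paper); the parameters are illustrative only.

```python
# Scalar non-linear diffusion: smooth flat regions, preserve strong edges.
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.5, dt=0.2):
    """Scalar edge-preserving diffusion (periodic boundaries for brevity)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)    # edge-stopping function: small across strong edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion smooths the flat regions while keeping the edge.
img = np.zeros((64, 64)); img[:, 32:] = 2.0
noisy = img + 0.2 * np.random.randn(64, 64)
smoothed = perona_malik(noisy)
```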

  19. 77 FR 26787 - Certain CMOS Image Sensors and Products Containing Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-05-07

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...

  20. System overview and applications of a panoramic imaging perimeter sensor

    International Nuclear Information System (INIS)

    Pritchard, D.A.

    1995-01-01

This paper presents an overview of the design and potential applications of a 360-degree scanning, multi-spectral intrusion detection sensor. This moderate-resolution, true panoramic imaging sensor is intended for exterior use at ranges from 50 to 1,500 meters. This Advanced Exterior Sensor (AES) simultaneously uses three sensing technologies (infrared, visible, and radar) along with advanced data processing methods to provide low false-alarm intrusion detection, tracking, and immediate visual assessment. The images from the infrared and visible detector sets and the radar range data are updated as the sensors rotate once per second. The radar provides range data with one-meter resolution. This sensor has been designed for easy use and rapid deployment to cover wide areas beyond or in place of typical perimeters, and for tactical applications around fixed or temporary high-value assets. AES prototypes are in development. Applications discussed in this paper include replacements, augmentations, or new installations at fixed sites where topological features, atmospheric conditions, environmental restrictions, ecological regulations, and archaeological features limit the use of conventional security components and systems.

  1. Methods and apparatuses for detection of radiation with semiconductor image sensors

    Science.gov (United States)

    Cogliati, Joshua Joseph

    2018-04-10

    A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
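A rough sketch of the compositing-and-thresholding idea from this record is shown below; the frame count, threshold value, and synthetic streak are hypothetical, and the second-stage line detection is omitted.

```python
# Remove common (fixed-pattern) noise across repeated exposures, then
# threshold the composite to find candidate radiation-brightened pixels.
import numpy as np

# Stand-in stack of dark exposures; one frame gets a simulated streak.
frames = np.random.poisson(5, size=(32, 128, 128)).astype(float)
frames[7, 60, 40:55] += 80.0

common = np.median(frames, axis=0)           # per-pixel estimate of the common noise
composite = frames.max(axis=0) - common      # keep the strongest excursion of each pixel

first_threshold = 25.0                        # hypothetical "first threshold"
bright = np.argwhere(composite > first_threshold)
print(len(bright), "candidate radiated pixels")
```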

  2. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    Science.gov (United States)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

An evaluation of the use of median filters in the reduction of dark noise in smartphone high-resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot pixels. As in earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot pixels were also reduced by decreasing image resolution. This research outlines a methodology for characterising the dark-noise behavior of high-resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
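A condensed sketch of the procedure, using the 9 DN threshold and 7 × 7 window quoted above on a synthetic dark frame, could look like this (the actual analysis uses many dark exposures and several devices):

```python
# Flag hot pixels above the dark-noise threshold, then remove them
# with a 7x7 median filter.
import numpy as np
from scipy.ndimage import median_filter

dark = np.random.randint(0, 4, size=(512, 512)).astype(float)   # stand-in dark frame
dark[100, 200] = 60.0                                            # simulated hot pixel

threshold_dn = 9.0
hot_fraction = np.mean(dark > threshold_dn)
print(f"hot-pixel fraction: {hot_fraction:.5f}")                 # well under 0.1%

cleaned = median_filter(dark, size=7)                            # 7x7 median removes hot pixels
```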

  3. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    Directory of Open Access Journals (Sweden)

    Serhan O Isikman

Full Text Available We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 Billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  4. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    Science.gov (United States)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  5. Development of a 750x750 pixels CMOS imager sensor for tracking applications

    Science.gov (United States)

    Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali

    2017-11-01

Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixel CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. This detector, featuring a smart architecture that enables very simple and powerful operation, is built using the AMIS 0.5 μm CMOS technology. It contains 750x750 rectangular pixels with a 20 μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function, which makes the device easier to operate by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of dark level from masked lines, direct access to the windows of interest… A temperature probe is also integrated within the CMOS chip, allowing a very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed. The major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non Uniformity, quantum efficiency, Modulation Transfer Function, and intra-pixel scanning. The characterization tests are detailed in the paper. Co-60 and proton irradiation tests have also been carried out on the image sensor and the results are presented. The specific features of the 750x750 image sensor such as low power CMOS design (3.3V, power consumption<100mW), natural windowing (that allows efficient and robust tracking algorithms), simple proximity electronics (because of the on

  6. 77 FR 74513 - Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations...

    Science.gov (United States)

    2012-12-14

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and Products Containing Same; Investigations: Terminations, Modifications and Rulings AGENCY: U.S... United States after importation of certain CMOS image sensors and products containing the same based on...

  7. Cloud Classification in Wide-Swath Passive Sensor Images Aided by Narrow-Swath Active Sensor Data

    Directory of Open Access Journals (Sweden)

    Hongxia Wang

    2018-05-01

Full Text Available It is a challenge to distinguish between different cloud types because of the complexity and diversity of cloud coverage, which is a significant clutter source that impacts target detection and identification in the images of space-based infrared sensors. In this paper, a novel strategy for cloud classification in wide-swath passive sensor images is developed, aided by narrow-swath active sensor data. The strategy consists of three steps: orbit registration, selection of the most matching donor pixel, and cloud type assignment for each recipient pixel. A new criterion for orbit registration is proposed so as to improve the matching accuracy. The most matching donor pixel is selected via the Euclidean distance and the sum of the squared relative radiance differences between the recipient and the potential donor pixels. Each recipient pixel is then assigned the cloud type of its most matching donor. The cloud classification of Moderate Resolution Imaging Spectroradiometer (MODIS) images is performed with the aid of data from the Cloud Profiling Radar (CPR). The results are compared with the CloudSat product 2B-CLDCLASS, as well as those obtained using the method of the International Satellite Cloud Climatology Project (ISCCP), which demonstrates the superior classification performance of the proposed strategy.
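The donor-matching step can be sketched schematically as below; the relative weighting of the spatial and radiance terms and all data are hypothetical stand-ins, not the paper's actual criterion values.

```python
# Assign each passive-sensor (recipient) pixel the cloud type of its most
# matching active-sensor (donor) pixel.
import numpy as np

donor_pos  = np.random.rand(200, 2) * 100        # along-track donor positions (km)
donor_rad  = np.random.rand(200, 5)              # donor radiances in 5 bands
donor_type = np.random.randint(0, 8, 200)        # cloud class from the active sensor

def classify(recipient_pos, recipient_rad, w_space=0.01):
    d_space = np.linalg.norm(donor_pos - recipient_pos, axis=1)      # Euclidean distance
    rel = (donor_rad - recipient_rad) / np.maximum(recipient_rad, 1e-6)
    d_rad = np.sum(rel ** 2, axis=1)                                  # sum of squared relative differences
    best = np.argmin(w_space * d_space + d_rad)                       # most matching donor pixel
    return donor_type[best]

print(classify(np.array([30.0, 40.0]), np.random.rand(5)))
```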

  8. Optimized multiple linear mappings for single image super-resolution

    Science.gov (United States)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

Piecewise linear regression has been recognized in the literature as an effective approach to example-learning-based single image super-resolution (SR). In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the reconstruction error as the metric. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
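To illustrate the underlying multiple-linear-mappings idea (clustering of low-resolution patches plus one linear regressor per cluster), here is a toy sketch with synthetic patch data; it omits the EM-based joint optimization that the paper adds.

```python
# Piecewise linear mapping: cluster LR patches, fit one regressor per cluster,
# and super-resolve a new patch with the regressor of its cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
lr_patches = rng.normal(size=(1000, 25))               # flattened 5x5 LR patches (synthetic)
hr_patches = lr_patches @ rng.normal(size=(25, 81))    # stand-in 9x9 HR targets

k = 8
clusters = KMeans(n_clusters=k, n_init=10).fit(lr_patches)
regressors = [Ridge(alpha=1.0).fit(lr_patches[clusters.labels_ == ci],
                                   hr_patches[clusters.labels_ == ci])
              for ci in range(k)]

def super_resolve(lr_patch):
    ci = clusters.predict(lr_patch[None])[0]           # pick the regressor for this cluster
    return regressors[ci].predict(lr_patch[None])[0]

print(super_resolve(rng.normal(size=25)).shape)        # (81,)
```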

  9. Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes.

    Science.gov (United States)

    Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng

    2017-03-14

Arc sensors have been used in seam tracking and widely studied since the 1980s, and commercial arc-sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow-gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow-gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc sensing technology. In this paper, we derive a linear mathematical P-GMAW model for arc sensing, and the assumptions of the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model are investigated. In torch-height-changing experiments, uphill experiments, and groove-angle-changing experiments, the P-GMAW arc signals all satisfied the linear relations. In addition, the faster the welding speed, the higher the arc signal sensitivity; the smaller the groove angle, the greater the arc sensitivity. The arc signal variation rate needs to be modified according to the welding power, groove angle, and weaving or rotation speed.

  10. Low-Power Smart Imagers for Vision-Enabled Sensor Networks

    CERN Document Server

    Fernández-Berni, Jorge; Rodríguez-Vázquez, Ángel

    2012-01-01

This book presents a comprehensive, systematic approach to the development of vision system architectures that employ sensory-processing concurrency and parallel processing to meet the autonomy challenges posed by a variety of safety and surveillance applications. Coverage includes a thorough analysis of resistive diffusion networks embedded within an image sensor array. This analysis supports a systematic approach to the design of spatial image filters and their implementation as vision chips in CMOS technology. The book also addresses system-level considerations pertaining to the embedding of these vision chips into vision-enabled wireless sensor networks. Describes a system-level approach for designing vision devices and embedding them into vision-enabled, wireless sensor networks; Surveys state-of-the-art, vision-enabled WSN nodes; Includes details of specifications and challenges of vision-enabled WSNs; Explains architectures for low-energy CMOS vision chips with embedded, programmable spatial f...

  11. A multimodal image sensor system for identifying water stress in grapevines

    Science.gov (United States)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

Water stress is one of the most common limitations of fruit growth, and water is the most limiting resource for crop growth. In grapevines, as well as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps to balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels: R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify the different water stress levels. This research aims at solving the aforementioned problems. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in near-infrared, green, and red spectral bands. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.

  12. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
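The three strategies named above can be emulated off-line in a few lines; the sketch below uses illustrative parameter values and a plain software filter in place of the FPGA and resistive network.

```python
# Bio-inspired coding steps: log transform, local-average subtraction,
# and a simple global feedback gain (all parameter values are illustrative).
import numpy as np
from scipy.ndimage import uniform_filter

def encode(frame, gain=1.0, window=9, eps=1.0):
    log_img = np.log(frame.astype(float) + eps)        # logarithmic transform
    local_avg = uniform_filter(log_img, size=window)   # software stand-in for the resistive network
    contrast = log_img - local_avg                      # local average subtraction
    out = gain * contrast
    # feedback gain control: nudge the gain so the output contrast stays near a target spread
    new_gain = gain * 0.2 / (np.std(out) + 1e-6)
    return out, new_gain

frame = np.random.randint(0, 4096, size=(128, 128))    # stand-in 12-bit frame
out, gain = encode(frame)
```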

  13. CMOS image sensors: State-of-the-art

    Science.gov (United States)

    Theuwissen, Albert J. P.

    2008-09-01

This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration at the column level and even at the pixel level. This will make CMOS imagers even smarter than they already are.

  14. Self-amplified CMOS image sensor using a current-mode readout circuit

    Science.gov (United States)

    Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick

    2014-05-01

The feature size of CMOS processes has decreased over the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though integrating more functionality inside the pixel has become easier. This work makes a contribution on both fronts: a high signal excursion range through current-mode circuits, together with added functionality through in-pixel signal amplification. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as the output. The matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which permits a very high degree of freedom in the signal level, subject to the current densities allowed inside the integrated circuit. In addition, the matrix can operate at very short integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels, and linearity of a matrix of pixels conceived using this new sensor concept.

  15. Evaluation of the AN/SAY-1 Thermal Imaging Sensor System

    National Research Council Canada - National Science Library

    Smith, John G; Middlebrook, Christopher T

    2002-01-01

    The AN/SAY-1 Thermal Imaging Sensor System "TISS" was developed to provide surface ships with a day/night imaging capability to detect low radar reflective, small cross-sectional area targets such as floating mines...

  16. 77 FR 33488 - Certain CMOS Image Sensors and Products Containing Same; Institution of Investigation Pursuant to...

    Science.gov (United States)

    2012-06-06

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and... image sensors and products containing same by reason of infringement of certain claims of U.S. Patent No... image sensors and products containing same that infringe one or more of claims 1 and 2 of the `126...

  17. Recognizing Banknote Fitness with a Visible Light One Dimensional Line Image Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2015-08-01

Full Text Available In general, dirty banknotes that have creases or soiled surfaces should be replaced by new banknotes, whereas clean banknotes should be recirculated. Therefore, the accurate classification of banknote fitness when sorting paper currency is an important and challenging task. Most previous research has focused on sensors that used visible, infrared, and ultraviolet light. Furthermore, there was little previous research on the fitness classification for Indian paper currency. Therefore, we propose a new method for classifying the fitness of Indian banknotes, with a one-dimensional line image sensor that uses only visible light. The fitness of banknotes is usually determined by various factors such as soiling, creases, and tears, etc. although we just consider banknote soiling in our research. This research is novel in the following four ways: first, there has been little research conducted on fitness classification for the Indian Rupee using visible-light images. Second, the classification is conducted based on the features extracted from the regions of interest (ROIs), which contain little texture. Third, 1-level discrete wavelet transformation (DWT) is used to extract the features for discriminating between fit and unfit banknotes. Fourth, the optimal DWT features that represent the fitness and unfitness of banknotes are selected based on linear regression analysis with ground-truth data measured by densitometer. In addition, the selected features are used as the inputs to a support vector machine (SVM) for the final classification of banknote fitness. Experimental results showed that our method outperforms other methods.

  18. Recognizing Banknote Fitness with a Visible Light One Dimensional Line Image Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Park, Young Ho; Kwon, Seung Yong; Nguyen, Dat Tien; Vokhidov, Husan; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2015-08-27

    In general, dirty banknotes that have creases or soiled surfaces should be replaced by new banknotes, whereas clean banknotes should be recirculated. Therefore, the accurate classification of banknote fitness when sorting paper currency is an important and challenging task. Most previous research has focused on sensors that used visible, infrared, and ultraviolet light. Furthermore, there was little previous research on the fitness classification for Indian paper currency. Therefore, we propose a new method for classifying the fitness of Indian banknotes, with a one-dimensional line image sensor that uses only visible light. The fitness of banknotes is usually determined by various factors such as soiling, creases, and tears, etc. although we just consider banknote soiling in our research. This research is novel in the following four ways: first, there has been little research conducted on fitness classification for the Indian Rupee using visible-light images. Second, the classification is conducted based on the features extracted from the regions of interest (ROIs), which contain little texture. Third, 1-level discrete wavelet transformation (DWT) is used to extract the features for discriminating between fit and unfit banknotes. Fourth, the optimal DWT features that represent the fitness and unfitness of banknotes are selected based on linear regression analysis with ground-truth data measured by densitometer. In addition, the selected features are used as the inputs to a support vector machine (SVM) for the final classification of banknote fitness. Experimental results showed that our method outperforms other methods.
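The feature pipeline described in this record (1-level DWT features feeding an SVM) can be outlined as follows; the ROI selection, the densitometer-based feature selection, and the kernel settings are not reproduced, and the training data are synthetic stand-ins.

```python
# DWT subband features per ROI, followed by SVM fit/unfit classification.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(roi):
    """1-level 2-D DWT; mean absolute value of each subband as features."""
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(float), 'haar')
    return [np.abs(b).mean() for b in (cA, cH, cV, cD)]

# Hypothetical training set: 'fit' ROIs are brighter/cleaner than 'unfit' ones.
fit_rois   = [np.random.normal(200, 5, (64, 64)) for _ in range(20)]
unfit_rois = [np.random.normal(150, 25, (64, 64)) for _ in range(20)]
X = np.array([dwt_features(r) for r in fit_rois + unfit_rois])
y = np.array([1] * 20 + [0] * 20)            # 1 = fit, 0 = unfit

clf = SVC(kernel='rbf').fit(X, y)
print(clf.predict([dwt_features(np.random.normal(195, 6, (64, 64)))]))
```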

  19. BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry

    Science.gov (United States)

    Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)

    1994-01-01

Until recent times, optical remote sensing of ecosystem properties from space has been limited to broad-band multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).

  20. Low-Power Low-Noise CMOS Imager Design : In Micro-Digital Sun Sensor Application

    NARCIS (Netherlands)

    Xie, N.

    2012-01-01

A digital sun sensor is superior to an analog sun sensor in terms of resolution, albedo immunity, and integration. The proposed Micro-Digital Sun Sensor (µDSS) is an autonomous digital sun sensor which is implemented by means of a CMOS image sensor named APS+. The µDSS is designed

  1. Approach for Self-Calibrating CO2 Measurements with Linear Membrane-Based Gas Sensors

    Directory of Open Access Journals (Sweden)

    Detlef Lazik

    2016-11-01

Full Text Available Linear membrane-based gas sensors, which can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO2 in the subsurface, can be designed according to the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on the operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome this multi-criterial slope dependency. This results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was demonstrated on the example of CO2 analysis in dry air with tubular PDMS membranes for various CO2 concentrations of the internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor shows an unexpectedly low bias (<1% of the non-calibrated sensor response) and comparable statistical uncertainty.

  2. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of correct matching is raised greatly. The effect is especially remarkable for low-S/N image pairs.

  3. Soft sensor design by multivariate fusion of image features and process measurements

    DEFF Research Database (Denmark)

    Lin, Bao; Jørgensen, Sten Bay

    2011-01-01

    This paper presents a multivariate data fusion procedure for design of dynamic soft sensors where suitably selected image features are combined with traditional process measurements to enhance the performance of data-driven soft sensors. A key issue of fusing multiple sensor data, i.e. to determine...... with a multivariate analysis technique from RGB pictures. The color information is also transformed to hue, saturation and intensity components. Both sets of image features are combined with traditional process measurements to obtain an inferential model by partial least squares (PLS) regression. A dynamic PLS model...... oxides (NOx) emission of cement kilns. On-site tests demonstrate improved performance over soft sensors based on conventional process measurements only....
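A conceptual sketch of the fusion step, concatenating image-derived features with process measurements and fitting a PLS regression, is given below with synthetic stand-in data (not the cement kiln dataset).

```python
# Fuse image features with process measurements in a PLS soft sensor.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 200
image_features = rng.normal(size=(n, 6))      # e.g. mean R, G, B, hue, saturation, intensity
process_meas   = rng.normal(size=(n, 4))      # e.g. temperatures, flows
X = np.hstack([image_features, process_meas]) # multivariate fusion of both blocks
y = X @ rng.normal(size=(10, 1)) + 0.1 * rng.normal(size=(n, 1))  # stand-in NOx target

pls = PLSRegression(n_components=4).fit(X, y)
print(pls.score(X, y))                        # fit quality of the fused soft sensor
```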

  4. Broadband image sensor array based on graphene-CMOS integration

    Science.gov (United States)

    Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank

    2017-06-01

Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.

  5. Thermal infrared panoramic imaging sensor

    Science.gov (United States)

    Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-05-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the

  6. A CMOS image sensor with row and column profiling means

    NARCIS (Netherlands)

    Xie, N.; Theuwissen, A.J.P.; Wang, X.; Leijtens, J.A.P.; Hakkesteegt, H.; Jansen, H.

    2008-01-01

    This paper describes the implementation and first measurement results of a new way of obtaining row and column profile data from a CMOS Image Sensor, which is developed for a micro-Digital Sun Sensor (μDSS). The basic profiling action is achieved by the pixels with p-type MOS transistors which realize

  7. Optical Inspection In Hostile Industrial Environments: Single-Sensor VS. Imaging Methods

    Science.gov (United States)

    Cielo, P.; Dufour, M.; Sokalski, A.

    1988-11-01

    On-line and unsupervised industrial inspection for quality control and process monitoring is increasingly required in the modern automated factory. Optical techniques are particularly well suited to industrial inspection in hostile environments because of their noncontact nature, fast response time and imaging capabilities. Optical sensors can be used for remote inspection of high temperature products or otherwise inaccessible parts, provided they are in a line-of-sight relation with the sensor. Moreover, optical sensors are much easier to adapt to a variety of part shapes, position or orientation and conveyor speeds as compared to contact-based sensors. This is an important requirement in a flexible automation environment. A number of choices are possible in the design of optical inspection systems. General-purpose two-dimensional (2-D) or three-dimensional (3-D) imaging techniques have advanced very rapidly in the last years thanks to a substantial research effort as well as to the availability of increasingly powerful and affordable hardware and software. Imaging can be realized using 2-D arrays or simpler one-dimensional (1-D) line-array detectors. Alternatively, dedicated single-spot sensors require a smaller amount of data processing and often lead to robust sensors which are particularly appropriate to on-line operation in hostile industrial environments. Many specialists now feel that dedicated sensors or clusters of sensors are often more effective for specific industrial automation and control tasks, at least in the short run. This paper will discuss optomechanical and electro-optical choices with reference to the design of a number of on-line inspection sensors which have been recently developed at our institute. Case studies will include real-time surface roughness evaluation on polymer cables extruded at high speed, surface characterization of hot-rolled or galvanized-steel sheets, temperature evaluation and pinhole detection in aluminum foil, multi

  8. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus

    2012-11-01

    We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM.

  9. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  10. A complex linear least-squares method to derive relative and absolute orientations of seismic sensors

    OpenAIRE

    F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger

    2012-01-01

    Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...
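
    One common closed-form realization of such a complex linear least-squares fit (a sketch, not necessarily the authors' exact formulation) treats each sensor's horizontal components as a complex trace and estimates the relative rotation angle from the fit u2 ≈ a·u1:

```python
# Hedged sketch: relative orientation of two horizontal-component sensors that
# record the same wavefield, via a complex linear least-squares fit u2 = a * u1.
import numpy as np

def relative_orientation(n1, e1, n2, e2):
    u1 = n1 + 1j * e1                         # horizontal motion of sensor 1 as a complex trace
    u2 = n2 + 1j * e2                         # horizontal motion of sensor 2
    a = np.vdot(u1, u2) / np.vdot(u1, u1)     # least-squares solution of u2 = a * u1
    return np.degrees(np.angle(a))            # relative rotation angle in degrees

# synthetic check: sensor 2 components rotated by 30 degrees relative to sensor 1
rng = np.random.default_rng(1)
n1, e1 = rng.normal(size=500), rng.normal(size=500)
theta = np.radians(30.0)
n2 = np.cos(theta) * n1 - np.sin(theta) * e1
e2 = np.sin(theta) * n1 + np.cos(theta) * e1
print(relative_orientation(n1, e1, n2, e2))   # ~30 degrees
```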

  11. Fast regional readout CMOS Image Sensor for dynamic MLC tracking

    Science.gov (United States)

    Zin, H.; Harris, E.; Osmond, J.; Evans, P.

    2014-03-01

    Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring the dose rate. This work explores the feasibility of a prototype Complementary metal-oxide semiconductor Image Sensor (CIS) for tracking these complex treatments by utilising fast, region of interest (ROI) read out functionality. An automatic edge tracking algorithm was used to locate the MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged with various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion to within an accuracy of 1.0 mm. This demonstrates the feasibility of the sensor to verify treatment delivery involving dynamic MLC at up to ~400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as using electronic portal imaging devices (EPID). The CIS provides the basis for an essential real-time verification tool, useful in assessing accurate delivery of complex high energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.
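
    A hedged illustration of row-wise leaf-edge localization (a simple gradient-peak detector, not the authors' algorithm; the pixel pitch is an assumed value):

```python
# Minimal sketch: locate an MLC leaf edge in each row of a portal image as the
# position of the steepest intensity step, then convert pixels to millimetres.
import numpy as np

PIXEL_PITCH_MM = 0.5   # assumed detector pixel pitch

def leaf_edge_positions(frame):
    """Return the estimated edge position (mm) for every image row."""
    grad = np.abs(np.diff(frame.astype(float), axis=1))  # row-wise intensity gradient
    edge_px = np.argmax(grad, axis=1) + 0.5              # steepest step per row
    return edge_px * PIXEL_PITCH_MM

# synthetic frame: bright open field on the left, leaf blocking beyond column 60
frame = np.ones((10, 128)); frame[:, 60:] = 0.1
print(leaf_edge_positions(frame)[:3])   # ~30 mm (column ~60 times 0.5 mm)
```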

  12. Fast regional readout CMOS image sensor for dynamic MLC tracking

    International Nuclear Information System (INIS)

    Zin, H; Harris, E; Osmond, J; Evans, P

    2014-01-01

    Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring the dose rate. This work explores the feasibility of a prototype Complementary metal-oxide semiconductor Image Sensor (CIS) for tracking these complex treatments by utilising fast, region of interest (ROI) read out functionality. An automatic edge tracking algorithm was used to locate the MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged with various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion to within an accuracy of 1.0 mm. This demonstrates the feasibility of the sensor to verify treatment delivery involving dynamic MLC at up to ∼400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as using electronic portal imaging devices (EPID). The CIS provides the basis for an essential real-time verification tool, useful in assessing accurate delivery of complex high energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.

  13. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    Science.gov (United States)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a challenging task to efficiently produce planetary mapping products from orbital remote sensing images. There are many difficulties in the photogrammetric processing of planetary stereo images, such as the lack of ground control information and informative features; among these, image matching is the most difficult task in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  14. VLC-based indoor location awareness using LED light and image sensors

    Science.gov (United States)

    Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    Recently, indoor LED lighting has been considered for constructing green infrastructure with energy savings while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation services. For example, in the case of a large, complex shopping mall, location awareness for navigating to a destination is a very important issue. However, conventional navigation using GPS does not work indoors. An alternative location service based on WLAN suffers from low position accuracy; for example, it is difficult to estimate the height exactly. If the position error in height is greater than the height between floors, it may cause serious problems. Therefore, conventional navigation is inappropriate for indoor use. An alternative possible solution for indoor navigation is a VLC based location awareness scheme. Because indoor LED infrastructure will certainly be installed to provide lighting functionality, indoor LED lighting combined with VLC technology has the potential to provide relatively high position estimation accuracy. In this paper, we present a new VLC based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using more than two image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC based positioning system using visible LED light and image sensors.

  15. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    Science.gov (United States)

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
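
    The reconstruction step can be sketched as a circular cross-correlation of the scanned intensity map with a matched decoding array. The toy random mask below is only a stand-in for a genuine URA, which would make this correlation an exact delta function; sizes and values are illustrative:

```python
# Illustrative coded-aperture encode/decode: measurement = object (*) aperture
# (circular convolution); recovery = circular cross-correlation with a decoder.
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[8, 20] = 1.0; obj[20, 5] = 0.5    # toy point sources
aperture = (rng.random((32, 32)) < 0.5).astype(float)           # stand-in for a URA
decoder = 2.0 * aperture - 1.0                                   # balanced decoding array

measured = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(aperture)))
recon = np.real(np.fft.ifft2(np.fft.fft2(measured) * np.conj(np.fft.fft2(decoder))))
recon /= aperture.sum()

print(np.unravel_index(np.argmax(recon), recon.shape))  # brightest pixel ~ (8, 20)
```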

  16. Automated, non-linear registration between 3-dimensional brain map and medical head image

    International Nuclear Information System (INIS)

    Mizuta, Shinobu; Urayama, Shin-ichi; Zoroofi, R.A.; Uyama, Chikao

    1998-01-01

    In this paper, we propose an automated, non-linear registration method between a 3-dimensional medical head image and a brain map in order to efficiently extract the regions of interest. In our method, the input 3-dimensional image is registered onto a reference image extracted from a brain map. The problems to be solved are an automated, non-linear image matching procedure and a cost function that represents the similarity between two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions while preserving connectivity among adjacent regions, evaluating the image similarity between the transformed regions of the input image and the corresponding regions of the reference image, and iteratively searching for the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function based on mutual information is introduced. Experiments using MR images demonstrated the effectiveness of the proposed method. (author)
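
    A small sketch of the mutual-information similarity measure referred to above, computed from the joint intensity histogram of two images (the bin count is an arbitrary choice):

```python
# Mutual information between two images from their joint intensity histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(mutual_information(a, a))                    # high: image compared with itself
print(mutual_information(a, rng.random((64, 64)))) # much lower for independent images
```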

  17. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    Directory of Open Access Journals (Sweden)

    Keunyeol Park

    2018-02-01

    Full Text Available This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
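
    A hedged software analogue of the single-bit XOR edge-detection idea: binarize the image to single-bit data, then XOR each pixel with its neighbours so that only intensity transitions (edges) remain set. Threshold and image sizes are illustrative:

```python
# Software analogue of single-bit XOR edge detection on a binarized image.
import numpy as np

def single_bit_edges(img, threshold=128):
    b = (img >= threshold).astype(np.uint8)   # 1-bit image
    edge_h = b[:, 1:] ^ b[:, :-1]             # horizontal transitions
    edge_v = b[1:, :] ^ b[:-1, :]             # vertical transitions
    edges = np.zeros_like(b)
    edges[:, 1:] |= edge_h
    edges[1:, :] |= edge_v
    return edges

# toy "iris": bright disc on a dark background
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 400, 200, 20)
print(single_bit_edges(img).sum())   # number of edge pixels on the disc boundary
```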

  18. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    Science.gov (United States)

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  19. Crop status sensing system by multi-spectral imaging sensor, 1: Image processing and paddy field sensing

    International Nuclear Information System (INIS)

    Ishii, K.; Sugiura, R.; Fukagawa, T.; Noguchi, N.; Shibata, Y.

    2006-01-01

    The objective of the study is to construct a sensing system for precision farming. A Multi-Spectral Imaging Sensor (MSIS), which can obtain three images (G, R and NIR) simultaneously, was used for detecting the growth status of plants. The sensor was mounted on an unmanned helicopter. An image processing method for acquiring information on crop status with high accuracy was developed. Crop parameters that were measured include SPAD, leaf height, and stem number. Both a direct seeding variety and a transplant variety of paddy rice were adopted in the research. The result of a field test showed that the crop status of both varieties could be detected with sufficient accuracy to apply to precision farming.

  20. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    Science.gov (United States)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide a large multiplex, positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with a low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
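
    A minimal sketch of the first-moment (centre-of-mass) centroid estimate used to locate each spot; the window size and spot model are illustrative:

```python
# First-moment centroid of a Shack-Hartmann spot within its sub-aperture window.
import numpy as np

def first_moment_centroid(spot):
    spot = spot.astype(float)
    total = spot.sum()
    yy, xx = np.indices(spot.shape)
    return (np.sum(yy * spot) / total, np.sum(xx * spot) / total)

# synthetic Gaussian spot centred at (12.3, 8.7)
yy, xx = np.indices((24, 24))
spot = np.exp(-(((yy - 12.3) ** 2 + (xx - 8.7) ** 2) / (2 * 2.0 ** 2)))
print(first_moment_centroid(spot))   # ~(12.3, 8.7)
```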

  1. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
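
    An illustrative example of Gaussian process interpolation of sparsely observed, noisy pixel data, using standard exact GP regression rather than the paper's fast O(N^(3/2)) solver; grid size, kernel, and noise level are assumptions:

```python
# Gaussian process interpolation of missing pixels, with a fitted noise term
# acting as a denoiser (standard sklearn GP, not the paper's fast algorithm).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
yy, xx = np.indices((16, 16))
truth = np.sin(yy / 3.0) + np.cos(xx / 4.0)
noisy = truth + 0.05 * rng.normal(size=truth.shape)

mask = rng.random(truth.shape) < 0.25                 # only 25% of pixels observed
coords = np.column_stack([yy[mask], xx[mask]])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(0.01))
gp.fit(coords, noisy[mask])

all_coords = np.column_stack([yy.ravel(), xx.ravel()])
interp = gp.predict(all_coords).reshape(truth.shape)
print("RMSE vs truth:", float(np.sqrt(np.mean((interp - truth) ** 2))))
```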

  2. Robust optical sensors for safety critical automotive applications

    Science.gov (United States)

    De Locht, Cliff; De Knibber, Sven; Maddalena, Sam

    2008-02-01

    Optical sensors for the automotive industry need to be robust, high performing and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. The main strategies for lowering optical sensor entry barriers in the automotive market are: sensor calibration and tuning performed by the sensor manufacturer, on-chip sensor test modes to guarantee functional integrity during operation, and appropriate package technology. In conclusion, optical sensor applications in automotive are growing. Optical sensor robustness has matured to the level of safety-critical applications such as Electrical Power Assisted Steering (EPAS) and Drive-by-Wire, served by systems based on optical linear arrays, and Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment, served by systems based on camera imagers.

  3. Efficient demodulation scheme for rolling-shutter-patterning of CMOS image sensor based visible light communications.

    Science.gov (United States)

    Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2017-10-02

    Recently, even low-end mobile phones are equipped with a high-resolution complementary-metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image sensor based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
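
    As a baseline for comparison with such schemes, a simple global-threshold demodulation of a rolling-shutter frame can be sketched as follows (the row grouping and bit pattern are assumptions, not the paper's parameters):

```python
# Baseline rolling-shutter demodulation: average each image row to obtain a 1-D
# waveform, then slice it against its mean to recover on-off-keyed bits.
import numpy as np

def rolling_shutter_bits(frame, rows_per_bit=4):
    profile = frame.astype(float).mean(axis=1)          # one sample per sensor row
    levels = profile.reshape(-1, rows_per_bit).mean(axis=1)
    return (levels > profile.mean()).astype(int)

# synthetic frame: bright/dark horizontal bands encoding the bit sequence below
bits_tx = np.array([1, 0, 1, 1, 0, 0, 1, 0])
frame = np.repeat(np.where(bits_tx, 200, 40), 4)[:, None] * np.ones((1, 64))
print(rolling_shutter_bits(frame))    # recovers bits_tx
```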

  4. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
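
    A sketch of the standard four-sample phase and range calculation the abstract refers to; the modulation frequency and sample convention are assumed for illustration:

```python
# Four-sample ("four bucket") phase estimate and range conversion.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 30e6               # assumed modulation frequency, Hz

def range_from_samples(a0, a1, a2, a3):
    """Four intensity samples taken 90 degrees apart -> phase -> distance."""
    phase = np.arctan2(a1 - a3, a0 - a2)       # radians, wraps at 2*pi
    phase = np.mod(phase, 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)     # distance within the ambiguity range

# samples of a sinusoidal envelope delayed by a phase of 1.0 rad
true_phase = 1.0
a = [np.cos(true_phase - k * np.pi / 2) for k in range(4)]
print(range_from_samples(*a))   # ~C * 1.0 / (4*pi*F_MOD) ~ 0.80 m
```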

  5. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies1, cross-modality fusion2, and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness4,5. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis aligned bounding boxes6. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state of the art Elastix toolbox7. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  6. A 10-bit column-parallel cyclic ADC for high-speed CMOS image sensors

    International Nuclear Information System (INIS)

    Han Ye; Li Quanliang; Shi Cong; Wu Nanjian

    2013-01-01

    This paper presents a high-speed column-parallel cyclic analog-to-digital converter (ADC) for a CMOS image sensor. A correlated double sampling (CDS) circuit is integrated in the ADC, which avoids a stand-alone CDS circuit block. An offset cancellation technique is also introduced, which reduces the column fixed-pattern noise (FPN) effectively. One single channel ADC with an area less than 0.02 mm 2 was implemented in a 0.13 μm CMOS image sensor process. The resolution of the proposed ADC is 10-bit, and the conversion rate is 1.6 MS/s. The measured differential nonlinearity and integral nonlinearity are 0.89 LSB and 6.2 LSB together with CDS, respectively. The power consumption from 3.3 V supply is only 0.66 mW. An array of 48 10-bit column-parallel cyclic ADCs was integrated into an array of CMOS image sensor pixels. The measured results indicated that the ADC circuit is suitable for high-speed CMOS image sensors. (semiconductor integrated circuits)

  7. Retina-like sensor image coordinates transformation and display

    Science.gov (United States)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
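
    A hedged sketch of the polar resampling with sub-pixel (bilinear) interpolation described above; the grid sizes are arbitrary and the sensor's actual pixel layout is not modelled:

```python
# Cartesian-to-polar resampling with bilinear (sub-pixel) interpolation.
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    cy, cx = (np.array(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    y = cy + rr * np.sin(tt)                     # Cartesian sample positions
    x = cx + rr * np.cos(tt)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1 = np.clip(y0 + 1, 0, img.shape[0] - 1)
    x1 = np.clip(x0 + 1, 0, img.shape[1] - 1)
    wy, wx = y - y0, x - x0                      # sub-pixel weights
    img = img.astype(float)
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

test = np.random.default_rng(0).random((128, 128))
print(to_polar(test).shape)    # (64, 128): radius x angle
```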

  8. Measuring the Contractile Response of Isolated Tissue Using an Image Sensor

    Directory of Open Access Journals (Sweden)

    David Díaz-Martín

    2015-04-01

    Full Text Available Isometric or isotonic transducers have traditionally been used to study the contractile/relaxation effects of drugs on isolated tissues. However, these mechanical sensors are expensive and delicate, and they are associated with certain disadvantages when performing experiments in the laboratory. In this paper, a method that uses an image sensor to measure the contractile effect of drugs on blood vessel rings and other luminal organs is presented. The new method is based on an image-processing algorithm, and it provides a fast, easy and inexpensive way to analyze the effects of such drugs. In our tests, we have obtained dose-response curves from rat aorta rings that are equivalent to those achieved with classical mechanical sensors.

  9. White-light full-field OCT resolution improvement by image sensor colour balance adjustment: numerical simulation

    International Nuclear Information System (INIS)

    Kalyanov, A L; Lychagov, V V; Ryabukho, V P; Smirnov, I V

    2012-01-01

    The possibility of improving white-light full-field optical coherence tomography (OCT) resolution by image sensor colour balance tuning is shown numerically. We calculated the full-width at half-maximum (FWHM) of a coherence pulse registered by a silicon colour image sensor under various colour balance settings. The calculations were made for both a halogen lamp and white LED sources. The results show that the interference pulse width can be reduced by the proper choice of colour balance coefficients. The reduction is up to 18%, as compared with a colour image sensor with regular settings, and up to 20%, as compared with a monochrome sensor. (paper)

  10. Extracellular Bio-imaging of Acetylcholine-stimulated PC12 Cells Using a Calcium and Potassium Multi-ion Image Sensor.

    Science.gov (United States)

    Matsuba, Sota; Kato, Ryo; Okumura, Koichi; Sawada, Kazuaki; Hattori, Toshiaki

    2018-01-01

    In biochemistry, Ca2+ and K+ play essential roles in controlling signal transduction. Much interest has been focused on ion imaging, which facilitates understanding of their ion flux dynamics. In this paper, we report a calcium and potassium multi-ion image sensor and its application to living cells (PC12). The multi-ion sensor had two selective plasticized poly(vinyl chloride) membranes containing ionophores. Each region on the sensor responded to only the corresponding ion. The multi-ion sensor has many advantages, including not only label-free and real-time measurement but also simultaneous detection of Ca2+ and K+. Cultured PC12 cells treated with nerve growth factor were prepared, and a practical observation of the cells was conducted with the sensor. After the PC12 cells were stimulated by acetylcholine, only the extracellular Ca2+ concentration increased while there was no increase in the extracellular K+ concentration. Through this practical observation, we demonstrated that the sensor is helpful for analyzing cell events involving changing Ca2+ and/or K+ concentrations.

  11. Optical Imaging Sensors and Systems for Homeland Security Applications

    CERN Document Server

    Javidi, Bahram

    2006-01-01

    Optical and photonic systems and devices have significant potential for homeland security. Optical Imaging Sensors and Systems for Homeland Security Applications presents original and significant technical contributions from leaders of industry, government, and academia in the field of optical and photonic sensors, systems and devices for detection, identification, prevention, sensing, security, verification and anti-counterfeiting. The chapters have recent and technically significant results, ample illustrations, figures, and key references. This book is intended for engineers and scientists in the relevant fields, graduate students, industry managers, university professors, government managers, and policy makers. Advanced Sciences and Technologies for Security Applications focuses on research monographs in the areas of -Recognition and identification (including optical imaging, biometrics, authentication, verification, and smart surveillance systems) -Biological and chemical threat detection (including bios...

  12. Laser Doppler perfusion imaging with a complementary metal oxide semiconductor image sensor

    NARCIS (Netherlands)

    Serov, Alexander; Steenbergen, Wiendelt; de Mul, F.F.M.

    2002-01-01

    We utilized a complementary metal oxide semiconductor video camera for fast flow imaging with the laser Doppler technique. A single sensor is used for both observation of the area of interest and measurements of the interference signal caused by dynamic light scattering from moving particles inside

  13. Light-Addressable Potentiometric Sensors for Quantitative Spatial Imaging of Chemical Species.

    Science.gov (United States)

    Yoshinobu, Tatsuo; Miyamoto, Ko-Ichiro; Werner, Carl Frederik; Poghossian, Arshak; Wagner, Torsten; Schöning, Michael J

    2017-06-12

    A light-addressable potentiometric sensor (LAPS) is a semiconductor-based chemical sensor, in which a measurement site on the sensing surface is defined by illumination. This light addressability can be applied to visualize the spatial distribution of pH or the concentration of a specific chemical species, with potential applications in the fields of chemistry, materials science, biology, and medicine. In this review, the features of this chemical imaging sensor technology are compared with those of other technologies. Instrumentation, principles of operation, and various measurement modes of chemical imaging sensor systems are described. The review discusses and summarizes state-of-the-art technologies, especially with regard to the spatial resolution and measurement speed; for example, a high spatial resolution in a submicron range and a readout speed in the range of several tens of thousands of pixels per second have been achieved with the LAPS. The possibility of combining this technology with microfluidic devices and other potential future developments are discussed.

  14. Real-time biochemical sensor based on Raman scattering with CMOS contact imaging.

    Science.gov (United States)

    Muyun Cao; Yuhua Li; Yadid-Pecht, Orly

    2015-08-01

    This work presents a biochemical sensor based on Raman scattering with complementary metal-oxide-semiconductor (CMOS) contact imaging. This biochemical optical sensor is designed for detecting the concentration of solutions. The system is built with a laser diode, an optical filter, a sample holder and a commercial CMOS sensor. The output of the system is analyzed by an image processing program. The system provides instant measurements with a resolution of 0.2 to 0.4 Mol. This low-cost, easy-to-operate, small-scale system is useful in chemical, biomedical and environmental labs for quantitative biochemical concentration detection, with reported results comparable to those of a high-cost commercial spectrometer.

  15. Decoding mobile-phone image sensor rolling shutter effect for visible light communications

    Science.gov (United States)

    Liu, Yang

    2016-01-01

    Optical wireless communication (OWC) using visible light, also known as visible light communication (VLC), has attracted significant attention recently. As the traditional OWC and VLC receivers (Rxs) are based on PIN photo-diodes or avalanche photo-diodes, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive, since nowadays nearly every person has a smart phone with an embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern. Then we describe their processing algorithms one by one. The experimental results show that both of the proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.

  16. A vertex detector for the International Linear Collider based on CMOS sensors

    Energy Technology Data Exchange (ETDEWEB)

    Besson, Auguste [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France)]. E-mail: abesson@in2p3.fr; Claus, Gilles [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Colledani, Claude [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Degerli, Yavuz [CEA Saclay, DAPNIA, Gif-sur-Yvette Cedex (France); Deptuch, Grzegorz [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Deveaux, Michael [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France) and GSI, Planckstrasse 1, Darmstadt 64291 (Germany); Dulinski, Wojciech [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Fourches, Nicolas [CEA Saclay, DAPNIA, Gif-sur-Yvette Cedex (France); Goffe, Mathieu [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Grandjean, Damien [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Guilloux, Fabrice [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Heini, Sebastien [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France)]|[GSI, Planckstrasse 1, Darmstadt 64291 (Germany); Himmi, Abdelkader; Hu, Christine; Jaaskelainen, Kimmo [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Li, Yan; Lutz, Pierre; Orsini, Fabienne [CEA Saclay, DAPNIA, Gif-sur-Yvette Cedex (France); Pellicioli, Michel; Scopelliti, Emanuele; Shabetai, Alexandre; Szelezniak, Michal; Valin, Isabelle [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France); Winter, Marc [Institut de Recherches Subatomiques, 23 rue du Loess, Strasbourg 67037 Cedex 02 (France)]. E-mail: marc.winter@ires.in2p3.f

    2006-11-30

    The physics programme at the International Linear Collider (ILC) calls for a vertex detector (VD) providing unprecedented flavour tagging performances, especially for c-quarks and {tau} leptons. This requirement makes a very granular, thin and multi-layer VD installed very close to the interaction region mandatory. Additional constraints, mainly on read-out speed and radiation tolerance, originate from the beam background, which governs the occupancy and the radiation level the detector should be able to cope with. CMOS sensors are being developed to fulfil these requirements. This report addresses the ILC requirements (highly related to beamstrahlung), the main advantages and features of CMOS sensors, the demonstrated performances and the specific aspects of a VD based on this technology. The status of the main R and D directions (radiation tolerance, thinning procedure and read-out speed) are also presented.

  17. Comparison of the performance of intraoral X-ray sensors using objective image quality assessment.

    Science.gov (United States)

    Hellén-Halme, Kristina; Johansson, Curt; Nilsson, Mats

    2016-05-01

    The main aim of this study was to evaluate the performance of 10 individual sensors of the same make, using objective measures of key image quality parameters. A further aim was to compare 8 brands of sensors. Ten new sensors of 8 different models from 6 manufacturers (i.e., 80 sensors) were included in the study. All sensors were exposed in a standardized way using an X-ray tube voltage of 60 kVp and different exposure times. Sensor response, noise, low-contrast resolution, spatial resolution and uniformity were measured. Individual differences between sensors of the same brand were surprisingly large in some cases. There were clear differences in the characteristics of the different brands of sensors. The largest variations were found for individual sensor response for some of the brands studied. Also, noise level and low contrast resolution showed large variations between brands. Sensors, even of the same brand, vary significantly in their quality. It is thus valuable to establish action levels for the acceptance of newly delivered sensors and to use objective image quality control for commissioning purposes and periodic checks to ensure high performance of individual digital sensors. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
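
    The calibration-then-inversion idea can be sketched per pixel as a Tikhonov-regularized least-squares solve; the matrix sizes, test spectrum, and regularization weight below are assumptions for illustration, not values from the paper:

```python
# Toy regularized inversion: a calibration matrix A maps a spectrum to coded
# sensor values; the spectrum is recovered by Tikhonov-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n_sensor, n_bands = 40, 30
A = rng.random((n_sensor, n_bands))                           # calibrated spectral response
spectrum = np.exp(-((np.arange(n_bands) - 12) ** 2) / 20.0)   # smooth test spectrum
y = A @ spectrum + 0.01 * rng.normal(size=n_sensor)           # coded, noisy sensor reading

lam = 0.1                                                     # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ y)
print("reconstruction RMSE:", float(np.sqrt(np.mean((x_hat - spectrum) ** 2))))
```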

  19. Development of integrated semiconductor optical sensors for functional brain imaging

    Science.gov (United States)

    Lee, Thomas T.

    Optical imaging of neural activity is a widely accepted technique for imaging brain function in the field of neuroscience research, and has been used to study the cerebral cortex in vivo for over two decades. Maps of brain activity are obtained by monitoring intensity changes in back-scattered light, called Intrinsic Optical Signals (IOS), that correspond to fluctuations in blood oxygenation and volume associated with neural activity. Current imaging systems typically employ bench-top equipment including lamps and CCD cameras to study animals using visible light. Such systems require the use of anesthetized or immobilized subjects with craniotomies, which imposes limitations on the behavioral range and duration of studies. The ultimate goal of this work is to overcome these limitations by developing a single-chip semiconductor sensor using arrays of sources and detectors operating at near-infrared (NIR) wavelengths. A single-chip implementation, combined with wireless telemetry, will eliminate the need for immobilization or anesthesia of subjects and allow in vivo studies of free behavior. NIR light offers additional advantages because it experiences less absorption in animal tissue than visible light, which allows for imaging through superficial tissues. This, in turn, reduces or eliminates the need for traumatic surgery and enables long-term brain-mapping studies in freely-behaving animals. This dissertation concentrates on key engineering challenges of implementing the sensor. This work shows the feasibility of using a GaAs-based array of vertical-cavity surface emitting lasers (VCSELs) and PIN photodiodes for IOS imaging. I begin with in-vivo studies of IOS imaging through the skull in mice, and use these results along with computer simulations to establish minimum performance requirements for light sources and detectors. I also evaluate the performance of a current commercial VCSEL for IOS imaging, and conclude with a proposed prototype sensor.

  20. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the date rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  1. The linear variable differential transformer (LVDT) position sensor for gravitational wave interferometer low-frequency controls

    Energy Technology Data Exchange (ETDEWEB)

    Tariq, Hareem E-mail: htariq@ligo.caltech.edu; Takamori, Akiteru; Vetrano, Flavio; Wang Chenyang; Bertolini, Alessandro; Calamai, Giovanni; DeSalvo, Riccardo; Gennai, Alberto; Holloway, Lee; Losurdo, Giovanni; Marka, Szabolcs; Mazzoni, Massimo; Paoletti, Federico; Passuello, Diego; Sannibale, Virginio; Stanga, Ruggero

    2002-08-21

    Low-power, ultra-high-vacuum compatible, non-contacting position sensors with nanometer resolution and centimeter dynamic range have been developed, built and tested. They have been designed at Virgo as the sensors for low-frequency modal damping of Seismic Attenuation System chains in Gravitational Wave interferometers and sub-micron absolute mirror positioning. One type of these linear variable differential transformers (LVDTs) has been designed to be also insensitive to transversal displacement thus allowing 3D movement of the sensor head while still precisely reading its position along the sensitivity axis. A second LVDT geometry has been designed to measure the displacement of the vertical seismic attenuation filters from their nominal position. Unlike the commercial LVDTs, mostly based on magnetic cores, the LVDTs described here exert no force on the measured structure.

  2. Honeywell's Compact, Wide-angle Uv-visible Imaging Sensor

    Science.gov (United States)

    Pledger, D.; Billing-Ross, J.

    1993-01-01

    Honeywell is currently developing the Earth Reference Attitude Determination System (ERADS). ERADS determines attitude by imaging the entire Earth's limb and a ring of the adjacent star field in the 2800-3000 A band of the ultraviolet. This is achieved through the use of a highly nonconventional optical system, an intensifier tube, and a mega-element CCD array. The optics image a 30 degree region in the center of the field, and an outer region typically from 128 to 148 degrees, which can be adjusted up to 180 degrees. Because of the design employed, the illumination at the outer edge of the field is only some 15 percent below that at the center, in contrast to the drastic rolloffs encountered in conventional wide-angle sensors. The outer diameter of the sensor is only 3 in; the volume and weight of the entire system, including processor, are 1000 cc and 6 kg, respectively.

  3. Detection and Classification of Multiple Objects using an RGB-D Sensor and Linear Spatial Pyramid Matching

    OpenAIRE

    Dimitriou, Michalis; Kounalakis, Tsampikos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    This paper presents a complete system for multiple object detection and classification in a 3D scene using an RGB-D sensor such as the Microsoft Kinect sensor. Successful multiple object detection and classification are crucial features in many 3D computer vision applications. The main goal is making machines see and understand objects like humans do. To this goal, the new RGB-D sensors can be utilized since they provide real-time depth map which can be used along with the RGB images for our ...

  4. Operational calibration and validation of landsat data continuity mission (LDCM) sensors using the image assessment system (IAS)

    Science.gov (United States)

    Micijevic, Esad; Morfitt, Ron

    2010-01-01

    Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and recently extended to Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and Multispectral Sensors (MSS) on-board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observer 1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to unique calibration and validation attributes of its on-board imaging sensors. The LDCM IAS system will have to handle a significantly larger number of detectors and the associated database than the previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as the LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.

  5. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related with the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, the spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method taking an image as input carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality on images taken by different sensors of varying sizes.

  6. A Faraday effect position sensor for interventional magnetic resonance imaging.

    Science.gov (United States)

    Bock, M; Umathum, R; Sikora, J; Brenner, S; Aguor, E N; Semmler, W

    2006-02-21

    An optical sensor is presented which determines the position and one degree of orientation within a magnetic resonance tomograph. The sensor utilizes the Faraday effect to measure the local magnetic field, which is modulated by switching additional linear magnetic fields, the gradients. Existing methods for instrument localization during an interventional MR procedure often use electrically conducting structures at the instruments that can heat up excessively during MRI and are thus a significant danger for the patient. The proposed optical Faraday effect position sensor consists of non-magnetic and electrically non-conducting components only so that heating is avoided and the sensor could be applied safely even within the human body. With a non-magnetic prototype set-up, experiments were performed to demonstrate the possibility of measuring both the localization and the orientation in a magnetic resonance tomograph. In a 30 mT m(-1) gradient field, a localization uncertainty of 1.5 cm could be achieved.

  7. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the intersection point of the string line and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented using a Hyundai VORG-35 robot to show its effectiveness.

  8. Single image super-resolution using locally adaptive multiple linear regression.

    Science.gov (United States)

    Yu, Soohwan; Kang, Wonseok; Ko, Seungyong; Paik, Joonki

    2015-12-01

    This paper presents a regularized superresolution (SR) reconstruction method using locally adaptive multiple linear regression to overcome the limitation of spatial resolution of digital images. In order to make the SR problem better-posed, the proposed method incorporates the locally adaptive multiple linear regression into the regularization process as a local prior. The local regularization prior assumes that the target high-resolution (HR) pixel is generated by a linear combination of similar pixels in differently scaled patches and optimum weight parameters. In addition, we adapt a modified version of the nonlocal means filter as a smoothness prior to utilize the patch redundancy. Experimental results show that the proposed algorithm better restores HR images than existing state-of-the-art methods in the sense of the most objective measures in the literature.

  9. HIGH-RESOLUTION LINEAR POLARIMETRIC IMAGING FOR THE EVENT HORIZON TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Wardle, John F. C. [Brandeis University, Physics Department, Waltham, MA 02454 (United States); Bouman, Katherine L., E-mail: achael@cfa.harvard.edu [Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139 (United States)

    2016-09-20

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  10. Image sensor for testing refractive error of eyes

    Science.gov (United States)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of the eyes does not require the cooperation of children and can be used for general screening of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in a digital form that can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula accounting for the effect of an extended light source and the size of the lens aperture has been derived, which is more reliable in practice. A computer simulation of the image sensing is made to verify the soundness of the results.

  11. CCD image sensor induced error in PIV applications

    Science.gov (United States)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  12. CCD image sensor induced error in PIV applications

    International Nuclear Information System (INIS)

    Legrand, M; Nogueira, J; Vargas, A A; Ventas, R; Rodríguez-Hidalgo, M C

    2014-01-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (∼0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described. (paper)

  13. Integration of piezo-capacitive and piezo-electric nanoweb based pressure sensors for imaging of static and dynamic pressure distribution.

    Science.gov (United States)

    Jeong, Y J; Oh, T I; Woo, E J; Kim, K J

    2017-07-01

    Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied for reconstructing pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.

  14. Fast photoacoustic imaging system based on 320-element linear transducer array

    International Nuclear Information System (INIS)

    Yin Bangzheng; Xing Da; Wang Yi; Zeng Yaguang; Tan Yi; Chen Qun

    2004-01-01

    A fast photoacoustic (PA) imaging system, based on a 320-element linear transducer array, was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorbing targets. Signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing target. By utilizing the multi-element linear transducer array and a phase-controlled imaging algorithm, we can thus acquire PA tomograms more efficiently than with other existing technology and algorithms. The methodology and equipment provide a rapid and reliable approach to PA imaging that may have potential applications in noninvasive imaging and clinical diagnosis.
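
    A generic delay-and-sum reconstruction is one way to realize the phased summation idea for a linear array; the NumPy sketch below is illustrative only, and the element pitch, sampling rate, and sound speed are assumed values rather than parameters of the reported system.

      # Generic delay-and-sum reconstruction for photoacoustic A-lines recorded
      # by a linear transducer array (illustrative sketch, not the paper's
      # phase-controlled sub-array implementation).
      import numpy as np

      def delay_and_sum(rf, elem_x, fs, c, xs, zs):
          """rf: (n_elem, n_samples) A-lines; elem_x: element x-positions (m);
          fs: sampling rate (Hz); c: sound speed (m/s); xs, zs: image grid (m)."""
          n_elem, n_samp = rf.shape
          image = np.zeros((zs.size, xs.size))
          for iz, z in enumerate(zs):
              for ix, x in enumerate(xs):
                  # one-way time of flight from pixel (x, z) to each element
                  t = np.sqrt((x - elem_x) ** 2 + z ** 2) / c
                  idx = np.clip((t * fs).astype(int), 0, n_samp - 1)
                  image[iz, ix] = rf[np.arange(n_elem), idx].sum()
          return image

      # toy usage with random data standing in for measured PA signals
      rf = np.random.randn(64, 2048)
      elem_x = np.arange(64) * 0.3e-3                  # assumed 0.3 mm pitch
      img = delay_and_sum(rf, elem_x, fs=40e6, c=1500.0,
                          xs=np.linspace(-5e-3, 5e-3, 64),
                          zs=np.linspace(5e-3, 25e-3, 64))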

  15. Image-guided linear accelerator-based spinal radiosurgery for hemangioblastoma.

    Science.gov (United States)

    Selch, Michael T; Tenn, Steve; Agazaryan, Nzhde; Lee, Steve P; Gorgulho, Alessandra; De Salles, Antonio A F

    2012-01-01

    To retrospectively review the efficacy and safety of image-guided linear accelerator-based radiosurgery for spinal hemangioblastomas. Between August 2004 and September 2010, nine patients with 20 hemangioblastomas underwent spinal radiosurgery. Five patients had von Hippel-Lindau disease. Four patients had multiple tumors. Ten tumors were located in the thoracic spine, eight in the cervical spine, and two in the lumbar spine. Tumor volume varied from 0.08 to 14.4 cc (median 0.72 cc). Maximum tumor dimension varied from 2.5 to 24 mm (median 10.5 mm). Radiosurgery was performed with a dedicated 6 MV linear accelerator equipped with a micro-multileaf collimator. Median peripheral tumor dose and prescription isodose were 12 Gy and 90%, respectively. Image guidance was performed by optical tracking of infrared reflectors, fusion of oblique radiographs with dynamically reconstructed digital radiographs, and automatic patient positioning. Follow-up varied from 14 to 86 months (median 51 months). Kaplan-Meier estimated 4-year overall and solid tumor local control rates were 90% and 95%, respectively. One tumor progressed 12 months after treatment, and a new cyst developed 10 months after treatment in another tumor. There has been no clinical or imaging evidence of spinal cord injury. The results of this limited experience indicate that linear accelerator-based radiosurgery is safe and effective for spinal cord hemangioblastomas. Longer follow-up is necessary to confirm the durability of tumor control, but these initial results imply that linear accelerator-based radiosurgery may represent a therapeutic alternative to surgery for selected patients with spinal hemangioblastomas.

  16. Visual Image Sensor Organ Replacement

    Science.gov (United States)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
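
    The image-to-sound translation can be illustrated with a toy sonification in which image rows map to frequency, columns to time, and brightness to amplitude; the frequency range and column duration below are assumptions for illustration, not the VISOR mapping itself.

      # Toy sketch of mapping a 2-D brightness map to an audio signal as a
      # function of frequency and time (rows -> pitch, columns -> time).
      import numpy as np

      def sonify(image, fs=16000, col_dur=0.02, f_lo=200.0, f_hi=4000.0):
          """image: 2-D array with values in [0, 1]; returns a mono audio signal."""
          n_rows, n_cols = image.shape
          freqs = np.linspace(f_hi, f_lo, n_rows)      # top of image -> high pitch
          t = np.arange(int(col_dur * fs)) / fs
          audio = []
          for col in range(n_cols):                    # left-to-right scan in time
              tones = image[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t)
              audio.append(tones.sum(axis=0))
          audio = np.concatenate(audio)
          return audio / (np.abs(audio).max() + 1e-12)

      signal = sonify(np.random.rand(64, 48))          # 48 columns ~ 0.96 s of audio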

  17. Method of orthogonally splitting imaging pose measurement

    Science.gov (United States)

    Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong

    2018-01-01

    In order to meet the need of aviation and machinery manufacturing for pose measurement with high precision, fast speed, and wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses, and dual linear CCDs. The dual linear CCDs respectively acquire one-dimensional image coordinate data of the target point, and the two data sets can restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by the least-squares fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target at several different positions. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focus distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy the pose measurement requirements of high precision, fast speed, and wide measurement range.

  18. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Directory of Open Access Journals (Sweden)

    Asmau M. Ahmed

    2017-07-01

    Full Text Available Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at the macroscopic level and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the spectral angular distance, the covariance, and a non-linearity parameter. The endmembers were extracted using Vertex Component Analysis, while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, the Fully Constrained Least Squares Method, the Polynomial Post-Nonlinear Mixing Model, or the Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both a synthetic dataset and real hyperspectral images demonstrate that the proposed hybrid switching method is efficient for solving spectral unmixing of hyperspectral images compared to the individual algorithms.

  19. An efficient and secure partial image encryption for wireless multimedia sensor networks using discrete wavelet transform, chaotic maps and substitution box

    Science.gov (United States)

    Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.

    2017-03-01

    Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids, and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video, and digital images. Due to the bulky nature of multimedia data, a WSN processes a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. With respect to battery life constraints, image compression combined with secure transmission over a wide-ranging sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmitted data must be secured through a process known as encryption. As a result, there has long been an intensive demand for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps, and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and the image is then shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
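
    The chaotic shuffling step driven by a piece-wise linear chaotic map can be sketched as follows (Python/NumPy); the map parameter and seed are placeholders, only the PWLCM-based permutation is shown, and the hashing, XOR, and S-box substitution stages of the full scheme are omitted.

      # Column/row permutation of an image driven by a piece-wise linear chaotic
      # map (PWLCM); sketch under assumed parameter values, not the paper's keys.
      import numpy as np

      def pwlcm(x, p):
          """One iteration of the piece-wise linear chaotic map, x in (0, 1), p in (0, 0.5)."""
          if x < p:
              return x / p
          if x <= 0.5:
              return (x - p) / (0.5 - p)
          return pwlcm(1.0 - x, p)                     # symmetric branch for x > 0.5

      def chaotic_permutation(n, x0=0.2718, p=0.317):
          xs, x = [], x0
          for _ in range(n):
              x = pwlcm(x, p)
              xs.append(x)
          return np.argsort(xs)                        # chaotic ordering -> permutation

      img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
      shuffled = img[:, chaotic_permutation(64)][chaotic_permutation(64), :]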

  20. Active Sensor for Microwave Tissue Imaging with Bias-Switched Arrays.

    Science.gov (United States)

    Foroutan, Farzad; Nikolova, Natalia K

    2018-05-06

    A prototype of a bias-switched active sensor was developed and measured to establish the achievable dynamic range in a new generation of active arrays for microwave tissue imaging. The sensor integrates a printed slot antenna, a low-noise amplifier (LNA) and an active mixer in a single unit, which is sufficiently small to enable inter-sensor separation distance as small as 12 mm. The sensor’s input covers the bandwidth from 3 GHz to 7.5 GHz. Its output intermediate frequency (IF) is 30 MHz. The sensor is controlled by a simple bias-switching circuit, which switches ON and OFF the bias of the LNA and the mixer simultaneously. It was demonstrated experimentally that the dynamic range of the sensor, as determined by its ON and OFF states, is 109 dB and 118 dB at resolution bandwidths of 1 kHz and 100 Hz, respectively.

  1. Displacement damage effects on CMOS APS image sensors induced by neutron irradiation from a nuclear reactor

    International Nuclear Information System (INIS)

    Wang, Zujun; Huang, Shaoyan; Liu, Minbo; Xiao, Zhigang; He, Baoping; Yao, Zhibin; Sheng, Jiangkun

    2014-01-01

    Experiments on displacement damage effects in CMOS APS image sensors induced by neutron irradiation from a nuclear reactor are presented. The CMOS APS image sensors are manufactured in a standard 0.35 μm CMOS technology. The flux of the neutron beam was about 1.33 × 10⁸ n/(cm²·s). Three samples were exposed to 1 MeV neutron equivalent fluences of 1 × 10¹¹, 5 × 10¹¹, and 1 × 10¹² n/cm², respectively. The mean dark signal (K_D), dark signal spikes, dark signal non-uniformity (DSNU), noise (V_N), saturation output signal voltage (V_S), and dynamic range (DR) versus neutron fluence are investigated. The degradation mechanisms of the CMOS APS image sensors are analyzed. The mean dark signal increase due to neutron displacement damage appears to be proportional to the displacement damage dose. Dark images from the CMOS APS image sensors irradiated by neutrons are presented to investigate the generation of dark signal spikes.
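
    The reported proportionality between mean dark-signal increase and displacement damage dose amounts to a straight-line dependence on fluence, which can be checked with a simple fit; the dark-signal values below are made-up placeholders for illustration, not the measured data.

      # Straight-line fit of mean dark-signal increase versus 1 MeV-equivalent
      # neutron fluence (illustrative numbers only, not the paper's measurements).
      import numpy as np

      fluence = np.array([1e11, 5e11, 1e12])           # n/cm^2, the irradiation levels used
      dark_increase = np.array([0.9, 4.8, 10.1])       # mV, placeholder values

      slope, intercept = np.polyfit(fluence, dark_increase, 1)
      print(f"dark-signal increase ~ {slope:.3e} mV per n/cm^2 "
            f"(intercept {intercept:.2f} mV)")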

  2. The research of optical windows used in aircraft sensor systems

    International Nuclear Information System (INIS)

    Zhou Feng; Li Yan; Tang Tian-Jin

    2012-01-01

    The optical windows used in aircraft protect their imaging sensors from environmental effects. Considering imaging performance, flat surfaces are traditionally used in the design of optical windows. For aircraft operating at high speeds, the optical window should be relatively aerodynamic, but a flat optical window may introduce unacceptably high drag to the airframe. Linear scanning infrared sensors placed behind, respectively, a flat window, a spherical window, and a toric window are designed and compared. Simulation results show that the optical design using a toric surface has the combined advantages of field of regard, aerodynamic drag, narcissus effect, and imaging performance, so an optical window with a toric surface is demonstrated to be suited to this application.

  3. Particle detection and classification using commercial off the shelf CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Martín [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Lipovetzky, Jose, E-mail: lipo@cab.cnea.gov.ar [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Comisión Nacional de Energía Atómica (CNEA), Centro Atómico Bariloche, Av. Bustillo 9500, Bariloche 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina); Sofo Haro, Miguel; Sidelnik, Iván; Blostein, Juan Jerónimo; Alcalde Bessia, Fabricio; Berisso, Mariano Gómez [Instituto Balseiro, Av. Bustillo 9500, Bariloche, 8400 (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Atómico Bariloche, Av. Bustillo 9500, 8400 Bariloche (Argentina)

    2016-08-11

    In this paper we analyse the response of two different commercial off-the-shelf CMOS image sensors as particle detectors. The sensors were irradiated using X-ray photons, gamma photons, beta particles, and alpha particles from diverse sources. The amount of charge produced by the different particles and the size of the spot registered on the sensor are compared and analysed by an algorithm that classifies them. For a known incident energy spectrum, the employed sensors provide a dose resolution below one microgray, showing their potential in radioprotection, area monitoring, and medical applications.
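
    A minimal sketch of spot-based classification, assuming NumPy and SciPy: threshold the frame, label connected clusters, and separate events by total charge and spot size. The thresholds and the injected cluster are illustrative assumptions, not the authors' calibration.

      # Threshold a CMOS frame, find connected clusters, and classify events by
      # deposited charge and spot area (illustrative thresholds).
      import numpy as np
      from scipy import ndimage

      def classify_events(frame, noise_sigma):
          mask = frame > 5 * noise_sigma                  # hit pixels above noise
          labels, n = ndimage.label(mask)
          events = []
          for i in range(1, n + 1):
              pix = frame[labels == i]
              charge, size = pix.sum(), pix.size          # total charge and spot area
              # alphas deposit far more charge over larger spots than betas/X-rays
              kind = "alpha-like" if charge > 5e4 and size > 20 else "beta/photon-like"
              events.append((charge, size, kind))
          return events

      frame = np.random.poisson(3.0, (512, 512)).astype(float)
      frame[100:106, 200:206] += 2e4                      # inject a fake large cluster
      print(classify_events(frame, noise_sigma=np.sqrt(3.0))[:3])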

  4. Nanoimprinted distributed feedback dye laser sensor for real-time imaging of small molecule diffusion

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2014-01-01

    Label-free imaging is a promising tool for the study of biological processes such as cell adhesion and small-molecule signaling processes. In order to image in two spatial dimensions, current solutions require motorized stages, which results in low imaging frame rates. Here, a highly sensitive distributed feedback (DFB) dye laser sensor for real-time label-free imaging without any moving parts, enabling a frame rate of 12 Hz, is presented. The presence of molecules on the laser surface results in a wavelength shift, which is used as the sensor signal. The unique DFB laser structure comprises several areas

  5. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Turkiyyah, George M.; Zuberi, M. A H; Alkhalifah, Tariq Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-squares inversion of double-scattered data helped delineate that reflector with a minimal acquisition fingerprint.
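
    The linearized least-squares idea can be illustrated with a generic sketch in which a toy blurring operator stands in for the modeling (demigration) operator and LSQR minimizes the data misfit; this is only an illustration of the inversion formulation, not the wave-equation operators used in the study.

      # Generic linearized least-squares imaging: given a linear modeling operator
      # L and data d, solve min ||L m - d||^2 with LSQR; here L is a toy Gaussian
      # blur (approximately self-adjoint), standing in for demigration.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr
      from scipy.ndimage import gaussian_filter

      n = 64                                             # toy 1-D reflectivity model
      blur = lambda m: gaussian_filter(m, sigma=3.0)
      L = LinearOperator((n, n), matvec=blur, rmatvec=blur, dtype=float)

      m_true = np.zeros(n); m_true[20] = 1.0; m_true[45] = -0.7
      d = L.matvec(m_true) + 0.01 * np.random.randn(n)   # noisy "recorded" data

      m_mig = L.rmatvec(d)                               # adjoint (standard migration) image
      m_lsq = lsqr(L, d, iter_lim=50)[0]                 # least-squares (inverted) image
      print("adjoint peak:", m_mig[20], " least-squares peak:", m_lsq[20])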

  6. Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT

    International Nuclear Information System (INIS)

    Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.

    2009-01-01

    In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. Different from the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created with the purpose of providing the interpreting physician a single set of images that have an appearance similar to that in single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low-and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images for the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
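
    A linear mixing of the two acquisitions and a scan over the mixing weight to maximize iodine CNR can be sketched as follows; the ROI statistics are invented placeholders rather than the phantom measurements, and the noise model assumes independent noise in the two scans.

      # Linearly mix low- and high-kV images, I_mix = w*I_80 + (1-w)*I_140, and
      # pick the weight w that maximises iodine CNR (illustrative ROI statistics).
      import numpy as np

      def mixed_cnr(w, mean_i, mean_b, sd):
          """mean_i, mean_b: iodine/background means at (80 kV, 140 kV);
          sd: background noise at (80 kV, 140 kV), assumed independent."""
          contrast = w * (mean_i[0] - mean_b[0]) + (1 - w) * (mean_i[1] - mean_b[1])
          noise = np.sqrt((w * sd[0]) ** 2 + ((1 - w) * sd[1]) ** 2)
          return abs(contrast) / noise

      mean_iodine, mean_bg = (300.0, 150.0), (40.0, 40.0)   # HU, placeholder values
      sd_bg = (25.0, 12.0)

      weights = np.linspace(0.0, 1.0, 101)
      cnr = [mixed_cnr(w, mean_iodine, mean_bg, sd_bg) for w in weights]
      print("best weight on the 80 kV image:", weights[int(np.argmax(cnr))])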

  7. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
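
    A grayscale stand-in for the PCA idea: gather patches, keep the principal components that explain most of the variance, and reconstruct. This is a minimal sketch under assumed patch and threshold settings, not the CFA-aware, spatially adaptive algorithm of the paper.

      # Minimal PCA patch-denoising sketch on a single-channel image: project
      # patches onto principal components and keep only the strongest ones.
      import numpy as np

      def pca_denoise(img, patch=8, keep_var=0.95):
          h, w = img.shape
          ps = []
          for i in range(0, h - patch + 1, patch):
              for j in range(0, w - patch + 1, patch):
                  ps.append(img[i:i + patch, j:j + patch].ravel())
          X = np.array(ps)
          mean = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
          energy = np.cumsum(s ** 2) / np.sum(s ** 2)
          k = int(np.searchsorted(energy, keep_var)) + 1       # components kept
          X_d = (U[:, :k] * s[:k]) @ Vt[:k] + mean             # low-rank reconstruction
          out, idx = img.copy(), 0
          for i in range(0, h - patch + 1, patch):
              for j in range(0, w - patch + 1, patch):
                  out[i:i + patch, j:j + patch] = X_d[idx].reshape(patch, patch)
                  idx += 1
          return out

      noisy = np.clip(np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128), 0, 1)
      clean = pca_denoise(noisy)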

  8. Planoconcave optical microresonator sensors for photoacoustic imaging: pushing the limits of sensitivity (Conference Presentation)

    Science.gov (United States)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2016-03-01

    Most photoacoustic scanners use piezoelectric detectors, but these have two key limitations. Firstly, they are optically opaque, inhibiting backward-mode operation. Secondly, it is difficult to achieve adequate detection sensitivity with the small element sizes needed to provide the near-omnidirectional response required for tomographic imaging. Planar Fabry-Perot (FP) ultrasound sensing etalons can overcome both of these limitations and have proved extremely effective for superficial imaging, where the etalon is interrogated by a focused laser beam. However, this has the disadvantage that beam walk-off due to the divergence of the beam fundamentally limits the etalon finesse and thus sensitivity; in essence, the problem is one of insufficient optical confinement. To overcome this, novel planoconcave micro-resonator sensors have been fabricated using precision ink-jet printed polymer domes with curvatures matching that of the laser wavefront. By providing near-perfect beam confinement, we show that it is possible to approach the maximum theoretical limit for finesse (f) imposed by the etalon mirror reflectivities (e.g., f = 400 for R = 99.2%, in contrast to the much lower values typical of planar sensors). With beam walk-off eliminated, viable sensors can be made with significantly greater thickness than planar FP sensors. This provides an additional sensitivity gain for deep-tissue imaging applications such as breast imaging, where detection bandwidths in the low MHz can be tolerated. For example, for a 250 μm thick planoconcave sensor with a -3 dB bandwidth of 5 MHz, the measured NEP was 4 Pa. This NEP is comparable to that provided by mm-scale piezoelectric detectors used for breast imaging applications, but with more uniform frequency response characteristics and an order-of-magnitude smaller element size. Following previous proof-of-concept work, several important advances towards practical application have been made. A family of sensors with bandwidths ranging from 3 MHz to 20 MHz has been fabricated and characterised. A novel interrogation scheme based on

  9. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique biometric feature that is perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.

  10. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over conventional convolution back-projection algorithms. However, EM-based PET image reconstruction is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithm. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and substitution of the latest DSP chip is straightforward and could yield better speed performance.
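
    The multiplicative ML-EM update that such a parallel implementation distributes across processing elements can be written compactly; the tiny random system matrix below is purely illustrative and stands in for the stored probability matrix.

      # One-file sketch of the ML-EM (MLEM) image update for emission tomography.
      import numpy as np

      def mlem(A, y, n_iter=50):
          """A: (n_bins, n_pixels) system matrix, y: measured projection counts."""
          x = np.ones(A.shape[1])                        # uniform initial image
          sens = A.sum(axis=0)                           # sensitivity image A^T 1
          for _ in range(n_iter):
              ratio = y / np.maximum(A @ x, 1e-12)       # data vs forward projection
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative EM update
          return x

      rng = np.random.default_rng(0)
      A = rng.random((200, 64))                          # toy system matrix
      x_true = rng.random(64)
      y = rng.poisson(A @ x_true * 50)                   # Poisson projection data
      x_hat = mlem(A, y)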

  11. System and method to create three-dimensional images of non-linear acoustic properties in a region remote from a borehole

    Science.gov (United States)

    Vu, Cung; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher; Johnson, Paul A.; Guyer, Robert; TenCate, James A.; Le Bas, Pierre-Yves

    2013-01-01

    In some aspects of the disclosure, a method for creating three-dimensional images of non-linear properties and the compressional to shear velocity ratio in a region remote from a borehole using a conveyed logging tool is disclosed. In some aspects, the method includes arranging a first source in the borehole and generating a steered beam of elastic energy at a first frequency; arranging a second source in the borehole and generating a steerable beam of elastic energy at a second frequency, such that the steerable beam at the first frequency and the steerable beam at the second frequency intercept at a location away from the borehole; receiving at the borehole by a sensor a third elastic wave, created by a three wave mixing process, with a frequency equal to a difference between the first and second frequencies and a direction of propagation towards the borehole; determining a location of a three wave mixing region based on the arrangement of the first and second sources and on properties of the third wave signal; and creating three-dimensional images of the non-linear properties using data recorded by repeating the generating, receiving and determining at a plurality of azimuths, inclinations and longitudinal locations within the borehole. The method is additionally used to generate three dimensional images of the ratio of compressional to shear acoustic velocity of the same volume surrounding the borehole.

  12. Near-IR Two-Photon Fluorescent Sensor for K(+) Imaging in Live Cells.

    Science.gov (United States)

    Sui, Binglin; Yue, Xiling; Kim, Bosung; Belfield, Kevin D

    2015-08-19

    A new two-photon excited fluorescent K(+) sensor is reported. The sensor comprises three moieties, a highly selective K(+) chelator as the K(+) recognition unit, a boron-dipyrromethene (BODIPY) derivative modified with phenylethynyl groups as the fluorophore, and two polyethylene glycol chains to afford water solubility. The sensor displays very high selectivity (>52-fold) in detecting K(+) over other physiological metal cations. Upon binding K(+), the sensor switches from nonfluorescent to highly fluorescent, emitting red to near-IR (NIR) fluorescence. The sensor exhibited a good two-photon absorption cross section, 500 GM at 940 nm. Moreover, it is not sensitive to pH in the physiological pH range. Time-dependent cell imaging studies via both one- and two-photon fluorescence microscopy demonstrate that the sensor is suitable for dynamic K(+) sensing in living cells.

  13. Special Sensor Microwave Imager/Sounder (SSMIS) Temperature Data Record (TDR) in netCDF

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Special Sensor Microwave Imager/Sounder (SSMIS) is a series of passive microwave conically scanning imagers and sounders onboard the DMSP satellites beginning...

  14. Towards UV imaging sensors based on single-crystal diamond chips for spectroscopic applications

    Energy Technology Data Exchange (ETDEWEB)

    De Sio, A. [Department of Astronomy and Space Science, University of Firenze, Largo E. Fermi 2, 50125 Florence (Italy)], E-mail: desio@arcetri.astro.it; Bocci, A. [Department of Astronomy and Space Science, University of Firenze, Largo E. Fermi 2, 50125 Florence (Italy); Bruno, P.; Di Benedetto, R.; Greco, V.; Gullotta, G. [INAF-Astrophysical Observatory of Catania (Italy); Marinelli, M. [INFN-Department of Mechanical Engineering, University of Roma ' Tor Vergata' (Italy); Pace, E. [Department of Astronomy and Space Science, University of Firenze, Largo E. Fermi 2, 50125 Florence (Italy); Rubulotta, D.; Scuderi, S. [INAF-Astrophysical Observatory of Catania (Italy); Verona-Rinati, G. [INFN-Department of Mechanical Engineering, University of Roma ' Tor Vergata' (Italy)

    2007-12-11

    The recent improvements achieved in the Homoepitaxial Chemical Vapour Deposition technique have led to the production of high-quality detector-grade single-crystal diamonds. Diamond-based detectors have shown excellent performances in UV and X-ray detection, paving the way for applications of diamond technology to the fields of space astronomy and high-energy photon detection in harsh environments or against strong visible light emission. These applications are possible due to diamond's unique properties such as its chemical inertness and visible blindness, respectively. Actually, the development of linear array detectors represents the main issue for a full exploitation of diamond detectors. Linear arrays are a first step to study bi-dimensional sensors. Such devices allow one to face the problems related to pixel miniaturisation and of signal read-out from many channels. Immediate applications would be in spectroscopy, where such arrays are preferred. This paper reports on the development of imaging detectors made by our groups, starting from the material growth and characterisation, through the design, fabrication and packaging of 2xn pixel arrays, to their electro-optical characterisation in terms of UV sensitivity, uniformity of the response and to the development of an electronic circuit suitable to read-out very low photocurrent signals. The detector and its electronic read-out were then tested using a 2x5 pixel array based on a single-crystal diamond. The results will be discussed in the framework of the development of an imager device for X-UV astronomy applications in space missions.

  15. CMOS image sensor for detection of interferon gamma protein interaction as a point-of-care approach.

    Science.gov (United States)

    Marimuthu, Mohana; Kandasamy, Karthikeyan; Ahn, Chang Geun; Sung, Gun Yong; Kim, Min-Gon; Kim, Sanghyo

    2011-09-01

    Complementary metal oxide semiconductor (CMOS)-based image sensors have received increased attention owing to the possibility of incorporating them into portable diagnostic devices. The present research examined the efficiency and sensitivity of a CMOS image sensor for the detection of antigen-antibody interactions involving interferon gamma protein without the aid of expensive instruments. The highest detection sensitivity, about 1 fg/ml of primary antibody, was achieved simply by a transmission mechanism. When photons are prevented from reaching the sensor surface, the digital output is reduced; the number of photons reaching the sensor surface is approximately proportional to the digital number. Nanoscale variation in substrate thickness after protein binding can therefore be detected with high sensitivity by the CMOS image sensor. This technique can be easily applied to smartphones or other clinical diagnostic devices for the detection of several biological entities, with high impact on the development of point-of-care applications.

  16. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    Science.gov (United States)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

    This paper introduces new high-resolution amorphous Silicon (a-Si) image sensors specifically configured for demonstrating film-quality medical x-ray imaging capabilities. The devices utilizes an x-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 micrometer X 127 micrometer with an approximately page-size imaging area of 244 mm X 195 mm, and array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm X 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely to be the largest image sensor array and highest pixel count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light sensitive area of 57% and optimized quantum efficiency for x-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have managed to reduce extraneous leakage currents on these devices to a few fA per pixel, which allows for very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large sized arrays. At the electronics level, we have adopted a new generation of low noise, charge- sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to demonstrate a large dynamic range (over 4,000:1) for medical imaging applications. Through a combination of low data lines capacitance

  17. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    Science.gov (United States)

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
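
    The underlying linear algebra of classical photometric stereo (independent of the multi-tap sensor control) can be sketched as a per-pixel least-squares solve for albedo-scaled normals; the lighting directions and images below are synthetic assumptions used only to illustrate the computation.

      # Classical photometric stereo: from k >= 3 images with known lighting
      # directions, recover per-pixel surface normals and albedo by least squares.
      import numpy as np

      def photometric_stereo(images, light_dirs):
          """images: (k, h, w) intensities; light_dirs: (k, 3) unit lighting vectors."""
          k, h, w = images.shape
          I = images.reshape(k, -1)                              # (k, h*w)
          G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # L @ G = I, G = albedo*normal
          albedo = np.linalg.norm(G, axis=0)
          normals = G / np.maximum(albedo, 1e-12)
          return normals.reshape(3, h, w), albedo.reshape(h, w)

      L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
      L /= np.linalg.norm(L, axis=1, keepdims=True)
      imgs = np.random.rand(3, 32, 32)             # stand-in for the multi-tap exposures
      normals, albedo = photometric_stereo(imgs, L)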

  18. The linearized inversion of the generalized interferometric multiple imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-06

    The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher-order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions and to delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.

  19. Accuracy of Linear Measurements in Stitched Versus Non-Stitched Cone Beam Computed Tomography Images

    International Nuclear Information System (INIS)

    Srimawong, P.; Krisanachinda, A.; Chindasombatjaroen, J.

    2012-01-01

    Cone beam computed tomography (CBCT) images are useful in clinical dentistry, and linear measurements are necessary for accurate treatment planning. Therefore, the accuracy of linear measurements on CBCT images needs to be verified. A current program, called the stitching program, in the Kodak 9000C 3D system automatically combines up to three localized volumes to construct larger images with a small voxel size. The purpose of this study was to assess the accuracy of linear measurements from stitched and non-stitched CBCT images in comparison with direct measurements. This study was performed on 10 dry human mandibles. Gutta-percha rods were marked at reference points to obtain 10 vertical and horizontal distances. Direct measurements with a digital caliper served as the gold standard. All distances on CBCT images obtained with and without the stitching program were measured and compared with the direct measurements. The intraclass correlation coefficients (ICC) were calculated. The ICC of the direct measurements were 0.998 to 1.000. The intraobserver ICC of both non-stitched and stitched CBCT images were 1.000, indicating strong agreement in measurements made by a single observer. The intermethod ICC between direct measurements vs non-stitched CBCT images and direct measurements vs stitched CBCT images ranged from 0.972 to 1.000 and 0.967 to 0.998, respectively. There were no statistically significant differences between direct measurements and stitched or non-stitched CBCT images (P > 0.05). The results showed that linear measurements on non-stitched and stitched CBCT images were highly accurate, with no statistical difference compared with direct measurements. The ICC values for vertical distances in non-stitched and stitched CBCT images and direct measurements were slightly higher than those for horizontal distances, indicating that measurements in the vertical orientation were more accurate than those in the horizontal orientation; however, the differences were not statistically significant. Stitching

  20. Change Detection with GRASS GIS – Comparison of images taken by different sensors

    Directory of Open Access Journals (Sweden)

    Michael Fuchs

    2009-04-01

    Full Text Available Images from American military reconnaissance satellites of the Sixties (CORONA) in combination with modern sensors (SPOT, QuickBird) were used for the detection of changes in land use. The pilot area was located about 40 km northwest of Yemen's capital Sana'a and covered approximately 100 km². To produce comparable layers from images of distinctly different sources, the moving-window technique was applied, using the diversity parameter. The resulting difference layers reveal plausible and interpretable change patterns, particularly in areas where urban sprawl occurs. The comparison of CORONA images with images taken by modern sensors proved to be an additional tool to visualize and quantify major changes in land use. The results should serve as additional basic data, e.g. in regional planning. The computation sequence was executed in GRASS GIS.

  1. Modeling of Potential Distribution of Electrical Capacitance Tomography Sensor for Multiphase Flow Image

    Directory of Open Access Journals (Sweden)

    S. Sathiyamoorthy

    2007-09-01

    Full Text Available Electrical Capacitance Tomography (ECT) was used to develop images of various multiphase flows of gas-liquid-solid in a closed pipe. The principal difficulties in obtaining real-time images from an ECT sensor are that the relationship between the permittivity distribution and the measured capacitances is nonlinear, that the electric field is distorted by the material present, and that the measurements are sensitive to errors and noise. This work presents a detailed description of the method employed for image reconstruction from the capacitance measurements. A discretization and an iterative algorithm are developed for improving the predictions with minimum error. The author analyzed an eight-electrode square-sensor ECT system with two-phase water-gas and solid-gas flows.
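
    A generic linear back-projection followed by Landweber refinement, commonly used for ECT-style reconstruction, gives a flavor of such an iterative algorithm; the sensitivity matrix below is random and stands in for the modelled sensor, so this is a sketch rather than the author's implementation.

      # Landweber iteration for a linearised ECT problem g ~ S f, where S is the
      # (normalised) sensitivity matrix and f the permittivity image.
      import numpy as np

      def landweber(S, g, n_iter=200, alpha=None):
          if alpha is None:
              alpha = 1.0 / np.linalg.norm(S, 2) ** 2    # step size from largest singular value
          f = S.T @ g                                    # linear back-projection start
          for _ in range(n_iter):
              f = f + alpha * S.T @ (g - S @ f)          # gradient step on ||S f - g||^2
              f = np.clip(f, 0.0, 1.0)                   # keep normalised permittivity in [0, 1]
          return f

      n_meas, n_pix = 28, 32 * 32                        # 8 electrodes give 28 capacitance pairs
      S = np.random.rand(n_meas, n_pix)
      f_true = np.zeros(n_pix); f_true[300:340] = 1.0
      g = S @ f_true
      f_rec = landweber(S, g)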

  2. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.

  3. A CMOS Image Sensor With In-Pixel Buried-Channel Source Follower and Optimized Row Selector

    NARCIS (Netherlands)

    Chen, Y.; Wang, X.; Mierop, A.J.; Theuwissen, A.J.P.

    2009-01-01

    This paper presents a CMOS image sensor with pinned-photodiode 4T active pixels which use in-pixel buried-channel source followers (SFs) and optimized row selectors. The test sensor has been fabricated in a 0.18-μm CMOS process. The sensor characterization was carried out successfully, and the

  4. Linearized inversion frameworks toward high-resolution seismic imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-01

    internally multiply scattered seismic waves to obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples to surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.

  5. Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction

    International Nuclear Information System (INIS)

    Liang, Z.; Gilland, D.; Jaszczak, R.; Coleman, R.

    1990-01-01

    In this paper, the authors report on the implementation of six edge-preserving, noise-smoothing, non-linear filters applied in image space for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The non-linear smoothing filters implemented were the median filter, the E6 filter, the sigma filter, the edge-line filter, the gradient-inverse filter, and the 3-point edge filter with gradient-inverse weight. A 3 × 3 window was used for all these filters. The best image obtained, judged by viewing profiles through the image in terms of noise smoothing, edge sharpening, and contrast, was the one smoothed with the 3-point edge filter. The computation time for the smoothing was less than 1% of one iteration, and the memory space needed for the smoothing was negligible. These images were compared with the results obtained using Bayesian analysis.

  6. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify image regions as cloud or non-cloud. When the cloud is thin and small, however, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program in this paper uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, and then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. The AdaBoost classifier can select the most effective features from many candidate features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.

  7. A sprayable luminescent pH sensor and its use for wound imaging in vivo.

    Science.gov (United States)

    Schreml, Stephan; Meier, Robert J; Weiß, Katharina T; Cattani, Julia; Flittner, Dagmar; Gehmert, Sebastian; Wolfbeis, Otto S; Landthaler, Michael; Babilas, Philipp

    2012-12-01

    Non-invasive luminescence imaging is of great interest for studying biological parameters in wound healing, tumors and other biomedical fields. Recently, we developed the first method for 2D luminescence imaging of pH in vivo on humans, and a novel method for one-stop-shop visualization of oxygen and pH using the RGB read-out of digital cameras. Both methods make use of semitransparent sensor foils. Here, we describe a sprayable ratiometric luminescent pH sensor, which combines properties of both these methods. Additionally, a major advantage is that the sensor spray is applicable to very uneven tissue surfaces due to its consistency. A digital RGB image of the spray on tissue is taken. The signal of the pH indicator (fluorescein isothiocyanate) is stored in the green channel (G), while that of the reference dye [ruthenium(II)-tris-(4,7-diphenyl-1,10-phenanthroline)] is stored in the red channel (R). Images are processed by ratioing luminescence intensities (G/R) to produce pseudocolor pH maps of tissues, e.g. wounds. © 2012 John Wiley & Sons A/S.
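
    A minimal sketch of the ratiometric read-out described above, assuming an RGB image of the sprayed sensor already loaded as a NumPy array; the calibration from G/R ratio to pH is a hypothetical placeholder, not the published calibration.

```python
import numpy as np

def ratiometric_ph_map(rgb_image, eps=1e-6):
    """Compute the per-pixel G/R luminescence ratio.

    The pH indicator signal is read from the green channel and the reference
    dye from the red channel, so the ratio cancels variations in illumination
    and in the thickness of the sprayed sensor layer.
    """
    g = rgb_image[..., 1].astype(float)
    r = rgb_image[..., 0].astype(float)
    return g / (r + eps)

def ratio_to_ph(ratio, slope=2.0, offset=5.0):
    # Hypothetical linear-in-log calibration; a real sensor needs a measured
    # calibration curve (e.g. a sigmoidal fit around the indicator pKa).
    return offset + slope * np.log10(ratio + 1e-6)

rgb = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)
ph_map = ratio_to_ph(ratiometric_ph_map(rgb))
```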

  8. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    Science.gov (United States)

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  9. Human visual modeling and image deconvolution by linear filtering

    International Nuclear Information System (INIS)

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

    The problem is the numerical restoration of images degraded by passage through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter for the restoration of such images. This improvement reduces the main drawbacks of the classical Wiener filter: the voluminous data processing, and the failure to take into account the characteristics of human vision, which condition how the observer perceives the restored image. In the first section, we describe the structure of the visual detection system and a method for modelling it. In the second section we explain a restoration method by Wiener filtering that takes these visual properties into account and that can be adapted to the local properties of the image. The results obtained on TV images and scintigrams (images obtained with a gamma camera) are then discussed. [fr]
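
    A minimal frequency-domain Wiener deconvolution sketch for a known, spatially invariant blur and additive stationary noise; the visual-system weighting and local adaptation described above are omitted, and the point spread function and noise level are hypothetical.

```python
import numpy as np

def wiener_deconvolve(degraded, psf, noise_to_signal=0.01):
    """Classical Wiener filter: H* / (|H|^2 + K), applied in Fourier space."""
    H = np.fft.fft2(psf, s=degraded.shape)
    G = np.fft.fft2(degraded)
    restoration_filter = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(restoration_filter * G))

# Hypothetical example: a 5x5 uniform blur kernel and a stand-in image.
psf = np.ones((5, 5)) / 25.0
degraded = np.random.rand(256, 256)   # placeholder for a blurred, noisy image
restored = wiener_deconvolve(degraded, psf)
```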

  10. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods, such as structured light systems and laser scanners, have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been carried out to identify a suitable software and algorithm to achieve an accurate and complete model, yet little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  11. Imaging dipole flow sources using an artificial lateral-line system made of biomimetic hair flow sensors

    NARCIS (Netherlands)

    Dagamseh, A.M.K.; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; Krijnen, Gijsbertus J.M.

    2013-01-01

    In Nature, fish have the ability to localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array shape (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we

  12. A Portable Colloidal Gold Strip Sensor for Clenbuterol and Ractopamine Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Yi Guo

    2013-01-01

    Full Text Available A portable colloidal gold strip sensor for detecting clenbuterol and ractopamine has been developed using image processing technology, together with a novel strip reader built around an imaging sensor. Colloidal gold strips for clenbuterol and ractopamine serve as the primary sensor, based on the underlying immunoreaction. Three minutes after the target sample is applied, the color developed at the test (T) line reflects the concentration of the analyte, e.g. clenbuterol. The reader automatically acquires the colored strip image, quantitatively analyzes the control and test lines, and stores the data and transfers them to a computer. The system integrates image collection, pattern recognition, and real-time quantitative colloidal gold measurement. In experiments, clenbuterol and ractopamine standards with concentrations from 0 ppb to 10 ppb were prepared and tested; the results show that the standard solutions of clenbuterol and ractopamine have a good second-order fit to the measured color intensity (R2 up to 0.99 and 0.98, respectively). In addition, spiking negative samples with standards gave good recoveries of up to 98%. Overall, this optical sensor for colloidal strip measurement is capable of determining the content of clenbuterol and ractopamine, and it is likely to be applicable to the quantitative reading of similar colloidal gold strip reactions.
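
    A rough sketch of the quantitative read-out step: extract an intensity profile along the strip from a camera image, integrate the darkening at the control (C) and test (T) line positions, and use the T/C signal ratio for quantification. The image, line positions and geometry below are placeholders, not the published reader design.

```python
import numpy as np

def line_intensities(strip_gray, c_row, t_row, half_width=5):
    """Background-corrected C and T line signals from a grayscale strip image.

    strip_gray: 2-D image of the strip (darker = more colloidal gold bound).
    """
    profile = strip_gray.mean(axis=1)          # average across the strip width
    background = np.median(profile)

    def band(row):
        return background - profile[row - half_width: row + half_width].mean()

    return band(c_row), band(t_row)

# Hypothetical strip image and line positions.
strip = np.random.randint(150, 200, size=(300, 60)).astype(float)
c_signal, t_signal = line_intensities(strip, c_row=80, t_row=180)
ratio = t_signal / c_signal if c_signal else float("nan")
```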

  13. A flowing liquid test system for assessing the linearity and time-response of rapid fibre optic oxygen partial pressure sensors.

    Science.gov (United States)

    Chen, R; Hahn, C E W; Farmery, A D

    2012-08-15

    The development of a methodology for testing the time response, linearity and performance characteristics of ultra-fast fibre optic oxygen sensors in the liquid phase is presented. Two standard medical paediatric oxygenators are arranged to provide two independent extracorporeal circuits. Flow from either circuit can be diverted over the sensor under test by means of a system of rapid cross-over solenoid valves, exposing the sensor to an abrupt change in oxygen partial pressure, PO2. The system is also capable of testing the oxygen sensor responses to changes in temperature, carbon dioxide partial pressure PCO2 and pH in situ. Results are presented for a miniature fibre optic oxygen sensor constructed in-house with a response time ≈ 50 ms and a commercial fibre optic sensor (Ocean Optics Foxy), when tested in flowing saline and stored blood. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Maintained functionality of an implantable radiotelemetric blood pressure and heart rate sensor after magnetic resonance imaging in rats

    International Nuclear Information System (INIS)

    Nölte, I; Boll, H; Figueiredo, G; Groden, C; Brockmann, M A; Gorbey, S; Lemmer, B

    2011-01-01

    Radiotelemetric sensors for in vivo assessment of blood pressure and heart rate are widely used in animal research. MRI with implanted sensors is generally regarded as contraindicated, as it may cause transmitter malfunction and injury to the animal; moreover, artefacts are expected to compromise image evaluation. In vitro, the function of a radiotelemetric sensor (TA11PA-C10, Data Sciences International) after exposure to MRI up to 9.4 T was assessed. The magnetic force of the electromagnetic field on the sensor as well as radiofrequency (RF)-induced sensor heating was analysed. Finally, MRI with an implanted sensor was performed in a rat. Imaging artefacts were analysed at 3.0 and 9.4 T ex vivo and in vivo. Transmitted 24 h blood pressure and heart rate were compared before and after MRI to verify the integrity of the telemetric sensor. The function of the sensor was not altered by MRI up to 9.4 T. The maximum force exerted on the sensor was 273 ± 50 mN. RF-induced heating was ruled out. Artefacts impeded the assessment of the abdomen and thorax in a dead rat, but not of the head and neck. MRI with implanted radiotelemetric sensors is feasible in principle. The tested sensor maintains functionality up to 9.4 T. Artefacts hampered abdominal and thoracic imaging in rats, while assessment of the head and neck is possible

  15. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the model parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
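
    A minimal sketch of the fusion idea under stated assumptions: color and texture features from one region are related by a multiple linear regression model, the least-squares parameter estimates are used as the region's fused feature vector, and query/target vectors are compared with the Canberra distance. Feature extraction itself is mocked with random data, and this is not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial.distance import canberra

def region_signature(color_feats, texture_feats):
    """Fit texture ~ color by ordinary least squares and return the
    estimated regression parameters as a fused feature vector."""
    X = np.column_stack([np.ones(len(color_feats)), color_feats])
    beta, *_ = np.linalg.lstsq(X, texture_feats, rcond=None)
    return beta.ravel()

rng = np.random.default_rng(1)
query_sig = region_signature(rng.normal(size=(50, 3)), rng.normal(size=50))
target_sig = region_signature(rng.normal(size=(50, 3)), rng.normal(size=50))

# Smaller Canberra distance means the target region is more similar.
distance = canberra(query_sig, target_sig)
```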

  16. Dark current spectroscopy of space and nuclear environment induced displacement damage defects in pinned photodiode based CMOS image sensors

    International Nuclear Information System (INIS)

    Belloir, Jean-Marc

    2016-01-01

    CMOS image sensors are envisioned for an increasing number of high-end scientific imaging applications such as space imaging or nuclear experiments. Indeed, the performance of high-end CMOS image sensors has dramatically increased in recent years thanks to the unceasing improvements of microelectronics, and these image sensors have substantial advantages over CCDs, which make them strong candidates to replace CCDs in future space missions. However, in space and nuclear environments, CMOS image sensors must withstand harsh radiation, which can rapidly degrade their electro-optical performance. In particular, the protons, electrons and ions travelling in space, or the fusion neutrons from nuclear experiments, can displace silicon atoms in the pixels and break the crystalline structure. These displacement damage effects lead to the formation of stable defects and to the introduction of states in the forbidden bandgap of silicon, which can allow the thermal generation of electron-hole pairs. Consequently, non-ionizing radiation leads to a permanent increase of the dark current of the pixels and thus a decrease of the image sensor sensitivity and dynamic range. The aim of the present work is to extend the understanding of the effect of displacement damage on the dark current increase of CMOS image sensors. In particular, this work focuses on the shape of the dark current distribution depending on the particle type, energy and fluence, but also on the image sensor physical parameters. Thanks to the many conditions tested, an empirical model for the prediction of the dark current distribution induced by displacement damage in nuclear or space environments is experimentally validated and physically justified. Another central part of this work consists of using the dark current spectroscopy technique for the first time on irradiated CMOS image sensors to detect and characterize radiation-induced silicon bulk defects. Many types of defects are detected and two of them are identified

  17. X-ray CCD image sensor with a thick depletion region

    International Nuclear Information System (INIS)

    Saito, Hirobumi; Watabe, Hiroshi.

    1984-01-01

    To develop a solid-state image sensor for high-energy X-rays above 1–2 keV, basic studies have been made on CCDs (charge coupled devices) with a thick depletion region. A method of superimposing a high DC bias voltage on low-voltage signal pulses was newly proposed. The characteristics of both SCCD and BCCD were investigated, and their suitability as X-ray sensors was compared. It was found that a depletion region 60 μm thick could be obtained with an ordinary doping density of 10²⁰/m³, and that an even thicker depletion region, over 1 mm, could be obtained with a doping density of about 10¹⁸/m³, for which a high bias voltage above 1 kV could be applied. It is suggested that CCD image sensors for 8 keV or 24 keV X-rays can be realized, since the absorption lengths of these X-rays in Si are about 60 μm and 1 mm, respectively. As for characteristics other than the depletion thickness, the BCCD is preferable to the SCCD for the present purpose because of its lower noise and dark current. As for the transfer method, the frame-transfer method is recommended. (Aoki, K.)

  18. Photoresponse analysis of the CMOS photodiodes for CMOS x-ray image sensor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Soo; Ha, Jang Ho; Kim, Han Soo; Yeo, Sun Mok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-11-15

    Although in the short term CMOS active pixel sensors (APSs) cannot compete with the conventionally used charge coupled devices (CCDs) for high-quality scientific imaging, recent developments in CMOS APSs indicate that they are approaching the performance level of CCDs in several domains. CMOS APSs possess a number of advantages, such as simpler driving requirements and low-power operation. CMOS image sensors can be processed in standard CMOS technologies, and the potential of on-chip integration of analog and digital circuitry makes them more suitable for several vision systems where system cost is of importance. Moreover, CMOS imagers can directly benefit from ongoing technological progress in the field of CMOS technologies. Due to these advantages, CMOS APSs are currently being actively investigated for various applications such as star trackers, navigation cameras and X-ray imaging. In most detection systems the sensor is considered the most important element, since it determines the signal and noise levels; in CMOS APSs, the pixel is therefore very important compared with the other functional blocks. In order to predict the performance of such an image sensor, a detailed understanding of the photocurrent generation in the photodiodes that comprise the CMOS APS is required. In this work, we developed an analytical model that can calculate the photocurrent generated in the CMOS photodiodes comprising CMOS APSs. Photocurrent calculations and photoresponse simulations with respect to the wavelength of the incident photon were performed using this model for four types of photodiodes that can be fabricated in a standard CMOS process. The n+/p-sub and n+/p-epi/p-sub photodiodes show better performance compared with the n-well/p-sub and n-well/p-epi/p-sub photodiodes due to their wider depletion width. Comparing the n+/p-sub and n+/p-epi/p-sub photodiodes, the n+/p-sub has higher photo-responsivity at longer wavelengths because of

  19. Photoresponse analysis of the CMOS photodiodes for CMOS x-ray image sensor

    International Nuclear Information System (INIS)

    Kim, Young Soo; Ha, Jang Ho; Kim, Han Soo; Yeo, Sun Mok

    2012-01-01

    Although in the short term CMOS active pixel sensors (APSs) cannot compete with the conventionally used charge coupled devices (CCDs) for high-quality scientific imaging, recent developments in CMOS APSs indicate that they are approaching the performance level of CCDs in several domains. CMOS APSs possess a number of advantages, such as simpler driving requirements and low-power operation. CMOS image sensors can be processed in standard CMOS technologies, and the potential of on-chip integration of analog and digital circuitry makes them more suitable for several vision systems where system cost is of importance. Moreover, CMOS imagers can directly benefit from ongoing technological progress in the field of CMOS technologies. Due to these advantages, CMOS APSs are currently being actively investigated for various applications such as star trackers, navigation cameras and X-ray imaging. In most detection systems the sensor is considered the most important element, since it determines the signal and noise levels; in CMOS APSs, the pixel is therefore very important compared with the other functional blocks. In order to predict the performance of such an image sensor, a detailed understanding of the photocurrent generation in the photodiodes that comprise the CMOS APS is required. In this work, we developed an analytical model that can calculate the photocurrent generated in the CMOS photodiodes comprising CMOS APSs. Photocurrent calculations and photoresponse simulations with respect to the wavelength of the incident photon were performed using this model for four types of photodiodes that can be fabricated in a standard CMOS process. The n+/p-sub and n+/p-epi/p-sub photodiodes show better performance compared with the n-well/p-sub and n-well/p-epi/p-sub photodiodes due to their wider depletion width. Comparing the n+/p-sub and n+/p-epi/p-sub photodiodes, the n+/p-sub has higher photo-responsivity at longer wavelengths because of the higher electron diffusion current.

  20. Imaging intracellular pH in live cells with a genetically encoded red fluorescent protein sensor.

    Science.gov (United States)

    Tantama, Mathew; Hung, Yin Pun; Yellen, Gary

    2011-07-06

    Intracellular pH affects protein structure and function, and proton gradients underlie the function of organelles such as lysosomes and mitochondria. We engineered a genetically encoded pH sensor by mutagenesis of the red fluorescent protein mKeima, providing a new tool to image intracellular pH in live cells. This sensor, named pHRed, is the first ratiometric, single-protein red fluorescent sensor of pH. Fluorescence emission of pHRed peaks at 610 nm while exhibiting dual excitation peaks at 440 and 585 nm that can be used for ratiometric imaging. The intensity ratio responds with an apparent pKa of 6.6 and a >10-fold dynamic range. Furthermore, pHRed has a pH-responsive fluorescence lifetime that changes by ~0.4 ns over physiological pH values and can be monitored with single-wavelength two-photon excitation. After characterizing the sensor, we tested pHRed's ability to monitor intracellular pH by imaging energy-dependent changes in cytosolic and mitochondrial pH.

  1. MULTI-TEMPORAL AND MULTI-SENSOR IMAGE MATCHING BASED ON LOCAL FREQUENCY INFORMATION

    Directory of Open Access Journals (Sweden)

    X. Liu

    2012-08-01

    Full Text Available Image matching is often one of the first tasks in many photogrammetry and remote sensing applications. This paper presents an efficient approach to automated multi-temporal and multi-sensor image matching based on local frequency information. Two new independent image representations, the Local Average Phase (LAP) and the Local Weighted Amplitude (LWA), are presented to emphasize the common scene information while suppressing the non-common illumination- and sensor-dependent information. To obtain the two representations, local frequency information is first obtained from a Log-Gabor wavelet transformation, which is similar to the processing in the human visual system; the outputs of the odd- and even-symmetric filters are then used to construct the LAP and LWA. The LAP and LWA emphasize the phase and amplitude information, respectively. As these two representations are both derivative-free and threshold-free, they are robust to noise and can keep as much of the image detail as possible. A new Compositional Similarity Measure (CSM) is also presented to combine the LAP and LWA with equal weight for measuring the similarity of multi-temporal and multi-sensor images. The CSM makes the LAP and LWA compensate for each other and makes full use of the amplitude and phase of the local frequency information. In many image matching applications, the template is usually selected without consideration of its matching robustness and accuracy. To overcome this problem, a local best matching point detection is presented to detect the best matching template. In the detection method, we employ self-similarity analysis to identify the template with the highest matching robustness and accuracy. Experimental results using real images and simulated images demonstrate that the presented approach is effective for matching image pairs with significant scene and illumination changes and that it has advantages over other state-of-the-art approaches, which include: the
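
    A rough illustration of where local phase and amplitude come from, not the authors' exact LAP/LWA construction: a quadrature pair of even- and odd-symmetric filters is applied to the image, the local phase follows from their ratio and the local amplitude from their magnitude. The Log-Gabor design and the averaging/weighting steps that define LAP and LWA are omitted, and the Gabor parameters below are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

def quadrature_phase_amplitude(image, wavelength=8.0, sigma=3.0, size=21):
    """Local phase/amplitude from an even/odd (cosine/sine) Gabor pair."""
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    even_k = envelope * np.cos(2 * np.pi * xx / wavelength)
    odd_k = envelope * np.sin(2 * np.pi * xx / wavelength)
    even = convolve(image.astype(float), even_k)
    odd = convolve(image.astype(float), odd_k)
    phase = np.arctan2(odd, even)        # largely insensitive to illumination
    amplitude = np.hypot(even, odd)      # local contrast / energy
    return phase, amplitude

img = np.random.rand(128, 128)           # hypothetical input image
phase, amp = quadrature_phase_amplitude(img)
```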

  2. Introduction to sensors for ranging and imaging

    CERN Document Server

    Brooker, Graham

    2009-01-01

    ""This comprehensive text-reference provides a solid background in active sensing technology. It is concerned with active sensing, starting with the basics of time-of-flight sensors (operational principles, components), and going through the derivation of the radar range equation and the detection of echo signals, both fundamental to the understanding of radar, sonar and lidar imaging. Several chapters cover signal propagation of both electromagnetic and acoustic energy, target characteristics, stealth, and clutter. The remainder of the book introduces the range measurement process, active ima

  3. Velocity estimation of an airplane through a single satellite image

    Institute of Scientific and Technical Information of China (English)

    Zhuxin Zhao; Gongjian Wen; Bingwei Hui; Deren Li

    2012-01-01

    The motion information of a moving target can be recorded in a single image by a push-broom satellite. A push-broom satellite image is composed of many image lines sensed at different time instants. A method to estimate the velocity of a flying airplane from a single image, based on the imagery model of the linear push-broom sensor, is proposed. Some key points on the high-resolution image of the plane are chosen to determine the velocity (speed and direction). The performance of the method is tested and verified by experiments using a WorldView-1 image.
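
    A back-of-the-envelope sketch of the underlying idea only, with hypothetical numbers and none of the geometric rigour of the push-broom imagery model: because each image line is acquired at a different instant, the apparent along-track displacement of a key point on the airplane between two image lines, divided by the time separating those lines, gives a speed estimate.

```python
# All values below are hypothetical illustrations, not values from the paper.
gsd_m = 0.5              # ground sample distance of the image, metres/pixel
line_period_s = 1.0e-4   # time between consecutive push-broom image lines

displacement_px = 30.0   # apparent along-track shift of a key point (pixels)
line_gap = 1500          # number of scan lines between the two observations

# speed = (apparent displacement on the ground) / (time between the lines)
speed_mps = (displacement_px * gsd_m) / (line_gap * line_period_s)
print(f"estimated ground speed: {speed_mps:.1f} m/s")
```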

  4. High-Resolution Spin-on-Patterning of Perovskite Thin Films for a Multiplexed Image Sensor Array.

    Science.gov (United States)

    Lee, Woongchan; Lee, Jongha; Yun, Huiwon; Kim, Joonsoo; Park, Jinhong; Choi, Changsoon; Kim, Dong Chan; Seo, Hyunseon; Lee, Hakyong; Yu, Ji Woong; Lee, Won Bo; Kim, Dae-Hyeong

    2017-10-01

    Inorganic-organic hybrid perovskite thin films have attracted significant attention as an alternative to silicon in photon-absorbing devices mainly because of their superb optoelectronic properties. However, high-definition patterning of perovskite thin films, which is important for fabrication of the image sensor array, is hardly accomplished owing to their extreme instability in general photolithographic solvents. Here, a novel patterning process for perovskite thin films is described: the high-resolution spin-on-patterning (SoP) process. This fast and facile process is compatible with a variety of spin-coated perovskite materials and perovskite deposition techniques. The SoP process is successfully applied to develop a high-performance, ultrathin, and deformable perovskite-on-silicon multiplexed image sensor array, paving the road toward next-generation image sensor arrays. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus; Sicat, Ronell Barrera; Beyer, Johanna; Krü ger, Jens J.; Mö ller, Torsten

    2012-01-01

    feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters

  6. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor

    Science.gov (United States)

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
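
    A minimal sketch of the classical per-pixel estimate that the multi-tap sensor makes possible for dynamic scenes: with three or more known lighting directions L and measured intensities I, the albedo-scaled surface normal follows from the Lambertian model I = L·n by least squares. The images and lighting directions below are synthetic placeholders.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Estimate per-pixel surface normals from >= 3 images.

    intensities: array (k, H, W) captured under k known light directions.
    light_dirs:  array (k, 3) of unit lighting vectors.
    Returns unit normals of shape (H, W, 3).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                        # (k, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # solves L g = I
    g = g.T.reshape(h, w, 3)                              # albedo * normal
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-8)

# Synthetic example: three lightings of a 32x32 patch.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = np.random.rand(3, 32, 32)
normals = photometric_stereo(imgs, L)
```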

  7. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    Science.gov (United States)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as a benthic community (seagrasses), were mapped.
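
    A minimal sketch of the Spectral Angle Mapper step described above: each pixel spectrum is compared with every library end-member by the angle between them, and the pixel is assigned to the end-member with the smallest angle. The image cube and spectral library below are random placeholders.

```python
import numpy as np

def spectral_angle_mapper(cube, library):
    """Classify a hyperspectral cube with SAM.

    cube:    array (H, W, B) of pixel spectra.
    library: array (C, B) of end-member spectra (one row per class).
    Returns an (H, W) map of best-matching class indices.
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    lib = library.astype(float)
    # cosine of the angle between every pixel and every end-member
    num = pixels @ lib.T
    denom = (np.linalg.norm(pixels, axis=1, keepdims=True)
             * np.linalg.norm(lib, axis=1))
    angles = np.arccos(np.clip(num / np.maximum(denom, 1e-12), -1.0, 1.0))
    labels = np.argmin(angles, axis=1)
    return labels.reshape(cube.shape[:2])

cube = np.random.rand(64, 64, 224)      # AVIRIS-like band count (placeholder)
endmembers = np.random.rand(4, 224)     # e.g. diatoms, cyanobacteria, ...
label_map = spectral_angle_mapper(cube, endmembers)
```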

  8. Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
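
    A minimal sketch of the constrained least-squares unmixing step, assuming per-pixel reflectances in the three AVHRR channels mentioned above and hypothetical end-member spectra; fractions are kept non-negative and pushed toward summing to one via a heavily weighted sum-to-one row, a common trick with NNLS rather than the authors' exact solver.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(reflectance, endmembers, weight=100.0):
    """Constrained least squares: non-negative fractions summing to ~1.

    reflectance: (B,) pixel reflectance in B bands.
    endmembers:  (B, M) end-member reflectances (e.g. vegetation, soil, shade).
    """
    B, M = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, M))])
    b = np.append(reflectance, weight)
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical end-member reflectances: columns = vegetation, soil, shade.
endmembers = np.array([[0.05, 0.25, 0.02],    # 0.58-0.68 um channel
                       [0.40, 0.30, 0.03],    # 0.725-1.1 um channel
                       [0.10, 0.20, 0.01]])   # 3.55-3.95 um reflective part
pixel = np.array([0.20, 0.30, 0.08])
fractions = unmix_pixel(pixel, endmembers)     # per-pixel fraction image values
```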

  9. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric model order reduction.

    Science.gov (United States)

    Paul, Sarbajit; Chang, Junghwan

    2017-07-01

    This paper presents a design approach for a magnetic sensor module to detect mover position using proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on detection of the airgap flux density distribution by a Hall-effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill these constraints, the specifications for the sensor module are obtained by using the POD-DMD-based reduced model. The POD-DMD-based reduced model provides a platform to analyze a large number of design models very quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with linear scale data used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension.

  10. Airborne digital-image data for monitoring the Colorado River corridor below Glen Canyon Dam, Arizona, 2009 - Image-mosaic production and comparison with 2002 and 2005 image mosaics

    Science.gov (United States)

    Davis, Philip A.

    2012-01-01

    Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects. The final image mosaic for the western end of the corridor has

  11. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  12. Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging

    Science.gov (United States)

    Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Fourier-decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear system description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > 0.05). SNRF was significantly different from SNRC (p < 0.01). SNRF was approximately 50% of SNRC, suggesting that the linear system model well estimates the current approach.
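
    A minimal sketch of the Fourier-decomposition step applied to an already-registered free-breathing image series: each voxel's intensity time course is Fourier transformed, and the spectral amplitude at the respiratory frequency is taken as the ventilation-weighted signal. The series, frame period and breathing frequency below are placeholders, and the registration and perfusion steps are omitted.

```python
import numpy as np

def fd_ventilation_map(series, frame_period_s, resp_freq_hz):
    """series: (T, H, W) registered free-breathing image stack."""
    t = series.shape[0]
    spectra = np.fft.rfft(series - series.mean(axis=0), axis=0)
    freqs = np.fft.rfftfreq(t, d=frame_period_s)
    k = np.argmin(np.abs(freqs - resp_freq_hz))    # bin nearest breathing rate
    return np.abs(spectra[k]) / t                  # per-voxel amplitude map

series = np.random.rand(256, 64, 64)               # hypothetical 1H time series
vent_map = fd_ventilation_map(series, frame_period_s=0.3, resp_freq_hz=0.25)
```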

  13. Non-uniformity Correction of Infrared Images by Midway Equalization

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-07-01

    Full Text Available The non-uniformity is a time-dependent noise caused by the lack of sensor equalization. We present here the detailed algorithm and online demo of the non-uniformity correction method by midway infrared equalization. This method was designed to suit infrared images; nevertheless, it can be applied to images produced, for example, by scanners or by push-broom satellites. The resulting single-image method works on static images, is fully automatic with no user parameters, and requires no registration. It needs no camera motion compensation and no closed-aperture sensor equalization, and it is able to correct for a fully non-linear non-uniformity.
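
    A rough sketch of the column-wise midway equalization idea for single-image non-uniformity correction: each column's histogram is remapped onto the "midway" histogram obtained by averaging the sorted column values (the average of the inverse cumulative distributions), so all columns share a common distribution. This is a simplified illustration, not the authors' exact algorithm.

```python
import numpy as np

def midway_column_equalization(image):
    """Equalize column histograms of a 2-D image to their midway histogram."""
    img = image.astype(float)
    rows, cols = img.shape
    sorted_cols = np.sort(img, axis=0)              # per-column sorted values
    midway = sorted_cols.mean(axis=1)               # average of inverse CDFs
    out = np.empty_like(img)
    for j in range(cols):
        ranks = np.argsort(np.argsort(img[:, j]))   # rank of each pixel in its column
        out[:, j] = midway[ranks]
    return out

# Hypothetical image with column-wise gain/offset non-uniformity (striping).
clean = np.random.rand(128, 128)
striped = clean * np.random.uniform(0.8, 1.2, 128) + np.random.uniform(-0.1, 0.1, 128)
corrected = midway_column_equalization(striped)
```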

  14. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    Science.gov (United States)

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes toward self-powered operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency, allowing energy-autonomous operation with a 72.5% duty cycle.

  15. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    Science.gov (United States)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observing System Data and Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  16. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam-splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. With the selection of two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  17. Two-dimensional Fast ESPRIT Algorithm for Linear Array SAR Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Yi-chao

    2015-10-01

    Full Text Available The linear array Synthetic Aperture Radar (SAR) system is a popular research tool because it can realize three-dimensional imaging. However, owing to limitations of the aircraft platform and actual conditions, resolution improvement is difficult in the cross-track and along-track directions. In this study, a two-dimensional fast Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT) algorithm for linear array SAR imaging is proposed to overcome these limitations. This approach combines the Gerschgorin disks method and the ESPRIT algorithm to estimate the positions of scatterers in the cross-track and along-track directions. Moreover, the reflectivity of scatterers is obtained by a modified pairing method based on "region growing", replacing the least-squares method. The simulation results demonstrate the applicability of the algorithm, with high resolution, quick calculation, and good real-time response.

  18. Radiometric inter-sensor cross-calibration uncertainty using a traceable high accuracy reference hyperspectral imager

    Science.gov (United States)

    Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig

    2017-08-01

    Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite derived information, and is particularly critical for long time base studies for climate change and enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue through re-calibrating itself directly to a primary standard of the international system of units (SI) in-orbit and then through the extension of this SI-traceability to other sensors through in-flight cross-calibration using a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allows, this will result in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies that have been developed and used in order to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration using a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as TRUTHS. In this study, Multi-Spectral Imager (MSI) of Sentinel-2 and Landsat-8 Operational Land Imager (OLI) is evaluated as an example, however the analysis is readily translatable to larger-footprint sensors such as Sentinel-3 Ocean and Land Colour Instrument (OLCI) and Visible Infrared Imaging Radiometer Suite (VIIRS). This study considers the criticality of the instrumental and observational characteristics on pixel level reflectance factors, within a defined spatial region of interest (ROI) within the target site. It quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool

  19. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256 x 256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  20. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has supported the view that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  1. Landsat 8 Operational Land Imager (OLI)_Thermal Infared Sensor (TIRS) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Abstract:The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are instruments onboard the Landsat 8 satellite, which was launched in February of...

  2. GREIT: a unified approach to 2D linear EIT reconstruction of lung images.

    Science.gov (United States)

    Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K

    2009-06-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.

  3. GREIT: a unified approach to 2D linear EIT reconstruction of lung images

    International Nuclear Information System (INIS)

    Adler, Andy; Arnold, John H; Bayford, Richard; Tizzard, Andrew; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Weiler, Norbert; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Wolf, Gerhard K

    2009-01-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use

  4. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    Science.gov (United States)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In real-world scenarios, scatterers are usually distributed in a block-sparse pattern, a feature that has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. This research introduces multi-linear block sparsity into higher-order singular value decomposition (SVD) together with a dictionary construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to noise and sidelobe disturbance, which always degrade the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper; the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging results for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.

  5. Geomorphic domains and linear features on Landsat images, Circle Quadrangle, Alaska

    Science.gov (United States)

    Simpson, S.L.

    1984-01-01

    A remote sensing study using Landsat images was undertaken as part of the Alaska Mineral Resource Assessment Program (AMRAP). Geomorphic domains A and B, identified on enhanced Landsat images, divide Circle quadrangle south of Tintina fault zone into two regional areas having major differences in surface characteristics. Domain A is a roughly rectangular, northeast-trending area of relatively low relief and simple, widely spaced drainages, except where igneous rocks are exposed. In contrast, domain B, which bounds two sides of domain A, is more intricately dissected, showing abrupt changes in slope and relatively high relief. The northwestern part of geomorphic domain A includes a previously mapped tectonostratigraphic terrane. The southeastern boundary of domain A occurs entirely within the adjoining tectonostratigraphic terrane. The sharp geomorphic contrast along the southeastern boundary of domain A and the existence of known faults along this boundary suggest that the southeastern part of domain A may be a subdivision of the adjoining terrane. Detailed field studies would be necessary to determine the characteristics of the subdivision. Domain B appears to be divisible into large areas of different geomorphic terrains by east-northeast-trending curvilinear lines drawn on Landsat images. Segments of two of these lines correlate with parts of boundaries of mapped tectonostratigraphic terranes. On Landsat images, prominent north-trending lineaments together with the curvilinear lines form a large-scale regional pattern that is transected by mapped north-northeast-trending high-angle faults. The lineaments indicate possible lithologic variations and/or structural boundaries. A statistical strike-frequency analysis of the linear features data for Circle quadrangle shows that northeast-trending linear features predominate throughout, and that most northwest-trending linear features are found south of Tintina fault zone. A major trend interval of N.64-72E. in the linear

  6. Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.

    Science.gov (United States)

    Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander

    2012-01-01

    Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.

  7. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    Science.gov (United States)

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied and endoscopic images in each setting were taken for further comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescent images under high illumination in the field of laparoscopic surgery.

  8. A simple and low-cost biofilm quantification method using LED and CMOS image sensor.

    Science.gov (United States)

    Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu

    2014-12-01

    A novel biofilm detection platform, which consists of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images, and gather light absorbance information according to the concentration of biofilms through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform can provide accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach. The results were compared with the results obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration, and the results were compared with the results obtained from a commercial spectrophotometer. The correlation value of the results from those two systems was 0.981 (N = 9, P CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.
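    The simple image-processing step described above can be illustrated as an absorbance estimate from lens-free shadow images: average the pixel intensity in a region of interest with and without the sample and apply a Beer-Lambert style ratio. The function names, ROI handling and synthetic frames below are assumptions, not the authors' code.

    ```python
    # Illustrative absorbance estimate from CMOS shadow images (assumed workflow).
    import numpy as np

    def mean_intensity(image, roi):
        """Average pixel value inside a rectangular region of interest."""
        r0, r1, c0, c1 = roi
        return float(image[r0:r1, c0:c1].mean())

    def absorbance(sample_img, blank_img, roi):
        """Beer-Lambert style absorbance: A = -log10(I_sample / I_blank)."""
        i_sample = mean_intensity(sample_img, roi)
        i_blank = mean_intensity(blank_img, roi)
        return -np.log10(i_sample / i_blank)

    # Example with synthetic 8-bit frames (cropped from a larger sensor area)
    blank = np.full((480, 640), 200, dtype=np.uint8)
    sample = np.full((480, 640), 120, dtype=np.uint8)
    print(absorbance(sample, blank, roi=(100, 380, 150, 490)))
    ```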

  9. Characterisation of a novel reverse-biased PPD CMOS image sensor

    Science.gov (United States)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of a reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.

  10. Videometrics-based Detection of Vibration Linearity in MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2011-05-01

    Full Text Available A MEMS gyroscope is a sensor that detects angular velocity, with diverse engineering applications including vehicles and intelligent traffic systems. A stable vibration of the driving module, excited by an electrostatic driving signal, is the basis of a MEMS gyroscope's performance. To analyze the linearity of this vibration, a computer-vision measurement method is applied: a high-speed vidicon records video of the linear vibration of the driving module while it is driven at its inherent frequency by a voltage signal of linearly increasing amplitude. Image processing, target identification, and motion-parameter extraction from the recorded video yield the vibration curve as a function of time. The linearity of the vibration system can then be analyzed by examining how the vibration amplitude responds to the amplitude variation of the driving voltage signal.
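    As an illustration of that final analysis step, the sketch below fits a line to amplitude data and reports the fit quality. The arrays stand in for values extracted from the video; the gain and noise level are assumptions, not measurements from the paper.

    ```python
    # Linearity check: fit vibration amplitude against driving-voltage amplitude.
    import numpy as np

    drive_amp = np.linspace(0.5, 5.0, 10)                                   # drive amplitude (V), assumed
    vib_amp = 12.0 * drive_amp + np.random.normal(0, 0.3, drive_amp.size)   # amplitude extracted from video (um), placeholder

    slope, intercept = np.polyfit(drive_amp, vib_amp, 1)
    fit = slope * drive_amp + intercept
    r_squared = 1 - np.sum((vib_amp - fit) ** 2) / np.sum((vib_amp - vib_amp.mean()) ** 2)
    nonlinearity = np.max(np.abs(vib_amp - fit)) / (vib_amp.max() - vib_amp.min())

    print(f"slope={slope:.2f} um/V, R^2={r_squared:.4f}, nonlinearity={100 * nonlinearity:.2f}% of full scale")
    ```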

  11. Blind phase retrieval for aberrated linear shift-invariant imaging systems

    International Nuclear Information System (INIS)

    Yu, Rotha P; Paganin, David M

    2010-01-01

    We develop a means to reconstruct an input complex coherent scalar wavefield, given a through focal series (TFS) of three intensity images output from a two-dimensional (2D) linear shift-invariant optical imaging system with unknown aberrations. This blind phase retrieval technique unites two methods, namely (i) TFS phase retrieval and (ii) iterative blind deconvolution. The efficacy of our blind phase retrieval procedure has been demonstrated using simulated data, for a variety of Poisson noise levels.

  12. Development of photoelectric balanced car based on the linear CCD sensor

    Directory of Open Access Journals (Sweden)

    Wang Feng

    2016-01-01

    Full Text Available The smart car is designed around Freescale's MC9S12XS128 and a linear CCD camera. The linear CCD collects the road information and sends it to the MCU through an operational amplifier. Proportional-integral-derivative (PID) control is used throughout to control the smart car. First, the smart car's inclination and angular velocity are detected by accelerometers and gyro sensors, and a proportional-derivative (PD) controller gives the car its two-wheeled self-balancing ability. Second, the wheel speed obtained by the encoder is fed back to the MCU as a pulse signal, and a proportional-integral (PI) controller brings the speed of the smart car to the set point in the shortest possible time and stabilizes it there. Finally, a PD controller regulates the smart car's turning angle so that the car responds quickly when passing through curves. The smart car achieves two-wheeled self-balancing control and automatically tracks the black and white lines as it travels.
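    A minimal sketch of the discrete PID update behind these balance, speed and steering loops is given below; the gains, sampling period and example values are illustrative assumptions, not values from the paper.

    ```python
    # Discrete PID controller sketch (PD and PI are special cases with ki=0 or kd=0).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                   # accumulate for the I term
            derivative = (error - self.prev_error) / self.dt   # finite-difference D term
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g. a PD balance loop (ki = 0) running at 200 Hz on the tilt angle in degrees
    balance = PID(kp=35.0, ki=0.0, kd=1.8, dt=0.005)
    motor_cmd = balance.update(setpoint=0.0, measurement=2.3)
    print(motor_cmd)
    ```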

  13. Overview of CMOS process and design options for image sensor dedicated to space applications

    Science.gov (United States)

    Martin-Gonthier, P.; Magnan, P.; Corbiere, F.

    2005-10-01

    With the growth of huge-volume markets (mobile phones, digital cameras...) CMOS technologies for image sensors have improved significantly. New process flows have appeared in order to optimize parameters such as quantum efficiency, dark current, and conversion gain. Space applications can of course benefit from these improvements. To illustrate this evolution, this paper reports results from three technologies that have been evaluated with test vehicles composed of several sub-arrays designed with some space applications as the target. These three technologies are a standard, an improved and a sensor-optimized CMOS process in the 0.35 μm generation. Measurements are focused on quantum efficiency, dark current, conversion gain and noise. Other measurements such as Modulation Transfer Function (MTF) and crosstalk are depicted in [1]. A comparison between the results has been done and three categories of CMOS process for image sensors have been listed. Radiation tolerance has also been studied for the improved CMOS process, with the imager hardened by design. Results at 4, 15, 25 and 50 krad prove good ionizing dose radiation tolerance when specific techniques are applied.

  14. Low-complex energy-aware image communication in visual sensor networks

    Science.gov (United States)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low bit rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks applied to surveillance, battlefield and habitat monitoring, etc. is presented, where a voluminous amount of image data has to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed using the Atmel Atmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by a conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, and it suits embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
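    To make the zonal-DCT idea concrete, the sketch below computes an 8×8 block DCT and keeps only a low-frequency zone of coefficients. It is a generic, floating-point illustration using SciPy; the paper's binary, integer-only DCT and the Golomb-Rice coder are not reproduced here.

    ```python
    # Zonal DCT sketch: transform an 8x8 block and retain only the low-frequency zone.
    import numpy as np
    from scipy.fft import dctn

    def zonal_dct_block(block, zone=3):
        """Return the 8x8 DCT coefficients, zeroing all but the triangular
        low-frequency zone with u + v < zone."""
        coeffs = dctn(block.astype(float), norm="ortho")
        u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
        coeffs[u + v >= zone] = 0.0
        return coeffs

    block = np.random.randint(0, 256, (8, 8))
    kept = zonal_dct_block(block, zone=3)
    print(np.count_nonzero(kept), "coefficients kept out of 64")
    ```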

  15. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    Science.gov (United States)

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling full-high-definition input images to UHD resolution. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak Signal-to-Noise Ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, applied only one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to the local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower
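    The per-patch combination step described above can be sketched as follows: several local linear mappings produce HR candidates, which a global regressor merges into one patch. All matrices, patch sizes and weights below are random placeholders for the mappings the paper learns offline.

    ```python
    # Sketch of combining multiple local linear mappings with a global regressor.
    import numpy as np

    rng = np.random.default_rng(0)
    lr_dim, hr_dim, n_mappings = 25, 81, 25      # e.g. 5x5 LR patch -> 9x9 HR patch (assumed sizes)

    local_maps = rng.standard_normal((n_mappings, hr_dim, lr_dim)) * 0.1   # stand-ins for learned mappings
    global_weights = np.full(n_mappings, 1.0 / n_mappings)                 # stand-in for the global regressor

    lr_patch = rng.standard_normal(lr_dim)
    candidates = np.stack([m @ lr_patch for m in local_maps])   # 25 HR patch candidates
    hr_patch = global_weights @ candidates                      # regress into one final HR patch
    print(hr_patch.shape)
    ```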

  16. Imaging of common bile duct by linear endoscopic ultrasound

    Institute of Scientific and Technical Information of China (English)

    Malay Sharma; Amit Pathak; Abid Shoukat; Chittapuram Srinivasan Rameshbabu; Akash Ajmera; Zeeshn Ahamad Wani; Praveer Rai

    2015-01-01

    Imaging of the common bile duct (CBD) can be done by many techniques. Endoscopic retrograde cholangiopancreatography is considered the gold standard for imaging of the CBD. A standard technique for imaging of the CBD by endoscopic ultrasound (EUS) has not been specifically described. The available descriptions mention different stations of imaging from the stomach and duodenum. The CBD lies closest to the duodenum, and the choice of imaging may be restricted to the duodenum for many operators. Generally most operators prefer multi-station imaging during EUS, and the choice of the initial station varies from operator to operator. Detailed evaluation of the CBD is frequently the main focus of imaging during EUS, and in such situations multi-station imaging with a high-resolution ultrasound scanner may provide useful information. Examination of the CBD is one of the primary indications for doing an EUS, and it can be done from five stations: (1) the fundus of the stomach; (2) the body of the stomach; (3) the duodenal bulb; (4) the descending duodenum; and (5) the antrum. Following the upper third of the CBD downward allows imaging of the entire CBD from the liver window, and following the lower third of the CBD upward allows imaging of the entire CBD from the pancreatic window. This article aims at simplifying the techniques of imaging of the CBD by linear EUS.

  17. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    Science.gov (United States)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second refers to the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behaviour of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
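    A minimal sketch of this kind of joint gradient-descent unmixing is shown below, with simple projection steps standing in for the "mathematical restrictions" the abstract mentions; the data, step size and iteration count are illustrative, not taken from the paper.

    ```python
    # Joint refinement of endmembers E and abundances A for the linear model Y ~ E A.
    import numpy as np

    rng = np.random.default_rng(1)
    bands, pixels, p = 50, 400, 4                  # spectral bands, pixels, endmembers
    Y = np.abs(rng.standard_normal((bands, pixels)))

    E = np.abs(rng.standard_normal((bands, p)))    # endmembers (virtual pixels)
    A = np.full((p, pixels), 1.0 / p)              # abundances

    lr = 1e-3
    for _ in range(500):
        R = E @ A - Y                              # residual of the linear mixing model
        E -= lr * R @ A.T                          # gradient step on endmembers
        A -= lr * E.T @ R                          # gradient step on abundances
        E = np.clip(E, 0, None)                    # non-negativity constraint
        A = np.clip(A, 1e-9, None)
        A /= A.sum(axis=0, keepdims=True)          # abundances sum to one

    print(np.linalg.norm(E @ A - Y) / np.linalg.norm(Y))   # relative reconstruction error
    ```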

  18. Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.

    Science.gov (United States)

    Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo

    2016-01-01

    In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for real-time DNA amplification and detection simultaneously. The amplification-coupled detection system directly measures changes in photon number based on the generation of magnesium pyrophosphate and the accompanying color changes. The photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis detection were carried out to prove the efficiency of the CMOS sensor-based LAMP system. Moreover, Clostridium perfringens was utilized for proof-of-concept detection with the new system. We anticipate that this CMOS image sensor-based LAMP method will enable the creation of cost-effective, label-free, optical, real-time and portable molecular diagnostic devices.

  19. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

    Full Text Available This paper presents a full parallel event-driven readout method implemented in an area array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor only records and reads out effective time and position information by adopting the full parallel event-driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the arrival times of photons, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to evaluate the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when CMM is employed to estimate the lifetimes.
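    Assuming the "CMM" above refers to the centre-of-mass method commonly used in FLIM (that reading is an assumption), the sketch below estimates a mono-exponential lifetime from simulated, TDC-quantized photon arrival times.

    ```python
    # Centre-of-mass lifetime estimate from simulated SPAD/TDC timestamps (assumed method).
    import numpy as np

    rng = np.random.default_rng(2)
    true_lifetime_ns = 10.0
    window_ns = 100.0                      # TDC quantization range stated in the abstract
    t0_ns = 5.0                            # excitation instant within the window (assumed)

    # Simulate photon arrival times quantized by a 97.6 ps TDC bin
    arrivals = t0_ns + rng.exponential(true_lifetime_ns, size=5000)
    arrivals = arrivals[arrivals < window_ns]
    arrivals = np.round(arrivals / 0.0976) * 0.0976

    estimated_lifetime = arrivals.mean() - t0_ns   # centre-of-mass estimate for mono-exponential decay
    print(f"estimated lifetime: {estimated_lifetime:.2f} ns")
    ```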

  20. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    Directory of Open Access Journals (Sweden)

    Ismail Cevik

    2015-03-01

    Full Text Available An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-powered operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency, allowing energy-autonomous operation with a 72.5% duty cycle.

  1. A Non-linear Model for Predicting Tip Position of a Pliable Robot Arm Segment Using Bending Sensor Data

    Directory of Open Access Journals (Sweden)

    Elizabeth I. SKLAR

    2016-04-01

    Full Text Available Using pliable materials for the construction of robot bodies presents new and interesting challenges for the robotics community. Within the EU project entitled STIFFness controllable Flexible & Learnable manipulator for surgical Operations (STIFF-FLOP), a bendable, segmented robot arm has been developed. The exterior of the arm is composed of a soft material (silicone), encasing an internal structure that contains air-chamber actuators and a variety of sensors for monitoring applied force, position and shape of the arm as it bends. Due to the physical characteristics of the arm, a proper model of robot kinematics and dynamics is difficult to infer from the sensor data. Here we propose a non-linear approach to predicting the robot arm posture, by training a feed-forward neural network with a structured series of pressure values applied to the arm's actuators. The model is developed across a set of seven different experiments. Because the STIFF-FLOP arm is intended for use in surgical procedures, traditional methods for position estimation (based on visual information or electromagnetic tracking) will not be possible to implement. Thus the ability to estimate pose based on data from a custom fiber-optic bending sensor and accompanying model is a valuable contribution. Results are presented which demonstrate the utility of our non-linear modelling approach across a range of data collection procedures.
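    The modelling idea, learning a non-linear mapping from actuator pressures to tip position with a feed-forward network, can be sketched as below. The data are synthetic stand-ins and the network size is an assumption, so this is not the paper's trained model or experimental protocol.

    ```python
    # Feed-forward regression from chamber pressures to tip position (illustrative data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    pressures = rng.uniform(0.0, 1.5, size=(2000, 3))            # three air chambers (bar), assumed
    tip_xyz = np.column_stack([                                  # synthetic stand-in "kinematics"
        np.sin(pressures[:, 0] - pressures[:, 1]),
        np.sin(pressures[:, 1] - pressures[:, 2]),
        1.0 - 0.1 * pressures.sum(axis=1),
    ])

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(pressures[:1500], tip_xyz[:1500])
    print("held-out R^2:", model.score(pressures[1500:], tip_xyz[1500:]))
    ```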

  2. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    Directory of Open Access Journals (Sweden)

    Victor Lawrence

    2012-07-01

    Full Text Available Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, the simulation results show a blended IR image of better quality when only the original IR image is available.
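    Steps (2) and (4) of the framework can be illustrated as follows: extract an edge-magnitude map from the EO frame and blend it into the registered, transformed IR frame. The PSF-based inverse filtering and registration steps are omitted, and the blending weight is an assumption.

    ```python
    # Blend EO edge magnitude into a registered IR frame (illustrative, not the paper's code).
    import numpy as np
    from scipy import ndimage

    def blend_eo_edges(ir_img, eo_img, weight=0.3):
        """ir_img, eo_img: registered grayscale frames of equal size, values 0-255."""
        gx = ndimage.sobel(eo_img.astype(float), axis=1)
        gy = ndimage.sobel(eo_img.astype(float), axis=0)
        edges = np.hypot(gx, gy)
        edges *= 255.0 / max(edges.max(), 1e-6)                 # normalize edge magnitude
        blended = (1 - weight) * ir_img.astype(float) + weight * edges
        return np.clip(blended, 0, 255).astype(np.uint8)

    ir = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
    eo = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
    print(blend_eo_edges(ir, eo).shape)
    ```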

  3. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  4. Low-Voltage 96 dB Snapshot CMOS Image Sensor with 4.5 nW Power Dissipation per Pixel

    Directory of Open Access Journals (Sweden)

    Orly Yadid-Pecht

    2012-07-01

    Full Text Available Modern “smart” CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage “smart” image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.

  5. The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna

    NARCIS (Netherlands)

    Weimar Acerbi, F.; Clevers, J.G.P.W.; Schaepman, M.E.

    2006-01-01

    Multi-sensor image fusion using the wavelet approach provides a conceptual framework for the improvement of the spatial resolution with minimal distortion of the spectral content of the source image. This paper assesses whether images with a large ratio of spatial resolution can be fused, and

  6. ALDF Data Retrieval Algorithms for Validating the Optical Transient Detector (OTD) and the Lightning Imaging Sensor (LIS)

    Science.gov (United States)

    Koshak, W. J.; Blakeslee, R. J.; Bailey, J. C.

    1997-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures the field strength, magnetic bearing, and arrival time of lightning radio emissions, and solutions for the plane (i.e., no Earth curvature) are provided that use all of these measurements. The accuracy of the retrieval method is tested using computer-simulated data sets, and the relative influence of bearing and arrival-time data on the outcome of the final solution is formally demonstrated. The algorithm is sufficiently accurate to validate NASA's Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS). We also introduce a quadratic planar solution that is useful when only three arrival-time measurements are available. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. For arbitrary non-collinear network geometries and in the absence of measurement errors, it is shown that the two quadratic roots are equivalent (no source location ambiguity) on the outer sensor baselines. The accuracy of the quadratic planar method is tested with computer-generated data sets, and the results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 degrees.
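    As a flavour of the linear algebraic retrieval, the sketch below solves the planar, arrival-time-only case: differencing the squared range equations against a reference station yields equations that are linear in the source position and emission time. The bearing measurements used in the paper are omitted, and the station layout is invented for the example.

    ```python
    # Planar time-of-arrival source retrieval by linear least squares (arrival times only).
    import numpy as np

    c = 3.0e8  # propagation speed (m/s)

    def locate(stations, times):
        """stations: (N, 2) x,y coordinates in metres; times: (N,) arrival times in seconds."""
        x0, y0 = stations[0]
        t_ref = times[0]
        A, b = [], []
        for (xi, yi), ti in zip(stations[1:], times[1:]):
            # from ||r - r_i||^2 = c^2 (t_i - t0)^2 minus the reference equation
            A.append([2 * (xi - x0), 2 * (yi - y0), -2 * c**2 * (ti - t_ref)])
            b.append(xi**2 + yi**2 - x0**2 - y0**2 - c**2 * (ti**2 - t_ref**2))
        sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return sol   # [x, y, emission time t0]

    stations = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 60e3], [45e3, 55e3]])
    source, t_emit = np.array([20e3, 35e3]), 1e-3
    times = t_emit + np.linalg.norm(stations - source, axis=1) / c
    print(locate(stations, times))
    ```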

  7. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation penetrates well through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse other materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based Focal Plane Arrays (FPAs). The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to fixed imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous-wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built from 256 commercial GDD lamps (3 mm diameter International Light, Inc., Peabody, MA, model 527 Ne indicator lamps) as pixel detectors. All three sensors are fully supported

  8. Resolution limits of migration and linearized waveform inversion images in a lossy medium

    KAUST Repository

    Schuster, Gerard T.; Dutta, Gaurav; Li, Jing

    2017-01-01

    The vertical- and horizontal-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.

  9. Resolution limits of migration and linearized waveform inversion images in a lossy medium

    KAUST Repository

    Schuster, Gerard T.

    2017-03-10

    The vertical- and horizontal-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.

  10. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao

    2016-11-25

    For many practical applications of image sensors, how to extend the depth of field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building real-time long-range iris recognition and performing large-scale iris recognition. The keys to the success of long-range iris recognition are a long DoF and image-quality invariance toward varying object distance, requirements that are strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.

  11. Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Preprint)

    Science.gov (United States)

    2011-11-01

  12. Simulation and measurement of total ionizing dose radiation induced image lag increase in pinned photodiode CMOS image sensors

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jing [School of Materials Science and Engineering, Xiangtan University, Hunan (China); State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Chen, Wei, E-mail: chenwei@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Wang, Zujun, E-mail: wangzujun@nint.ac.cn [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China); Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao [State Key Laboratory of Intense Pulsed Irradiation Simulation and Effect, Northwest Institute of Nuclear Technology, P.O.Box 69-10, Xi’an (China)

    2017-06-01

    This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly serious with increasing TID. Combined with the TCAD simulation results, we can confirm that the junction between the PPD and the transfer gate (TG) is an important region for the formation of image lag during irradiation. The simulations demonstrate that TID can generate a potential pocket leading to incomplete transfer.

  13. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during the radiometric response calibration, to eliminate the influence of the focusing effect of the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linear response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used when blending images, so that panoramas reflect the scene luminance more faithfully. This compensates for the limitation of stitching approaches that rely on smoothing alone. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
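    A common way to apply such calibration data before blending, sketched below, is a dark-frame subtraction followed by a flat-field (vignetting) correction. The frames here are synthetic placeholders, not the paper's measured calibration data.

    ```python
    # Dark-frame subtraction plus flat-field correction of a raw sensor frame.
    import numpy as np

    def flat_field_correct(raw, flat, dark):
        """Radiometric correction: subtract the dark frame, divide by the normalized flat field."""
        flat_norm = (flat - dark) / np.mean(flat - dark)
        return (raw - dark) / np.clip(flat_norm, 1e-6, None)

    raw = np.random.randint(20, 230, (480, 640)).astype(float)
    dark = np.full((480, 640), 16.0)                        # dark-current offset frame (assumed)
    yy, xx = np.mgrid[-1:1:480j, -1:1:640j]
    flat = 240.0 * (1 - 0.4 * (xx**2 + yy**2)) + 16.0       # synthetic vignetting pattern
    print(flat_field_correct(raw, flat, dark).mean())
    ```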

  14. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2012-09-01

    Full Text Available In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our
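    The normalization idea can be sketched as a percentile clip-and-rescale of each spectral-index image; the 0.1%/99.9% cut points below follow the method's name, but the exact scaling used in the paper may differ.

    ```python
    # Percentile-based normalization of a spectral-index image for cross-sensor threshold transfer.
    import numpy as np

    def index_scale(si_image, low_pct=0.1, high_pct=99.9):
        """Clip the index image at extreme percentiles and rescale to [0, 1]."""
        lo, hi = np.percentile(si_image, [low_pct, high_pct])
        return np.clip((si_image - lo) / (hi - lo), 0.0, 1.0)

    ndvi_like = np.random.uniform(-0.2, 0.9, (300, 300))   # placeholder spectral-index image
    scaled = index_scale(ndvi_like)
    print(scaled.min(), scaled.max())
    ```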

  15. Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale

    Science.gov (United States)

    Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan

    2018-04-01

    This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could clearly resolve 25 nm linear and 0.001° angular displacements, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
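    The combination step described above can be sketched as follows: given the absolute (x, y) readings decoded at two separated points of the scale image, recover the stage position and yaw angle. The decoding of the phase-encoded scale itself is not shown, and the reference baseline orientation is an assumption.

    ```python
    # Combine two absolute 2D readings into an X-Y-Θ estimate.
    import numpy as np

    def xy_theta(p1, p2, baseline_angle=0.0):
        """p1, p2: absolute (x, y) of the two measurement points on the scale."""
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        center = (p1 + p2) / 2.0                        # reported X-Y position
        dx, dy = p2 - p1
        theta = np.arctan2(dy, dx) - baseline_angle     # yaw relative to the reference baseline
        return center[0], center[1], np.degrees(theta)

    print(xy_theta((10.000, 5.000), (14.000, 5.070)))   # example values in millimetres
    ```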

  16. Proximity gettering technology for advanced CMOS image sensors using carbon cluster ion-implantation technique. A review

    Energy Technology Data Exchange (ETDEWEB)

    Kurita, Kazunari; Kadono, Takeshi; Okuyama, Ryousuke; Shigemastu, Satoshi; Hirose, Ryo; Onaka-Masada, Ayumi; Koga, Yoshihiro; Okuda, Hidehiko [SUMCO Corporation, Saga (Japan)

    2017-07-15

    A new technique is described for manufacturing advanced silicon wafers with the highest capability yet reported for gettering transition-metal, oxygen, and hydrogen impurities in CMOS image sensor fabrication processes. Carbon and hydrogen elements are localized in the projection range of the silicon wafer by implantation of ion clusters from a hydrocarbon molecular gas source. Furthermore, these wafers can getter, into the carbon cluster ion projection range during heat treatment, oxygen impurities that out-diffuse toward the device active regions from the Czochralski-grown silicon wafer substrate. Therefore, they can reduce the formation of transition-metal and oxygen-related defects in the device active regions and improve electrical performance characteristics, such as the dark current, white spot defects, pn-junction leakage current, and image lag characteristics. The new technique enables the formation of high-gettering-capability sinks for transition-metal, oxygen, and hydrogen impurities under the device active regions of CMOS image sensors. Wafers formed by this technique have the potential to significantly improve electrical device performance characteristics in advanced CMOS image sensors. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. Focal spot motion of linear accelerators and its effect on portal image analysis

    International Nuclear Information System (INIS)

    Sonke, Jan-Jakob; Brand, Bob; Herk, Marcel van

    2003-01-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned ∼0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel-accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by subtracting the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges between 0.05-0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges between 0.05-1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate.

  18. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSNs) and the communication from a VSN to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce the data efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms that can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSNs.
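    One way to frame the selection trade-off, sketched below, is to estimate for each candidate method the node's total energy as processing energy plus radio energy for the compressed bitstream, then pick the minimum. Every constant and method entry here is an illustrative placeholder, not a measurement or method name from the paper.

    ```python
    # Placeholder energy-budget comparison of candidate bi-level compression methods.
    E_PER_CYCLE_NJ = 1.2e-3       # processor energy per cycle (nJ), assumed
    E_PER_BIT_NJ = 0.2            # radio energy per transmitted bit (nJ), assumed

    candidates = {
        # method name: (compression ratio, processing cycles per input pixel), placeholders
        "runlength_like": (8.0, 4),
        "jbig_like": (20.0, 60),
        "g4_like": (15.0, 25),
    }

    def node_energy_nj(raw_bits, ratio, cycles_per_pixel):
        processing = raw_bits * cycles_per_pixel * E_PER_CYCLE_NJ   # 1 bit per bi-level pixel
        transmission = (raw_bits / ratio) * E_PER_BIT_NJ
        return processing + transmission

    raw_bits = 640 * 480          # one bi-level frame
    for name, (ratio, cpp) in candidates.items():
        print(name, f"{node_energy_nj(raw_bits, ratio, cpp) / 1e6:.3f} mJ")
    ```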

  19. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    Science.gov (United States)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as the sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm, using the Gaussian approximation for the FBG spectrum, and the number of pixels used for curve fitting affect the errors. The points where the maximum errors occur have been identified. All comparisons for wavelength-shift detection have been made against another interrogation system based on a tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computational cost than the more popular methods using iterative non-linear least-squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
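    As an illustration of a low-cost Gaussian peak estimate of the kind discussed, the sketch below fits a parabola to the logarithm of the pixel intensities around the peak, which gives the Bragg-peak position in closed form with no iterative non-linear least squares. This is a generic technique, not necessarily the exact algorithm of the paper.

    ```python
    # Closed-form Gaussian peak estimate for an FBG spectrum on a linear detector array.
    import numpy as np

    def gaussian_peak_position(pixels, intensities):
        """pixels: indices (or wavelengths) near the peak; intensities must be > 0."""
        a, b, c = np.polyfit(pixels, np.log(intensities), 2)   # log-parabola fit
        return -b / (2 * a)                                    # parabola vertex = peak centre

    # Synthetic FBG peak centred at pixel 102.3 with a Gaussian-like profile
    px = np.arange(95, 110)
    spectrum = 1000.0 * np.exp(-0.5 * ((px - 102.3) / 2.5) ** 2) + 5.0
    mask = (px >= 99) & (px <= 105)                            # pixels used for fitting
    print(gaussian_peak_position(px[mask], spectrum[mask]))
    ```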

  20. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research

  1. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  2. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
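    A minimal sketch of that final linearized step: once a Jacobian relating small conductivity changes to power-density changes is available, linear back-projection estimates the conductivity update from the measured data. The Jacobian and data below are random placeholders; the paper derives the Jacobian analytically.

    ```python
    # Linear back-projection of power-density data through a (placeholder) Jacobian.
    import numpy as np

    rng = np.random.default_rng(3)
    n_meas, n_pix = 64, 32 * 32
    J = rng.standard_normal((n_meas, n_pix))          # Jacobian (placeholder, not the analytic one)
    true_dsigma = np.zeros(n_pix)
    true_dsigma[500:520] = 1.0                        # small conductivity perturbation
    dp = J @ true_dsigma                              # simulated power-density change

    # LBP: back-project the data through J^T and normalize by the sensitivity
    sensitivity = np.abs(J).sum(axis=0)
    dsigma_lbp = (J.T @ dp) / np.clip(sensitivity, 1e-9, None)
    print(dsigma_lbp.reshape(32, 32).shape)
    ```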

  3. Noise analysis of a novel hybrid active-passive pixel sensor for medical X-ray imaging

    International Nuclear Information System (INIS)

    Safavian, N.; Izadi, M.H.; Sultana, A.; Wu, D.; Karim, K.S.; Nathan, A.; Rowlands, J.A.

    2009-01-01

    Passive pixel sensor (PPS) is one of the most widely used architectures in large area amorphous silicon (a-Si) flat panel imagers. It consists of a detector and a thin film transistor (TFT) acting as a readout switch. While the PPS is advantageous in terms of providing a simple and small architecture suitable for high-resolution imaging, it directly exposes the signal to the noise of the data line and external readout electronics, causing a significant increase in the minimum readable sensor input signal. In this work we present the operation and noise performance of a hybrid 3-TFT current-programmed, current-output active pixel sensor (APS) suitable for real-time X-ray imaging. The pixel circuit extends the application of the a-Si TFT from a conventional switching element to an on-pixel amplifier for enhanced signal-to-noise ratio and higher imager dynamic range. The capability of operating in both passive and active modes, as well as being able to compensate for inherent instabilities of the TFTs, makes the architecture a good candidate for X-ray imaging modalities with a wide range of incoming X-ray intensities. Measurements and theoretical calculations reveal a value for input-referred noise below the 1000-electron noise limit for real-time fluoroscopy. (copyright 2009 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Postprint)

    Science.gov (United States)

    2011-08-01


  5. Qualidade das Imagens de Alta Resolução Geradas por Sensores Aéreos Digitais / Image Quality from High Resolution Airborne Sensors

    Directory of Open Access Journals (Sweden)

    Irineu da Silva

    2006-10-01

    Full Text Available The airborne digital sensors currently available on the market offer two types of solution: the frame-imaging solution, which emulates classical photography, and the "pushbroom" imaging solution, characterized by a continuous image generated from a linear array of sensors that sweep the scene and are capable of producing panchromatic, color and false-color image strips at a high resolution level, comparable to the panchromatic images produced by conventional cameras. This article analyzes and discusses the main characteristics of the images generated by this type of sensor.

  6. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    Science.gov (United States)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery with at or near real time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors focusing on sensor design and detection results.

  7. The effect of split pixel HDR image sensor technology on MTF measurements

    Science.gov (United States)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split-pixel technology introduces artifacts in MTF measurement. To achieve an HDR image, raw images are captured from both large and small sub-pixels and combined to make the HDR output. In some cases, a large sub-pixel is used for long-exposure captures and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different depending on whether the large or small sub-pixels are used; a smaller fill factor (e.g., in the short-exposure sub-pixel) will result in higher MTF scores but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising-edge (i.e., black-to-white transition) and falling-edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted-edge test chart.
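
    The MTF50 figures quoted here come from the standard edge-based MTF procedure: differentiate the edge spread function into a line spread function, Fourier-transform it, and read off where the response falls to 50%. A simplified one-dimensional sketch of that calculation follows; a real slanted-edge analysis first projects the 2D edge onto an oversampled ESF, which is omitted here.

        import numpy as np

        def mtf_from_edge(esf):
            """Edge spread function -> line spread function -> normalized MTF."""
            lsf = np.diff(esf)                       # LSF is the derivative of the ESF
            lsf = lsf * np.hanning(len(lsf))         # window to suppress noise at the tails
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf))        # cycles per pixel
            return freqs, mtf / mtf[0]

        def mtf50(freqs, mtf):
            """Frequency where the MTF first falls below 0.5."""
            below = np.where(mtf < 0.5)[0]
            return freqs[below[0]] if below.size else freqs[-1]

        if __name__ == "__main__":
            x = np.arange(-32, 32)
            # Simulated black-to-white edges: a sharper and a softer pixel aperture
            sharp_edge = 0.5 * (1 + np.tanh(x / 0.8))
            soft_edge = 0.5 * (1 + np.tanh(x / 2.0))
            print("MTF50 sharp:", round(mtf50(*mtf_from_edge(sharp_edge)), 3))
            print("MTF50 soft :", round(mtf50(*mtf_from_edge(soft_edge)), 3))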

  8. High frame rate multi-resonance imaging refractometry with distributed feedback dye laser sensor

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2015-01-01

    A method for high frame rate multi-resonance imaging refractometry without moving parts is presented. DFB dye lasers are low-cost and highly sensitive refractive index sensors. The unique multi-wavelength DFB laser structure presented here comprises several areas with different grating periods. Imaging in two dimensions of space is enabled by analyzing laser light from all areas in parallel with an imaging spectrometer. With this multi-resonance imaging refractometry method, the spatial position in one direction is identified from the horizontal, i.e., spectral, position of the multiple laser lines obtained from the imaging spectrometer.

  9. Laser beam welding quality monitoring system based in high-speed (10 kHz) uncooled MWIR imaging sensors

    Science.gov (United States)

    Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo

    2015-05-01

    The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automate processes towards zero-defect manufacturing demand smarter heads in which lasers, optics, actuators, sensors and electronics are integrated in a unique, compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process. Temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and spatial distribution. This work describes the results of using an innovative low-cost, high-speed infrared imager based on the first quantum infrared imager on the market monolithically integrated with a Si-CMOS ROIC. The sensor is able to provide low-resolution images at frame rates up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. In order to demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt pool images to be registered at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored, and the classifier was fed with the experimental images obtained.

  10. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics

    Directory of Open Access Journals (Sweden)

    Sheng-Hsun Hsieh

    2016-11-01

    Full Text Available For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building a real-time long-range iris recognition system and performing large-scale iris recognition. The keys to the success of long-range iris recognition include a long DoF and image quality invariance across various object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.

  11. Third-generation imaging sensor system concepts

    Science.gov (United States)

    Reago, Donald A.; Horn, Stuart B.; Campbell, James, Jr.; Vollmerhausen, Richard H.

    1999-07-01

    Second-generation forward looking infrared (FLIR) sensors, based on either parallel-scanning, long-wave (8-12 μm) time delay and integration HgCdTe detectors or mid-wave (3-5 μm), medium-format staring (640 x 480 pixels) InSb detectors, are being fielded. The science and technology community is now turning its attention toward the definition of a future third generation of FLIR sensors, based on emerging research and development efforts. Modeled third-generation sensor performance demonstrates a significant improvement over second generation, resulting in enhanced lethality and survivability on the future battlefield. In this paper we present the current thinking on what third-generation sensor systems will be and the resulting requirements for third-generation focal plane array detectors. Three classes of sensors have been identified. The high-performance sensor will contain a megapixel or larger array with at least two colors. Higher operating temperatures will also be a goal here so that power and weight can be reduced. A high-performance uncooled sensor is also envisioned that will perform somewhere between first- and second-generation cooled detectors, but at significantly lower cost, weight, and power. The final third-generation sensor is a very low cost micro sensor. This sensor can open up a whole new IR market because of its small size, weight, and cost. Future unattended throwaway sensors, micro UAVs, and helmet-mounted IR cameras will be the result of this new class.

  12. Study of CMOS Image Sensors for the Alignment System of the CMS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Virto, A. L.; Vila, I.; Rodrigo, T.; Matorras, F.; Figueroa, C. F.; Calvo, E.; Calderon, A.; Arce, P.; Oller, J. C.; Molinero, A.; Josa, M. I.; Fuentes, J.; Ferrando, A.; Fernandez, M. G.; Barcala, J. M.

    2002-07-01

    We report on an in-depth study made on commercial CMOS image sensors in order to determine their feasibility for beam light position detection in the CMS multipoint alignment scheme. (Author) 21 refs.

  13. Gimbal Integration to Small Format, Airborne, MWIR and LWIR Imaging Sensors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is for enhanced sensor performance and high resolution imaging for Long Wave InfraRed (LWIR) and Medium Wave IR (MWIR) camera systems used in...

  14. Imaging Intracellular pH in Live Cells with a Genetically-Encoded Red Fluorescent Protein Sensor

    OpenAIRE

    Tantama, Mathew; Hung, Yin Pun; Yellen, Gary

    2011-01-01

    Intracellular pH affects protein structure and function, and proton gradients underlie the function of organelles such as lysosomes and mitochondria. We engineered a genetically-encoded pH sensor by mutagenesis of the red fluorescent protein mKeima, providing a new tool to image intracellular pH in live cells. This sensor, named pHRed, is the first ratiometric, single-protein red fluorescent sensor of pH. Fluorescence emission of pHRed peaks at 610 nm while exhibiting dual excitation peaks at...

  15. Extended SWIR imaging sensors for hyperspectral imaging applications

    Science.gov (United States)

    Weber, A.; Benecke, M.; Wendler, J.; Sieck, A.; Hübner, D.; Figgemeier, H.; Breiter, R.

    2016-05-01

    AIM has developed SWIR modules, including FPAs based on liquid phase epitaxy (LPE) grown MCT, usable in a wide range of hyperspectral imaging applications. Silicon read-out integrated circuits (ROICs) provide various integration and readout modes, including specific functions for spectral imaging applications. An important advantage of MCT-based detectors is the tunable band gap. The spectral sensitivity of MCT detectors can be engineered to cover the extended SWIR spectral region up to 2.5 μm without compromising performance. AIM has also developed the technology to extend the spectral sensitivity of its SWIR modules into the VIS. This has been successfully demonstrated for 384x288 and 1024x256 FPAs with 24 μm pitch, and results are presented in this paper. The FPAs are integrated into compact dewar cooler configurations using different types of coolers, such as rotary coolers, AIM's long-life split linear cooler MCC030 or the extreme long-life SF100 pulse tube cooler. The SWIR modules include command and control electronics (CCE) which allow easy interfacing via a standard digital interface. The development status and performance results of AIM's latest MCT SWIR modules suitable for hyperspectral systems and applications are presented.

  16. A radiographic imaging system based upon a 2-D silicon microstrip sensor

    CERN Document Server

    Papanestis, A; Corrin, E; Raymond, M; Hall, G; Triantis, F A; Manthos, N; Evagelou, I; Van den Stelt, P; Tarrant, T; Speller, R D; Royle, G F

    2000-01-01

    A high resolution, direct-digital detector system based upon a 2-D silicon microstrip sensor has been designed, built and is undergoing evaluation for applications in dentistry and mammography. The sensor parameters and image requirements were selected using Monte Carlo simulations. Sensors selected for evaluation have a strip pitch of 50 μm on the p-side and 80 μm on the n-side. Front-end electronics and data acquisition are based on the APV6 chip and were adapted from systems used at CERN for high-energy physics experiments. The APV6 chip is not self-triggering so data acquisition is done at a fixed trigger rate. This paper describes the mammographic evaluation of the double sided microstrip sensor. Raw data correction procedures were implemented to remove the effects of dead strips and non-uniform response. Standard test objects (TORMAX) were used to determine limiting spatial resolution and detectability. MTFs were determined using the edge response. The results indicate that the spatial resolution of the...

  17. Reduction of CMOS Image Sensor Read Noise to Enable Photon Counting.

    Science.gov (United States)

    Guidash, Michael; Ma, Jiaju; Vogelsang, Thomas; Endsley, Jay

    2016-04-09

    Recent activity in photon counting CMOS image sensors (CIS) has been directed to reduction of read noise. Many approaches and methods have been reported. This work is focused on providing sub 1 e(-) read noise by design and operation of the binary and small signal readout of photon counting CIS. Compensation of transfer gate feed-through was used to provide substantially reduced CDS time and source follower (SF) bandwidth. SF read noise was reduced by a factor of 3 with this method. This method can be applied broadly to CIS devices to reduce the read noise for small signals to enable use as a photon counting sensor.

  18. The effect of Moidal non-linear blending function for dual-energy CT on CT image quality

    International Nuclear Information System (INIS)

    Zhang Fan; Yang Li

    2011-01-01

    Objective: To compare the difference between linear blending and non-linear blending functions for dual-energy CT, and to evaluate the effect on CT image quality. Methods: The model was made of a piece of fresh pork liver inserted with 5 syringes containing iodine solutions of various concentrations (16.3, 26.4, 48.7, 74.6 and 112.3 HU). Linear blending images were automatically reformatted after the model was scanned in the dual-energy mode. Non-linear blending images were reformatted using the optimal-contrast software on the Syngo workstation. Images were divided into 3 groups: the linear blending group, the non-linear blending group and the 120 kV group. Contrast-to-noise ratios (CNR) were measured and calculated in the 3 groups, and the differences in figure of merit (FOM) values between the groups were compared using one-way ANOVA. Twenty patients scanned in the dual-energy mode were randomly selected and the SNR of their liver, renal cortex, spleen, pancreas and abdominal aorta were measured. The independent-sample t test was used to compare the difference in signal-to-noise ratio (SNR) between the linear blending group and the non-linear blending group. Two readers' agreement scores and a single-blind method were used to investigate the difference in conspicuity between the linear blending group and the non-linear blending group. Results: With models of different CT values, the FOM values in the non-linear blending group were 20.65 ± 8.18, 11.40 ± 4.25, 1.60 ± 0.82, 2.40 ± 1.13 and 45.49 ± 17.86. In the 74.6 HU and 112.3 HU models, the differences in FOM values among the three groups were statistically significant (P<0.05); the values were 0.30 ± 0.06 and 14.43 ± 4.59 for the linear blending group and 0.22 ± 0.05 and 15.31 ± 5.16 for the 120 kV group, with the non-linear blending group showing better FOM values. The SNR of the renal cortex and abdominal aorta were 19.2 ± 5.1 and 36.5 ± 13.9 for the non-linear blending group, versus 12.4 ± 3.8 and 22.6 ± 7.0 for the linear blending group. There were statistically significant differences between the two groups.
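
    The quantities compared in this study reduce to weighted (linear) blending of the low- and high-kV images and contrast-to-noise ratios computed from ROI statistics. A hedged sketch of both pieces follows; the 0.3/0.7 weight, the ROI geometry and the simulated numbers are placeholders, and the study's FOM normalization is not reproduced.

        import numpy as np

        def linear_blend(img_low_kv, img_high_kv, weight=0.3):
            """Weighted average of the low-kV and high-kV dual-energy images."""
            return weight * img_low_kv + (1.0 - weight) * img_high_kv

        def cnr(image, roi_mask, background_mask):
            """Contrast-to-noise ratio between an ROI and a background region."""
            contrast = image[roi_mask].mean() - image[background_mask].mean()
            noise = image[background_mask].std()
            return abs(contrast) / noise

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            low = rng.normal(60.0, 15.0, (64, 64))       # noisier, higher iodine contrast
            high = rng.normal(50.0, 8.0, (64, 64))
            roi = np.zeros((64, 64), bool)
            roi[28:36, 28:36] = True
            low[roi] += 40.0                             # simulated iodine enhancement
            high[roi] += 20.0
            blended = linear_blend(low, high)
            print("CNR low kV :", round(cnr(low, roi, ~roi), 2))
            print("CNR blended:", round(cnr(blended, roi, ~roi), 2))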

  19. Optical fiber sensors for image formation in radiodiagnostic - preliminary essays

    International Nuclear Information System (INIS)

    Carvalho, Cesar C. de; Werneck, Marcelo M.

    1998-01-01

    This work describes preliminary experiments that will provide a basis for analyzing the feasibility of implementing a system able to capture radiological images with a new sensor system, comprising a fiber-optic (FO) scanning process and an I-CCD camera. The main objective of these experiments is to analyze the optical response of the FO bundle, with several types of scintillators associated with it, when it is exposed to medical X-rays. (author)

  20. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    Science.gov (United States)

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have generated, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their processing, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state-of-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  1. An ultrasensitive method of real time pH monitoring with complementary metal oxide semiconductor image sensor.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2015-02-09

    CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH-indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH evaluation was acquired by visual read-out. Further, the CMOS image sensor captures the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH ranges of the solutions. A chromaticity diagram and the Euclidean distance represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors by in situ sensing. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner. Copyright © 2014 Elsevier B.V. All rights reserved.
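
    The read-out chain described (RGB intensity from the sensor, conversion to hue, discrimination of pH ranges by Euclidean distance) can be sketched as a nearest-reference classifier. The reference RGB table below is an invented placeholder, not the paper's calibration data.

        import colorsys
        import numpy as np

        # Hypothetical calibration table: mean film RGB at a few pH values
        REFERENCE_RGB = {1: (200, 60, 50), 7: (120, 160, 70), 14: (50, 80, 190)}

        def rgb_to_hue(rgb):
            """Hue in [0, 1) from an 8-bit RGB triplet."""
            r, g, b = (c / 255.0 for c in rgb)
            return colorsys.rgb_to_hsv(r, g, b)[0]

        def classify_ph(measured_rgb):
            """Nearest reference colour by Euclidean distance in RGB space."""
            measured = np.asarray(measured_rgb, float)
            distances = {ph: np.linalg.norm(measured - np.asarray(ref, float))
                         for ph, ref in REFERENCE_RGB.items()}
            return min(distances, key=distances.get)

        if __name__ == "__main__":
            sample = (115, 150, 80)
            print("hue:", round(rgb_to_hue(sample), 3), "-> closest pH:", classify_ph(sample))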

  2. Improved CT-detection of acute bowel ischemia using frequency selective non-linear image blending.

    Science.gov (United States)

    Schneeweiss, Sven; Esser, Michael; Thaiss, Wolfgang; Boesmueller, Hans; Ditt, Hendrik; Nikolaou, Konstantin; Horger, Marius

    2017-07-01

    Computed tomography (CT), as a fast and reliable diagnostic technique, is the imaging modality of choice for acute bowel ischemia. However, diagnosis is often difficult, mainly due to low attenuation differences between ischemic and perfused segments. The aim was to compare the diagnostic efficacy of a new post-processing tool based on frequency selective non-linear blending with that of conventional linear contrast-enhanced CT (CECT) image blending for the detection of bowel ischemia. Twenty-seven consecutive patients (19 women; mean age = 73.7 years, age range = 50-94 years) with acute bowel ischemia were scanned using multidetector CT (120 kV; 100-200 mAs). Pre-contrast and portal venous scans (65-70 s delay) were acquired. All patients underwent surgery for acute bowel ischemia, and intraoperative diagnosis as well as histologic evaluation of explanted bowel segments was considered the "gold standard." First, two radiologists read the conventional CECT images, in which linear blending was adapted for optimal contrast, and second (three weeks later), the frequency selective non-linear blending (F-NLB) images. Attenuation values were compared, both in the involved and non-involved bowel segments, creating ratios between unenhanced and CECT. The mean attenuation difference between ischemic and non-ischemic wall in the portal venous scan was 69.54 HU (reader 2 = 69.01 HU) higher for F-NLB compared with conventional CECT. Also, the attenuation ratio between contrast-enhanced and pre-contrast CT data for the non-ischemic walls showed significantly higher values for the F-NLB images (CECT: reader 1 = 2.11 (reader 2 = 3.36), F-NLB: reader 1 = 4.46 (reader 2 = 4.98)). Sensitivity in detecting ischemic areas increased significantly for both readers using F-NLB (CECT: reader 1/2 = 53%/65% versus F-NLB: reader 1/2 = 62%/75%). Frequency selective non-linear blending improves the detection of bowel ischemia compared with conventional CECT by increasing the attenuation difference between ischemic and perfused bowel segments.

  3. Design and Implementation of a Novel Compatible Encoding Scheme in the Time Domain for Image Sensor Communication

    Directory of Open Access Journals (Sweden)

    Trang Nguyen

    2016-05-01

    Full Text Available This paper presents a modulation scheme in the time domain based on On-Off Keying and proposes various compatibility supports for different types of image sensors. The content of this article is a sub-proposal to the IEEE 802.15.7r1 Task Group (TG7r1) aimed at Optical Wireless Communication (OWC) using an image sensor as the receiver. The compatibility support is indispensable for Image Sensor Communications (ISC) because the rolling shutter image sensors currently available have different frame rates, shutter speeds, sampling rates, and resolutions. However, focusing on unidirectional communications (i.e., data broadcasting, beacons), an asynchronous communication prototype is also discussed in the paper. Due to the physical limitations associated with typical image sensors (including low and varying frame rates, long exposures, and low shutter speeds), the link speed performance is critically considered. Based on practical measurements of the camera response to modulated light, an operating frequency range is suggested along with the system architecture, decoding procedure, and algorithms. A significant feature of our novel data frame structure is that it can support both typical frame rate cameras (in the oversampling mode) as well as very low frame rate cameras (in the error detection mode, for a camera whose frame rate is lower than the transmission packet rate). A high frame rate camera, i.e., no less than 20 fps, is supported in an oversampling mode in which a majority voting scheme for decoding data is applied. A low frame rate camera, i.e., one whose frame rate drops to less than 20 fps at certain times, is supported by an error detection mode in which any missing data sub-packet is detected during decoding and later corrected by an external code. Numerical results and valuable analysis are also included to indicate the capability of the proposed schemes.
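
    In the oversampling mode described here, a camera faster than the packet rate observes each transmitted bit in several consecutive frames, and a majority vote over those observations recovers the bit. A simplified sketch assuming ideal frame alignment; rolling-shutter timing and the error-detection mode for slow cameras are not modelled.

        from collections import Counter

        def majority_vote_decode(samples, oversample):
            """Decode OOK bits from frames that each observe a bit `oversample` times."""
            bits = []
            for start in range(0, len(samples) - oversample + 1, oversample):
                window = samples[start:start + oversample]
                bits.append(Counter(window).most_common(1)[0][0])
            return bits

        if __name__ == "__main__":
            transmitted = [1, 0, 1, 1, 0]
            oversample = 3                      # e.g. 3 camera frames per transmitted symbol
            received = []
            for bit in transmitted:
                received.extend([bit] * oversample)
            received[4] ^= 1                    # flip one sample to emulate a noisy frame
            print(majority_vote_decode(received, oversample))   # recovers [1, 0, 1, 1, 0]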

  4. Nanosecond-laser induced crosstalk of CMOS image sensor

    Science.gov (United States)

    Zhu, Rongzhen; Wang, Yanbin; Chen, Qianrong; Zhou, Xuanfeng; Ren, Guangsen; Cui, Longfei; Li, Hua; Hao, Daoliang

    2018-02-01

    The CMOS image sensor (CIS) is a photoelectric imaging device that integrates the photosensitive array, amplifier, A/D converter, storage, DSP and computer interface circuitry on the same silicon substrate [1]. It offers low power consumption, high integration, low cost, etc. With the progress of large-scale integrated circuit technology, the noise suppression of CIS has improved continuously and its image quality keeps getting better. It is widely used in security monitoring, biometrics, detection and imaging, and even military reconnaissance. A CIS is easily disturbed and damaged when it is irradiated by a laser, so studying the effects of laser irradiation is of great significance both for optoelectronic countermeasures and for hardening devices against laser damage. Several researchers have studied laser-induced disturbance and damage of CIS, focusing on saturation and supersaturation effects and observing different regimes such as unsaturation, saturation, supersaturation, full saturation and pixel flip. This paper investigates the 1064 nm laser interference effect in a typical front-illuminated CMOS image sensor, observing saturated crosstalk and half-crosstalk lines, and analyzes the formation mechanism of the crosstalk-line phenomenon from the perspective of the device's working principle and signal detection method.

  5. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers, which eliminate the motion artifacts of rolling shutter imagers, are expanding into many applications such as machine vision, 3D imaging, medical imaging and space. A low-noise global shutter pixel requires more than one non-light-sensitive memory element to reduce the read noise, but a larger memory area reduces the fill factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination. Non-light-sensitive memory elements make fabrication with BSI possible. Machine vision applications such as fast inspection systems, as well as 3D medical imaging and scientific applications, demand high frame rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. Dual correlated double sampling (CDS) with on-chip ADCs and a high-rate digital data interface reduces the read noise and allows more on-chip operation control. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performances after fabrication.

  6. Area-efficient readout with 14-bit SAR-ADC for CMOS image sensors

    Directory of Open Access Journals (Sweden)

    Aziza Sassi Ben

    2016-01-01

    Full Text Available This paper proposes a readout design for CMOS image sensors. It has been squeezed into a 7.5 μm pitch in a 0.28 μm 1P3M technology. The ADC performs one 14-bit conversion in only 1.5 μs and targets a theoretical DNL of about +1.3/-1 LSB at 14-bit accuracy. Correlated double sampling (CDS) is performed both in the analog and digital domains to preserve the image quality.
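
    A successive-approximation (SAR) conversion such as the 14-bit one described resolves one bit per cycle by binary search against a DAC level. A behavioural sketch of that loop, idealized: no noise, mismatch or CDS is modelled, and the timing of the real column-parallel design is not represented.

        def sar_adc(vin, vref=1.0, bits=14):
            """Ideal successive-approximation conversion of vin in [0, vref)."""
            code = 0
            for i in reversed(range(bits)):
                trial = code | (1 << i)                   # tentatively set the next bit
                dac = trial * vref / (1 << bits)          # DAC level for the trial code
                if vin >= dac:
                    code = trial                          # keep the bit if vin clears the level
            return code

        if __name__ == "__main__":
            for v in (0.1234, 0.5, 0.9999):
                c = sar_adc(v)
                print(f"vin={v:.4f} -> code={c} (~{c / (1 << 14):.4f} of vref)")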

  7. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    International Nuclear Information System (INIS)

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large-neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel- rather than global-level features.

  8. Frequency Selective Non-Linear Blending to Improve Image Quality in Liver CT.

    Science.gov (United States)

    Bongers, M N; Bier, G; Kloth, C; Schabel, C; Fritz, J; Nikolaou, K; Horger, M

    2016-12-01

    Purpose: To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with a low intravascular concentration of iodine contrast. Materials and Methods: Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60 % female, mean age: 65 ± 16 years) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and the SNR of liver parenchyma in standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. Results: The post-processing settings for the visualization of the hepatic vasculature were optimal at a center of 115 HU, a delta of 25 HU, and a slope of 5. Image noise did not differ significantly between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62 ± 1.10, CNR NLB 3.6 ± 2.94, p = 0.0002) and portal veins (CNR Standard 1.31 ± 0.85, CNR NLB 2.42 ± 3.03, p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26 ± 3.16, SNR Standard 8.85 ± 2.27, p = 0.008). The overall image quality and depiction of the HV were significantly higher on post-processed images (NLB DHV: 4 [3-4.75], Standard DHV: 2 [1.3-2.5]). Conclusion: The frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Key Points: • Using the new frequency selective non-linear blending algorithm is feasible in contrast-enhanced liver CT.
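
    The center/delta/slope parameters suggest a sigmoid-like weighting that emphasizes voxels near a chosen attenuation value. The sketch below shows one plausible attenuation-dependent blending of that general kind; it is not the vendor's actual frequency selective NLB implementation, and the contrast-boost factor is a placeholder.

        import numpy as np

        def attenuation_weight(hu, center=115.0, delta=25.0, slope=5.0):
            """Smooth weight that is ~1 near `center` HU and falls off outside +/- delta."""
            rise = 1.0 / (1.0 + np.exp(-(hu - (center - delta)) / slope))
            fall = 1.0 / (1.0 + np.exp((hu - (center + delta)) / slope))
            return rise * fall

        def nonlinear_blend(image_hu, boost=2.0, center=115.0, delta=25.0, slope=5.0):
            """Blend the original image with a contrast-boosted copy, weighted by attenuation."""
            w = attenuation_weight(image_hu, center, delta, slope)
            boosted = center + boost * (image_hu - center)   # stretch contrast around the center HU
            return w * boosted + (1.0 - w) * image_hu

        if __name__ == "__main__":
            hu = np.array([40.0, 90.0, 115.0, 140.0, 300.0])  # parenchyma ... vessel ... dense material
            print(np.round(nonlinear_blend(hu), 1))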

  9. Researchers develop CCD image sensor with 20ns per row parallel readout time

    CERN Multimedia

    Bush, S

    2004-01-01

    "Scientists at the Rutherford Appleton Laboratory (RAL) in Oxfordshire have developed what they claim is the fastest CCD (charge-coupled device) image sensor, with a readout time which is 20ns per row" (1/2 page)

  10. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
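
    Two of the indices compared, the center of gravity of the tidal image and the global inhomogeneity (GI) index, are simple functionals of the reconstructed tidal impedance image. A hedged sketch following the commonly used definitions; the threshold-based lung mask below is a simplification of the published lung-region identification.

        import numpy as np

        def center_of_gravity(tidal_image):
            """Ventilation-weighted centroid (row, column) of the tidal EIT image."""
            img = np.clip(tidal_image, 0, None)
            total = img.sum()
            rows, cols = np.indices(img.shape)
            return (rows * img).sum() / total, (cols * img).sum() / total

        def global_inhomogeneity(tidal_image, threshold=0.1):
            """GI index: summed absolute deviation from the median tidal impedance change,
            normalized by the total change, over pixels inside a crude lung mask."""
            lung = tidal_image > threshold * tidal_image.max()
            values = tidal_image[lung]
            return np.abs(values - np.median(values)).sum() / values.sum()

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            img = rng.random((32, 32)) * 0.05
            img[8:24, 4:14] += 1.0      # well-ventilated lung region
            img[8:24, 18:28] += 0.6     # less-ventilated lung region
            print("CoG:", tuple(round(c, 1) for c in center_of_gravity(img)))
            print("GI :", round(global_inhomogeneity(img), 3))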

  11. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.

  12. Transverse Field Effect in Fluxgate Sensors

    DEFF Research Database (Denmark)

    Brauer, Peter; Merayo, José M.G.; Nielsen, Otto V

    1997-01-01

    A model of the fluxgate magnetometer based on the field interactions in the fluxgate core has been derived. The non-linearity of the ringcore sensors due to large uncompensated fields transverse to the measuring axis is calculated and compared with measurements. Measurements of the non-linearity are made with a spectrum analyser, measuring the higher harmonics of an applied sinusoidal field. For a sensor with a permalloy ringcore of 1" in diameter, the deviation from linearity is measured to be about 15 nTp-p in the earth's field, and the measurements are shown to fit the calculations well. Further, the measurements and the calculations are also compared with a calibration model of the fluxgate sensor onboard the "MAGSAT" satellite. The latter has a deviation from linearity of about 50 nTp-p but shows basically the same form of non-linearity as the measurements.

  13. The linear attenuation coefficients as features of multiple energy CT image classification

    International Nuclear Information System (INIS)

    Homem, M.R.P.; Mascarenhas, N.D.A.; Cruvinel, P.E.

    2000-01-01

    We present in this paper an analysis of the linear attenuation coefficients as useful features of single and multiple energy CT images with the use of statistical pattern classification tools. We analyzed four CT images through two pointwise classifiers (the first classifier is based on the maximum-likelihood criterion and the second classifier is based on the k-means clustering algorithm) and one contextual Bayesian classifier (ICM algorithm - Iterated Conditional Modes) using an a priori Potts-Strauss model. A feature extraction procedure using the Jeffries-Matusita (J-M) distance and the Karhunen-Loeve transformation was also performed. Both the classification and the feature selection procedures were found to be in agreement with the predicted discrimination given by the separation of the linear attenuation coefficient curves for different materials
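
    Of the pointwise classifiers mentioned, the k-means step is the easiest to illustrate: pixels are grouped purely by their linear attenuation coefficients. A minimal one-dimensional k-means on simulated attenuation values; the contextual ICM classifier with its Potts-Strauss prior is not shown, and the material values are invented.

        import numpy as np

        def kmeans_1d(values, k=3, iterations=50, seed=0):
            """Cluster scalar attenuation coefficients into k classes."""
            rng = np.random.default_rng(seed)
            centers = rng.choice(values, size=k, replace=False)
            for _ in range(iterations):
                labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
                new_centers = np.array([values[labels == j].mean() if np.any(labels == j)
                                        else centers[j] for j in range(k)])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return labels, centers

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            # Simulated linear attenuation coefficients (1/cm) of three materials plus noise
            pixels = np.concatenate([rng.normal(0.18, 0.01, 200),
                                     rng.normal(0.22, 0.01, 200),
                                     rng.normal(0.45, 0.02, 100)])
            labels, centers = kmeans_1d(pixels, k=3)
            print("class centers:", np.sort(centers).round(3))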

  14. Characterization of Scintillating X-ray Optical Fiber Sensors

    Science.gov (United States)

    Sporea, Dan; Mihai, Laura; Vâţă, Ion; McCarthy, Denis; O'Keeffe, Sinead; Lewis, Elfed

    2014-01-01

    The paper presents a set of tests carried out in order to evaluate the design characteristics and the operating performance of a set of six X-ray extrinsic optical fiber sensors. The extrinsic sensor we developed is intended to be used as a low-energy X-ray detector for monitoring radiation levels in radiotherapy, industrial applications and personnel dosimetry. The reproducibility of the manufacturing process and the characteristics of the sensors were assessed. The sensors' dynamic range, linearity, sensitivity, and reproducibility were evaluated through radioluminescence measurements, X-ray fluorescence and X-ray imaging investigations. Their response to the operating conditions of the excitation source was estimated. The effect of the sensor design and implementation on the collection efficiency of the radioluminescence signal was measured. The study indicated that the sensors are efficient only in the first 5 mm of the tip, and that a reflective coating can improve their response. Additional tests were done to investigate the concentricity of the sensor tip against the core of the optical fiber guiding the optical signal. The influence of the active-material concentration on the sensor response to X-rays was studied. The tests were carried out by measuring the radioluminescence signal with an optical fiber spectrometer and with a Multi-Pixel Photon Counter. PMID:24556676

  15. Intrinsic coincident linear polarimetry using stacked organic photovoltaics.

    Science.gov (United States)

    Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W

    2016-06-27

    Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
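
    Recovering the linear Stokes parameters from a cascade of partially polarizing detectors amounts to solving a small linear system: each cell's photocurrent is modelled as a row of an instrument matrix acting on (S0, S1, S2), and the stack is inverted by least squares. An idealized sketch with made-up cell orientations and diattenuation values; the paper calibrates the real instrument matrix from a Mueller-matrix model.

        import numpy as np

        def analyzer_row(theta_rad, diattenuation):
            """First row of an ideal partial linear diattenuator, restricted to (S0, S1, S2)."""
            return np.array([1.0,
                             diattenuation * np.cos(2 * theta_rad),
                             diattenuation * np.sin(2 * theta_rad)])

        def recover_stokes(intensities, angles_deg, diattenuations):
            """Least-squares inversion of the stacked-cell instrument matrix."""
            A = np.vstack([analyzer_row(np.deg2rad(a), d)
                           for a, d in zip(angles_deg, diattenuations)])
            stokes, *_ = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)
            return stokes

        if __name__ == "__main__":
            angles = [0.0, 45.0, 90.0]          # assumed orientation of each cell's absorption axis
            diatt = [0.4, 0.4, 0.4]             # assumed partial polarization sensitivity
            true_stokes = np.array([1.0, 0.3, -0.2])
            A = np.vstack([analyzer_row(np.deg2rad(a), d) for a, d in zip(angles, diatt)])
            measured = A @ true_stokes          # simulated photocurrents of the three cells
            print(recover_stokes(measured, angles, diatt).round(3))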

  16. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
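
    The trade-off analyzed here depends on how RLNC forms packets: each coded packet is a random linear combination of the source packets of a generation over a finite field, and a receiver decodes once it holds enough linearly independent combinations. A toy sketch over GF(2), where combining is just XOR; the field size, generation size and Tmote Sky energy model tuned in the paper are not represented.

        import random

        def rlnc_encode_gf2(packets, rng):
            """One coded packet: XOR of a random non-empty subset of the source packets."""
            coeffs = [rng.randint(0, 1) for _ in packets]
            if not any(coeffs):                            # avoid the useless all-zero combination
                coeffs[rng.randrange(len(coeffs))] = 1
            payload = bytes(len(packets[0]))
            for c, pkt in zip(coeffs, packets):
                if c:
                    payload = bytes(a ^ b for a, b in zip(payload, pkt))
            return coeffs, payload

        if __name__ == "__main__":
            rng = random.Random(42)
            generation = [b"ABCD", b"EFGH", b"IJKL"]       # one generation of equal-length packets
            for coeffs, payload in (rlnc_encode_gf2(generation, rng) for _ in range(5)):
                print(coeffs, payload)

    A receiver would collect such (coefficients, payload) pairs and recover the generation by Gaussian elimination over GF(2).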

  17. Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies

    Directory of Open Access Journals (Sweden)

    Juan J. Serrano

    2012-11-01

    Full Text Available Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodic surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This issue is the main key point of our study, since the operational life of the overall monitoring system should be extended to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to get the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.).

  18. Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies

    Science.gov (United States)

    López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P.; Bonastre, Alberto; Serrano, Juan J.

    2012-01-01

    Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodic surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This issue is the main key point of our study, since the operational life of the overall monitoring system should be extended to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to get the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMax, 3G/4G, etc.). PMID:23202232

  19. Highly sensitive and area-efficient CMOS image sensor using a PMOSFET-type photodetector with a built-in transfer gate

    Science.gov (United States)

    Seo, Sang-Ho; Kim, Kyoung-Do; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2007-02-01

    In this paper, a new CMOS image sensor is presented, which uses a PMOSFET-type photodetector with a transfer gate and has a high and variable sensitivity. The proposed CMOS image sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard CMOS technology and is composed of a 256 × 256 array of 7.05 × 7.10 μm pixels. The unit pixel has the configuration of a pseudo 3-transistor active pixel sensor (APS) with the PMOSFET-type photodetector with a transfer gate, which provides the functionality of a conventional 4-transistor APS. The generated photocurrent is controlled by the transfer gate of the PMOSFET-type photodetector. The maximum responsivity of the photodetector is larger than 1.0 × 10³ A/W without any optical lens. The fabricated 256 × 256 CMOS image sensor exhibits a good response to low-level illumination as low as 5 lux.

  20. Microwave Sensors for Breast Cancer Detection.

    Science.gov (United States)

    Wang, Lulu

    2018-02-23

    Breast cancer is the leading cause of death among females, early diagnostic methods with suitable treatments improve the 5-year survival rates significantly. Microwave breast imaging has been reported as the most potential to become the alternative or additional tool to the current gold standard X-ray mammography for detecting breast cancer. The microwave breast image quality is affected by the microwave sensor, sensor array, the number of sensors in the array and the size of the sensor. In fact, microwave sensor array and sensor play an important role in the microwave breast imaging system. Numerous microwave biosensors have been developed for biomedical applications, with particular focus on breast tumor detection. Compared to the conventional medical imaging and biosensor techniques, these microwave sensors not only enable better cancer detection and improve the image resolution, but also provide attractive features such as label-free detection. This paper aims to provide an overview of recent important achievements in microwave sensors for biomedical imaging applications, with particular focus on breast cancer detection. The electric properties of biological tissues at microwave spectrum, microwave imaging approaches, microwave biosensors, current challenges and future works are also discussed in the manuscript.

  1. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    Directory of Open Access Journals (Sweden)

    Giovanna Sansoni

    2009-01-01

    Full Text Available 3D imaging sensors for the acquisition of three dimensional (3D) shapes have generated, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their processing, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state-of-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  2. Integration of computer imaging and sensor data for structural health monitoring of bridges

    International Nuclear Information System (INIS)

    Zaurin, R; Catbas, F N

    2010-01-01

    The condition of civil infrastructure systems (CIS) changes over their life cycle for different reasons, such as damage, overloading, severe environmental inputs, and ageing due to normal continued use. The structural performance often decreases as a result of the change in condition. Objective condition assessment and performance evaluation are challenging activities since they require some type of monitoring to track the response over a period of time. In this paper, the integrated use of video images and sensor data in the context of structural health monitoring is demonstrated as a promising technology for the safety of civil structures in general and bridges in particular. First, the challenges and possible solutions to using video images and computer vision techniques for structural health monitoring are presented. Then, the synchronized image and sensing data are analyzed to obtain the unit influence line (UIL) as an index for monitoring bridge behavior under identified loading conditions. Subsequently, the UCF 4-span bridge model is used to demonstrate the integration and implementation of imaging devices and traditional sensing technology with UIL for evaluating and tracking the bridge behavior. It is shown that video images and computer vision techniques can be used to detect, classify and track different vehicles with synchronized sensor measurements to establish an input–output relationship to determine the normalized response of the bridge

  3. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    Science.gov (United States)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for the welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool with an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.

  4. CMOS image sensor with contour enhancement

    Science.gov (United States)

    Meng, Liya; Lai, Xiaofeng; Chen, Kun; Yuan, Xianghui

    2010-10-01

    Imitating the signal acquisition and processing of the vertebrate retina, a CMOS image sensor with a bionic pre-processing circuit is designed. Integrating the signal-processing circuit on-chip can reduce the bandwidth and precision requirements of the subsequent interface circuit and simplify the design of the computer-vision system. This signal pre-processing circuit consists of an adaptive photoreceptor, a spatial filtering resistive network and an Op-Amp calculation circuit. The adaptive photoreceptor unit, with a dynamic range of approximately 100 dB, adapts well to transient changes in light intensity rather than to the intensity level itself. The spatial low-pass filtering resistive network, used to mimic the function of horizontal cells, is composed of a horizontal resistor (HRES) circuit and an OTA (Operational Transconductance Amplifier) circuit. The HRES circuit, imitating the dendrite of a neuron, comprises two series MOS transistors operated in the weak inversion region. Appending two diode-connected n-channel transistors to a simple transconductance amplifier forms the OTA Op-Amp circuit, which provides a stable bias voltage for the gates of the MOS transistors in the HRES circuit while serving as an OTA voltage follower to provide the input voltage for the network nodes. The Op-Amp calculation circuit, with a simple two-stage Op-Amp, achieves the image contour enhancement. By adjusting the bias voltage of the resistive network, the smoothing can be tuned to change the degree of the image's contour enhancement. Simulations of the cell circuit and a 16×16 2D circuit array are implemented using a CSMC 0.5 μm DPTM CMOS process.

  5. Multiple Linear Regression Analysis Indicates Association of P-Glycoprotein Substrate or Inhibitor Character with Bitterness Intensity, Measured with a Sensor.

    Science.gov (United States)

    Yano, Kentaro; Mita, Suzune; Morimoto, Kaori; Haraguchi, Tamami; Arakawa, Hiroshi; Yoshida, Miyako; Yamashita, Fumiyoshi; Uchida, Takahiro; Ogihara, Takuo

    2015-09-01

    P-glycoprotein (P-gp) regulates the absorption of many drugs in the gastrointestinal tract and their accumulation in tumor tissues, but the basis of substrate recognition by P-gp remains unclear. Bitter-tasting phenylthiocarbamide, which stimulates taste receptor 2 member 38 (T2R38), increases P-gp activity and is a substrate of P-gp. This led us to hypothesize that bitterness intensity might be a predictor of P-gp-inhibitor/substrate status. Here, we measured the bitterness intensity of a panel of P-gp substrates and nonsubstrates with various taste sensors, and used multiple linear regression analysis to examine the relationship between P-gp-inhibitor/substrate status and various physical properties, including the intensity of bitter taste measured with the taste sensor. We calculated the first principal component score (PC1) as the representative value of bitterness, as the outputs of all taste sensors were significantly correlated. The P-gp substrates showed remarkably greater mean bitterness intensity than non-P-gp substrates. We found that the Km values of P-gp substrates were correlated with molecular weight, log P, and the PC1 value, and the coefficient of determination (R²) of the linear regression equation was 0.63. This relationship might be useful as an aid to predict P-gp substrate status at an early stage of drug discovery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
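
    The regression step can be reproduced in a few lines; the sketch below (Python/NumPy) fits Km against molecular weight, log P and the bitterness score PC1 by ordinary least squares. All numbers are hypothetical placeholders, not the paper's data.

        import numpy as np

        MW   = np.array([320.0, 455.0, 385.0, 530.0, 410.0])    # molecular weight
        logP = np.array([2.1, 3.4, 1.8, 4.0, 2.9])              # lipophilicity
        PC1  = np.array([0.5, 1.8, 0.9, 2.4, 1.3])              # bitterness score
        Km   = np.array([85.0, 40.0, 70.0, 25.0, 55.0])         # response variable

        X = np.column_stack([np.ones_like(MW), MW, logP, PC1])  # design matrix with intercept
        beta, *_ = np.linalg.lstsq(X, Km, rcond=None)

        Km_hat = X @ beta
        r2 = 1 - np.sum((Km - Km_hat) ** 2) / np.sum((Km - Km.mean()) ** 2)
        print(beta, r2)   # coefficients and coefficient of determination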

  6. Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography

    Science.gov (United States)

    Chen, Shuyue; Jiang, Xing; Lu, Guirong

    2017-07-01

    A method was presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture images of the mercury column in the capillary, and a temperature control system was designed for sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by the least-squares method was analyzed. To verify the system, the interior temperature of a fully closed autoclave was measured from 29.53°C to 67.34°C. The experimental results show that the response of the system is approximately linear with a maximum uncertainty of 0.79°C. This technique provides a new approach to measuring the interior temperature of objects.
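
    The calibration described above amounts to a least-squares straight line relating the measured mercury column length to a reference temperature; a minimal sketch (Python/NumPy, hypothetical calibration points rather than the paper's data) is shown below.

        import numpy as np

        temperature = np.array([30.0, 40.0, 50.0, 60.0, 67.0])   # reference temperature (deg C)
        length_mm   = np.array([12.1, 18.0, 24.2, 30.1, 34.3])   # mercury length from radiographs (mm)

        slope, intercept = np.polyfit(length_mm, temperature, deg=1)   # T = a*L + b

        def temperature_from_length(L):
            return slope * L + intercept

        print(temperature_from_length(27.0))   # interior temperature for a new radiograph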

  7. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is developed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The cluster head node collects the compressed images from the ordinary nodes and transmits them to the station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme yields a higher quality of recovered images and lower total node energy consumption. It is beneficial to reducing the energy burden and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
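
    The core of the scheme, factorizing a non-negative image block and restoring it from the two factors, can be sketched as follows (Python with scikit-learn; the block, rank and parameters are hypothetical, and random data merely stands in for an image block).

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = rng.random((64, 64))        # stand-in for an image block scaled to [0, 1]

        r = 8                           # rank: 64*8 + 8*64 values sent instead of 64*64
        model = NMF(n_components=r, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X)      # computed and transmitted by the ordinary node
        H = model.components_

        X_rec = W @ H                   # restoration at the station
        psnr = 10 * np.log10(1.0 / np.mean((X - X_rec) ** 2))
        print(f"rank {r}: PSNR = {psnr:.1f} dB")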

  8. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves a performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
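
    The invariance idea can be condensed into a short least-squares ESPRIT sketch for a uniform linear array (Python/NumPy below; element count, spacing, angles and noise level are hypothetical, and the TLS variant and performance analysis of the paper are not reproduced).

        import numpy as np

        M, N, d = 8, 200, 0.5                  # sensors, snapshots, spacing in wavelengths
        angles = np.deg2rad([-20.0, 25.0])     # true directions of arrival
        rng = np.random.default_rng(0)

        # Steering matrix and simulated narrowband snapshots with additive noise
        A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(angles)[None, :])
        S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

        # Signal subspace from the sample covariance matrix
        R = X @ X.conj().T / N
        _, eigvecs = np.linalg.eigh(R)
        Es = eigvecs[:, -2:]                   # columns spanning the signal subspace

        # The shift invariance between the two overlapping subarrays yields Psi,
        # whose eigenvalues encode the directions of arrival.
        Psi, *_ = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)
        est = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi * d)))
        print(np.sort(est))                    # close to [-20, 25]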

  9. Experimental characterization of a 10 μW 55 μm-pitch FPN-compensated CMOS digital pixel sensor for X-ray imagers

    Energy Technology Data Exchange (ETDEWEB)

    Figueras, Roger, E-mail: roger.figueras@imb-cnm.csic.es [Institut de Microelectrònica de Barcelona IMB-CNM(CSIC), Bellaterra (Spain); Martínez, Ricardo; Terés, Lluís [Institut de Microelectrònica de Barcelona IMB-CNM(CSIC), Bellaterra (Spain); Serra-Graells, Francisco [Institut de Microelectrònica de Barcelona IMB-CNM(CSIC), Bellaterra (Spain); Department of Microelectronics and Electronic Systems, Universitat Autònoma de Barcelona, Bellaterra (Spain)

    2014-10-11

    This paper presents experimental results obtained from both electrical and radiation tests of a new room-temperature digital pixel sensor (DPS) circuit specifically optimized for digital direct X-ray imaging. The 10 μW 55 μm-pitch CMOS active pixel circuit under test includes self-bias capability, built-in test, selectable e⁻/h⁺ collection, 10-bit charge-integration A/D conversion, individual gain tuning for fixed pattern noise (FPN) cancellation, and digital-only I/O interface, which make it suitable for 2D modular chip assemblies in large and seamless sensing areas. Experimental results for this DPS architecture in 0.18 μm 1P6M CMOS technology are reported, returning good performance in terms of linearity, 2 ke⁻ rms of ENC, inter-pixel crosstalk below 0.5 LSB, 50 Mbps of I/O speed, and good radiation response for its use in digital X-ray imaging.

  10. Handbook on linear motor application

    International Nuclear Information System (INIS)

    1988-10-01

    This book guides the application of linear motors. It covers the classification and special characteristics of linear motors; the terminology of linear induction motors; the operating principle of the motor; types of one-sided linear induction motors; bilateral linear induction motors; the basics of linear DC motors, including the moving-coil, permanent-magnet moving and non-utility-electricity types; linear pulse motors, including the variable and permanent-magnet types; linear vibration actuators, including the moving-coil type; linear synchronous motors; linear electromagnetic motors; linear electromagnetic solenoids; technical organization; and magnetic levitation, linear motors and sensors.

  11. Robust Sensor Faults Reconstruction for a Class of Uncertain Linear Systems Using a Sliding Mode Observer: An LMI Approach

    International Nuclear Information System (INIS)

    Iskander, Boulaabi; Faycal, Ben Hmida; Moncef, Gossa; Anis, Sellami

    2009-01-01

    This paper presents a design method for a Sliding Mode Observer (SMO) for robust sensor fault reconstruction in systems with matched uncertainty. This class of uncertainty requires a known upper bound. The basic idea is to use the H∞ concept to design the observer, which minimizes the effect of the uncertainty on the reconstruction of the sensor faults. Specifically, we apply the equivalent output error injection concept from previous work in a Fault Detection and Isolation (FDI) scheme. The design and reconstruction problems can then be expressed and numerically formulated via Linear Matrix Inequality (LMI) optimization. Finally, a numerical example is given to illustrate the validity and applicability of the proposed approach.

  12. Edgeless silicon sensors for Medipix-based large-area X-ray imaging detectors

    International Nuclear Information System (INIS)

    Bosma, M J; Visser, J; Koffeman, E N; Evrard, O; De Moor, P; De Munck, K; Tezcan, D Sabuncuoglu

    2011-01-01

    Some X-ray imaging applications demand sensitive areas exceeding the active area of a single sensor. This requires a seamless tessellation of multiple detector modules with edgeless sensors. Our research is aimed at minimising the insensitive periphery that isolates the active area from the edge. Reduction of the edge-defect induced charge injection, caused by the deleterious effects of dicing, is an important step. We report on the electrical characterisation of 300 μm thick edgeless silicon p⁺-ν-n⁺ diodes, diced using deep reactive ion etching. Sensors with both n-type and p-type stop rings were fabricated in various edge topologies. Leakage currents in the active area are compared with those of sensors with a conventional design. As expected, we observe an inverse correlation between leakage-current density and both the edge distance and stop-ring width. From this correlation we determine a minimum acceptable edge distance of 50 μm. We also conclude that structures with a p-type stop ring show lower leakage currents and higher breakdown voltages than the ones with an n-type stop ring.

  13. Coseismic displacements from SAR image offsets between different satellite sensors: Application to the 2001 Bhuj (India) earthquake

    KAUST Repository

    Wang, Teng

    2015-09-05

    Synthetic aperture radar (SAR) image offset tracking is increasingly being used for measuring ground displacements, e.g., due to earthquakes and landslide movement. However, this technique has been applied only to images acquired by the same or identical satellites. Here we propose a novel approach for determining offsets between images acquired by different satellite sensors, extending the usability of existing SAR image archives. The offsets are measured between two multi-image reflectivity maps obtained from different SAR data sets, which provide significantly better results than with single preevent and postevent images. Application to the 2001 Mw 7.6 Bhuj earthquake reveals, for the first time, its near-field deformation using multiple preearthquake ERS and postearthquake Envisat images. The rupture model estimated from these cross-sensor offsets and teleseismic waveforms shows a compact fault slip pattern with fairly short rise times (<3 s) and a large stress drop (20 MPa), explaining the intense shaking observed in the earthquake.

  14. MHz rate X-Ray imaging with GaAs:Cr sensors using the LPD detector system

    Science.gov (United States)

    Veale, M. C.; Booker, P.; Cline, B.; Coughlan, J.; Hart, M.; Nicholls, T.; Schneider, A.; Seller, P.; Pape, I.; Sawhney, K.; Lozinskaya, A. D.; Novikov, V. A.; Tolbanov, O. P.; Tyazhev, A.; Zarubin, A. N.

    2017-02-01

    The STFC Rutherford Appleton Laboratory (U.K.) and Tomsk State University (Russia) have been working together to develop and characterise detector systems based on chromium-compensated gallium arsenide (GaAs:Cr) semiconductor material for high frame rate X-ray imaging. Previous work has demonstrated the spectroscopic performance of the material and its resistance to damage induced by high fluxes of X-rays. In this paper, recent results from experiments at the Diamond Light Source Synchrotron have demonstrated X-ray imaging with GaAs:Cr sensors at a frame rate of 3.7 MHz using the Large Pixel Detector (LPD) ASIC, developed by STFC for the European XFEL. Measurements have been made using a monochromatic 20 keV X-ray beam delivered in a single hybrid pulse with an instantaneous flux of up to ~1 × 10¹⁰ photons s⁻¹ mm⁻². The response of 500 μm GaAs:Cr sensors is compared to that of the standard 500 μm thick LPD Si sensors.

  15. Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors.

    Science.gov (United States)

    Malinowski, Pawel E; Georgitzikis, Epimitheas; Maes, Jorick; Vamvaka, Ioanna; Frazzica, Fortunato; Van Olmen, Jan; De Moor, Piet; Heremans, Paul; Hens, Zeger; Cheyns, David

    2017-12-10

    Imaging in the infrared wavelength range has been fundamental in scientific, military and surveillance applications. Currently, it is a crucial enabler of new industries such as autonomous mobility (for obstacle detection), augmented reality (for eye tracking) and biometrics. Ubiquitous deployment of infrared cameras (on a scale similar to visible cameras) is however prevented by the high manufacturing cost and low resolution related to the need to use image sensors based on flip-chip hybridization. One way to enable monolithic integration is to replace expensive, small-scale III-V-based detector chips with narrow-bandgap thin films compatible with 8- and 12-inch full-wafer processing. This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QD) with a tunable absorption peak. A photodiode with a 150-nm thick absorber in an inverted architecture shows a dark current of 10⁻⁶ A/cm² at -2 V reverse bias and an EQE above 20% at 1440 nm wavelength. Optical modeling shows that a top-illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling an order-of-magnitude cost reduction for infrared sensors.

  16. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0

    Data.gov (United States)

    National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to detect the...

  17. Time-of-flight camera via a single-pixel correlation image sensor

    Science.gov (United States)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on a single-pixel correlation image sensor is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the ‘four bucket principle’ are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
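
    The ‘four bucket principle’ recovers the modulation phase, and hence the depth, from four samples taken at quadrature phase offsets; a minimal sketch is given below (Python/NumPy, with a hypothetical modulation frequency and a toy target).

        import numpy as np

        c = 3.0e8            # speed of light (m/s)
        f_mod = 20.0e6       # assumed modulation frequency (Hz)

        def four_bucket_depth(I0, I1, I2, I3):
            phase = np.arctan2(I1 - I3, I0 - I2)       # recovered modulation phase
            phase = np.mod(phase, 2 * np.pi)           # wrap into [0, 2*pi)
            return c * phase / (4 * np.pi * f_mod)     # unambiguous range is c/(2*f_mod)

        # Toy example: a target at 3 m produces a phase of 4*pi*f_mod*d/c.
        d_true = 3.0
        phi = 4 * np.pi * f_mod * d_true / c
        I = [np.cos(phi - k * np.pi / 2) + 1.0 for k in range(4)]   # offset keeps samples positive
        print(four_bucket_depth(*I))    # ~3.0 m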

  18. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    Science.gov (United States)

    Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the perspective of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  19. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    Science.gov (United States)

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the perspective of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.

  20. Position and out-of-straightness measurement of a precision linear air-bearing stage by using a two-degree-of-freedom linear encoder

    International Nuclear Information System (INIS)

    Kimura, Akihide; Gao, Wei; Lijiang, Zeng

    2010-01-01

    This paper presents measurement of the X-directional position and the Z-directional out-of-straightness of a precision linear air-bearing stage with a two-degree-of-freedom (two-DOF) linear encoder, which is an optical displacement sensor for simultaneous measurement of the two-DOF displacements. The two-DOF linear encoder is composed of a reflective-type one-axis scale grating and an optical sensor head. A reference grating is placed perpendicular to the scale grating in the optical sensor head. The two-DOF displacements can be obtained from interference signals generated by the ±1 order diffracted beams from the two gratings. A prototype two-DOF linear encoder employing a scale grating with a grating period of approximately 1.67 µm measured the X-directional position and the Z-directional out-of-straightness of the linear air-bearing stage.

  1. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging

    International Nuclear Information System (INIS)

    Esposito, M; Evans, P M; Wells, K; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Allinson, N M

    2014-01-01

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detection quantum efficiency (DQE) at zero-frequency has been performed, resulting in a higher DQE for this

  2. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    Science.gov (United States)

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detection quantum efficiency (DQE) at zero-frequency has been performed, resulting in a higher DQE for this

  3. Augmented switching linear dynamical system model for gas concentration estimation with MOX sensors in an open sampling system.

    Science.gov (United States)

    Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne

    2014-07-11

    In this paper, we introduce a Bayesian time series model approach for gas concentration estimation using Metal Oxide (MOX) sensors in an Open Sampling System (OSS). Our approach focuses on compensating the slow response of MOX sensors while concurrently solving the problem of estimating the gas concentration in an OSS. The proposed Augmented Switching Linear System model allows all the sources of uncertainty arising at each step of the problem to be included in a single coherent probabilistic formulation. In particular, the problem of detecting on-line the current sensor dynamical regime and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, in which system modeling approaches have already been introduced but only provided an indirect relative measure proportional to the gas concentration, while the problem of modeling uncertainty was ignored. Our approach is validated experimentally, and its performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector.

  4. Visual Sensor Based Image Segmentation by Fuzzy Classification and Subregion Merge

    Directory of Open Access Journals (Sweden)

    Huidong He

    2017-01-01

    Full Text Available The extraction and tracking of targets in images shot by visual sensors have been studied extensively. The technology of image segmentation plays an important role in such tracking systems. This paper presents a new approach to color image segmentation based on a fuzzy color extractor (FCE). Different from many existing methods, the proposed approach provides a new classification of pixels in a source color image, which usually classifies an individual pixel into several subimages by fuzzy sets. The approach exploits two features, spatial proximity and color similarity, and it mainly consists of two algorithms: CreateSubImage and MergeSubImage. We apply the FCE to segment colors of the test images from the database at UC Berkeley in three different color spaces: RGB, HSV, and YUV. The comparative studies show that the FCE applied in the RGB space is superior to the HSV and YUV spaces. Finally, we compare the segmentation effect with the Canny edge detection and LoG edge detection algorithms. The results show that the FCE-based approach performs best in color image segmentation.

  5. Rationally encapsulated gold nanorods improving both linear and nonlinear photoacoustic imaging contrast in vivo.

    Science.gov (United States)

    Gao, Fei; Bai, Linyi; Liu, Siyu; Zhang, Ruochong; Zhang, Jingtao; Feng, Xiaohua; Zheng, Yuanjin; Zhao, Yanli

    2017-01-07

    Photoacoustic tomography has emerged as a promising non-invasive imaging technique that integrates the merits of high optical contrast with high ultrasound resolution in deep scattering medium. Unfortunately, the blood background in vivo seriously impedes the quality of imaging due to its comparable optical absorption with contrast agents, especially in conventional linear photoacoustic imaging modality. In this study, we demonstrated that two hybrids consisting of gold nanorods (Au NRs) and zinc tetra(4-pyridyl)porphyrin (ZnTPP) exhibited a synergetic effect in improving optical absorption, conversion efficiency from light to heat, and thermoelastic expansion, leading to a notable enhancement in both linear (four times greater) and nonlinear (more than six times) photoacoustic signals as compared with conventional Au NRs. Subsequently, we carefully investigated the interesting factors that may influence photoacoustic signal amplification, suggesting that the coating of ZnTPP on Au NRs could result in the reduction of gold interfacial thermal conductance with a solvent, so that the heat is more confined within the nanoparticle clusters for a significant enhancement of local temperature. Hence, both the linear and nonlinear photoacoustic signals are enhanced on account of better thermal confinement. The present work not only shows that ZnTPP coated Au NRs could serve as excellent photoacoustic nanoamplifiers, but also brings a perspective for photoacoustic image-guided therapy.

  6. IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels.

    Science.gov (United States)

    Yokogawa, Sozo; Oshiyama, Itaru; Ikeda, Harumi; Ebiko, Yoshiki; Hirano, Tomoyuki; Saito, Suguru; Oinoue, Takashi; Hagimoto, Yoshiya; Iwamoto, Hayato

    2017-06-19

    We report on the IR sensitivity enhancement of a back-illuminated CMOS Image Sensor (BI-CIS) with a 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinitely thick c-Si with 2D IPAs of pitches over 400 nm on its surface show more than 30% improvement of light absorption at λ = 850 nm, and a maximum enhancement of 43% at that wavelength is confirmed for the 540 nm pitch. A prototype BI-CIS sample with a pixel size of 1.2 μm square containing 400 nm pitch IPAs shows an 80% sensitivity enhancement at λ = 850 nm compared to the reference sample with a flat surface. This is due to diffraction by the IPA and total reflection at the pixel boundary. The NIR images taken by the demo camera equipped with a C-mount lens show a 75% sensitivity enhancement in the λ = 700-1200 nm wavelength range with negligible spatial resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears to be applicable to many different image sensor applications including security cameras, personal authentication, and range-finding Time-of-Flight cameras with IR illumination.

  7. A Dual-Linear Kalman Filter for Real-Time Orientation Determination System Using Low-Cost MEMS Sensors.

    Science.gov (United States)

    Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Yuan, Xuebing; Liu, Sheng

    2016-02-20

    To provide long-term reliable orientation, sensor fusion technologies are widely used to integrate available inertial sensors for low-cost orientation estimation. In this paper, a novel dual-linear Kalman filter was designed for a multi-sensor system integrating MEMS gyros, an accelerometer, and a magnetometer. The proposed filter precludes the impact of magnetic disturbances, to which the heading is subject, on the pitch and roll. The filter can achieve robust orientation estimation for different statistical models of the sensors. The root mean square errors (RMSE) of the estimated attitude angles are reduced by 30.6% under magnetic disturbances. Owing to the reduction of system complexity achieved by smaller matrix operations, the mean total time consumption is reduced by 23.8%. Meanwhile, the separated filter offers greater flexibility for the system configuration, as it is possible to switch the second-stage filter on or off to include or exclude the magnetometer compensation for the heading. Online experiments were performed on the homemade miniature orientation determination system (MODS) with a turntable. The average RMSE of the estimated orientation is less than 0.4° and 1° during the static and low-dynamic tests, respectively. More realistic tests on two-wheel self-balancing vehicle driving and indoor pedestrian walking were carried out to evaluate the performance of the designed MODS when high accelerations and angular rates were introduced. The test results demonstrate that the MODS is applicable for orientation estimation under various dynamic conditions. This paper provides a feasible alternative for low-cost orientation determination.
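
    The underlying fusion idea, predicting with the gyro and correcting with a gravity-derived tilt, can be reduced to a single-axis linear Kalman filter; the sketch below (Python/NumPy, hypothetical noise values) is only a simplified illustration, not the paper's dual-filter structure or magnetometer stage.

        import numpy as np

        dt = 0.01
        F = np.array([[1.0, -dt], [0.0, 1.0]])    # state: [pitch, gyro bias]
        B = np.array([dt, 0.0])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([1e-5, 1e-7])                 # process noise (angle, bias)
        R = np.array([[1e-2]])                    # accelerometer tilt noise

        x, P = np.zeros(2), np.eye(2)

        def kf_step(x, P, gyro_rate, acc_pitch):
            # predict with the gyro, correct with the accelerometer tilt angle
            x = F @ x + B * gyro_rate
            P = F @ P @ F.T + Q
            y = acc_pitch - H @ x
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        # Tilt from static gravity: pitch = atan2(-ax, sqrt(ay^2 + az^2))
        ax, ay, az = -0.17, 0.0, 0.985
        x, P = kf_step(x, P, gyro_rate=0.01, acc_pitch=np.arctan2(-ax, np.hypot(ay, az)))
        print(np.rad2deg(x[0]))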

  8. Image sensor pixel with on-chip high extinction ratio polarizer based on 65-nm standard CMOS technology.

    Science.gov (United States)

    Sasagawa, Kiyotaka; Shishido, Sanshiro; Ando, Keisuke; Matsuoka, Hitoshi; Noda, Toshihiko; Tokuda, Takashi; Kakiuchi, Kiyomi; Ohta, Jun

    2013-05-06

    In this study, we demonstrate a polarization-sensitive pixel for a complementary metal-oxide-semiconductor (CMOS) image sensor based on 65-nm standard CMOS technology. Using such a deep-submicron CMOS technology, it is possible to design fine metal patterns smaller than the wavelengths of visible light by using a metal wire layer. We designed and fabricated a metal wire grid polarizer on a 20 × 20 μm² pixel for an image sensor. An extinction ratio of 19.7 dB was observed at a wavelength of 750 nm.

  9. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sensor Data Records (SDRs), or Level 1b data, from the Visible Infrared Imaging Radiometer Suite (VIIRS) are the calibrated and geolocated radiance and reflectance...

  10. The instantaneous linear motion information measurement method based on inertial sensors for ships

    Science.gov (United States)

    Yang, Xu; Huang, Jing; Gao, Chen; Quan, Wei; Li, Ming; Zhang, Yanshun

    2018-05-01

    Ship instantaneous linear motion information is an important foundation for ship control, and it needs to be measured accurately. For this purpose, an instantaneous linear motion measurement method based on inertial sensors is put forward for ships. By introducing a half-fixed coordinate system to realize the separation between instantaneous linear motion and the ship's master movement, the instantaneous linear motion acceleration of ships can be obtained with higher accuracy. Then, a digital high-pass filter is applied to suppress the velocity error caused by low-frequency signals such as the Schuler period. Finally, the instantaneous linear motion displacement of ships can be measured accurately. Simulation experimental results show that the method is reliable and effective, and can realize the precise measurement of the velocity and displacement of instantaneous linear motion for ships.

  11. Low Computational-Cost Footprint Deformities Diagnosis Sensor through Angles, Dimensions Analysis and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    J. Rodolfo Maestre-Rendon

    2017-11-01

    Full Text Available Manual measurements of foot anthropometry can lead to errors since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive and correlated at 0.99 with the measurements from the digitalized image of the ink mat.

  12. Remote Sensing of Residue Management in Farms using Landsat 8 Sensor Imagery

    Directory of Open Access Journals (Sweden)

    M. A Rostami

    2017-10-01

    Full Text Available Introduction: Crop residues left on the field surface after harvest make subsequent farm operations difficult. To get rid of crop residues, farmers often choose the easiest way, i.e. burning. Burning is one of the common disposal methods for wheat and corn straw in some regions of the world. The present study aimed to investigate accurate methods for monitoring residue management after wheat harvest. With this aim, the potential of the Landsat 8 sensor was evaluated for monitoring residue burning, using satellite spectral indices and Linear Spectral Unmixing Analysis (LSUA). For this purpose, the correlation of ground data with satellite spectral indices and LSUA data was tested by linear regression. Materials and Methods: In this study we considered 12 farms where the remaining residues had been burned, 12 green farms, 12 bare farms and 12 farms with full crop residue cover. The spatial coordinates of the experimental fields were recorded with a GPS and field maps were drawn using ArcGIS software, version 10.1. Two methods were used to separate burned fields from the other farms: satellite spectral indices and Linear Spectral Unmixing Analysis. A multispectral Landsat 8 image acquired in 2015 was used. Landsat 8 products are delivered with radiometric, sensor and geometric corrections applied. Image pixel values are unique to Landsat 8 data and should not be directly compared to imagery from other sensors. Therefore, DN values must be converted to radiance and then to reflectance, which is useful when performing spectral analysis techniques such as transformations, band ratios and the Normalized Difference Vegetation Index (NDVI), etc. In this study, a number of spectral indices and Linear Spectral Unmixing Analysis data were extracted from the Landsat 8 image. All satellite image data were analyzed with the ENVI software package. The spectral indices used in this
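
    As a concrete example of the reflectance-based indices mentioned above, the NDVI is simply a normalized band ratio; the sketch below (Python/NumPy) computes it on a hypothetical pair of red and near-infrared surface reflectance arrays.

        import numpy as np

        red = np.array([[0.12, 0.30], [0.08, 0.25]])   # red surface reflectance
        nir = np.array([[0.40, 0.32], [0.45, 0.27]])   # near-infrared surface reflectance

        ndvi = (nir - red) / (nir + red + 1e-12)       # small epsilon avoids division by zero
        print(ndvi)   # high values: green vegetation; near zero: bare soil or residue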

  13. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    Science.gov (United States)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation, utilising techniques such as active edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick sensors, with a last pixel-to-edge distance of either 50 or 100 μm. The sensors were fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance of the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the 5th pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm), subject to biasing conditions. The imaging characteristics of all sensor types investigated are stable over time and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.

  14. Engineering workstation: Sensor modeling

    Science.gov (United States)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation comprises subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long-term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from the phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  15. Bayesian integration of sensor information and a multivariate dynamic linear model for prediction of dairy cow mastitis.

    Science.gov (United States)

    Jensen, Dan B; Hogeveen, Henk; De Vries, Albert

    2016-09-01

    Rapid detection of dairy cow mastitis is important so corrective action can be taken as soon as possible. Automatically collected sensor data used to monitor the performance and the health state of the cow could be useful for rapid detection of mastitis while reducing the labor needs for monitoring. The state of the art in combining sensor data to predict clinical mastitis still does not perform well enough to be applied in practice. Our objective was to combine a multivariate dynamic linear model (DLM) with a naïve Bayesian classifier (NBC) in a novel method using sensor and nonsensor data to detect clinical cases of mastitis. We also evaluated reductions in the number of sensors for detecting mastitis. With the DLM, we co-modeled 7 sources of sensor data (milk yield, fat, protein, lactose, conductivity, blood, body weight) collected at each milking for individual cows to produce one-step-ahead forecasts for each sensor. The observations were subsequently categorized according to the errors of the forecasted values and the estimated forecast variance. The categorized sensor data were combined with other data pertaining to the cow (week in milk, parity, mastitis history, somatic cell count category, and season) using Bayes' theorem, which produced a combined probability of the cow having clinical mastitis. If this probability was above a set threshold, the cow was classified as mastitis positive. To illustrate the performance of our method, we used sensor data from 1,003,207 milkings from the University of Florida Dairy Unit collected from 2008 to 2014. Of these, 2,907 milkings were associated with recorded cases of clinical mastitis. Using the DLM/NBC method, we reached an area under the receiver operating characteristic curve of 0.89, with a specificity of 0.81 when the sensitivity was set at 0.80. Specificities with omissions of sensor data ranged from 0.58 to 0.81. These results are comparable to other studies, but differences in data quality, definitions of
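
    The Bayes' theorem combination step can be illustrated compactly in log-odds form: each categorized cue multiplies the prior odds by a likelihood ratio under a conditional-independence (naive Bayes) assumption. The sketch below (Python/NumPy) uses entirely hypothetical probability tables, not the paper's fitted values.

        import numpy as np

        prior = 0.01            # prior probability of clinical mastitis at a milking
        likelihoods = {         # cue: (P(observed category | mastitis), P(observed category | healthy))
            "conductivity_forecast_error_high": (0.70, 0.10),
            "yield_forecast_error_low":         (0.60, 0.15),
            "scc_category_high":                (0.55, 0.08),
        }

        log_odds = np.log(prior / (1 - prior))
        for p_m, p_h in likelihoods.values():
            log_odds += np.log(p_m / p_h)       # naive conditional-independence assumption

        posterior = 1.0 / (1.0 + np.exp(-log_odds))
        print(posterior)        # flag the cow as mastitis-positive if above a threshold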

  16. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain, utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range, with root mean square error fits around 0.3 pixels and better.
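
    The frequency-domain core of such co-registration, phase correlation between a reference and a target image, fits in a few lines; the sketch below (Python/NumPy) estimates an integer pixel shift only and omits the sub-pixel refinement, tie-point grid and filtering that AROSICS provides.

        import numpy as np

        def phase_correlation_shift(ref, tgt):
            F1, F2 = np.fft.fft2(ref), np.fft.fft2(tgt)
            cross = np.conj(F1) * F2
            cross /= np.abs(cross) + 1e-12             # keep only the phase
            corr = np.abs(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # map peak to signed shifts
            if dx > ref.shape[1] // 2: dx -= ref.shape[1]
            return dy, dx

        rng = np.random.default_rng(1)
        ref = rng.random((128, 128))
        tgt = np.roll(ref, shift=(3, -5), axis=(0, 1))     # target displaced by a known amount
        print(phase_correlation_shift(ref, tgt))           # recovers the (3, -5) displacement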

  17. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

    Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the sensitivity to initial key values and system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order; instead, processing alternates from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attacks.
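
    The basic piecewise linear chaotic map iteration, and its use as a keystream for XOR diffusion, can be sketched as follows (Python/NumPy; the key values are hypothetical, and the paper's modified map, dual sequences, alternating pixel order and cipher feedback are not reproduced).

        import numpy as np

        def pwlcm(x, p):
            # classic PWLCM on (0, 1) with control parameter 0 < p < 0.5
            if x >= 0.5:
                x = 1.0 - x              # the map is symmetric about 0.5
            return x / p if x < p else (x - p) / (0.5 - p)

        def keystream(n, x0=0.3456, p=0.2345):
            x, out = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = pwlcm(x, p)
                out[i] = int(x * 256) % 256
            return out

        img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy "plain image"
        ks = keystream(img.size).reshape(img.shape)
        cipher = img ^ ks                                   # diffusion by XOR
        print(np.array_equal(cipher ^ ks, img))             # True: XOR with the keystream decrypts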

  18. Real-time two-dimensional imaging of potassium ion distribution using an ion semiconductor sensor with charged coupled device technology.

    Science.gov (United States)

    Hattori, Toshiaki; Masaki, Yoshitomo; Atsumi, Kazuya; Kato, Ryo; Sawada, Kazuaki

    2010-01-01

    Two-dimensional real-time observation of potassium ion distributions was achieved using an ion imaging device based on charge-coupled device (CCD) and metal-oxide semiconductor technologies, and an ion selective membrane. The CCD potassium ion image sensor was equipped with an array of 32 × 32 pixels (1024 pixels). It could record five frames per second with an area of 4.16 × 4.16 mm². Potassium ion images were produced instantly. The leaching of potassium ion from a 3.3 M KCl Ag/AgCl reference electrode was dynamically monitored in aqueous solution. The potassium ion selective membrane on the semiconductor consisted of plasticized poly(vinyl chloride) (PVC) with bis(benzo-15-crown-5). The addition of a polyhedral oligomeric silsesquioxane to the plasticized PVC membrane greatly improved adhesion of the membrane onto the Si₃N₄ of the semiconductor surface, and the potential response was stabilized. The potential response was linear from 10⁻² to 10⁻⁵ M logarithmic concentration of potassium ion. The selectivity coefficients were log K(pot, K⁺/Li⁺) = −2.85, log K(pot, K⁺/Na⁺) = −2.30, log K(pot, K⁺/Rb⁺) = −1.16, and log K(pot, K⁺/Cs⁺) = −2.05.

  19. Miniaturized thermal flow sensor with planar-integrated sensor structures on semicircular surface channels

    NARCIS (Netherlands)

    Dijkstra, Marcel; de Boer, Meint J.; Berenschot, Johan W.; Lammerink, Theodorus S.J.; Wiegerink, Remco J.; Elwenspoek, Michael Curt

    2008-01-01

    A calorimetric miniaturized flow sensor was realized with a linear sensor response measured for water flow up to flow rates in the order of 300 nl min⁻¹. A versatile technological concept is used to realize a sensor with a thermally isolated freely suspended silicon-rich silicon-nitride microchannel

  20. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    Science.gov (United States)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to achieve analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. So a digital CDS is adopted, which realizes the subtraction algorithm between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB rms with digital CDS at dark condition, which shows a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images with the illumination down to 0.1 lux.
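
    The off-chip subtraction itself is trivial once the reset and signal frames are available; the sketch below (Python/NumPy, hypothetical 10-bit frames) shows how the per-pixel reset level and fixed-pattern offsets cancel in the difference.

        import numpy as np

        rng = np.random.default_rng(2)
        offsets = rng.integers(180, 220, size=(256, 256)).astype(float)   # per-pixel reset level (LSB)
        reset_frame = offsets + rng.normal(0, 2, (256, 256))              # sampled at reset
        signal_frame = offsets + 55 + rng.normal(0, 2, (256, 256))        # sampled after integration

        image = signal_frame - reset_frame    # digital CDS output, ~55 LSB everywhere
        print(image.mean(), image.std())      # offsets cancel; only temporal noise remains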

  1. MEMS climate sensor for crops in greenhouses

    DEFF Research Database (Denmark)

    Birkelund, Karen; Jensen, Kim Degn; Højlund-Nielsen, Emil

    2010-01-01

    We have developed and fabricated a multi-sensor chip for greenhouse applications and demonstrated its functionality under controlled conditions. The sensor consists of a humidity sensor, a temperature sensor and three photodiodes sensitive to blue, red and white light, respectively. The humidity sensor responds linearly to humidity with a full-scale change of 5.6 pF. The best performing design measures a relative change of 48%. The temperature sensor responds linearly to temperature with a temperature coefficient of resistance of 3.95 × 10⁻³ K⁻¹ and a sensitivity of 26.5 Ω °C⁻¹. The temperature and humidity sensors have further been tested on plants in a greenhouse, demonstrating that individual plant behavior can be monitored.

  2. Imaging Voltage in Genetically Defined Neuronal Subpopulations with a Cre Recombinase-Targeted Hybrid Voltage Sensor.

    Science.gov (United States)

    Bayguinov, Peter O; Ma, Yihe; Gao, Yu; Zhao, Xinyu; Jackson, Meyer B

    2017-09-20

    Genetically encoded voltage indicators create an opportunity to monitor electrical activity in defined sets of neurons as they participate in the complex patterns of coordinated electrical activity that underlie nervous system function. Taking full advantage of genetically encoded voltage indicators requires a generalized strategy for targeting the probe to genetically defined populations of cells. To this end, we have generated a mouse line with an optimized hybrid voltage sensor (hVOS) probe within a locus designed for efficient Cre recombinase-dependent expression. Crossing this mouse with Cre drivers generated double transgenics expressing hVOS probe in GABAergic, parvalbumin, and calretinin interneurons, as well as hilar mossy cells, new adult-born neurons, and recently active neurons. In each case, imaging in brain slices from male or female animals revealed electrically evoked optical signals from multiple individual neurons in single trials. These imaging experiments revealed action potentials, dynamic aspects of dendritic integration, and trial-to-trial fluctuations in response latency. The rapid time response of hVOS imaging revealed action potentials with high temporal fidelity, and enabled accurate measurements of spike half-widths characteristic of each cell type. Simultaneous recording of rapid voltage changes in multiple neurons with a common genetic signature offers a powerful approach to the study of neural circuit function and the investigation of how neural networks encode, process, and store information. SIGNIFICANCE STATEMENT Genetically encoded voltage indicators hold great promise in the study of neural circuitry, but realizing their full potential depends on targeting the sensor to distinct cell types. Here we present a new mouse line that expresses a hybrid optical voltage sensor under the control of Cre recombinase. Crossing this line with Cre drivers generated double-transgenic mice, which express this sensor in targeted cell types. In

  3. Development of magnetic jxB sensor

    International Nuclear Information System (INIS)

    Kasai, Satoshi; Ishitsuka, Etsuo

    2001-12-01

    The improved mechanical sensor, i.e. the magnetic j×B sensor (a mechanical sensor and a part of the steady state hybrid-type magnetic sensor), has been designed. The basic structure of the sensor is similar to the previously developed sensor (old sensor) in the EDA phase. In this design, neutron resistant materials are selected for the load cell (strain gauge and sensor beam) and sensing coil/frame. In order to reduce the temperature drift of the sensor signal, four strain gauges with the same electrical properties and geometrical size are bonded on the sensor beam using an Al₂O₃ plasma spraying process, i.e., a couple of strain gauges is bonded on one side of the beam and another couple of gauges is bonded on the other side. These four strain gauges form an electrical bridge circuit. The zero-level drift of the output of the load cell used in the magnetic j×B sensor was reduced to about 1/20 compared with the old sensor. The temperature dependence of the output of the load cell is small. The linearity of the output of the load cell against weight was obtained. A non-linearity was observed in the sensitivity of the magnetic j×B sensor. The deviation of the sensitivity from the fitting line was less than 7% in the high magnetic field region. The neutron irradiation effect on the sensitivity of the sensor was investigated. The sensitivity of the sensor was gradually decreased by ∼30% at a neutron fluence of (1.8–2.8)×10²³ n/m² in the high magnetic field. During irradiation, the non-linearity was observed in the sensitivity. (author)

  4. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity.

    Science.gov (United States)

    Zhang, Fan; Niu, Hanben

    2016-06-29

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e⁻ rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena.

  5. Linear GPR Imaging Based on Electromagnetic Plane-Wave Spectra and Diffraction Tomography

    DEFF Research Database (Denmark)

    Meincke, Peter

    2004-01-01

    Two linear diffraction-tomography based inversion schemes, referred to as the Fourier transform method (FTM) and the far-field method (FFM), are derived for 3-dimensional fixed-offset GPR imaging of buried objects. The FTM and FFM are obtained by using different asymptotic approximations...

  6. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias.

    Science.gov (United States)

    Stefanov, Konstantin D; Clarke, Andrew S; Ivory, James; Holland, Andrew D

    2018-01-03

    A new pinned photodiode (PPD) CMOS image sensor with reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes, and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths.

  7. Imaging properties of small-pixel spectroscopic x-ray detectors based on cadmium telluride sensors

    International Nuclear Information System (INIS)

    Koenig, Thomas; Schulze, Julia; Zuber, Marcus; Rink, Kristian; Oelfke, Uwe; Butzer, Jochen; Hamann, Elias; Cecilia, Angelica; Zwerger, Andreas; Fauler, Alex; Fiederle, Michael

    2012-01-01

    Spectroscopic x-ray imaging by means of photon counting detectors has received growing interest during the past years. Critical to the image quality of such devices is their pixel pitch and the sensor material employed. This paper describes the imaging properties of Medipix2 MXR multi-chip assemblies bump bonded to 1 mm thick CdTe sensors. Two systems were investigated with pixel pitches of 110 and 165 μm, which are on the order of the mean free path lengths of the characteristic x-rays produced in their sensors. Peak widths were found to be almost constant across the energy range of 10 to 60 keV, with values of 2.3 and 2.2 keV (FWHM) for the two pixel pitches. The average number of pixels responding to a single incoming photon is about 1.85 and 1.45 at 60 keV, amounting to detective quantum efficiencies of 0.77 and 0.84 at a spatial frequency of zero. Energy selective CT acquisitions are presented, and the two pixel pitches' abilities to discriminate between iodine and gadolinium contrast agents are examined. It is shown that the choice of the pixel pitch translates into a minimum contrast agent concentration for which material discrimination is still possible. We finally investigate saturation effects at high x-ray fluxes and conclude with the finding that higher maximum count rates come at the cost of a reduced energy resolution. (paper)

  8. WE-AB-BRA-11: Improved Imaging of Permanent Prostate Brachytherapy Seed Implants by Combining an Endorectal X-Ray Sensor with a CT Scanner

    International Nuclear Information System (INIS)

    Steiner, J; Matthews, K; Jia, G

    2016-01-01

    Purpose: To test feasibility of the use of a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120–200 mA). The dental sensor was removed from the phantoms and normal helical CT and scout (0 degree) scans using typical parameters for pelvic CT (120 kV, auto-mA) were collected. A shift-and-add tomosynthesis algorithm was developed to localize the seed plane location normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernible using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernible using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher resolution, lower dose images of 1) permanent brachytherapy seed implants for post-implant dosimetry and 2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding

  9. WE-AB-BRA-11: Improved Imaging of Permanent Prostate Brachytherapy Seed Implants by Combining an Endorectal X-Ray Sensor with a CT Scanner

    Energy Technology Data Exchange (ETDEWEB)

    Steiner, J; Matthews, K; Jia, G [Louisiana State University, Baton Rouge, LA (United States)

    2016-06-15

    Purpose: To test feasibility of the use of a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120–200 mA). The dental sensor was removed from the phantoms and normal helical CT and scout (0 degree) scans using typical parameters for pelvic CT (120 kV, auto-mA) were collected. A shift-and-add tomosynthesis algorithm was developed to localize the seed plane location normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernible using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernible using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher resolution, lower dose images of 1) permanent brachytherapy seed implants for post-implant dosimetry and 2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding
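
    The shift-and-add reconstruction mentioned in the two records above is conceptually simple: for a chosen plane depth, every limited-arc projection is shifted laterally by the parallax expected for that depth and the shifted projections are summed, so seeds in that plane reinforce while seeds at other depths blur out. The toy sketch below illustrates only that idea under a simple parallel-shift geometry; the projection angles, pixel pitch, and seed depths are illustrative and this is not the authors' exact algorithm.

    import numpy as np

    def shift_and_add(projections, angles_deg, depth_mm, pixel_pitch_mm):
        """Sum projections after shifting each by the parallax expected for a
        plane at depth_mm above the detector (simple parallel-shift model)."""
        recon = np.zeros_like(projections[0], dtype=float)
        for proj, angle in zip(projections, angles_deg):
            shift_px = int(round(depth_mm * np.tan(np.radians(angle)) / pixel_pitch_mm))
            recon += np.roll(proj, shift_px, axis=1)
        return recon / len(projections)

    # Synthetic limited-arc projections of two point "seeds" at different depths.
    angles = [-20, -10, 0, 10, 20]          # projection angles in degrees
    pitch = 0.1                             # detector pixel pitch in mm
    projections = []
    for a in angles:
        img = np.zeros((64, 256))
        for depth, row in ((20.0, 20), (5.0, 44)):          # seed depths in mm
            col = 128 - int(round(depth * np.tan(np.radians(a)) / pitch))
            img[row, col] = 1.0
        projections.append(img)

    plane_20mm = shift_and_add(projections, angles, depth_mm=20.0, pixel_pitch_mm=pitch)
    print("seed in the 20 mm focal plane :", plane_20mm[20].max())   # reinforced
    print("seed out of the focal plane   :", plane_20mm[44].max())   # blurred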

  10. A NIR-BODIPY derivative for sensing copper(II) in blood and mitochondrial imaging

    Science.gov (United States)

    He, Shao-Jun; Xie, Yu-Wen; Chen, Qiu-Yun

    2018-04-01

    In order to develop NIR BODIPY dyes as mitochondria-targeting imaging agents and metal sensors, a side-chain-modified BODIPY (BPN) was synthesized and spectroscopically characterized. BPN has NIR emission at 765 nm when excited at 704 nm. The emission at 765 nm responded differently to Cu2+ and Mn2+ ions: BPN coordinated with Cu2+ to form a [BPNCu]2+ complex with quenched emission, while Mn2+ induced aggregation of BPN with a specific fluorescence enhancement. Moreover, BPN can be applied to monitor Cu2+ in live cells and to image mitochondria. Further, BPN was used as a sensor for the detection of Cu2+ ions in serum, with a linear detection range of 0.45-36.30 μM. The results indicate that BPN is a good sensor for detecting Cu2+ in serum and for imaging mitochondria. This study provides strategies for the future design of NIR sensors for the analysis of metal ions in blood.

  11. The separation-combination method of linear structures in remote sensing image interpretation and its application

    International Nuclear Information System (INIS)

    Liu Linqin

    1991-01-01

    The separation-combination method, a new kind of analysis method for linear structures in remote sensing image interpretation, is introduced, taking northwestern Fujian as an example, and its practical application is examined. The practice shows that the results not only reflect the intensities of linear structures in all directions at different locations, but also contribute to the zonation of linear structures and reveal their spatial distribution patterns. Based on analyses of linear structures, the method can provide additional remote sensing information for studies of regional mineralization patterns and for guiding ore-finding in combination with mineralization

  12. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Provisional Science Data Vp0

    Data.gov (United States)

    National Aeronautics and Space Administration — The International Space Station (ISS) Lightning Imaging Sensor (LIS) datasets were collected by the LIS instrument on the ISS used to detect the distribution and...

  13. Polarization Imaging Apparatus with Auto-Calibration

    Science.gov (United States)

    Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)

    2013-01-01

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned 22.5 deg, a second variable phase retarder with its optical axis aligned 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I(sub 0), I(sub 1), I(sub 2) and I(sub 3), of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then four Stokes components of a Stokes image, S(sub 0), S(sub 1), S(sub 2) and S(sub 3), were calculated using the four intensity images.
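
    The abstract does not spell out the linear combinations used to turn the four intensity images into Stokes components, so the sketch below simply assumes the generic polarimetric relation I = A·S with a calibrated 4x4 instrument matrix A, and recovers the Stokes image pixel by pixel as S = A⁻¹·I. The matrix values, image size, and test data are placeholders, not the apparatus' actual calibration.

    import numpy as np

    # Hypothetical 4x4 instrument matrix: row k gives the analyzer vector of the
    # k-th VPR setting, i.e. I_k = A[k] @ S for each pixel. In a real apparatus
    # this matrix would come from the retarder settings and the calibration step.
    A = 0.5 * np.array([[1.0,  1.0,  0.0,  0.0],
                        [1.0, -1.0,  0.0,  0.0],
                        [1.0,  0.0,  1.0,  0.0],
                        [1.0,  0.0,  0.0,  1.0]])
    A_inv = np.linalg.inv(A)

    def stokes_image(i0, i1, i2, i3):
        """Recover (S0, S1, S2, S3) images from four intensity images."""
        intensities = np.stack([i0, i1, i2, i3], axis=-1)     # (H, W, 4)
        stokes = intensities @ A_inv.T                        # apply A^-1 per pixel
        return [stokes[..., k] for k in range(4)]

    # Round-trip check with a synthetic, partially polarized Stokes image.
    h, w = 4, 4
    s_true = np.stack([np.ones((h, w)), 0.3 * np.ones((h, w)),
                       0.2 * np.ones((h, w)), 0.1 * np.ones((h, w))], axis=-1)
    imgs = [s_true @ row for row in A]          # simulate the four acquisitions
    s0, s1, s2, s3 = stokes_image(*imgs)
    print(np.allclose(s1, 0.3), np.allclose(s3, 0.1))   # expected: True True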

  14. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, simulation results show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods under low-bitrate transmission.
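
    The paper's pipeline combines patch clustering with a deep linear autoencoder; as a rough stand-in, the toy sketch below clusters image patches with K-means and then learns a separate linear encode/decode map per cluster using PCA (a linear autoencoder with an optimal linear code). Patch size, cluster count, code length, and the random test image are all illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                 # stand-in for a real image

    # 1) Cut the image into non-overlapping 8x8 patches and flatten them.
    p = 8
    patches = (image.reshape(64 // p, p, 64 // p, p)
                    .swapaxes(1, 2)
                    .reshape(-1, p * p))

    # 2) K-means groups similar patches so each cluster gets its own linear code.
    k = 4
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(patches)

    # 3) Per-cluster linear autoencoder, approximated here by PCA: a linear
    #    encoder (projection) followed by a linear decoder (back-projection).
    reconstructed = np.empty_like(patches)
    for c in range(k):
        idx = labels == c
        if not idx.any():
            continue
        n_components = int(min(16, idx.sum()))   # low-dimensional linear code
        pca = PCA(n_components=n_components).fit(patches[idx])
        codes = pca.transform(patches[idx])      # linear "encoding"
        reconstructed[idx] = pca.inverse_transform(codes)

    mse = np.mean((patches - reconstructed) ** 2)
    print(f"mean squared reconstruction error: {mse:.5f}")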

  15. Quantum dots in imaging, drug delivery and sensor applications.

    Science.gov (United States)

    Matea, Cristian T; Mocan, Teodora; Tabaran, Flaviu; Pop, Teodora; Mosteanu, Ofelia; Puia, Cosmin; Iancu, Cornel; Mocan, Lucian

    2017-01-01

    Quantum dots (QDs), also known as nanoscale semiconductor crystals, are nanoparticles with unique optical and electronic properties such as bright and intensive fluorescence. Since most conventional organic label dyes do not offer the near-infrared (>650 nm) emission possibility, QDs, with their tunable optical properties, have gained a lot of interest. They possess characteristics such as good chemical and photo-stability, high quantum yield and size-tunable light emission. Different types of QDs can be excited with the same light wavelength, and their narrow emission bands can be detected simultaneously for multiple assays. There is an increasing interest in the development of nano-theranostics platforms for simultaneous sensing, imaging and therapy. QDs have great potential for such applications, with notable results already published in the fields of sensors, drug delivery and biomedical imaging. This review summarizes the latest developments available in literature regarding the use of QDs for medical applications.

  16. Comparison of Three Non-Imaging Angle-Diversity Receivers as Input Sensors of Nodes for Indoor Infrared Wireless Sensor Networks: Theory and Simulation

    Directory of Open Access Journals (Sweden)

    Beatriz R. Mendoza

    2016-07-01

    Full Text Available In general, the use of angle-diversity receivers makes it possible to reduce the impact of ambient light noise, path loss and multipath distortion, in part by exploiting the fact that they often receive the desired signal from different directions. Angle-diversity detection can be performed using a composite receiver with multiple detector elements looking in different directions. These are called non-imaging angle-diversity receivers. In this paper, a comparison of three non-imaging angle-diversity receivers as input sensors of nodes for an indoor infrared (IR) wireless sensor network is presented. The receivers considered are the conventional angle-diversity receiver (CDR), the sectored angle-diversity receiver (SDR), and the self-orienting receiver (SOR), which have been proposed or studied by research groups in Spain. To this end, the effective signal-collection area of the three receivers is modelled and a Monte-Carlo-based ray-tracing algorithm is implemented which allows us to investigate the effect on the signal to noise ratio and main IR channel parameters, such as path loss and rms delay spread, of using the three receivers in conjunction with different combination techniques in IR links operating at low bit rates. Based on the results of the simulations, we show that the use of a conventional angle-diversity receiver in conjunction with the equal-gain combining technique provides the solution with the best signal to noise ratio, the lowest computational capacity and the lowest transmitted power requirements, which comprise the main limitations for sensor nodes in an indoor infrared wireless sensor network.
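
    Of the combination techniques compared above, equal-gain combining is the simplest: the signals of all detector branches are summed with unit weights, whereas select-best combining keeps only the strongest branch. The sketch below compares the two under an idealized additive-Gaussian-noise model; the branch gains and noise level are made-up numbers rather than results from the paper's ray-tracing simulations.

    import numpy as np

    rng = np.random.default_rng(1)

    def snr_db(signal, noise):
        return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

    # Idealized branches of an angle-diversity receiver: each sees the same data
    # signal scaled by a different channel gain, plus independent ambient noise.
    n_samples = 10_000
    data = rng.choice([-1.0, 1.0], size=n_samples)        # OOK/BPSK-like symbols
    gains = np.array([1.0, 0.6, 0.3])                     # per-branch signal gain
    noise = rng.normal(0.0, 0.4, size=(len(gains), n_samples))

    # Equal-gain combining: sum all branches with unit weights.
    egc_signal = gains.sum() * data
    egc_noise = noise.sum(axis=0)

    # Select-best combining: keep only the branch with the largest gain.
    best = np.argmax(gains)
    sb_signal = gains[best] * data
    sb_noise = noise[best]

    print(f"equal-gain combining SNR : {snr_db(egc_signal, egc_noise):5.2f} dB")
    print(f"select-best SNR          : {snr_db(sb_signal, sb_noise):5.2f} dB")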

  17. Commissioning and quality assurance of the x-ray volume imaging system of an image-guided radiotherapy capable linear accelerator

    International Nuclear Information System (INIS)

    Muralidhar, K.R.; Narayana Murthy, P.; Kumar, Rajneesh

    2008-01-01

    An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy. (author)

  18. Commissioning and quality assurance of the X-ray volume Imaging system of an image-guided radiotherapy capable linear accelerator

    Directory of Open Access Journals (Sweden)

    Muralidhar K

    2008-01-01

    Full Text Available An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy.

  19. Transition-edge sensor imaging arrays for astrophysics applications

    Science.gov (United States)

    Burney, Jennifer Anne

    Many interesting objects in our universe currently elude observation in the optical band: they are too faint or they vary rapidly and thus any structure in their radiation is lost over the period of an exposure. Conventional photon detectors cannot simultaneously provide energy resolution and time-stamping of individual photons at fast rates. Superconducting detectors have recently made the possibility of simultaneous photon counting, imaging, and energy resolution a reality. Our research group has pioneered the use of one such detector, the Transition-Edge Sensor (TES). TES physics is simple and elegant. A thin superconducting film, biased at its critical temperature, can act as a particle detector: an incident particle deposits energy and drives the film into its superconducting-normal transition. By inductively coupling the detector to a SQUID amplifier circuit, this resistance change can be read out as a current pulse, and its energy deduced by integrating over the pulse. TESs can be used to accurately time-stamp (to 0.1 μs) and energy-resolve (0.15 eV at 1.6 eV) near-IR/visible/near-UV photons at rates of 30 kHz. The first astronomical observations using fiber-coupled detectors were made at the Stanford Student Observatory 0.6 m telescope in 1999. Further observations of the Crab Pulsar from the 107" telescope at the University of Texas McDonald Observatory showed rapid phase variations over the near-IR/visible/near-UV band. These preliminary observations provided a glimpse into a new realm of observations of pulsars, binary systems, and accreting black holes promised by TES arrays. This thesis describes the development, characterization, and preliminary use of the first camera system based on Transition-Edge Sensors. While single-device operation is relatively well-understood, the operation of a full imaging array poses significant challenges. This thesis addresses all aspects related to the creation and characterization of this cryogenic imaging

  20. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    Science.gov (United States)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. One-chip image systems, in which the image sensor has a full digital interface, can bring image-capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the image more realistic and colorful. One can say that color filters make life more colorful. A color filter blocks the incident image light except for the color whose wavelength and transmittance match the filter itself. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image-sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the color filter an increasingly important feature. Although challenging, the color filter process is well worth developing. We provide the best service in terms of shorter cycle time, excellent color quality, and high and stable yield. The key issues that an advanced color process has to solve and implement are planarization and microlens technology. Many key points of color filter process technology that have to be considered will also be described in this paper.

  1. Robust linearized image reconstruction for multifrequency EIT of the breast.

    Science.gov (United States)

    Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C

    2008-10-01

    Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.
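
    A linearized EIT reconstruction of the kind described reduces, at its core, to a regularized least-squares update around a reference conductivity: given a Jacobian (sensitivity) matrix J relating small conductivity changes to boundary-voltage changes, one solves delta_sigma = (J^T J + lambda I)^(-1) J^T delta_v. The sketch below shows only that generic step with a random stand-in Jacobian; it does not reproduce the Rensselaer algorithm, its skin-layer scaling, or its electrode-exclusion logic.

    import numpy as np

    rng = np.random.default_rng(2)

    def linearized_eit_step(jacobian, dv, lam=1.0):
        """One Tikhonov-regularized linearized update:
        delta_sigma = argmin ||J d - dv||^2 + lam * ||d||^2."""
        jtj = jacobian.T @ jacobian
        reg = lam * np.eye(jtj.shape[0])
        return np.linalg.solve(jtj + reg, jacobian.T @ dv)

    n_meas, n_vox = 120, 400                 # made-up measurement/voxel counts
    J = rng.normal(size=(n_meas, n_vox))     # stand-in sensitivity matrix
    true_dsigma = np.zeros(n_vox)
    true_dsigma[180:190] = 0.5               # small conductivity perturbation
    dv = J @ true_dsigma + rng.normal(0.0, 0.01, n_meas)   # noisy voltage change

    est = linearized_eit_step(J, dv, lam=1.0)
    inside = est[180:190].mean()                                   # true support
    outside = np.delete(est, np.arange(180, 190)).mean()           # background
    print(f"mean estimate inside perturbation : {inside:.3f}")
    print(f"mean estimate elsewhere           : {outside:.3f}")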

  2. Fast responsive fluorescence turn-on sensor for Cu2+ and its application in live cell imaging

    International Nuclear Information System (INIS)

    Wang Jiaoliang; Li Hao; Long Liping; Xiao Guqing; Xie Dan

    2012-01-01

    A new effective fluorescent sensor based on rhodamine was synthesized, which was induced by Cu2+ in aqueous media to produce turn-on fluorescence. The new sensor 1 exhibited good selectivity for Cu2+ over other heavy and transition metal (HTM) ions in H2O/CH3CN (7:3, v/v). Upon addition of Cu2+, a remarkable color change from colorless to pink was easily observed by the naked eye, and the dramatic fluorescence turn-on was corroborated. Furthermore, kinetic assay indicates that sensor 1 could be used for real-time tracking of Cu2+ in cells and organisms. In addition, the turn-on fluorescent change upon the addition of Cu2+ was also applied in bioimaging. - Highlights: ► A new effective fluorescent sensor based on rhodamine was developed to detect Cu2+. ► The sensor exhibited fast response, good selectivity at physiological pH condition. ► The sensor was an effective intracellular Cu2+ ion imaging agent.

  3. Scintillator high-gain avalanche rushing photoconductor active-matrix flat panel imager: zero-spatial frequency x-ray imaging properties of the solid-state SHARP sensor structure.

    Science.gov (United States)

    Wronski, M; Zhao, W; Tanioka, K; Decrescenzo, G; Rowlands, J A

    2012-11-01

    The authors are investigating the feasibility of a new type of solid-state x-ray imaging sensor with programmable avalanche gain: scintillator high-gain avalanche rushing photoconductor active matrix flat panel imager (SHARP-AMFPI). The purpose of the present work is to investigate the inherent x-ray detection properties of SHARP and demonstrate its wide dynamic range through programmable gain. A distributed resistive layer (DRL) was developed to maintain stable avalanche gain operation in a solid-state HARP. The signal and noise properties of the HARP-DRL for optical photon detection were investigated as a function of avalanche gain both theoretically and experimentally, and the results were compared with HARP tube (with electron beam readout) used in previous investigations of zero spatial frequency performance of SHARP. For this new investigation, a solid-state SHARP x-ray image sensor was formed by direct optical coupling of the HARP-DRL with a structured cesium iodide (CsI) scintillator. The x-ray sensitivity of this sensor was measured as a function of avalanche gain and the results were compared with the sensitivity of HARP-DRL measured optically. The dynamic range of HARP-DRL with variable avalanche gain was investigated for the entire exposure range encountered in radiography/fluoroscopy (R/F) applications. The signal from HARP-DRL as a function of electric field showed stable avalanche gain, and the noise associated with the avalanche process agrees well with theory and previous measurements from a HARP tube. This result indicates that when coupled with CsI for x-ray detection, the additional noise associated with avalanche gain in HARP-DRL is negligible. The x-ray sensitivity measurements using the SHARP sensor produced identical avalanche gain dependence on electric field as the optical measurements with HARP-DRL. Adjusting the avalanche multiplication gain in HARP-DRL enabled a very wide dynamic range which encompassed all clinically relevant

  4. Scintillator high-gain avalanche rushing photoconductor active-matrix flat panel imager: Zero-spatial frequency x-ray imaging properties of the solid-state SHARP sensor structure

    International Nuclear Information System (INIS)

    Wronski, M.; Zhao, W.; Tanioka, K.; DeCrescenzo, G.; Rowlands, J. A.

    2012-01-01

    Purpose: The authors are investigating the feasibility of a new type of solid-state x-ray imaging sensor with programmable avalanche gain: scintillator high-gain avalanche rushing photoconductor active matrix flat panel imager (SHARP-AMFPI). The purpose of the present work is to investigate the inherent x-ray detection properties of SHARP and demonstrate its wide dynamic range through programmable gain. Methods: A distributed resistive layer (DRL) was developed to maintain stable avalanche gain operation in a solid-state HARP. The signal and noise properties of the HARP-DRL for optical photon detection were investigated as a function of avalanche gain both theoretically and experimentally, and the results were compared with HARP tube (with electron beam readout) used in previous investigations of zero spatial frequency performance of SHARP. For this new investigation, a solid-state SHARP x-ray image sensor was formed by direct optical coupling of the HARP-DRL with a structured cesium iodide (CsI) scintillator. The x-ray sensitivity of this sensor was measured as a function of avalanche gain and the results were compared with the sensitivity of HARP-DRL measured optically. The dynamic range of HARP-DRL with variable avalanche gain was investigated for the entire exposure range encountered in radiography/fluoroscopy (R/F) applications. Results: The signal from HARP-DRL as a function of electric field showed stable avalanche gain, and the noise associated with the avalanche process agrees well with theory and previous measurements from a HARP tube. This result indicates that when coupled with CsI for x-ray detection, the additional noise associated with avalanche gain in HARP-DRL is negligible. The x-ray sensitivity measurements using the SHARP sensor produced identical avalanche gain dependence on electric field as the optical measurements with HARP-DRL. Adjusting the avalanche multiplication gain in HARP-DRL enabled a very wide dynamic range which encompassed all

  5. Hydrogen peroxide sensor: Uniformly decorated silver nanoparticles on polypyrrole for wide detection range

    Science.gov (United States)

    Nia, Pooria Moozarm; Meng, Woi Pei; Alias, Y.

    2015-12-01

    Electrochemically synthesized polypyrrole (PPy) decorated with silver nanoparticles (AgNPs) was prepared and used as a nonenzymatic sensor for hydrogen peroxide (H2O2) detection. Polypyrrole was fabricated through electrodeposition, while silver nanoparticles were deposited on polypyrrole by the same technique. The field emission scanning electron microscopy (FESEM) images showed that the electrodeposited AgNPs were aligned along the PPy uniformly and the mean particle size of AgNPs is around 25 nm. The electrocatalytic activity of AgNPs-PPy-GCE toward H2O2 was studied using chronoamperometry and cyclic voltammetry. The first linear section was in the range of 0.1-5 mM with a limit of detection of 0.115 μmol l⁻¹ and the second linear section was raised to 120 mM with a correlation factor of 0.256 μmol l⁻¹ (S/N of 3). Moreover, the sensor presented excellent stability, selectivity, repeatability and reproducibility. These excellent performances make AgNPs-PPy/GCE an ideal nonenzymatic H2O2 sensor.
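
    The figures of merit quoted above (linear range, limit of detection at S/N = 3) come from a standard amperometric calibration: fit current against concentration over the linear section and take LOD = 3·(blank noise)/slope. The sketch below runs that calculation on made-up calibration data, so the numbers it prints are illustrative only.

    import numpy as np

    # Made-up amperometric calibration data: current response vs H2O2 concentration.
    conc_mM = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])      # linear section
    current_uA = np.array([0.41, 2.1, 4.0, 8.2, 12.1, 16.3, 20.2])
    blank_noise_uA = 0.006                                        # std of blank signal

    # Least-squares fit of the linear section: i = slope * c + intercept.
    slope, intercept = np.polyfit(conc_mM, current_uA, 1)

    # Limit of detection at a signal-to-noise ratio of 3.
    lod_mM = 3 * blank_noise_uA / slope
    print(f"sensitivity : {slope:.2f} uA/mM")
    print(f"LOD (S/N=3) : {lod_mM*1000:.2f} uM")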

  6. Radiometric, geometric, and image quality assessment of ALOS AVNIR-2 and PRISM sensors

    Science.gov (United States)

    Saunier, S.; Goryl, P.; Chander, G.; Santer, R.; Bouvet, M.; Collet, B.; Mambimba, A.; Kocaman, Aksakal S.

    2010-01-01

    The Advanced Land Observing Satellite (ALOS) was launched on January 24, 2006, by a Japan Aerospace Exploration Agency (JAXA) H-IIA launcher. It carries three remote-sensing sensors: 1) the Advanced Visible and Near-Infrared Radiometer type 2 (AVNIR-2); 2) the Panchromatic Remote-Sensing Instrument for Stereo Mapping (PRISM); and 3) the Phased-Array type L-band Synthetic Aperture Radar (PALSAR). Within the framework of ALOS Data European Node, as part of the European Space Agency (ESA), the European Space Research Institute worked alongside JAXA to provide contributions to the ALOS commissioning phase plan. This paper summarizes the strategy that was adopted by ESA to define and implement a data verification plan for missions operated by external agencies; these missions are classified by the ESA as third-party missions. The ESA was supported in the design and execution of this plan by GAEL Consultant. The verification of ALOS optical data from PRISM and AVNIR-2 sensors was initiated 4 months after satellite launch, and a team of principal investigators assembled to provide technical expertise. This paper includes a description of the verification plan and summarizes the methodologies that were used for radiometric, geometric, and image quality assessment. The successful completion of the commissioning phase has led to the sensors being declared fit for operations. The consolidated measurements indicate that the radiometric calibration of the AVNIR-2 sensor is stable and agrees with the Landsat-7 Enhanced Thematic Mapper Plus and the Envisat MEdium-Resolution Imaging Spectrometer calibration. The geometrical accuracy of PRISM and AVNIR-2 products improved significantly and remains under control. The PRISM modulation transfer function is monitored for improved characterization.

  7. Study and development of a laser based alignment system for the compact linear collider

    CERN Document Server

    AUTHOR|(CDS)2083149

    The first objective of the PhD thesis is to develop a new type of positioning sensor to align components at micrometre level over 200 m with respect to a laser beam as straight line reference. The second objective is to estimate the measurement accuracy of the total alignment system over 200 m. The context of the PhD thesis is the Compact Linear Collider project, which is a study for a future particle accelerator. The proposed positioning sensor is made of a camera and an open/close shutter. The sensor can measure the position of the laser beam with respect to its own coordinate system. To do a measurement, the shutter closes, a laser spot appears on it, the camera captures a picture of the laser spot and the coordinates of the laser spot centre are reconstructed in the sensor coordinate system with image processing. Such a measurement requires reference targets on the positioning sensor. To reach the first objective of the PhD thesis, we used laser theory...

  8. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation- type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
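
    The range computation behind such a sensor is classic laser triangulation: with the laser offset a lateral distance d from the camera and aimed parallel to the optical axis, the pixel offset of the laser-spot centroid from the image centre gives the angle between the optical axis and the line of sight, and range ≈ d / tan(angle). The sketch below assumes a simple pinhole model with a known focal length in pixels; all numbers are illustrative.

    import math

    def triangulation_range(spot_px, center_px, focal_px, baseline_m):
        """Range from the lateral pixel offset of the laser-spot centroid.

        Pinhole model: tan(alpha) = (spot - center) / focal_length_in_pixels,
        and for a laser aimed parallel to the optical axis at lateral offset
        baseline_m, range = baseline / tan(alpha)."""
        offset_px = spot_px - center_px
        if offset_px == 0:
            return math.inf                  # spot on axis -> target at infinity
        tan_alpha = offset_px / focal_px
        return baseline_m / tan_alpha

    focal_px = 1200.0       # focal length expressed in pixels (illustrative)
    center_px = 640.0       # principal point (x) of a 1280-pixel-wide sensor
    baseline = 0.05         # 5 cm laser-to-camera offset

    for spot in (700.0, 670.0, 655.0):
        r = triangulation_range(spot, center_px, focal_px, baseline)
        print(f"spot at pixel {spot:6.1f} -> range {r:5.2f} m")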

  9. Handheld and mobile hyperspectral imaging sensors for wide-area standoff detection of explosives and chemical warfare agents

    Science.gov (United States)

    Gomer, Nathaniel R.; Gardner, Charles W.; Nelson, Matthew P.

    2016-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the investigation and analysis of targets in complex background with a high degree of autonomy. HSI is beneficial for the detection of threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Two HSI techniques that have proven to be valuable are Raman and shortwave infrared (SWIR) HSI. Unfortunately, current generation HSI systems have numerous size, weight, and power (SWaP) limitations that make their potential integration onto a handheld or field portable platform difficult. The systems that are field-portable do so by sacrificing system performance, typically by providing an inefficient area search rate, requiring close proximity to the target for screening, and/or eliminating the potential to conduct real-time measurements. To address these shortcomings, ChemImage Sensor Systems (CISS) is developing a variety of wide-field hyperspectral imaging systems. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors focused on sensor design and detection results.

  10. High-speed particle tracking in microscopy using SPAD image sensors

    Science.gov (United States)

    Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2018-02-01

    Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100 kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays in high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and in intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
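
    Because each binary SPAD frame records at most one detection per pixel, intensity images are built off-chip by summing many 1-bit frames, and a bright particle can then be tracked by taking the centroid of each summed frame. The toy sketch below illustrates that summation-and-centroid step on synthetic data; the frame count, spot model, and particle path are made up.

    import numpy as np

    rng = np.random.default_rng(3)

    def window_centroid(img, half=4):
        """Centroid of the brightest blob, taken in a small window around the peak."""
        r0, c0 = np.unravel_index(np.argmax(img), img.shape)
        r_lo, c_lo = max(r0 - half, 0), max(c0 - half, 0)
        win = img[r_lo:r0 + half + 1, c_lo:c0 + half + 1]
        rr, cc = np.indices(win.shape)
        total = win.sum()
        return r_lo + (rr * win).sum() / total, c_lo + (cc * win).sum() / total

    h, w = 32, 32
    frames_per_window = 200                      # 1-bit frames summed per time point
    path = [(8.0, 8.0), (10.5, 12.0), (13.0, 16.5)]   # synthetic particle positions

    rows, cols = np.indices((h, w))
    for true_r, true_c in path:
        # Photon detection probability: dim background plus a bright moving spot.
        rate = 0.02 + 0.5 * np.exp(-((rows - true_r) ** 2 + (cols - true_c) ** 2) / 4.0)
        summed = np.zeros((h, w))
        for _ in range(frames_per_window):
            summed += rng.random((h, w)) < rate  # one binary SPAD frame
        est_r, est_c = window_centroid(summed)
        print(f"true ({true_r:4.1f}, {true_c:4.1f})  estimated ({est_r:4.1f}, {est_c:4.1f})")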

  11. Ultrasensitive surveillance of sensors and processes

    International Nuclear Information System (INIS)

    Wegerich, S.W.; Jarman, K.K.; Gross, K.C.

    1999-01-01

    A method and apparatus for monitoring a source of data for determining an operating state of a working system are disclosed. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system, activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source, activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related, activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors, and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system
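
    The first branch of the patented method, the sequential probability ratio test, accumulates log-likelihood ratios of incoming residuals and declares the sensor normal or degraded once the sum crosses one of two thresholds. The sketch below is a textbook Gaussian mean-shift SPRT with standard Wald thresholds, not the patented procedure; the error rates and signal model are generic defaults.

    import math
    import random

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Wald SPRT between N(mu0, sigma) (normal) and N(mu1, sigma) (degraded)."""
        upper = math.log((1 - beta) / alpha)      # accept H1: degraded sensor
        lower = math.log(beta / (1 - alpha))      # accept H0: normal operation
        llr = 0.0
        for n, x in enumerate(samples, start=1):
            # Log-likelihood ratio increment for a Gaussian mean shift.
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
            if llr >= upper:
                return "degraded", n
            if llr <= lower:
                return "normal", n
        return "undecided", len(samples)

    random.seed(0)
    healthy = (random.gauss(0.0, 1.0) for _ in range(1000))
    drifted = (random.gauss(0.8, 1.0) for _ in range(1000))
    print(sprt(healthy))   # expected: ('normal', ...) after a handful of samples
    print(sprt(drifted))   # expected: ('degraded', ...) shortly after the drift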

  12. Frequency selective non-linear blending to improve image quality in liver CT

    International Nuclear Information System (INIS)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M.; Fritz, J.

    2016-01-01

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115 HU, delta of 25 HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62±1.10, CNR NLB 3.6±2.94, p=0.0002) and portal veins (CNR Standard 1.31±0.85, CNR NLB 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26±3.16, SNR Standard 8.85±2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB DHV: 4 [3-4.75], Standard DHV: 2 [1.3-2.5], p<0.0001; NLB IQ: 4 [4-4], Standard IQ: 2 [2-3], p<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.

  13. Frequency selective non-linear blending to improve image quality in liver CT

    Energy Technology Data Exchange (ETDEWEB)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Radiology; Fritz, J. [Johns Hopkins University School of Medicine, Baltimore, MD (United States). Russell H. Morgan Dept. of Radiology and Radiological Science

    2016-12-15

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115 HU, delta of 25 HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR{sub Standard} 1.62±1.10, CNR{sub NLB} 3.6±2.94, p=0.0002) and portal veins (CNR{sub Standard} 1.31±0.85, CNR{sub NLB} 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR{sub NLB} 11.26±3.16, SNR{sub Standard} 8.85±2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB{sub DHV}: 4 [3-4.75], Standard{sub DHV}: 2 [1.3-2.5], p<0.0001; NLB{sub IQ}: 4 [4-4], Standard{sub IQ}: 2 [2-3], p<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.
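
    The two records above quote the tuned settings (center 115 HU, delta 25 HU, slope 5) but not the blending formula itself, so the sketch below is only a plausible toy version of frequency-selective non-linear blending: a low-pass copy of the image selects voxels whose smoothed HU value lies near the chosen center, and those voxels are blended toward a contrast-stretched version of the original. The sigmoid weighting, the synthetic liver slice, and the printed contrast figures are all assumptions for illustration, not the published algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nlb(image_hu, center=115.0, delta=25.0, slope=5.0, lp_sigma=2.0):
        """Toy frequency-selective non-linear blending.

        A low-pass copy of the image selects voxels whose (smoothed) HU value
        lies near `center`; those voxels are blended toward a contrast-stretched
        version so that differences around `center` are amplified by ~`slope`."""
        low_pass = gaussian_filter(image_hu, sigma=lp_sigma)
        # Sigmoid weight: ~1 inside center +/- delta, falling off outside.
        weight = 1.0 / (1.0 + np.exp((np.abs(low_pass - center) - delta) / (delta / 4.0)))
        stretched = center + slope * (image_hu - center)     # amplified local contrast
        return weight * stretched + (1.0 - weight) * image_hu

    # Synthetic late-phase slice: liver parenchyma ~105 HU, a vein ~125 HU.
    rng = np.random.default_rng(4)
    slice_hu = rng.normal(105.0, 8.0, size=(128, 128))
    slice_hu[60:68, 40:90] = rng.normal(125.0, 8.0, size=(8, 50))   # hepatic vein

    blended = nlb(slice_hu)
    vein, liver = blended[60:68, 40:90], blended[0:40, :]
    print("original vein-liver contrast :", 125.0 - 105.0, "HU")
    print("blended  vein-liver contrast :", round(vein.mean() - liver.mean(), 1), "HU")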

  14. Low SWaP multispectral sensors using dichroic filter arrays

    Science.gov (United States)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters1 into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4 band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and scalable production.

  15. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    Science.gov (United States)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking into account diffraction effects. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, in the near future, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to use Maxwell-Boltzmann based modeling to compute the propagation of light, and to use software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and both the memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
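
    FDTD advances the electric and magnetic fields in a leapfrog fashion on a staggered (Yee) grid. The abstract's simulations do this in 3-D with periodic boundaries and plane-wave sources, but the basic update structure is already visible in the one-dimensional vacuum sketch below, whose grid size, time step, and source are arbitrary.

    import numpy as np

    # Minimal 1-D FDTD in vacuum, normalized units (c = 1, dx = 1, Courant number 0.5).
    nx, nt, dt = 400, 300, 0.5
    ez = np.zeros(nx)           # electric field samples
    hy = np.zeros(nx - 1)       # magnetic field, staggered half a cell

    for n in range(nt):
        hy += dt * (ez[1:] - ez[:-1])          # update H from the curl of E
        ez[1:-1] += dt * (hy[1:] - hy[:-1])    # update E from the curl of H
        ez[200] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian-pulse source

    # The right-going pulse should now sit near cell 200 + (nt - 60) * dt = 320.
    peak = 200 + int(np.argmax(np.abs(ez[200:])))
    print("right-going pulse peak near cell:", peak)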

  16. Ultra-high-resolution photoelectronic digital radiographic imaging system for medicine

    International Nuclear Information System (INIS)

    Bamford, B.R.; Nudelman, S.; Quimette, D.R.; Ovitt, T.W.; Reisken, A.B.; Spackman, T.J.; Zaccheo, T.S.

    1989-01-01

    The authors report the development of a new type of digital radiographic imaging system for medicine. Unlike previous digital radiographic systems that could not match the spatial resolution of film-screen systems, this system has higher spatial resolution and wider dynamic range than film-screen-based systems. There are three components to the system: a microfocal spot x-ray tube, a camera consisting of a Tektronix TK-2048M 2048 x 2048 CCD image sensor in direct contact with a Kodak Min-R intensifying screen, and a Gould IP-9000 with 2048 x 2048 processing and display capabilities. The CCD image sensor is a large-area integrated circuit and is 55.3 mm x 55.3 mm. It has a linear dynamic range of 12 bits or 4,096 gray levels

  17. The influence of the oblique incident X-ray that affected the image quality of the X-ray CCD sensor

    International Nuclear Information System (INIS)

    Suzuki, Yosuke; Matsumoto, Nobue; Morita, Hiroshi; Ohkawa, Hiromitsu

    1998-01-01

    The influence of oblique incident X-rays on the image quality of an X-ray CCD sensor was examined and its correction was investigated. CDR was adopted in this study, and image quality was evaluated by measuring the MTF. Oblique projection was clinically permissible up to an oblique incidence angle of about 40 degrees, although it affects the magnification and density. Estimation of the oblique entrance direction and the oblique incidence angle was made possible by developing an oblique-incidence correction marker. When an oblique incidence angle of θ degrees is measured, a correction is possible by compressing the image by a factor of cos(θ) perpendicular to the rotational axis of the CCD sensor. There was only a small decline in MTF in images corrected for the influence of oblique incidence. Comparison of the digitally subtracted picture of the corrected oblique image with that of the normal image showed a close resemblance between the two, indicating that this correction method is reasonable. (author)
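
    The correction described above is a one-axis rescaling: once the oblique incidence angle θ is known, the acquired image is compressed by a factor of cos(θ) along the direction perpendicular to the sensor's rotation axis. The sketch below does this with nearest-neighbour resampling and assumes the stretch lies along the image columns; the angle and the synthetic test object are illustrative.

    import numpy as np

    def correct_oblique(image, theta_deg):
        """Compress the image by cos(theta) along axis 1 (nearest-neighbour)."""
        scale = np.cos(np.radians(theta_deg))
        h, w = image.shape
        new_w = max(1, int(round(w * scale)))
        # Map each output column back to the column it came from in the input.
        src_cols = np.clip((np.arange(new_w) / scale).round().astype(int), 0, w - 1)
        return image[:, src_cols]

    # A 40-degree oblique exposure stretches a square object by 1/cos(40 deg);
    # compressing by cos(40 deg) restores its aspect ratio (approximately).
    theta = 40.0
    square = np.zeros((100, 130))
    square[30:70, 30:82] = 1.0                     # 40 x 52 "stretched" object
    corrected = correct_oblique(square, theta)
    rows = np.any(corrected > 0, axis=1).sum()
    cols = np.any(corrected > 0, axis=0).sum()
    print(f"object size after correction: {rows} x {cols} pixels")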

  18. Effect of mandibular plane angle on image dimensions in linear tomography

    Directory of Open Access Journals (Sweden)

    Bashizadeh Fakhar H

    2011-02-01

    Full Text Available Background and Aims: Accurate bone measurements are essential for determining the optimal size and length of proposed implants. The radiologist should be aware of the effects of head position on image dimensions in each imaging technique. The purpose of this study was to evaluate the effect of mandibular plane angle on image dimensions in linear tomography. Materials and Methods: In this in vitro study, the vertical dimensions of linear tomograms taken from 3 dry mandibles in different posteroanterior or mediolateral tilts were compared with the actual dimensions. In order to evaluate the effects of head position in linear tomography, 16 series of images were taken while the mandibular plane was tilted 5, 10, 15, and 20 degrees in anterior, posterior, medial, or lateral angulations, as well as a series of standard images without any tilt of the mandible. Vertical distances between the alveolar crest and the superior border of the inferior alveolar canal were measured in the posterior mandible, and vertical distances between the alveolar crest and the inferior rim were measured in the anterior mandible, at 12 sites on the tomograms. Each bone was then sectioned through the places marked with a radiopaque object. The radiographic values were compared with the real conditions. Repeated measures ANOVA was used to analyze the data. Results: The findings of this study showed that there was a statistically significant difference between the standard position and a 15º posteroanterior tilt (P<0.001). There were also statistically significant differences between the standard position and a 10º lateral tilt (P<0.008), a 15º tilt (P<0.001), and a 20º upward tilt (P<0.001). In the standard mandibular position with no tilt, the mean exact error was the same in all regions (0.22±0.19 mm) except the premolar region, in which the mean exact error was calculated as 0.44±0.19 mm. The largest mean exact error among the various posteroanterior tilts was seen with a 20º lower tilt in the canine region (1±0.88 mm

  19. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Directory of Open Access Journals (Sweden)

    Joe-Air Jiang

    2008-03-01

    Full Text Available With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.

  20. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors; thus, significantly minimizing image blur caused by hand shaking.
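
    A lead-lag compensator is a first-order rational transfer function C(s) = K(s + z)/(s + p); for digital (e.g. FPGA) implementation it is typically discretized, for instance with Tustin's bilinear transform, into a cheap one-tap difference equation evaluated every sample. The sketch below shows that discretization and the resulting update loop; the gain, zero, pole, and sample rate are placeholder values, not the paper's tuned controller.

    import math

    def tustin_lead_lag(k, zero, pole, fs):
        """Discretize C(s) = k*(s + zero)/(s + pole) with the bilinear transform.

        Returns (b0, b1, a1) for the difference equation
            u[n] = b0*e[n] + b1*e[n-1] - a1*u[n-1]."""
        c = 2.0 * fs
        b0 = k * (c + zero) / (c + pole)
        b1 = k * (zero - c) / (c + pole)
        a1 = (pole - c) / (c + pole)
        return b0, b1, a1

    class LeadLag:
        def __init__(self, k, zero, pole, fs):
            self.b0, self.b1, self.a1 = tustin_lead_lag(k, zero, pole, fs)
            self.e_prev = 0.0
            self.u_prev = 0.0

        def step(self, error):
            u = self.b0 * error + self.b1 * self.e_prev - self.a1 * self.u_prev
            self.e_prev, self.u_prev = error, u
            return u

    # Placeholder tuning: phase lead around a few hundred Hz, 10 kHz control rate.
    ctrl = LeadLag(k=1.5, zero=2 * math.pi * 100, pole=2 * math.pi * 1000, fs=10_000)
    for n in range(5):
        error = 1.0                      # unit step in the lens-position error
        print(f"n={n}  VCM command = {ctrl.step(error):.3f}")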