Sample records for range image processing

  1. Image processing

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan; Blanken, Henk; Vries de, A.P.; Blok, H.E.; Feng, L.


    The field of image processing addresses handling and analysis of images for many purposes using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by

  2. ATCOM: accelerated image processing for terrestrial long-range imaging through atmospheric effects (United States)

    Curt, Petersen F.; Paolini, Aaron


    Long-range video surveillance performance is often severely diminished due to atmospheric turbulence. The larger apertures typically used for video-rate operation at long-range are particularly susceptible to scintillation and blurring effects that limit the overall diffraction efficiency and resolution. In this paper, we present research progress made toward a digital signal processing technique which aims to mitigate the effects of turbulence in real-time. Our previous work in this area focused on an embedded implementation for portable applications. Our more recent research has focused on functional enhancements to the same algorithm using general-purpose hardware. We present some techniques that were successfully employed to accelerate processing of high-definition color video streams and study performance under nonideal conditions involving moving objects and panning cameras. Finally, we compare the real-time performance of two implementations using a CPU and a GPU.


    Directory of Open Access Journals (Sweden)

    T. K. Kohoutek


    Full Text Available Unmanned Aerial Vehicles (UAVs) are more and more used in civil areas like geomatics. Autonomous navigated platforms have a great flexibility in flying and manoeuvring in complex environments to collect remote sensing data. In contrast to standard technologies such as manned aerial platforms (airplanes and helicopters), UAVs are able to fly closer to the object and in small-scale areas of high-risk situations such as landslides, volcano and earthquake areas and floodplains. Thus, UAVs are sometimes the only practical alternative in areas where access is difficult and where no manned aircraft is available or even no flight permission is given. Furthermore, compared to terrestrial platforms, UAVs are not limited to specific view directions and can overcome occlusions from trees, houses and terrain structures. Equipped with image sensors and/or laser scanners they are able to provide elevation models, rectified images, textured 3D-models and maps. In this paper we describe a UAV platform, which can carry a range imaging (RIM) camera including power supply and data storage for the detailed mapping and monitoring of complex structures, such as alpine riverbed areas. The UAV platform NEO from Swiss UAV was equipped with the RIM camera CamCube 2.0 by PMD Technologies GmbH to capture the surface structures. Its navigation system includes an autopilot. To validate the UAV trajectory a 360° prism was installed and tracked by a total station. Within the paper a workflow for the processing of UAV-RIM data is proposed, which is based on the processing of differential GNSS data in combination with the acquired range images. Subsequently, the obtained results for the trajectory are compared and verified with a track of a UAV (Falcon 8, Ascending Technologies) carried out with a total station simultaneously to the GNSS data acquisition. The results showed that the UAV's position using differential GNSS could be determined in the centimetre to the decimetre

  4. Long range image enhancement

    CSIR Research Space (South Africa)

    Duvenhage, B


    Full Text Available and Vision Computing, Auckland, New Zealand, 23-24 November 2015. Long Range Image Enhancement, Bernardt Duvenhage, Council for Scientific and Industrial Research, South Africa. Abstract: Turbulent pockets of air...

  5. Improvement of range spatial resolution of medical ultrasound imaging by element-domain signal processing (United States)

    Hasegawa, Hideyuki


    The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial width at half-maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).
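The subspace idea behind MUSIC can be sketched in its generic frequency-estimation form. This is not the paper's per-element echo processing; the snapshot length, search grid, and synthetic single-tone signal below are illustrative assumptions.

```python
import numpy as np

def music_spectrum(x, m=8, n_sources=1, freqs=None):
    """Generic MUSIC pseudospectrum for a 1-D complex signal."""
    if freqs is None:
        freqs = np.linspace(0.0, 0.5, 501)
    # Sample covariance matrix from overlapping length-m snapshots.
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snaps.T @ snaps.conj() / len(snaps)
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :m - n_sources]                 # noise subspace
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))  # steering vectors
    denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    return freqs, 1.0 / np.maximum(denom, 1e-12)

rng = np.random.default_rng(3)
n = np.arange(200)
x = np.exp(2j * np.pi * 0.12 * n) + 0.05 * rng.normal(size=200)
f, P = music_spectrum(x)
print(f[np.argmax(P)])  # pseudospectrum peaks near the true frequency 0.12
```

The peak sharpness, rather than the pulse length, sets the attainable resolution, which is why such subspace methods can beat the conventional envelope-based estimate.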

  6. [Research on the range of motion measurement system for spine based on LabVIEW image processing technology]. (United States)

    Li, Xiaofang; Deng, Linhong; Lu, Hu; He, Bin


    A measurement system based on image processing technology and developed in LabVIEW was designed to quickly obtain the range of motion (ROM) of the spine. The NI-Vision module was used to pre-process the original images and calculate the angles of marked needles in order to obtain ROM data. Six human cadaveric thoracic spine segments (T7-T10) were selected and subjected to six kinds of loads: left/right lateral bending, flexion, extension, and clockwise/counterclockwise torsion. The system was used to measure the ROM of segment T8-T9 under loads from 1 Nm to 5 Nm. The experimental results showed that the system is able to measure the ROM of the spine accurately and quickly, which provides a simple and reliable tool for spine biomechanics investigators.
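The angle calculation at the core of such a system reduces to the orientation of each marked needle between two detected endpoints. A minimal sketch, assuming hypothetical endpoint coordinates (the needle-detection step itself is not shown):

```python
import math

def needle_angle(p1, p2):
    """Angle (degrees) of a marked needle given two detected endpoint pixels."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

def range_of_motion(neutral_pts, loaded_pts):
    """ROM = change in needle angle between the neutral and loaded images."""
    return needle_angle(*loaded_pts) - needle_angle(*neutral_pts)

# Hypothetical endpoint coordinates (pixels), for illustration only.
rom = range_of_motion(((0, 0), (100, 0)),    # neutral: horizontal needle
                      ((0, 0), (100, 20)))   # loaded: tilted needle
print(round(rom, 2))  # 11.31 degrees of motion
```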

  7. Introduction to sensors for ranging and imaging

    CERN Document Server

    Brooker, Graham


    ""This comprehensive text-reference provides a solid background in active sensing technology. It is concerned with active sensing, starting with the basics of time-of-flight sensors (operational principles, components), and going through the derivation of the radar range equation and the detection of echo signals, both fundamental to the understanding of radar, sonar and lidar imaging. Several chapters cover signal propagation of both electromagnetic and acoustic energy, target characteristics, stealth, and clutter. The remainder of the book introduces the range measurement process, active ima

  8. Characteristics of different frequency ranges in scanning electron microscope images

    Energy Technology Data Exchange (ETDEWEB)

    Sim, K. S.; Nia, M. E.; Tan, T. L.; Tso, C. P.; Ee, C. S. [Faculty of Engineering and Technology, Multimedia University, 75450 Melaka (Malaysia)


    We demonstrate a new approach to characterize the frequency range in general scanning electron microscope (SEM) images. First, pure frequency images are generated from low frequency to high frequency, and then, the magnification of each type of frequency image is implemented. By comparing the edge percentage of the SEM image to the self-generated frequency images, we can define the frequency ranges of the SEM images. Characterization of frequency ranges of SEM images benefits further processing and analysis of those SEM images, such as in noise filtering and contrast enhancement.
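The edge-percentage comparison can be sketched as follows; the gradient operator, threshold, and sinusoidal "pure frequency" references are assumptions standing in for whatever edge detector and reference set the authors used.

```python
import numpy as np

def edge_percentage(img, thresh=0.1):
    """Fraction of pixels whose gradient magnitude exceeds an absolute threshold."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).mean()

# Self-generated "pure frequency" reference images: low/high-frequency sinusoids.
x = np.linspace(0, 2 * np.pi, 256)
low_ref = np.tile(np.sin(2 * x), (256, 1))    # 2 cycles across the image
high_ref = np.tile(np.sin(40 * x), (256, 1))  # 40 cycles across the image

# An SEM image would be assigned to the frequency range whose reference
# has the closest edge percentage.
p_low, p_high = edge_percentage(low_ref), edge_percentage(high_ref)
print(p_low < p_high)  # True: higher spatial frequency gives more edge pixels
```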


    Directory of Open Access Journals (Sweden)

    J. Reznicek


    Full Text Available This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity and target detecting. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of high-accuracy calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error LME, which characterizes the accuracy of the validated photogrammetric system. For higher reliability the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to assess the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.

  10. Robots and image processing (United States)

    Peterson, C. E.


    Developments in integrated circuit manufacture are discussed, with attention given to the current expectations of industrial automation. It is shown that the growing emphasis on image processing is a natural consequence of production requirements which have generated a small but significant range of vision applications. The state of the art in image processing is discussed, with the main research areas delineated. The main areas of application will be less in welding and diecasting than in assembly and machine tool loading, with vision becoming an ever more important facet of the installation. The two main approaches to processing images in a computer (depending on the aims of the project) are discussed. The first involves producing a system that does a specific task, the second is to achieve an understanding of some basic issues in object recognition.

  11. Image Processing Research (United States)


    Picture Processing," USCEE Report No. 530, 1974, pp. 11-19. 4.7 Spectral Sensitivity Estimation of a Color Image Scanner, Clanton E. Mancill and William... Projects: the improvement of image fidelity and presentation format; (3) Image Data Extraction Projects: the recognition of objects within pictures... representation; (5) Image Processing Systems Projects: the development of image processing hardware and software support systems. 14. Keywords: Image

  12. Hyperspectral image processing methods (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  13. Enhanced dynamic range x-ray imaging. (United States)

    Haidekker, Mark A; Morrison, Logan Dain-Kelley; Sharma, Ajay; Burke, Emily


    X-ray images can suffer from excess contrast. Often, image exposure is chosen to visually optimize the region of interest, but at the expense of over- and underexposed regions elsewhere in the image. When image values are interpreted quantitatively as projected absorption, both over- and underexposure leads to the loss of quantitative information. We propose to combine multiple exposures into a composite that uses only pixels from those exposures in which they are neither under- nor overexposed. The composite image is created in analogy to visible-light high dynamic range photography. We present the mathematical framework for the recovery of absorbance from such composite images and demonstrate the method with biological and non-biological samples. We also show with an aluminum step-wedge that accurate recovery of step thickness from the absorbance values is possible, thereby highlighting the quantitative nature of the presented method. Due to the higher amount of detail encoded in an enhanced dynamic range x-ray image, we expect that the number of retaken images can be reduced, and patient exposure overall reduced. We also envision that the method can improve dual energy absorptiometry and even computed tomography by reducing the number of low-exposure ("photon-starved") projections. Copyright © 2017 Elsevier Ltd. All rights reserved.
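The compositing idea can be sketched under a simple linear-detector assumption: each pixel's absorbance is averaged only over the exposures in which that pixel is neither under- nor overexposed. The exposure times, validity bounds, and step-wedge values below are illustrative, not the paper's.

```python
import numpy as np

def hdr_absorbance(images, times, lo=0.05, hi=0.95):
    """Per-pixel absorbance from the exposures where the pixel is well exposed.

    Assumes a linear detector: I_k = t_k * I0 * exp(-A), so with I0
    normalized to 1, A = -log(I_k / t_k).
    """
    acc = np.zeros(images[0].shape)
    cnt = np.zeros(images[0].shape)
    for img, t in zip(images, times):
        ok = (img > lo) & (img < hi)            # usable pixels in this exposure
        a = -np.log(np.clip(img, 1e-6, None) / t)
        acc[ok] += a[ok]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)

# Synthetic step wedge spanning more range than any single exposure covers.
A_true = np.repeat([0.5, 1.5, 3.0], 4)[None, :]
times = [0.5, 1.0, 4.0]                         # relative exposure times
imgs = [np.clip(t * np.exp(-A_true), 0, 1) for t in times]
A_rec = hdr_absorbance(imgs, times)
print(np.allclose(A_rec, A_true, atol=1e-3))  # True: all steps recovered
```

No single exposure here covers the thinnest and thickest steps at once, which is exactly the situation the composite resolves.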

  14. Image Processing Diagnostics: Emphysema (United States)

    McKenzie, Alex


    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.
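The skewness statistic the abstract relies on is the third standardized moment. A minimal sketch with synthetic Hounsfield-unit samples (the distributions below are invented for illustration; real parenchyma statistics differ):

```python
import numpy as np

def skewness(values):
    """Sample skewness: third standardized moment of the intensity distribution."""
    v = np.asarray(values, float)
    m, s = v.mean(), v.std()
    return ((v - m) ** 3).mean() / s ** 3

# Hypothetical voxel samples: a roughly symmetric "healthy" distribution versus
# one with an excess low-density tail, as emphysema adds darker (more negative)
# voxels to the lung histogram.
rng = np.random.default_rng(0)
healthy = rng.normal(-750, 50, 10_000)
emphysema = np.concatenate([healthy, rng.normal(-950, 20, 3_000)])

print(skewness(healthy))                        # near 0 for a normal curve
print(skewness(emphysema) < skewness(healthy))  # True: tail skews negative
```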

  15. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud


    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable

  16. Medical image processing

    CERN Document Server

    Dougherty, Geoff


    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  17. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin


    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have gone to direct digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academia. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  18. Range image segmentation for tree detection in forest scans

    Directory of Open Access Journals (Sweden)

    A. Bienert


    Full Text Available To make a tree-wise analysis inside a forest stand, the trees have to be identified. An interactive segmentation is often labour-intensive and time-consuming. Therefore, an automatic detection process using a range image is desirable. This paper presents a method for the segmentation of range images extracted from terrestrial laser scanner point clouds of forest stands. After range image generation the segmentation is carried out with a connectivity analysis using the differences of the range values as homogeneity criterion. Subsequently, the tree detection is performed interactively by analysing one horizontal image line. When this line passes an object of a specific width, the object is flagged as a potential tree. By using the edge points of a segmented pixel group the tree position and diameter are calculated. Results from one test site are presented to show the performance of the method.
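The connectivity analysis with a range-difference homogeneity criterion can be sketched as a flood fill; the 4-neighbourhood, threshold, and toy scene are assumptions, not the paper's parameters.

```python
import numpy as np
from collections import deque

def segment_range_image(rng_img, max_diff=0.5):
    """Connectivity analysis: 4-neighbours join a segment when their
    range difference stays below the homogeneity threshold."""
    rows, cols = rng_img.shape
    labels = np.zeros(rng_img.shape, dtype=int)
    nxt = 0
    for seed in zip(*np.nonzero(labels == 0)):   # every pixel, in scan order
        if labels[seed]:
            continue
        nxt += 1
        labels[seed] = nxt
        q = deque([seed])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < rows and 0 <= nc < cols and not labels[nr, nc]
                        and abs(rng_img[nr, nc] - rng_img[r, c]) <= max_diff):
                    labels[nr, nc] = nxt
                    q.append((nr, nc))
    return labels

# Toy scene: two near objects ("trees" at ~2 m) in front of a far background.
R = np.full((6, 8), 10.0)
R[1:4, 1:3] = 2.0
R[2:5, 5:7] = 2.2
labs = segment_range_image(R)
print(len(np.unique(labs)))  # 3 segments: background plus two trees
```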

  19. High Dynamic Range Digital Imaging of Spacecraft (United States)

    Karr, Brian A.; Chalmers, Alan; Debattista, Kurt


    The ability to capture engineering imagery with a wide dynamic range during rocket launches is critical for post-launch processing and analysis [USC03, NNC86]. Rocket launches often present an extreme range of lightness, particularly during night launches. Night launches present a two-fold problem: capturing detail of the vehicle and scene that is masked by darkness, while also capturing detail in the engine plume.

  20. The image processing handbook

    CERN Document Server

    Russ, John C


    Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? · A new chapter on the human visual process that explains which visual cues elicit a response from the vie

  1. Image processing occupancy sensor (United States)

    Brackney, Larry J.


    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  2. High Resolution, Range/Range-Rate Imager Project (United States)

    National Aeronautics and Space Administration — Visidyne proposes to develop a design for a small, lightweight, high-resolution (in x, y, and z) Doppler imager to assist in the guidance, navigation and control...

  3. Onboard image processing (United States)

    Martin, D. R.; Samulon, A. S.


    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.
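The correlation step that updates the distortion estimate can be illustrated with phase correlation, a standard FFT-based technique for recovering a translation between a frame and a reference subimage. This is a generic stand-in, not the paper's Kalman-filter pipeline, and it recovers only integer-pixel shifts.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Translation of img relative to ref via normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F2 * np.conj(F1)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative offsets.
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, (3, -5), axis=(0, 1))   # known circular shift for testing
print(phase_correlation_shift(ref, img))   # (3, -5)
```

Subpixel accuracy, as claimed in the abstract, would additionally require interpolating around the correlation peak.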

  4. Geology And Image Processing (United States)

    Daily, Mike


    The design of digital image processing systems for geological applications will be driven by the nature and complexity of the intended use, by the types and quantities of data, and by systems considerations. Image processing will be integrated with geographic information systems (GIS) and data base management systems (DBMS). Dense multiband data sets from radar and multispectral scanners (MSS) will tax memory, bus, and processor architectures. Array processors and dedicated-function chips (VLSI/VHSIC) will allow the routine use of FFT and classification algorithms. As this geoprocessing capability becomes available to a larger segment of the geological community, user friendliness and smooth interaction will become a major concern.

  5. Unsynchronized scanning with a low-cost laser range finder for real-time range imaging (United States)

    Hatipoglu, Isa; Nakhmani, Arie


    Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, reverse engineering. One of the most popular range-measuring technologies is laser scanner due to its several advantages: long range, high precision, real-time measurement capabilities, and no dependence on lighting conditions. However, laser scanners are very costly. Their high cost prevents widespread use in applications. Due to the latest developments in technology, now, low-cost, reliable, faster, and light-weight 1D laser range finders (LRFs) are available. A low-cost 1D LRF with a scanning mechanism, providing the ability of laser beam steering for additional dimensions, enables to capture a depth map. In this work, we present an unsynchronized scanning with a low-cost LRF to decrease scanning period and reduce vibrations caused by stop-scan in synchronized scanning. Moreover, we developed an algorithm for alignment of unsynchronized raw data and proposed range image post-processing framework. The proposed technique enables to have a range imaging system for a fraction of the price of its counterparts. The results prove that the proposed method can fulfill the need for a low-cost laser scanning for range imaging for static environments because the most significant limitation of the method is the scanning period which is about 2 minutes for 55,000 range points (resolution of 250x220 image). In contrast, scanning the same image takes around 4 minutes in synchronized scanning. Once faster, longer range, and narrow beam LRFs are available, the methods proposed in this work can produce better results.

  6. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo


    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  7. Disocclusion of 3d LIDAR Point Clouds Using Range Images (United States)

    Biasutti, P.; Aujol, J.-F.; Brédif, M.; Bugeau, A.


    This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.
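The first step, turning a point cloud into a 2D range image, can be sketched with a generic spherical projection (rows as elevation bins, columns as azimuth bins). The bin geometry here is an assumption, not the sensor-topology layout the paper exploits.

```python
import numpy as np

def pointcloud_to_range_image(xyz, n_rows=32, n_cols=360):
    """Project an (N, 3) point cloud onto a 2D range image: rows index
    elevation bins, columns index azimuth bins; nearest return wins."""
    x, y, z = xyz.T
    r = np.linalg.norm(xyz, axis=1)
    az = np.arctan2(y, x)                                 # [-pi, pi)
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))
    cols = ((az + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    el_min, el_max = el.min(), el.max()
    rows = ((el - el_min) / (el_max - el_min + 1e-9) * (n_rows - 1)).astype(int)
    img = np.full((n_rows, n_cols), np.inf)               # inf = no return
    np.minimum.at(img, (rows, cols), r)                   # keep nearest range
    return img

# Synthetic cloud: a blob of points roughly 10 m in front of the sensor.
rng = np.random.default_rng(2)
pts = rng.normal(size=(5000, 3)) + np.array([10.0, 0.0, 0.0])
img = pointcloud_to_range_image(pts)
print(img.shape)  # (32, 360)
```

Unprojecting back to 3D, as in the paper's final step, just inverts the same binning.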


    Directory of Open Access Journals (Sweden)

    P. Biasutti


    Full Text Available This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor’s topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.

  9. Introduction to computer image processing (United States)

    Moik, J. G.


    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  10. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)


    In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. As the results show, 2D image quality did not depend much on the number of sampling images, but rather on how well efficient RGI images were extracted. However, the number of RGI images was important for making a range image, because range image quality is proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination lightening through disturbance airborne particles. One of these powerful active vision systems is a range-gated imaging system. A vision system based on range-gated imaging can acquire image data in raining or smoking environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique providing 2D and 3D images is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time so that only the illumination light is captured. Here, the illuminant illuminates objects by flashing strong light through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.
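How a range image falls out of summed time-sliced images can be sketched with a centroid-over-gates estimate: the intensity-weighted gate delay gives a per-pixel time of flight, and R = c·t/2. This is a generic illustration of the principle, not KAERI's exact reconstruction scheme; the gate delays and synthetic stack are assumptions.

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def range_from_gated_stack(stack, gate_delays):
    """Per-pixel range from a stack of range-gated images: the centroid of
    intensity over gate delays estimates the echo's time of flight."""
    stack = np.asarray(stack, float)
    t = np.asarray(gate_delays)[:, None, None]
    tof = (stack * t).sum(axis=0) / np.maximum(stack.sum(axis=0), 1e-12)
    return C * tof / 2

# Synthetic target at 75 m: echo energy split between the 0.4 us and 0.6 us gates.
delays = np.array([0.2e-6, 0.4e-6, 0.6e-6, 0.8e-6])
stack = np.zeros((4, 2, 2))
stack[1] = stack[2] = 1.0            # equal energy in the two middle gates
r = range_from_gated_stack(stack, delays)[0, 0]
print(round(r, 6))  # 75.0 metres
```

The centroid over many gates is also why range quality grows with the number of RGI images, as the abstract reports.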

  11. Introduction to digital image processing

    CERN Document Server

    Pratt, William K


    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization (Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization); Psychophysical Vision Properties (Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model); Photometry and Colorimetry (Photometry; Color Matching; Colorimetry Concepts; Color Spaces). DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction (Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems); Image Quantization (Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization). DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization (Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  12. A Review on Image Processing


    Amandeep Kour; Vimal Kishore Yadav; Vikas Maheshwari; Deepak Prashar


    Image Processing includes changing the nature of an image in order to improve its pictorial information for human interpretation or for autonomous machine perception. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Interest in digital image processing methods stems from...

  13. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt


    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage,

  14. scikit-image: image processing in Python. (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony


    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage,

  15. ISAR imaging using the instantaneous range instantaneous Doppler method

    CSIR Research Space (South Africa)

    Wazna, TM


    Full Text Available In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  16. Smart Image Enhancement Process (United States)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)


    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
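    The decision cascade described above can be sketched in a few lines. The contrast and lightness measures and the enhancement step below are simplified, hypothetical stand-ins (global standard deviation, global mean, and a contrast stretch), not the patented operators; thresholds are illustrative.

```python
# Sketch of the patent's decision cascade with simplified stand-in measures.
# The actual patented measures and enhancement operators are not reproduced.

def contrast(img):          # global std-dev as a crude contrast proxy
    n = len(img)
    m = sum(img) / n
    return (sum((p - m) ** 2 for p in img) / n) ** 0.5

def lightness(img):         # mean intensity as a lightness proxy
    return sum(img) / len(img)

def stretch(img):           # toy "enhancement": full-range contrast stretch
    lo, hi = min(img), max(img)
    span = (hi - lo) or 1
    return [255 * (p - lo) / span for p in img]

def enhance(img, turbid_thresh=20.0, good_score=0.5):
    """Follow the cascade's control flow: classify, enhance, re-score."""
    if contrast(img) < turbid_thresh:          # classified as "turbid"
        img = stretch(img)
    score = min(contrast(img) / 64.0, lightness(img) / 128.0)
    if score < good_score:                     # poor contrast/lightness
        img = stretch(img)
    return img

flat = [100, 102, 101, 103, 100, 102]          # low-contrast strip
out = enhance(flat)
print(min(out), max(out))                      # stretched to full range
```

    A sharpening stage would follow the same pattern: measure, branch, enhance only when the measure is poor.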

  17. Image processing and recognition for biological images. (United States)

    Uchida, Seiichi


    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although this paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools for handling those tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and it also constitutes a large research area. This paper overviews its two main modules: the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
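    As a concrete instance of one listed task, binarization, here is a pure-Python sketch of Otsu's classic threshold selection (a standard method, not taken from the reviewed paper); the toy histogram values are hypothetical.

```python
# Otsu's method: pick the threshold that maximizes between-class variance
# of the grayscale histogram, a standard automatic binarization technique.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0
    for t in range(levels):
        w_bg += hist[t]                  # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark blob around 40-45, bright background around 200-210.
pixels = [40] * 50 + [45] * 30 + [200] * 60 + [210] * 40
t = otsu_threshold(pixels)
print(t)   # threshold separating the two modes
```

    Pixels at or below the returned threshold would then be labeled as one class and the rest as the other.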

  18. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier


    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  19. Stochastic processes and long range dependence

    CERN Document Server

    Samorodnitsky, Gennady


    This monograph is a gateway for researchers and graduate students to explore the profound, yet subtle, world of long-range dependence (also known as long memory). The text is organized around the probabilistic properties of stationary processes that are important for determining the presence or absence of long memory. The first few chapters serve as an overview of the general theory of stochastic processes which gives the reader sufficient background, language, and models for the subsequent discussion of long memory. The later chapters devoted to long memory begin with an introduction to the subject along with a brief history of its development, followed by a presentation of what is currently the best known approach, applicable to stationary processes with a finite second moment. The book concludes with a chapter devoted to the author’s own, less standard, point of view of long memory as a phase transition, and even includes some novel results. Most of the material in the book has not previously been publis...

  20. Scannerless laser range imaging using loss modulation

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V [Albuquerque, NM


    A scannerless 3-D imaging apparatus is disclosed which utilizes an amplitude modulated cw light source to illuminate a field of view containing a target of interest. Backscattered light from the target is passed through one or more loss modulators which are modulated at the same frequency as the light source, but with a phase delay δ which can be fixed or variable. The backscattered light is demodulated by the loss modulator and detected with a CCD, CMOS or focal plane array (FPA) detector to construct a 3-D image of the target. The scannerless 3-D imaging apparatus, which can operate in the eye-safe wavelength region 1.4-1.7 μm and which can be constructed as a flash LADAR, has applications for vehicle collision avoidance, autonomous rendezvous and docking, robotic vision, industrial inspection and measurement, 3-D cameras, and facial recognition.
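    The patent text gives no equations, but the standard relation between the demodulated phase delay and range for amplitude-modulated cw ranging can be sketched as follows (the 10 MHz modulation frequency is only an example):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(delta_rad, f_mod_hz):
    """Range from the demodulated phase delay of an AM cw source.
    Light travels out and back, hence 4*pi rather than 2*pi."""
    return C * delta_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Phase wraps at 2*pi, limiting the unambiguous range to c/(2f)."""
    return C / (2.0 * f_mod_hz)

# A 10 MHz modulation gives ~15 m of unambiguous range; a target at
# half that distance returns a phase delay of pi.
print(unambiguous_range(10e6))        # ≈ 14.99 m
print(phase_to_range(math.pi, 10e6))  # ≈ 7.49 m
```

    Lower modulation frequencies extend the unambiguous range at the cost of range resolution, which is the usual trade-off in such systems.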

  1. Recent developments in digital image processing at the Image Processing Laboratory of JPL. (United States)

    O'Handley, D. A.


    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  2. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R


    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  3. 3D imaging without range information (United States)

    Rogers, J. D.; Myatt, D. R.


    Three-dimensional (3D) imaging technologies have considerable potential for aiding military operations in areas such as reconnaissance, mission planning and situational awareness through improved visualisation and user-interaction. This paper describes the development of fast 3D imaging capabilities from low-cost, passive sensors. The two systems discussed here are capable of passive depth perception and recovering 3D structure from a single electro-optic sensor attached to an aerial vehicle that is, for example, circling a target. Based on this example, the proposed method has been shown to produce high quality results when positional data of the sensor is known, and also in the more challenging case when the sensor geometry must be estimated from the input imagery alone. The methods described exploit prior knowledge concerning the type of sensor that is used to produce a more robust output.

  4. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres


    Full Text Available In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time for the range camera so that the depth information is determined precisely.

  5. Eye Redness Image Processing Techniques (United States)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf


    The use of photographs for the assessment of ocular conditions has been suggested to further standardize clinical procedures. The selection of the photographs to be used as scale reference images was subjective. Numerous methods have been proposed to assign eye redness scores by computational means. Image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines image processing in general and the implementation of image segmentation for eye redness assessment.

  6. Building accurate geometric models from abundant range imaging information (United States)

    Diegert, Carl F.; Sackos, John T.; Nellums, Robert O.


    We define two simple metrics for accuracy of models built from range imaging information. We apply the metric to a model built from a recent range image taken at the laser radar Development and Evaluation Facility, Eglin AFB, using a scannerless range imager (SRI) from Sandia National Laboratories. We also present graphical displays of the residual information produced as a byproduct of this measurement, and discuss mechanisms that these data suggest for further improvement in the performance of this already impressive SRI.

  7. Cooperative processes in image segmentation (United States)

    Davis, L. S.


    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  8. Pile volume measurement by range imaging camera in indoor environment

    Directory of Open Access Journals (Sweden)

    C. Altuntas


    Full Text Available The range imaging (RIM) camera is a recent technology for 3D location measurement. New study areas have emerged in measurement and data processing together with the RIM camera. It offers a low-cost, fast measurement technique compared to current techniques, but its measurement accuracy varies with effects arising from the device and the environment. Direct sunlight affects the measurement accuracy of the camera; thus, the RIM camera should be used for indoor measurement. In this study, the volume of a gravel pile was measured with a SwissRanger SR4000 camera. The measured volume differed from the known volume by 8.13%.
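    A minimal sketch of a volume estimate from such a range image, assuming a nadir-looking camera, a flat floor at known distance, and a known per-pixel ground cell area (all values below are hypothetical, not from the study):

```python
# Volume from a down-looking range image: height of material in each cell
# is floor distance minus measured range; sum height x cell area.

def pile_volume(range_img, floor_dist_m, cell_area_m2):
    vol = 0.0
    for row in range_img:
        for r in row:
            h = floor_dist_m - r          # material height in this cell
            if h > 0:
                vol += h * cell_area_m2
    return vol

ranges = [[3.0, 2.5, 3.0],
          [2.5, 2.0, 2.5],
          [3.0, 2.5, 3.0]]               # metres from camera to surface
print(pile_volume(ranges, floor_dist_m=3.0, cell_area_m2=0.01))  # 0.03 m^3
```

    In practice the per-pixel cell area varies with range and lens geometry, which is one source of the accuracy effects the abstract mentions.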

  9. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan


    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  10. Dynamic range compression and detail enhancement algorithm for infrared image. (United States)

    Sun, Gang; Liu, Songlin; Wang, Weihua; Chen, Zengping


    For infrared imaging systems whose high sampling bit width must be matched to traditional display devices or real-time processing systems with 8-bit data width, this paper presents a new high dynamic range compression and detail enhancement (DRCDDE) algorithm for infrared images. First, a bilateral filter is adopted to separate the original image into two parts: the base component, which contains large-scale signal variations, and the detail component, which contains high-frequency information. Then, the operator model for DRC with local-contrast preservation is established, along with a newly proposed nonlinear intensity transfer function (ITF), to implement adaptive DRC of the base component. For the detail component, depending on the local statistical characteristics, suitable intensity-level extension criteria are set up to enhance low-contrast details and suppress noise. Finally, the results for the two components are recombined with a weighted coefficient. Experimental results on real infrared data, and quantitative comparison with other well-established methods, show the better performance of the proposed algorithm. Furthermore, the technique can effectively bring out a dim target while suppressing noise, which is beneficial to image display and target detection.
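    The base/detail pipeline can be sketched on a 1-D scan line. A moving-average filter stands in for the paper's bilateral filter, a log curve stands in for its nonlinear ITF, and the gains are illustrative, not the published DRCDDE parameters:

```python
import math

# Base/detail decomposition and recombination on a 14-bit 1-D scan line.

def box_filter(signal, radius=2):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def compress_base(base, in_max=16383.0, out_max=255.0):
    # Log ITF: maps the 14-bit base component into 8 bits.
    return [out_max * math.log1p(b) / math.log1p(in_max) for b in base]

def drc_dde(signal, detail_gain=0.01, w=1.0):
    base = box_filter(signal)                      # large-scale variations
    detail = [s - b for s, b in zip(signal, base)] # high-frequency residue
    return [c + w * detail_gain * d
            for c, d in zip(compress_base(base), detail)]

scan = [100, 110, 90, 8000, 8100, 7900, 16000, 16300, 15800]
out = drc_dde(scan)
print(max(out))   # compressed close to the 8-bit range
```

    The edge-preserving property of the real bilateral filter is what keeps halo artifacts down at strong edges; the box filter here is only for brevity.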

  11. Industrial Applications of Image Processing (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela


    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then a survey of image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are given. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  12. Target Image Matching Algorithm Based on Binocular CCD Ranging

    Directory of Open Access Journals (Sweden)

    Dongming Li


    Full Text Available This paper proposes a subpixel-level target image matching algorithm for binocular CCD ranging, based on the principle of binocular CCD ranging. First, we introduce the ranging principle of the binocular ranging system and deduce a binocular parallax formula. Second, we derive an improved cross-correlation matching algorithm combined with a cubic surface fitting algorithm for matching target images, which achieves subpixel-level matching for binocular CCD ranging images. Finally, we analyze and verify actual CCD ranging images experimentally, analyze the errors of the experimental results, and correct the formula for calculating system errors. Experimental results showed that the actual measurement accuracy for a target within 3 km was better than 0.52%, which meets the accuracy requirements of high-precision binocular ranging.
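    The parallax relation and a subpixel peak refinement can be sketched as follows. The three-point parabola fit is a common 1-D stand-in for the paper's cubic surface fitting, and every numeric value below is hypothetical:

```python
# Binocular parallax: depth Z = f * B / d, with focal length f (pixels),
# baseline B (metres), and disparity d (pixels).

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def subpixel_peak(c_left, c_peak, c_right):
    """Offset in (-0.5, 0.5) of the true correlation peak around the
    integer maximum, from a parabola through three samples."""
    denom = c_left - 2.0 * c_peak + c_right
    return 0.0 if denom == 0 else 0.5 * (c_left - c_right) / denom

# Correlation scores sampled at integer disparities 41, 42, 43 px:
offset = subpixel_peak(0.90, 0.98, 0.94)
d = 42 + offset
print(round(d, 3), round(depth_from_disparity(1500, 0.3, d), 2))
# ≈ 42.167 px disparity, ≈ 10.67 m depth
```

    The cubic surface fit in the paper plays the same role in two dimensions: it interpolates the correlation surface so the match location is not quantized to whole pixels.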

  13. Real-time extended dynamic range imaging in shearography. (United States)

    Groves, Roger M; Pedrini, Giancarlo; Osten, Wolfgang


    Extended dynamic range (EDR) imaging is a postprocessing technique commonly associated with photography. Multiple images of a scene are recorded by the camera using different shutter settings and are merged into a single higher dynamic range image. Speckle interferometry and holography techniques require a well-modulated intensity signal to extract the phase information, and of these techniques shearography is most sensitive to different object surface reflectivities as it uses self-referencing from a sheared image. In this paper the authors demonstrate real-time EDR imaging in shearography and present experimental results from a difficult surface reflectivity sample: a wooden panel painting containing gold and dark earth color paint.

  14. [Imaging center - optimization of the imaging process]. (United States)

    Busch, H-P


    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Statistical Image Processing. (United States)


    Keywords: spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  16. Building country image process

    Directory of Open Access Journals (Sweden)

    Zubović Jovan


    Full Text Available The same branding principles are used for countries as for products; only the methods differ. Countries compete among themselves in tourism, foreign investment and exports. A country's turnover stands at the level of its reputation. Countries that start out unknown or with a bad image face limits on their operations or are marginalized; as a result, they end up at the bottom of the international influence scale. On the other hand, countries with a good image, like Germany (despite two world wars), have their products covered with a special "aura".

  17. Image Processing and Geographic Information (United States)

    McLeod, Ronald G.; Daily, Julie; Kiss, Kenneth


    A Geographic Information System, which is a product of System Development Corporation's Image Processing System and a commercially available Data Base Management System, is described. The architecture of the system allows raster (image) data type, graphics data type, and tabular data type input and provides for the convenient analysis and display of spatial information. A variety of functions are supported through the Geographic Information System including ingestion of foreign data formats, image polygon encoding, image overlay, image tabulation, costatistical modelling of image and tabular information, and tabular to image conversion. The report generator in the DBMS is utilized to prepare quantitative tabular output extracted from spatially referenced images. An application of the Geographic Information System to a variety of data sources and types is highlighted. The application utilizes sensor image data, graphically encoded map information available from government sources, and statistical tables.

  18. SWNT Imaging Using Multispectral Image Processing (United States)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.


    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
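    The channel separation step can be sketched in plain Python for an assumed RGGB mosaic layout (the actual work used OpenCV in C++, and the sensor's layout may differ):

```python
# Split a raw Bayer mosaic (assumed RGGB layout) into its three
# pseudo-color channels: each channel keeps only the pixel sites where
# its filter color is actually sampled.

def split_bayer_rggb(raw):
    h, w = len(raw), len(raw[0])
    red   = [raw[y][x] for y in range(0, h, 2) for x in range(0, w, 2)]
    green = [raw[y][x] for y in range(h) for x in range(w)
             if (y + x) % 2 == 1]          # two green sites per 2x2 cell
    blue  = [raw[y][x] for y in range(1, h, 2) for x in range(1, w, 2)]
    return red, green, blue

raw = [[10, 20, 11, 21],
       [30, 40, 31, 41],
       [12, 22, 13, 23],
       [32, 42, 33, 43]]
r, g, b = split_bayer_rggb(raw)
print(r)   # [10, 11, 12, 13]
print(b)   # [40, 41, 42, 43]
```

    With the per-channel spectral response calibrated beforehand, the three sample sets can then be combined into coarse spectral estimates, as the abstract describes.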

  19. Joint focus stacking and high dynamic range imaging (United States)

    Qian, Qinchun; Gunturk, Bahadir K.; Batur, Aziz U.


    Focus stacking and high dynamic range (HDR) imaging are two paradigms of computational photography. Focus stacking aims to produce an image with greater depth of field (DOF) from a set of images taken with different focus distances, whereas HDR imaging aims to produce an image with higher dynamic range from a set of images taken with different exposure settings. In this paper, we present an algorithm which combines focus stacking and HDR imaging in order to produce an image with both higher dynamic range and greater DOF than any of the input images. The proposed algorithm includes two main parts: (i) joint photometric and geometric registration and (ii) joint focus stacking and HDR image creation. In the first part, images are first photometrically registered using an algorithm that is insensitive to small geometric variations, and then geometrically registered using an optical flow algorithm. In the second part, images are merged through weighted averaging, where the weights depend on both local sharpness and exposure information. We provide experimental results with real data to illustrate the algorithm. The algorithm is also implemented on a smartphone with Android operating system.
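    The merge step can be sketched per pixel: weight each aligned input by local sharpness times well-exposedness, then take the weighted average. The weighting functions below are common choices for this kind of fusion, not necessarily the authors' exact ones, and the two tiny input images are synthetic:

```python
import math

# Per-pixel weighted-average fusion of aligned exposures.

def sharpness(img, y, x):
    """Absolute Laplacian response as a local sharpness measure."""
    return abs(4 * img[y][x] - img[y-1][x] - img[y+1][x]
               - img[y][x-1] - img[y][x+1]) + 1e-6

def well_exposed(v, mid=0.5, sigma=0.2):
    """Gaussian preference for mid-range intensities (v in 0..1)."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def merge(images):
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):              # borders skipped for brevity
        for x in range(1, w - 1):
            weights = [sharpness(im, y, x) * well_exposed(im[y][x])
                       for im in images]
            total = sum(weights)
            out[y][x] = sum(wt * im[y][x]
                            for wt, im in zip(weights, images)) / total
    return out

blurry_dark = [[0.05] * 3 for _ in range(3)]             # flat, underexposed
sharp_mid = [[0.4, 0.5, 0.4], [0.5, 0.9, 0.5], [0.4, 0.5, 0.4]]
fused = merge([blurry_dark, sharp_mid])
print(fused[1][1])   # dominated by the sharp, well-exposed input
```

    The photometric and geometric registration stages described in the paper must run first; this fusion only makes sense on aligned, radiometrically consistent inputs.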


    Directory of Open Access Journals (Sweden)

    Anand Deshpande


    Full Text Available Iris segmentation plays a major role in an iris recognition system in increasing the performance of the system. This paper proposes a novel method for segmenting iris images to extract the iris part of a long-range captured eye image, and an approach to select the best iris frame from iris polar image sequences by analyzing the quality of the iris polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long-range captured iris image sequences. The proposed segmentation method is compared with Hough-transform-based segmentation, and it has been determined that the proposed method gives higher segmentation accuracy than the Hough transform.


    Directory of Open Access Journals (Sweden)

    Preuss Ryszard


    Full Text Available This article discusses the current capabilities of automated processing of image data on the example of the PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or more often on UAVs) are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system, or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as orthomosaics, DSMs or DTMs and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics for both images obtained by a metric Vexell camera and a block of images acquired by a non-metric UAV system.

  2. Image processing for optical mapping. (United States)

    Ravindran, Prabu; Gupta, Aditya


    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  3. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan


    Introduction to Digital Signal and Image Processing: Signals and Biomedical Signal Processing; Introduction and Overview; What Is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  4. Range image registration using a photometric metric under unknown lighting. (United States)

    Thomas, Diego; Sugimoto, Akihiro


    Based on the spherical harmonics representation of image formation, we derive a new photometric metric for evaluating the correctness of a given rigid transformation aligning two overlapping range images captured under unknown, distant, and general illumination. We estimate the surrounding illumination and albedo values of points of the two range images from the point correspondences induced by the input transformation. We then synthesize the color of both range images using albedo values transferred using the point correspondences to compute the photometric reprojection error. This way allows us to accurately register two range images by finding the transformation that minimizes the photometric reprojection error. We also propose a practical method using the proposed photometric metric to register pairs of range images devoid of salient geometric features, captured under unknown lighting. Our method uses a hypothesize-and-test strategy to search for the transformation that minimizes our photometric metric. Transformation candidates are efficiently generated by employing the spherical representation of each range image. Experimental results using both synthetic and real data demonstrate the usefulness of the proposed metric.

  5. Applications of Digital Image Processing 11 (United States)

    Cho, Y. -C.


    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.

  6. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)


    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, Range-Gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualization in invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been developed for target recognition and for harsh environments such as fog and underwater vision. Also, this technology has been

  7. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe


    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we place the emphasis on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  8. Range imager performance comparison in homodyne and heterodyne operating modes (United States)

    Conroy, Richard M.; Dorrington, Adrian A.; Künnemeyer, Rainer; Cree, Michael J.


    Range imaging cameras measure depth simultaneously for every pixel in a given field of view. In most implementations the basic operating principles are the same: a scene is illuminated with an intensity-modulated light source and the reflected signal is sampled using a gain-modulated imager. Previously we presented a unique heterodyne range imaging system that employed a bulky and power-hungry image intensifier as the high-speed gain-modulation mechanism. In this paper we present a new range imager using an internally modulated image sensor that is designed to operate in heterodyne mode, but can also operate in homodyne mode. We discuss homodyne and heterodyne range imaging, and the merits of the various types of hardware used to implement these systems. Following this we describe in detail the hardware and firmware components of our new ranger. We experimentally compare the two operating modes and demonstrate that heterodyne operation is less sensitive to some of the limitations of homodyne mode, resulting in better linearity and ranging precision. We conclude by showing various qualitative examples that demonstrate the system's three-dimensional measurement performance.
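The phase-based distance recovery that such cameras perform can be sketched with the standard four-bucket demodulation algorithm. This is a generic illustration under assumed modulation parameters, not the specific firmware described in the record:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def homodyne_range(samples, f_mod):
    """Recover distance from four correlation samples taken at 0, 90,
    180 and 270 degree phase offsets (classic 4-bucket homodyne
    demodulation). One full phase cycle spans half the modulation
    wavelength, hence the 4*pi in the denominator."""
    a0, a1, a2, a3 = samples
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Simulate a target at 2.5 m observed with 20 MHz modulation.
f_mod = 20e6
phase_true = 4 * math.pi * f_mod * 2.5 / C
samples = [math.cos(k * math.pi / 2 - phase_true) for k in range(4)]
print(homodyne_range(samples, f_mod))  # ~2.5 m
```

Heterodyne operation replaces the fixed phase offsets with a slow beat between the illumination and gain modulation, but the same arctangent recovery applies over each beat cycle.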


    Directory of Open Access Journals (Sweden)

    B. Jutzi


    Full Text Available Obtaining a 3D description of man-made and natural environments is a basic task in Computer Vision and Remote Sensing. To this end, laser scanning is currently one of the dominating techniques to gather reliable 3D information. The scanning principle inherently needs a certain time interval to acquire the 3D point cloud. On the other hand, new active sensors provide the possibility of capturing range information by images with a single measurement. With this new technique image-based active ranging is possible, which allows capturing dynamic scenes, e.g. walking pedestrians in a yard or moving vehicles. Unfortunately most of these range imaging sensors have strong technical limitations and are not yet sufficient for airborne data acquisition. It can be seen from the recent development of highly specialized (far-range imaging sensors – so called flash-light lasers – that most of the limitations could be alleviated soon, so that future systems will be equipped with improved image size and potentially expanded operating range. The presented work is a first step towards the development of methods capable of applying range images in outdoor environments. To this end, an experimental setup was established to investigate these possibilities. With the experimental setup a measurement campaign was carried out, and first results are presented within this paper.

  10. Color Sensitivity Multiple Exposure Fusion using High Dynamic Range Image

    Directory of Open Access Journals (Sweden)

    Varsha Borole


    Full Text Available In this paper, we present a high dynamic range imaging (HDRI) method that works from a captured camera image together with normally exposed, over-exposed and under-exposed versions of it. We generate the three differently exposed images from a single input image using local histogram stretching. Because the proposed method generates the three histogram-stretched images from a single input image, ghost artifacts, which result from relative motion between the camera and objects during the exposure time, are inherently removed. Therefore, the proposed method can be applied in consumer compact cameras to provide ghost-artifact-free HDRI. Experiments with several sets of test images with different exposures show that the proposed method gives better performance than existing methods in terms of visual results and computation time.
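The idea of synthesizing differently exposed images from one capture and fusing them can be sketched as follows. This toy version uses a global linear stretch and a simple well-exposedness weighting; the record's method uses local histogram stretching, and all numeric choices below are illustrative:

```python
def stretch(img, lo, hi):
    """Linearly map [lo, hi] to [0, 255], clipping values outside the
    range (a global stand-in for local histogram stretching)."""
    return [min(255, max(0, round(255 * (p - lo) / (hi - lo)))) for p in img]

def fuse(exposures):
    """Fuse pixel-aligned exposures: weight each sample by how close
    it is to mid-gray (128), then take the weighted average."""
    out = []
    for pix in zip(*exposures):
        w = [1.0 / (1.0 + abs(p - 128)) for p in pix]
        out.append(round(sum(wi * p for wi, p in zip(w, pix)) / sum(w)))
    return out

img = [10, 60, 128, 200, 250]          # one captured scanline
under = stretch(img, 64, 255)          # darkened: recovers highlights
normal = stretch(img, 0, 255)          # unchanged mid-tones
over = stretch(img, 0, 192)            # brightened: recovers shadows
fused = fuse([under, normal, over])
```

Because all three exposures come from the same capture, the fused pixels are perfectly aligned and no ghosting can occur.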

  11. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong


    Full Text Available A new method for the aerial triangulation of close-range images based on dual quaternions is presented. A dual quaternion is used to represent the screw motion of an image bundle in space: the real part of the dual quaternion represents the angular elements of all the bundles in the close-range network, while the real and dual parts together represent the linear elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and simulated images with large attitude angles were selected for the aerial triangulation experiments. The experimental results show that the new method for the aerial triangulation of close-range images based on dual quaternions can obtain higher accuracy.

  12. A novel track imaging system as a range counter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z. [National Institute of Radiological Sciences (Japan); Matsufuji, N. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kanayama, S. [Chiba University (Japan); Ishida, A. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kohno, T. [Tokyo Institute of Technology (Japan); Koba, Y.; Sekiguchi, M.; Kitagawa, A.; Murakami, T. [National Institute of Radiological Sciences (Japan)


    An image-intensified, camera-based track imaging system has been developed to measure the tracks of ions in a scintillator block. To study the performance of the detector unit in the system, two types of scintillators, a dosimetrically tissue-equivalent plastic scintillator EJ-240 and a CsI(Tl) scintillator, were separately irradiated with carbon ion ({sup 12}C) beams of therapeutic energy from HIMAC at NIRS. The images of individual ion tracks in the scintillators were acquired by the newly developed track imaging system. The ranges reconstructed from the images are reported here. The range resolution of the measurements is 1.8 mm for 290 MeV/u carbon ions, which is considered a significant improvement on the energy resolution of the conventional ΔE/E method. The detector is compact and easy to handle, and it can fit inside treatment rooms for in-situ studies, as well as satisfy clinical quality assurance purposes.

  13. Fuzzy image processing in sun sensor (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.


    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image-processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.

  14. Differential morphology and image processing. (United States)

    Maragos, P


    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
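A minimal instance of the min-sum difference equations mentioned above is the classic two-pass city-block distance transform, shown here as an illustrative sketch rather than the paper's own formulation:

```python
def distance_transform(binary):
    """Two-pass min-sum recursion computing the city-block distance
    from every pixel to the nearest foreground (1) pixel; a minimal
    example of a discrete distance transform."""
    INF = 10**9
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass (top-left sweep)
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass (bottom-right sweep)
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

print(distance_transform([[0, 0, 0],
                          [0, 1, 0],
                          [0, 0, 0]]))
```

The two sweeps are exactly the kind of max/min-sum space dynamics the abstract describes: each pixel's value is updated from already-visited neighbors plus a unit cost, which also approximates a discrete solution of the eikonal equation.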

  15. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick


    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  16. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS., with the commercial goal to improve the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate......-time data acquisition system. The system was implemented using the commercially available 2202 ProFocus BK Medical ultrasound scanner equipped with a research interface and a standard PC. The main feature of the system is the possibility to acquire several seconds of interleaved data, switching between

  17. Digital processing of radiographic images (United States)

    Bond, A. D.; Ramapriyan, H. K.


    Some techniques, and the accompanying software documentation, for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency-domain and the spatial-domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial-domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
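The speed advantage of recursive filtering comes from its constant per-sample cost: each output needs only a couple of multiply-adds regardless of the effective filter length, whereas FFT-based convolution pays a log-factor per sample. A first-order recursive smoother illustrates the idea (a generic IIR example, not the matched filters described in the record):

```python
def recursive_smooth(signal, alpha):
    """First-order recursive (IIR) low-pass filter:
    y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
    One multiply-add per sample, yet an exponentially long
    effective impulse response."""
    out = []
    y = signal[0]  # initialize state with the first sample
    for x in signal:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

print(recursive_smooth([0, 0, 10, 10], 0.5))
```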

  18. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)


    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA{sup 2} by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  19. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs. (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura


    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can

  20. Image processing of galaxy photographs (United States)

    Arp, H.; Lorre, J.


    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  1. Automatic face segmentation and facial landmark detection in range images. (United States)

    Pamplona Segundo, Maurício; Silva, Luciano; Bellon, Olga Regina Pereira; Queirolo, Chauã C


    We present a methodology for face segmentation and facial landmark detection in range images. Our goal was to develop an automatic process to be embedded in a face recognition system using only depth information as input. To this end, our segmentation approach combines edge detection, region clustering, and shape analysis to extract the face region, and our landmark detection approach combines surface curvature information and depth relief curves to find the nose and eye landmarks. The experiments were performed using the two available versions of the Face Recognition Grand Challenge database and the BU-3DFE database, in order to validate our proposed methodology and its advantages for 3-D face recognition purposes. We present an analysis regarding the accuracy of our segmentation and landmark detection approaches. Our results were better compared to state-of-the-art works published in the literature. We also performed an evaluation regarding the influence of the segmentation process in our 3-D face recognition system and analyzed the improvements obtained when applying landmark-based techniques to deal with facial expressions.

  2. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process

    Directory of Open Access Journals (Sweden)

    Isao Takayanagi


    Full Text Available To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel-gain condition is 1 e− with a low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
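The merging of the high-gain and low-gain signals can be sketched as a knee-point blend. All numbers below (gain ratio, knee, 12-bit samples) are hypothetical; the sensor's actual on-chip linearization is more involved:

```python
def merge_dual_gain(high, low, gain_ratio, knee):
    """Combine a high-gain and a low-gain readout of the same exposure:
    keep the high-gain sample where it is below the knee (unsaturated),
    otherwise substitute the low-gain sample rescaled by the gain
    ratio, extending the linear range past the high-gain full scale."""
    return [h if h < knee else l * gain_ratio
            for h, l in zip(high, low)]

# Three pixels: dark, mid (high-gain saturates), bright.
high = [100, 4095, 4095]   # 12-bit high-gain samples (illustrative)
low = [7, 300, 500]        # same pixels read at low gain
print(merge_dual_gain(high, low, gain_ratio=16, knee=4095))
```

Because both readouts come from one exposure, the merged signal has no motion artifacts, which is the key contrast with multiple-exposure HDR.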

  3. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process. (United States)

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori


    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke-. Readout noise under the highest pixel-gain condition is 1 e- with a low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.

  4. Early Skin Tumor Detection from Microscopic Images through Image Processing

    Directory of Open Access Journals (Sweden)



    Full Text Available This research was done to provide an appropriate detection technique for skin tumors. The work was carried out using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they are a condition in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done at the cellular scale on images of skin. This research introduces a few checks for the early detection of skin tumors using microscopic images, developed after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient techniques appropriate for tumor detection. The algorithm applied provides promising results in less time and with good accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly

  5. Corner-point criterion for assessing nonlinear image processing imagers (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory


    Range-performance modeling of optronic imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of one minority pixel value among the majority value of a 2×2 pixel block. The evaluation procedure treats the multi-resolution CP transformation of the actual image as the Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as measuring the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. The criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: CP measurement for the highly non-linear part (imaging) with real-signature test targets, and conventional methods for the more linear part (displaying). The application to

  6. Calibration and control for range imaging in mobile robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Dorum, O.H. [Norges Tekniske Hoegskole, Trondheim (Norway). Div. of Computer Systems and Telematics; Hoover, A. [University of South Florida, Tampa, FL (United States). Dept. of Computer Science and Engineering; Jones, J.P. [Oak Ridge National Lab., TN (United States)


    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.

  7. Selections from 2017: Image Processing with AstroImageJ (United States)

    Kohler, Susanna


    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry

  8. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph


    The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  9. Ladar range image denoising by a nonlocal probability statistics algorithm (United States)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi


    According to the characteristics of range images from coherent ladar, and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray scales are denoised by this algorithm, and the results are compared with those of the median filter, multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-anomaly noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.
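The mean-versus-mode distinction at the heart of NLPS can be shown with a toy example. This sketch only covers the final estimation step, not the block-matching pipeline, and the pixel values are invented:

```python
from collections import Counter

def nlps_pixel(center_values):
    """Given the center-pixel gray values gathered from a group of
    similar blocks, NLM-style estimation would return a (weighted)
    mean; the NLPS idea is to return the most probable value, i.e.
    the mode of the empirical distribution."""
    return Counter(center_values).most_common(1)[0][0]

# Center values from similar blocks: most near 100, one range-anomaly
# outlier at 250 (as can occur in coherent-ladar range images).
values = [100, 100, 101, 100, 250, 100, 99]
print(nlps_pixel(values))                 # mode: unaffected by the outlier
print(round(sum(values) / len(values)))   # mean: pulled toward the outlier
```

The mode ignores the isolated anomaly entirely, whereas the mean is biased by it, which is why a maximum-probability estimate suppresses range-anomaly noise so effectively.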

  10. Multimedia image and video processing

    CERN Document Server

    Guan, Ling


    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  11. Linear Algebra and Image Processing (United States)

    Allali, Mohamed


    We use digital image processing (DIP) to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP proves interesting and unexpected to students and faculty alike. (Contains 2 tables and 11 figures.)
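
    As one classroom-style illustration of the linked topics (ours, not drawn from the article), an image is simply a matrix, so linear-algebra operations have immediate visual effects:

```python
import numpy as np

# A grayscale image is a matrix, so matrix operations are image operations.
img = np.arange(16, dtype=float).reshape(4, 4)

# Left-multiplying by the anti-diagonal permutation matrix flips the image
# vertically -- row operations made visible.
P = np.fliplr(np.eye(4))
flipped = P @ img

# An affine map on intensities inverts contrast (the photographic negative).
negative = 15.0 - img
```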

  12. Passive millimeter-wave imaging at short and medium range (United States)

    Essen, H.; Fuchs, H.-H.; Nötel, D.; Klöppel, F.; Pergande, P.; Stanko, S.


    In recent years, the mmW/submmW group at FGAN-FHR conducted research on radiometric signatures, both non-imaging (of the exhaust jets of missiles) and imaging (of small vehicles in critical background scenarios). The equipment used for these investigations was of low technological maturity, using simple single-channel radiometers on a scanning pedestal. Meanwhile, components of improved performance have become available on a cooperative basis with the Institute for Applied Solid State Physics (Fraunhofer-IAF). Using such components, considerable progress in temperature resolution and image-generation time could be achieved. Emphasis has been put on the development of a demonstrator for CWD applications and on an imaging system for medium-range applications of up to 200 m. The short-range demonstrator is a scanning system operating alternatively at 35 GHz or 94 GHz to detect hidden materials such as explosives, guns, and knives beneath clothing. It uses a focal-plane-array approach with 4 channels in azimuth, while mechanical scanning is used for elevation. The medium-range demonstrator currently employs a single-channel radiometer on a pedestal for elevation-over-azimuth scanning. To improve image quality, methods based on a Lorentzian algorithm with Wiener filtering have been implemented.
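
    Of the processing chain mentioned, Wiener filtering is the standard frequency-domain restoration step. A minimal sketch (assuming a known point-spread function and a scalar noise-to-signal constant `k`, both our assumptions, not the authors' implementation):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: sharpen an image given an estimate of
    the system point-spread function. k is the noise-to-signal power ratio,
    treated here as a single tuning constant."""
    H = np.fft.fft2(psf, s=blurred.shape)   # zero-padded transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))
```

    With low noise (small `k`) this nearly inverts a known blur; larger `k` trades sharpness for noise suppression.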

  13. Shadow correction in high dynamic range images for generating orthophotos (United States)

    Suzuki, Hideo; Chikatsu, Hirofumi


    High dynamic range imagery is widely used in remote sensing. With the widespread use of aerial digital cameras such as the DMC, ADS40, RMK-D, and UltraCamD, high dynamic range imaging is generally expected for generating detailed orthophotos in digital aerial photogrammetry. However, high dynamic range images (12-bit, 4,096 gray levels) are generally compressed to an 8-bit depth (256 gray levels) owing to the huge amount of data and to interfaces with peripherals such as monitors and printers. This means that a great deal of image data is eliminated from the original image, which introduces a new shadow problem. In particular, the influence of shadows in urban areas causes serious problems when generating detailed orthophotos and performing house detection. The shadow problem can therefore be addressed by addressing the image-compression problem. There is a large body of literature on image-compression techniques such as logarithmic compression and tone-mapping algorithms. However, logarithmic compression tends to lose detail in dark and/or light areas, and since the logarithmic method operates on the full scene, high-resolution luminance information cannot be obtained. Tone-mapping algorithms can operate on both the full scene and local regions, but they require background knowledge. To resolve the shadow problem in digital aerial photogrammetry, shadow areas should be recognized and corrected automatically without loss of luminance information. To this end, a practical shadow-correction method using 12-bit real data acquired by the DMC is investigated in this paper.
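
    The trade-off the authors describe between linear and logarithmic 12-bit-to-8-bit compression can be seen numerically (a toy illustration of the general effect, not the paper's method):

```python
import numpy as np

# Compressing 12-bit radiometry (0..4095) to 8 bits. A linear scale crushes
# shadow detail; a logarithmic curve keeps dark steps distinguishable at the
# cost of highlight separation.
def linear_8bit(x12):
    return np.round(x12 / 4095.0 * 255).astype(np.uint8)

def log_8bit(x12):
    return np.round(np.log1p(x12) / np.log1p(4095.0) * 255).astype(np.uint8)

shadow = np.array([8, 16, 32])          # neighbouring dark 12-bit values
lin8, log8 = linear_8bit(shadow), log_8bit(shadow)
# lin8 collapses the shadow steps to near-zero codes; log8 keeps them distinct.
```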

  14. Advances in iterative multigrid PIV image processing (United States)

    Scarano, F.; Riethmuller, M. L.


    An image-processing technique is proposed that performs iterative interrogation of particle image velocimetry (PIV) recordings. The method is based on cross-correlation, enhancing the matching performance by means of a relative transformation between the interrogation areas. On the basis of an iterative prediction of the tracer motion, window offset and deformation are applied, accounting for the local deformation of the fluid continuum. In addition, progressive grid refinement is applied in order to maximise the spatial resolution. The performance of the method is analysed and compared with conventional cross-correlation with and without a discrete window offset. The assessment of performance through synthetic PIV images shows that a remarkable improvement can be obtained in terms of precision and dynamic range. Moreover, peak-locking effects do not affect the method in practice. The velocity gradient range accessible with the application of a relative window deformation (linear approximation) is significantly enlarged, as confirmed by the experimental results.
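
    The basic building block, recovering a window's displacement from the cross-correlation peak, can be sketched as follows (an integer-pixel sketch of our own; the paper's method adds iterative window offset, deformation, and sub-pixel peak fitting on top of this):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Displacement of win_b relative to win_a from the circular
    cross-correlation peak, computed via the FFT."""
    A = np.fft.fft2(win_a - win_a.mean())
    B = np.fft.fft2(win_b - win_b.mean())
    corr = np.real(np.fft.ifft2(A.conj() * B))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT indices to signed shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
```

    In an iterative scheme, the second window is then offset (and deformed) by the predicted displacement and the correlation repeated, which is what suppresses peak locking.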

  15. Comparison of range migration correction algorithms for range-Doppler processing (United States)

    Uysal, Faruk


    Next-generation digital radars are able to provide high range resolution thanks to advances in radar hardware. These systems take advantage of coherent integration and Doppler processing techniques to increase the target's signal-to-noise ratio. Due to the high range resolution (small range cells) and fast target motion, a target can migrate through multiple range cells within a coherent processing interval; this range cell migration (also known as range walk) degrades the coherent integration gain. There are many approaches in the literature to correct these unavoidable effects and focus the target in the range-Doppler domain. We demonstrate some of these methods on an operational frequency-modulated continuous-wave (FMCW) radar and point out practical issues in their application.
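
    A toy numerical illustration of the problem (ours, not one of the paper's algorithms): a target that drifts through range cells loses integration gain, which a simple alignment of the range profiles before the Doppler FFT restores:

```python
import numpy as np

# One echo per pulse; the target migrates one range cell every 8 pulses.
n_pulses, n_bins = 32, 64
profiles = np.zeros((n_pulses, n_bins), complex)
for p in range(n_pulses):
    rng_bin = 10 + p // 8                           # range walk
    profiles[p, rng_bin] = np.exp(1j * 0.2 * p)     # constant-Doppler echo

def integrated_peak(prof):
    """Peak magnitude after the slow-time (Doppler) FFT in each range cell."""
    return np.abs(np.fft.fft(prof, axis=0)).max()

uncorrected = integrated_peak(profiles)
# Naive walk-back: shift each pulse's profile by its known migration. Real
# corrections (e.g. keystone transform) estimate this from the data.
aligned = np.vstack([np.roll(profiles[p], -(p // 8)) for p in range(n_pulses)])
corrected = integrated_peak(aligned)
```

    With the energy confined to one range cell, all 32 pulses integrate coherently; uncorrected, each cell holds only 8 pulses' worth of gain.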

  16. Range-gated imaging for near-field target identification

    Energy Technology Data Exchange (ETDEWEB)

    Yates, G.J.; Gallegos, R.A.; McDonald, T.E. [and others


    The combination of two complementary technologies developed independently at Los Alamos National Laboratory (LANL) and Sandia National Laboratory (SNL) has demonstrated the feasibility of target detection and image capture in a highly light-scattering medium. The technique uses a compact SNL-developed Photoconductive Semiconductor Switch/Laser Diode Array (PCSS/LDA) for short-range (distances of 8 to 10 m), large field-of-view (FOV) target illumination. Generation of a time-correlated echo signal is accomplished using a photodiode. The return image signal is recorded with a high-speed shuttered micro-channel-plate image intensifier (MCPII), designed by LANL and manufactured by Philips Photonics. The MCPII is gated using a high-frequency impedance-matched microstrip design to produce 150 to 200 ps duration optical exposures. The ultra-fast shuttering produces depth resolution of a few inches along the optic axis between the MCPII and the target, yielding enhanced target images effectively deconvolved from noise components arising from the scattering medium in the FOV. The images from the MCPII are recorded with an RS-170 charge-coupled-device camera and a Big Sky Beam Code PC-based digitizer/frame-grabber and analysis package. Laser pulse data were obtained, but jitter problems and the spectral mismatch between the diode emission wavelength and the MCPII photocathode spectral sensitivity prevented the capture of fast gated imaging with this demonstration system. Continued development of the system is underway.
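
    The timing arithmetic behind range gating is simple (a back-of-envelope sketch consistent with the figures quoted, not the LANL/SNL system parameters):

```python
# The camera gate opens 2*d/c after the laser pulse so that only light echoed
# from depth d is recorded; the gate width sets the thickness of the depth
# slice, which is what rejects backscatter from the rest of the medium.
C = 3.0e8  # speed of light, m/s

def gate_delay_ns(distance_m):
    return 2 * distance_m / C * 1e9

def slice_thickness_m(gate_ns):
    return C * gate_ns * 1e-9 / 2

# A 9 m target needs a 60 ns gate delay; a 175 ps gate resolves a slice of
# about 2.6 cm -- roughly the "few inches" depth resolution cited above.
delay = gate_delay_ns(9.0)
slice_m = slice_thickness_m(0.175)
```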

  17. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair. (United States)

    Park, Won-Jae; Ji, Seo-Won; Kang, Seok-Jae; Jung, Seung-Won; Ko, Sung-Jea


    In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using the weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.
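
    The final weight-map fusion step is the generic HDR-merge recipe. A minimal sketch under simplifying assumptions (two exposures, pixel values in [0,1], a linear camera response, and a hat-shaped weight of our choosing, not the paper's exact weight map):

```python
import numpy as np

def fuse_hdr(ldr_short, ldr_long, t_short, t_long):
    """Merge two differently exposed LDR images into radiance with a
    well-exposedness weight map. Assumes a linear response, so radiance is
    simply pixel value divided by exposure time."""
    def radiance(img, t):
        return img / t
    def weight(img):
        return 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-gray, 0 at 0 and 1
    w_s, w_l = weight(ldr_short), weight(ldr_long)
    num = w_s * radiance(ldr_short, t_short) + w_l * radiance(ldr_long, t_long)
    den = w_s + w_l + 1e-8                     # avoid division by zero
    return num / den
```

    Saturated pixels get zero weight, so each scene region is reconstructed from whichever exposure recorded it validly.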

  18. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park


    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using the weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.

  19. The emerging versatility of a scannerless range imager

    Energy Technology Data Exchange (ETDEWEB)

    Sackos, J.; Bradley, B.; Nellums, B.; Diegert, C.


    Sandia National Laboratories is nearing the completion of the initial development of a unique type of range imaging sensor. This innovative imaging optical radar is based on an active flood-light scene illuminator and an image intensified CCD camera receiver. It is an all solid-state device (no moving parts) and offers significant size, performance, reliability, simplicity, and affordability advantages over other types of 3-D sensor technologies, including: scanned laser radar, stereo vision, and structured lighting. The sensor is based on low cost, commercially available hardware, and is very well suited for affordable application to a wide variety of military and commercial uses, including: munition guidance, target recognition, robotic vision, automated inspection, driver enhanced vision, collision avoidance, site security and monitoring, terrain mapping, and facility surveying. This paper reviews the sensor technology and its development for the advanced conventional munition guidance application, and discusses a few of the many other emerging applications for this new innovative sensor technology.

  20. Close-range imaging and research priorities in Europe

    Directory of Open Access Journals (Sweden)

    P. Patias


    Full Text Available Since 1984, the European Union's Framework Programme for Research and Innovation has been the main instrument for funding research. Specific priorities, objectives and types of funded activities vary between funding periods. Horizon 2020 is the biggest EU Research and Innovation programme ever, with nearly €80 billion of funding available over 7 years (2014–2020). H2020 is based on three pillars: (i) Excellent science, (ii) Industrial leadership, (iii) Societal challenges. The current economic crisis in Europe and elsewhere has led to an extended shortage of research budgets at national levels, which in turn leads researchers to seek funds in highly competitive transnational research instruments such as H2020. This paper (i) draws the overall picture of Horizon 2020, (ii) investigates the position of close-range imaging technologies, applications and research areas, and (iii) presents the research challenges in H2020 that offer funding opportunities in close-range imaging.

  1. Adaptive Optics for Satellite Imaging and Space Debris Ranging (United States)

    Bennet, F.; D'Orgeville, C.; Price, I.; Rigaut, F.; Ritchie, I.; Smith, C.

    Earth's space environment is becoming crowded, is at risk of a Kessler syndrome, and will require careful management in the future. Modern low-noise, high-speed detectors allow wavefront sensing and adaptive optics (AO) in extreme circumstances, such as imaging small orbiting bodies in Low Earth Orbit (LEO). The Research School of Astronomy and Astrophysics (RSAA) at the Australian National University has been developing AO systems for telescopes between 1 and 2.5 m diameter to image and range orbiting satellites and space debris. Strehl ratios in excess of 30% can be achieved for targets in LEO with an AO loop running at 2 kHz, allowing the resolution of small features. The AO system developed at RSAA consists of a high-speed EMCCD Shack-Hartmann wavefront sensor, a deformable mirror (DM), a real-time computer (RTC), and an imaging camera. The system works best as a laser guide star system but will also function as a natural guide star AO system, with the target itself being the guide star. In both circumstances, tip-tilt is provided by the target on the imaging camera. The fast tip-tilt modes are not corrected optically; instead, they are removed by taking images at a moderate rate (>30 Hz) and using a shift-and-add algorithm. This algorithm can also incorporate lucky imaging to further improve the final image quality. A similar AO system for space-debris ranging is in development in collaboration with Electro Optic Systems (EOS) and the Space Environment Management Cooperative Research Centre (SERC) at the Mount Stromlo Observatory in Canberra, Australia. The system is designed around an AO-corrected, upward-propagated 1064 nm pulsed laser beam, from which time-of-flight information is used to precisely range the target. A 1.8 m telescope is used for both propagation and collection of laser light. A laser guide star, Shack-Hartmann wavefront sensor, and DM are used for high-order correction, with tip-tilt correction provided by sunlight reflected from the target.
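
    The shift-and-add step described can be sketched as follows (integer-pixel registration via the cross-correlation peak; the "lucky" frame-selection stage mentioned in the abstract is omitted):

```python
import numpy as np

def shift_and_add(frames, ref=0):
    """Register each short-exposure frame to a reference frame via the
    circular cross-correlation peak, then average the aligned stack.
    Integer-pixel shifts only -- a sketch, not an observatory pipeline."""
    out = np.zeros_like(frames[ref], dtype=float)
    F_ref = np.fft.fft2(frames[ref])
    for f in frames:
        corr = np.real(np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        out += np.roll(f, (dy, dx), axis=(0, 1))   # undo the measured shift
    return out / len(frames)
```

    Averaging the registered frames removes the residual tip-tilt jitter without any optical correction, which is exactly the role this stage plays in the system above.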

  2. [Digital thoracic radiology: devices, image processing, limits]. (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E


    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image-processing methods are discussed: gradation curves, unsharp-mask processing, the MUSICA system, dynamic range compression or reduction, and subtraction with dual energy. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the almost consistently good image quality and the possibilities of image processing.
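
    Unsharp-mask processing, one of the methods listed, amounts to adding back a scaled detail image. A minimal sketch (a 3x3 box blur stands in for whatever low-pass kernel a clinical system would use):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Classic unsharp masking: subtract a blurred ("unsharp") copy to
    isolate fine detail, then add that detail back scaled by `amount`."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    # 3x3 box blur via shifted sums (edge-replicated borders)
    blur = sum(
        pad[1 + dy : 1 + dy + img.shape[0], 1 + dx : 1 + dx + img.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return img + amount * (img - blur)
```

    Flat regions pass through unchanged, while edges gain the characteristic overshoot that makes fine lung structure more conspicuous.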

  3. The application of camera calibration in range-gated 3D imaging technology (United States)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan


    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. At the beginning of this century, as the hardware technology matured, the technology developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice. But to invert to 3-D spatial information, we need to obtain the imaging field of view of the system, that is, the focal length of the system; then, based on the distance information of the space slice, the spatial position of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom-lens system. After a comprehensive summary of camera calibration techniques, a classic calibration method based on line is
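
    The inversion the abstract describes, converting pixels plus slice distance into 3-D points once the focal length is calibrated, is the textbook pinhole relation (a sketch with hypothetical parameter values, not the paper's calibrated system):

```python
import numpy as np

def backproject(u, v, z, f, cx, cy):
    """Back-project pixel (u, v) at known range z through a pinhole camera
    with focal length f (pixels) and principal point (cx, cy). The gate
    delay supplies z; calibration supplies f, cx, cy."""
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])

# Hypothetical intrinsics: f = 1000 px, principal point (320, 240).
# A pixel 100 columns right of center, on a slice gated at z = 100 m,
# maps to a point 10 m off-axis.
p = backproject(u=420.0, v=240.0, z=100.0, f=1000.0, cx=320.0, cy=240.0)
```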

  4. Imaging using long range dipolar field effects Nuclear magnetic resonance

    CERN Document Server

    Gutteridge, S


    The work in this thesis has been undertaken by the author, except where indicated by reference, within the Magnetic Resonance Centre at the University of Nottingham during the period from October 1998 to March 2001. This thesis details the different characteristics of the long-range dipolar field and its application to magnetic resonance imaging. The long-range dipolar field is usually neglected in nuclear magnetic resonance experiments, as molecular tumbling decouples its effect at short distances. However, in highly polarised samples, residual long-range components have a significant effect on the evolution of the magnetisation, giving rise to multiple spin echoes and unexpected quantum coherences. Three applications utilising these dipolar field effects are documented in this thesis. The first demonstrates the spatial sensitivity of the signal generated via dipolar field effects in structured liquid-state samples. The second utilises the signal produced by the dipolar field to create proton spin density maps. Thes...

  5. Biomedical signal and image processing. (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro


    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting in undergraduate studies and are dealt with more completely in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first more oriented to physiological issues and how to model them, and the second more dedicated to the development of processing tools and algorithms to extract useful information from clinical data. A practical consequence was that those who built models did not do signal processing, and vice versa. In recent years, however, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly [1], [2]. This is true not only for training purposes (i.e., to properly prepare new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital-sign monitoring all require an intelligent fusion of modeling and signal-processing competences that are certainly peculiar to our discipline of BME.

  6. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu


    Full Text Available In the textile industry, cotton products frequently contain many types of foreign fibers that affect their overall quality. As the foundation of automated foreign-fiber inspection, image processing exerts a critical impact on the process of foreign-fiber identification. This paper presents a new approach for the fast processing of foreign-fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and the gray scale of the transformed images is inverted; the whole image is then divided into several blocks. The next step is to judge, through pre-decision, which image blocks contain the target foreign fiber. The blocks that possibly contain target images are then segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear image of the foreign-fiber target. The experimental results show that this segmentation method has an advantage in accuracy and speed over other segmentation methods, and that it reconnects target images containing fractures, thereby producing an intact and clear foreign-fiber target image.
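
    The block/pre-decision/segmentation pipeline can be sketched as follows (a simplified rendition of the described steps; the contrast-based pre-decision rule and the block size are our stand-ins for the paper's criteria):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total, cum_all = hist.sum(), np.dot(np.arange(256), hist)
    best_t, best_var, w0, cum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        cum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        m0, m1 = cum0 / w0, (cum_all - cum0) / w1
        v = w0 * w1 * (m0 - m1) ** 2
        if v > best_var:
            best_var, best_t = v, t
    return best_t

def segment_by_blocks(gray, block=16, min_std=5.0):
    """Blockwise pipeline: pre-decide which blocks may hold a fiber (enough
    contrast), Otsu-segment only those, and leave the rest as background.
    Assumes the gray scale is already inverted so fibers are bright."""
    mask = np.zeros_like(gray, dtype=bool)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = gray[y:y + block, x:x + block]
            if blk.std() > min_std:                 # pre-decision step
                mask[y:y + block, x:x + block] = blk > otsu_threshold(blk)
    return mask
```

    Skipping low-contrast blocks is what delivers the speed advantage: Otsu is run only where a fiber can plausibly be.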

  7. Adapting range migration techniques for imaging with metasurface antennas: analysis and limitations (United States)

    Pulido Mancera, Laura; Fromenteze, Thomas; Sleasman, Timothy; Boyarsky, Michael; Imani, Mohammadreza F.; Reynolds, Matthew S.; Smith, David R.


    Dynamic metasurface antennas are planar structures that exhibit remarkable capabilities in controlling electromagnetic wavefronts, advantages that are particularly attractive for microwave imaging. These antennas exhibit strong frequency dispersion and produce diverse radiation patterns, behavior that presents unique challenges for integration with conventional imaging algorithms. We analyze an adapted version of the range migration algorithm (RMA) for use with dynamic metasurfaces in image reconstruction, focusing on a proposed pre-processing step that ultimately allows fast processing of the backscattered signal in the spatial-frequency domain, from which the fast Fourier transform can efficiently reconstruct the scene. Numerical studies illustrate imaging performance using both conventional methods and the adapted RMA, demonstrating that the RMA can reconstruct images of comparable quality in a fraction of the time. In this paper, we demonstrate the capabilities of the algorithm as a fast reconstruction tool and analyze the limitations of the presented technique in terms of image quality.

  8. Support Routines for In Situ Image Processing (United States)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean


    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills a specific need as determined through operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. The suite consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  9. Active resonant subwavelength grating for scannerless range imaging sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Kemme, Shanalyn A.; Nellums, Robert O.; Boye, Robert R.; Peters, David William


    In this late-start LDRD, we will present a design for a wavelength-agile, high-speed modulator that enables a long-term vision for the THz Scannerless Range Imaging (SRI) sensor. It takes the place of the currently-utilized SRI micro-channel plate which is limited to photocathode sensitive wavelengths (primarily in the visible and near-IR regimes). Two of Sandia's successful technologies--subwavelength diffractive optics and THz sources and detectors--are poised to extend the capabilities of the SRI sensor. The goal is to drastically broaden the SRI's sensing waveband--all the way to the THz regime--so the sensor can see through image-obscuring, scattering environments like smoke and dust. Surface properties, such as reflectivity, emissivity, and scattering roughness, vary greatly with the illuminating wavelength. Thus, objects that are difficult to image at the SRI sensor's present near-IR wavelengths may be imaged more easily at the considerably longer THz wavelengths (0.1 to 1mm). The proposed component is an active Resonant Subwavelength Grating (RSG). Sandia invested considerable effort on a passive RSG two years ago, which resulted in a highly-efficient (reflectivity greater than gold), wavelength-specific reflector. For this late-start LDRD proposal, we will transform the passive RSG design into an active laser-line reflector.

  10. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M


    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  11. Image seeker simulation for short-range surface-to-surface missile (United States)

    Jin, Sang-Hun; Kang, Ho-Gyun


    This paper presents an image-seeker simulation covering image processing, servo control, target modeling, and missile trajectory. We propose a software architecture for a seeker embedded computer that makes the core processing algorithms, including image processing, reusable at the source level across multiple platforms. An embedded software simulator implemented in C/C++, a servo-control simulator implemented in Matlab, and an integrated simulator combining the two based on Windows Component Object Model (COM) technology are presented. The integrated simulation enables developers to study the interaction between image processing and servo control for missions including lock-on and target tracking. The implemented simulator can be operated on low-cost computer systems and can be used for algorithm development and analysis during design, implementation, and evaluation. Simulation examples for a short-range ground-to-ground missile seeker are presented.

  12. Three-dimensional near-field MIMO array imaging using range migration techniques. (United States)

    Zhuge, Xiaodong; Yarovoy, Alexander G


    This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
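
    For intuition about what such frequency-wavenumber methods accelerate, here is the brute-force alternative they are measured against: direct matched-filter backprojection for a 1-D monostatic stepped-frequency aperture (a toy sketch of our own, not the paper's 2-D MIMO RMA):

```python
import numpy as np

c = 3e8
freqs = np.linspace(8e9, 12e9, 64)      # stepped-frequency sweep (Hz)
xs = np.linspace(-0.5, 0.5, 32)         # antenna positions along the aperture (m)
tx, tz = 0.1, 1.0                       # point scatterer at (x, z)

# Simulated monostatic measurements: round-trip phase per position/frequency.
R = np.hypot(xs - tx, tz)
S = np.exp(-2j * np.pi * freqs[None, :] * 2 * R[:, None] / c)

# Brute-force backprojection: matched-filter every pixel hypothesis.
gx = np.linspace(-0.3, 0.3, 31)
gz = np.linspace(0.7, 1.3, 31)
img = np.zeros((gx.size, gz.size))
for i, x in enumerate(gx):
    for j, z in enumerate(gz):
        r = np.hypot(xs - x, z)
        mf = np.exp(2j * np.pi * freqs[None, :] * 2 * r[:, None] / c)
        img[i, j] = np.abs(np.sum(S * mf))
peak = np.unravel_index(np.argmax(img), img.shape)  # focuses at the scatterer
```

    The wavefront curvature is handled exactly, but at O(pixels x positions x frequencies) cost; the RMA reaches an equivalent focus with FFTs plus a Stolt-type interpolation, which is where its speed advantage comes from.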

  13. Analysis on expressible depth range of integral imaging based on degree of voxel overlap. (United States)

    Kim, Young Min; Choi, Ki-Hong; Min, Sung-Wook


    This paper proposes a practical method to analyze the expressible depth range of an integral imaging system based on image blur at defocused depths, which is one of the most noticeable image degradations, caused by overlaps among voxels in both the real and focused mode. In order to obtain the preferably precise area of overlaps among voxels at each depth, display pixels are regarded as surface light sources in the process of voxel size calculation. As a criterion for determining the range, we determine the tolerable limit of the overlaps among voxels to be at least resolved from each other. Based on this principle, several mathematical expressions about the expressible depth range can be derived in both the real mode and focused mode, and their feasibilities are demonstrated by several experiments. The analyses are processed based on both wave optics and ray optics.

  14. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul


    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  15. Range imaging results from polar mesosphere summer echoes (United States)

    Zecha, Marius; Hoffmann, Peter; Rapp, Markus; Chen, Jenn-Shyong

    The range resolution of pulsed radars is usually limited by the transmitted pulse length and the sampling time. The so-called range imaging (RIM) technique has been developed to reduce these limitations. To apply this method, the radar operates alternately over a set of distinct frequencies. The phase differences of the received signals can then be used in optimization methods to generate high-resolution maps of reflectivity as a function of range inside the pulse length. The technique has been implemented on the ALWIN VHF radar in Andenes (69°N) and the OSWIN VHF radar in Kühlungsborn (54°N). Here we present results of the RIM method from measurements in polar mesosphere summer echoes (PMSE). These strong radar echoes are linked to ice particle clouds in the mesopause region. The dynamics of PMSE are captured very well by RIM: the movement of PMSE and the edges of their extent can be tracked with high altitude resolution. Comparisons between simultaneous measurements by RIM and by standard radar techniques demonstrate the advantages of RIM. Wave structures can be identified with RIM that are not detectable at the lower resolution of the standard measurements. Gravity wave parameters associated with these variations are estimated using the simultaneously measured velocity field.

  16. Medical image processing on the GPU - past, present and future. (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M


    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Eliminating "Hotspots" in Digital Image Processing (United States)

    Salomon, P. M.


    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
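A hot-pixel compensation scheme of the kind described can be sketched as follows: a pixel whose value exceeds the median of its eight neighbors by some threshold is treated as defective and replaced by that median. The threshold and window size are assumptions, not the algorithm of the tech brief.

```python
import numpy as np

# Sketch of hotspot removal: replace a pixel with its neighborhood median
# when it stands out from that median by more than `thresh`.
# Threshold and 3x3 window are illustrative assumptions.
def remove_hotspots(img, thresh=50):
    out = img.astype(float).copy()
    H, W = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = img[y-1:y+2, x-1:x+2].astype(float)
            neigh = np.delete(patch.flatten(), 4)    # the 8 neighbors
            med = np.median(neigh)
            if out[y, x] - med > thresh:             # a "hotspot"
                out[y, x] = med
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 200.0            # simulated defective CCD element
clean = remove_hotspots(img)
print(clean[2, 2])           # -> 10.0
```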

  18. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong


    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  19. Strategies for registering range images from unknown camera positions (United States)

    Bernardini, Fausto; Rushmeier, Holly E.


    We describe a project to construct a 3D numerical model of Michelangelo's Florentine Pieta to be used in a study of the sculpture. Here we focus on the registration of the range images used to construct the model. The major challenge was the range of length scales involved: a resolution of 1 mm or less was required for the 2.25 m tall piece. To achieve this resolution, we could only acquire an area of 20 by 20 cm per scan, and a total of approximately 700 images were required. Ideally, a tracker would be attached to the scanner to record position and pose, but the use of a tracker was not possible in the field. Instead, we used a crude-to-fine approach to registering the meshes to one another. The crudest level consisted of pairwise manual registration, aided by texture maps containing laser dots that were projected onto the sculpture. This crude alignment was refined by an automatic registration of laser dot centers; in this phase, we found that consistency constraints on dot matches were essential to obtaining accurate results. The laser dot alignment was further refined using a variation of the ICP algorithm developed by Besl and McKay. In applying ICP to global registration, we developed a method to avoid one class of local minima by finding a set of points, rather than a single point, that matches each candidate point.
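A minimal point-to-point ICP iteration in the Besl-McKay style can be sketched as below: match each source point to its nearest target point, then solve for the best rigid transform with an SVD (Kabsch). This is a toy version, not the paper's global-registration variant; point counts and the test rotation are assumptions.

```python
import numpy as np

# Toy ICP: nearest-neighbour correspondences + SVD rigid fit, iterated.
def best_rigid(src, dst):
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour matching
        d2 = ((cur[:, None, :] - dst[None, :, :])**2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, (30, 3))          # "scanned" target points
ang = 0.2                                  # small known rotation about z
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
src = dst @ Rz.T                           # misaligned copy
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())         # residual after registration
```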

  20. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C


    ADJUSTING PIXEL VALUES Optimizing Contrast Color Correction Correcting Nonuniform Illumination Geometric Transformations Image Arithmetic NEIGHBORHOOD OPERATIONS Convolution Other Neighborhood Operations Statistical Operations IMAGE PROCESSING IN THE FOURIER DOMAIN The Fourier Transform Removing Periodic Noise Convolution and Correlation Deconvolution Other Transform Domains Compression BINARY IMAGES Thresholding Morphological Processing Other Morphological Operations Boolean Operations MEASUREMENTS Global Measurements Feature Measurements Classification APPENDIX: SOFTWARE REFERENCES AND LITERATURE INDEX.

  1. Applications Of Image Processing In Criminalistics (United States)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng


    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  2. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika


    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  3. Optoelectronic imaging of speckle using image processing method (United States)

    Wang, Jinjiang; Wang, Pengfei


    A detailed image-processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.
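The PDE-based noise reduction mentioned above can be illustrated with its simplest case: isotropic diffusion (the heat equation), stepped explicitly. Real speckle work would use edge-preserving (anisotropic) variants; the step count and time step below are assumptions.

```python
import numpy as np

# Explicit heat-equation smoothing: u_t = laplacian(u), periodic boundaries.
# dt must stay below 0.25 for stability of this 2-D explicit scheme.
def heat_diffuse(img, steps=20, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
    return u

rng = np.random.default_rng(1)
noisy = 100 + 10 * rng.standard_normal((64, 64))
smoothed = heat_diffuse(noisy)
print(smoothed.std() < noisy.std())        # diffusion reduces noise variance
```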

  4. Topographic laser ranging and scanning principles and processing

    CERN Document Server

    Shan, Jie


    A systematic, in-depth introduction to theories and principles of Light Detection and Ranging (LiDAR) technology is long overdue, as it is the most important geospatial data acquisition technology to be introduced in recent years. An advanced discussion, this text fills the void. Professionals in fields ranging from geology, geography and geoinformatics to physics, transportation, and law enforcement will benefit from this comprehensive discussion of topographic LiDAR principles, systems, data acquisition, and data processing techniques. The book covers ranging and scanning fundamentals, and broad, contemporary analysis of airborne LiDAR systems, as well as those situated on land and in space. The authors present data collection at the signal level in terms of waveforms and their properties; at the system level with regard to calibration and georeferencing; and at the data level to discuss error budget, quality control, and data organization. They devote the bulk of the book to LiDAR data processing and inform...

  5. Combining image-processing and image compression schemes (United States)

    Greenspan, H.; Lee, M.-C.


    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.
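Pyramid coding, mentioned above, can be sketched as a two-level Laplacian pyramid built with a simple 2x box reduce/expand (an assumption; Burt-Adelson pyramids use a Gaussian kernel). Perfect reconstruction holds because the residual level stores exactly what the expand step loses.

```python
import numpy as np

# Two-level Laplacian pyramid with box-filter reduce and nearest expand.
def reduce2(img):
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def expand2(img):
    return np.kron(img, np.ones((2, 2)))   # 2x upsampling by replication

img = np.arange(64, dtype=float).reshape(8, 8)
low = reduce2(img)                 # coarse level (transmitted first)
residual = img - expand2(low)      # detail (Laplacian) level
rec = expand2(low) + residual      # decoder side
print(np.allclose(rec, img))       # -> True: lossless two-level pyramid
```

Enhancement schemes can then be applied per level (e.g. amplifying `residual`) before reconstruction, which is the kind of combination the abstract investigates.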

  6. Determination of visual range during fog and mist using digital camera images

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, John R; Moogan, Jamie C, E-mail: [School of Physical, Environmental and Mathematical Sciences, UNSW-ADFA, Canberra ACT, 2600 (Australia)


    During the winter of 2008, daily time series of images of five 'unit-cell chequerboard' targets were acquired using a digital camera. The camera and targets were located in the Majura Valley approximately 3 km from Canberra airport. We show how the contrast between the black and white sections of the targets is related to the meteorological range (or standard visual range), and compare estimates of this quantity derived from images acquired during fog and mist conditions with those from the Vaisala FD-12 visibility meter operated by the Bureau of Meteorology at Canberra Airport. The two sets of ranges are consistent but show the variability of visibility in the patchy fog conditions that often prevail in the Majura Valley. Significant spatial variations of the light extinction coefficient were found to occur over the longest 570 m optical path sampled by the imaging system. Visual ranges could be estimated out to ten times the distance to the furthest target, or approximately 6 km, in these experiments. Image saturation of the white sections of the targets was the major limitation on the quantitative interpretation of the images. In the future, the camera images will be processed in real time so that the camera exposure can be adjusted to avoid saturation.
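The contrast-to-range relation the paper relies on is Koschmieder's law: apparent contrast decays as C(d) = C0·exp(-σd), and the meteorological range follows from the conventional 5% contrast threshold, V = -ln(0.05)/σ. The numbers below are illustrative, not the paper's data.

```python
import math

# Koschmieder's law applied to a single chequerboard target.
C0 = 1.0          # inherent contrast of a black/white target
d = 570.0         # target distance in metres (the paper's longest path)
C = 0.30          # measured apparent contrast in mist (assumed value)

sigma = -math.log(C / C0) / d            # extinction coefficient (1/m)
V = -math.log(0.05) / sigma              # meteorological range (m)
print(round(V))                          # roughly 1.4 km in this example
```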

  7. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  8. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  9. The New Approach of Using Image and Range Based Methods for Quality Control of Dimension Stone (United States)

    Levytskyi, Volodymyr


    The basis for the quality control of commodity dimension stone blocks in the mining industry is the study of fracturing. The identification of fractures in rock masses is one of the most important aspects of rock mass modelling. Traditional methods for determining fracture properties are difficult and hazardous. This paper describes a new approach to fracture identification, based on image and range data, realized by image processing and special software. The article describes a method using new computer algorithms that allow automated identification and calculation of fracturing parameters. Different digital filters for image processing and mathematical dependences are analyzed. The digital imaging technique has the potential to be used in real-time applications. The purpose of this paper is the accurate and fast mapping of fracturing in some walls of the Bukinsky gabbro deposit.

  10. Programmable remapper for image processing (United States)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)


    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  11. Amplitude image processing by diffractive optics. (United States)

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F


    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also verify by numerical simulation that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
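A digital analogue of the Laplacian amplitude filtering described above: in the Fourier plane the Laplacian is multiplication by -(kx² + ky²), applied to the complex field amplitude before taking the intensity. The grid size and test field below are assumptions; the optical DOE does this multiplication physically.

```python
import numpy as np

# Fourier-domain Laplacian applied to a complex field amplitude.
N = 128
y, x = np.mgrid[0:N, 0:N]
field = np.exp(-((x - N/2)**2 + (y - N/2)**2) / 50.0)   # smooth test amplitude

kx = np.fft.fftfreq(N) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx)
lap = np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(field))
intensity = np.abs(lap)**2                # detected only after filtering

# A Laplacian suppresses the smooth background and keeps edges/curvature,
# so the filtered intensity carries far less total energy than the input.
print(intensity.sum() < (np.abs(field)**2).sum())
```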

  12. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit


    This book is a collection of the experimental results and analysis carried out on medical images of diabetic-related causes. The experimental investigations have been carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. This book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. This book is useful to researchers, engineers, medical doctors, and bioinformatics researchers.

  13. Towards Process-based Range Modeling of Many Species. (United States)

    Evans, Margaret E K; Merow, Cory; Record, Sydne; McMahon, Sean M; Enquist, Brian J


    Understanding and forecasting species' geographic distributions in the face of global change is a central priority in biodiversity science. The existing view is that one must choose between correlative models for many species versus process-based models for few species. We suggest that opportunities exist to produce process-based range models for many species, by using hierarchical and inverse modeling to borrow strength across species, fill data gaps, fuse diverse data sets, and model across biological and spatial scales. We review the statistical ecology and population and range modeling literature, illustrating these modeling strategies in action. A variety of large, coordinated ecological datasets that can feed into these modeling solutions already exist, and we highlight organisms that seem ripe for the challenge. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Digital signal processing techniques and applications in radar image processing

    CERN Document Server

    Wang, Bu-Chin


    A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: Digital Signal Processing (DSP); antenna and radar operation; and algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and

  15. Semi-automated Image Processing for Preclinical Bioluminescent Imaging. (United States)

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; having determined an initial approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light-phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
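The iterative deconvolution step can be sketched with the standard Richardson-Lucy update as a stand-in for the authors' (unspecified) method: a 1-D signal blurred by a Gaussian kernel, with sizes and iteration count chosen for illustration.

```python
import numpy as np

# Richardson-Lucy deconvolution, 1-D, as a generic stand-in.
def convolve(sig, ker):
    return np.convolve(sig, ker, mode="same")

def richardson_lucy(blurred, ker, iters=200):
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    ker_flip = ker[::-1]
    for _ in range(iters):
        ratio = blurred / np.maximum(convolve(est, ker), 1e-12)
        est = est * convolve(ratio, ker_flip)     # multiplicative update
    return est

x = np.zeros(64)
x[20], x[40] = 5.0, 3.0                      # two point sources
ker = np.exp(-0.5 * (np.arange(-4, 5) / 1.5)**2)
ker /= ker.sum()
blurred = convolve(x, ker)
restored = richardson_lucy(blurred, ker)
print(int(np.argmax(restored)))              # brightest recovered source
```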

  16. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego


    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  17. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  18. Acquisition and Post-Processing of Immunohistochemical Images. (United States)

    Sedgewick, Jerry


    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
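The flatfield correction listed above divides out the illumination pattern using dark and flat reference frames; a common normalization rescales by the mean gain so corrected values stay in the original range. The frames below are synthetic.

```python
import numpy as np

# Flatfield correction: corrected = (raw - dark) * mean(flat - dark) / (flat - dark)
def flatfield(raw, dark, flat):
    gain = flat - dark
    return (raw - dark) * gain.mean() / np.maximum(gain, 1e-9)

shade = np.linspace(0.5, 1.0, 100)            # uneven illumination profile
dark = np.full(100, 8.0)                      # sensor offset frame
flat = dark + 200.0 * shade                   # frame of a uniform target
raw = dark + 120.0 * shade                    # specimen of uniform brightness
corr = flatfield(raw, dark, flat)
print(np.allclose(corr, 90.0))                # -> True: shading removed
```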

  19. An overview of medical image processing methods

    African Journals Online (AJOL)



    Jun 14, 2010 ... images through computer simulations has already increased the interests of many researchers. 3D image rendering usually refers to the analysis of the ..... Digital Image Processing. Reading, MA: Addison-Wesley Publishing Company. Gose E, Johnsonbaugh R, Jost S (1996). Pattern Recognition and.

  20. Depth maps and high-dynamic range image generation from alternating exposure multiview images (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk


    For stereo matching, it is hard to find accurate correspondence for saturated regions, such as too dark or too bright regions, because there is rarely reliable information to match. In this situation, conventional high-dynamic range (HDR) imaging techniques combining multiple exposures for each viewpoint can be adopted to generate well-exposed stereo images. This approach is, however, time-consuming and needs much memory to store multiple exposures for each viewpoint. We propose an efficient method to generate HDR multiview images as well as corresponding accurate depth maps. First, we take a single exposure for each viewpoint with alternating exposure setting, such as short and long exposure, as functions of viewpoint changes. Then, we compute an initial depth map for each view only using neighboring images that have the same exposure. To reduce the error of the initial depth maps for the saturated regions, we adopt the fusion move algorithm fusing neighboring depth maps that have different error regions. Finally, using the enhanced depth maps, we generate artifact-free and sharp HDR images using the joint bilateral filtering and a detail-transfer technique. Experimental results show that our method produces both consistent HDR images and accurate depth maps for various indoor and outdoor multiview images.
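The HDR-generation step above can be sketched as a weighted exposure merge of two already-registered exposures of the same view, weighting each pixel by its distance from saturation. The triangle weight, linear-sensor assumption, and radiance values are all illustrative, not the paper's method.

```python
import numpy as np

# Weighted merge of a short and a long exposure into radiance estimates.
def weight(z):                      # favour mid-tones, reject near-saturation
    return np.maximum(1e-4, 1.0 - np.abs(z / 255.0 - 0.5) * 2.0)

def merge(exposures, times):
    num = np.zeros(exposures[0].shape)
    den = np.zeros_like(num)
    for z, t in zip(exposures, times):
        w = weight(z)
        num += w * (z / t)          # linear-sensor assumption: z = E * t
        den += w
    return num / den                # recovered radiance E

E = np.array([2.0, 40.0, 800.0])            # true scene radiances
short = np.clip(E * 0.25, 0, 255)           # t = 0.25 s: dark regions underexposed
long_ = np.clip(E * 4.0, 0, 255)            # t = 4 s: bright region saturates
rad = merge([short, long_], [0.25, 4.0])
print(np.round(rad, 1))                     # close to the true radiances
```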

  1. Using Image Processing to Determine Emphysema Severity (United States)

    McKenzie, Alexander; Sadun, Alberto


    Currently X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show if a patient has emphysema, but are unable by visual scan alone, to quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one and two bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
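The skewness measure described above can be computed directly from its definition (the third standardized moment) on a histogram of lung attenuation values; emphysematous tissue adds a dark, low-density tail. The distributions below are synthetic stand-ins for CT data.

```python
import numpy as np

# Sample skewness: third standardized moment of the value distribution.
def skewness(v):
    m, s = v.mean(), v.std()
    return ((v - m)**3).mean() / s**3

rng = np.random.default_rng(2)
healthy = rng.normal(-850, 30, 10000)        # roughly symmetric HU values
# Simulated emphysema: an extra cluster of very low-density voxels
diseased = np.concatenate([healthy, rng.normal(-980, 10, 2000)])
print(skewness(healthy), skewness(diseased)) # diseased is clearly left-skewed
```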

  2. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang


    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  3. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  4. A new approach towards image based virtual 3D city modeling by using close range photogrammetry (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.


    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and some man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach to image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries.

  5. Non-linear Post Processing Image Enhancement (United States)

    Hunt, Shawn; Lopez, Alex; Torres, Angel


    A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

  6. Quantitative image processing in fluid mechanics (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul


    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  7. A color image processing pipeline for digital microscope (United States)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong


    Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern to the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the color image processing pipeline is designed and optimized with the purpose of getting high-quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements of the medicine and biology fields very well. The major steps of the proposed color imaging pipeline include: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction, and gamma correction.
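A compressed version of several pipeline stages listed above (black-level subtraction, white balance, a 3x3 color-correction matrix, gamma encoding) can be sketched as follows. All coefficients are illustrative placeholders, not the paper's values, and the intermediate stages (defect removal, noise reduction, tone scale) are omitted.

```python
import numpy as np

# Minimal RAW-to-display pipeline sketch with placeholder coefficients.
def pipeline(raw, black=64, wb=(1.8, 1.0, 1.5), ccm=None, gamma=2.2):
    img = np.maximum(raw.astype(float) - black, 0) / (1023 - black)  # 10-bit raw
    img *= np.asarray(wb)                          # per-channel white balance
    if ccm is None:
        ccm = np.array([[ 1.5, -0.3, -0.2],        # placeholder sensor-to-sRGB
                        [-0.2,  1.4, -0.2],
                        [-0.1, -0.4,  1.5]])
    img = np.clip(img @ ccm.T, 0, 1)               # color correction
    return img ** (1 / gamma)                      # gamma-encode for display

raw = np.full((2, 2, 3), 64.0)                     # frame at the black level
raw[0, 0] = (500, 700, 480)                        # one grey-ish pixel
out = pipeline(raw)
print(out[1, 1])                                   # black stays black
```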

  8. Deformable Mirror Light Modulators For Image Processing (United States)

    Boysel, R. Mark; Florence, James M.; Wu, Wen-Rong


    The operational characteristics of deformable mirror device (DMD) spatial light modulators for image processing applications are presented. The two DMD pixel structures of primary interest are the torsion hinged pixel for amplitude modulation and the flexure hinged or piston element pixel for phase modulation. The optical response characteristics of these structures are described. Experimental results detailing the performance of the pixel structures and addressing architectures are presented and are compared with the analytical results. Special emphasis is placed on the specification, from the experimental data, of the basic device performance parameters of the different modulator types. These parameters include modulation range (contrast ratio and phase modulation depth), individual pixel response time, and full array address time. The performance characteristics are listed for comparison with those of other light modulators (LCLV, LCTV, and MOSLM) for applications in the input plane and Fourier plane of a conventional coherent optical image processing system. The strengths and weaknesses of the existing DMD modulators are assessed and the potential for performance improvements is outlined.

  9. Water surface capturing by image processing (United States)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  10. Processor for Real-Time Atmospheric Compensation in Long-Range Imaging Project (United States)

    National Aeronautics and Space Administration — Long-range imaging is a critical component to many NASA applications including range surveillance, launch tracking, and astronomical observation. However,...

  11. Zero Range Process and Multi-Dimensional Random Walks (United States)

    Bogoliubov, Nicolay M.; Malyshev, Cyril


    The special limit of the totally asymmetric zero range process of low-dimensional non-equilibrium statistical mechanics, described by a non-Hermitian Hamiltonian, is considered. The calculation of the conditional probabilities of the model is based on the algebraic Bethe ansatz approach. We demonstrate that the conditional probabilities may be considered as generating functions of random multi-dimensional lattice walks bounded by a hyperplane. We call this type of walk a walk over a multi-dimensional simplicial lattice. The expressions for the conditional probability and for the number of random walks in the multi-dimensional simplicial lattice are given in terms of symmetric functions.

  12. Automatic processing, analysis, and recognition of images (United States)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.


    New approaches and computer codes (A&CC) for the automatic processing, analysis, and recognition of images are presented. The A&CC are based on representing an object image as a collection of pixels of various colours and on the consecutive automatic painting of distinct parts of the image. The A&CC address tasks such as: 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities arise from combining the A&CC with artificial neural network technologies. We believe that the A&CC can be used to create testing and control systems in various industrial and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid, on ensembles of particles, on decoding interferometric images, on digitization of paper charts of electrical signals, on text recognition, on image denoising and filtering, on analysis of astronomical images and aerial photography, and on object detection.
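    The "consecutive automatic painting" of distinct image parts is, in essence, connected-component labelling. A minimal sketch on a binary image follows (the multi-colour variant described above would compare pixel colours instead of a single foreground value):

```python
from collections import deque

def label_regions(img, foreground=1):
    """Label 4-connected foreground regions of a binary image (list of lists).
    Returns (label map, number of regions)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == foreground and labels[sy][sx] == 0:
                next_label += 1                 # start "painting" a new part
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                    # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == foreground
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

    Per-region geometrical and statistical parameters (area, centroid, bounding box) can then be accumulated over each label.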

  13. Image processing and communications challenges 5

    CERN Document Server


    This textbook collects a series of research papers in the area of Image Processing and Communications which not only summarize current technology but also outline potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered. In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems, covering theoretical aspects as well as applications.

  14. Long-Range Reconnaissance Imager on New Horizons (United States)

    Cheng, A. F.; Weaver, H. A.; Conard, S. J.; Hayes, J. R.; Morgan, M. F.; Noble, M.; Taylor, H. W.; Barnouin, O.; Boldt, J. D.; Darlington, E. H.; Grey, M. P.; Magee, T.; Rossano, E.; Schlemm, C.; Kosakowski, K. E.; Sampath, D.


    LORRI is the highest resolution imager on the New Horizons (NH) mission to Pluto and the Kuiper belt. LORRI produced superb images of Jupiter and its satellites even though those bodies are ~35 times brighter than bodies in the Pluto system.

  15. The Ansel Adams zone system: HDR capture and range compression by chemical processing (United States)

    McCann, John J.


    We tend to think of digital imaging and the tools of Photoshop™ as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday, consumer, silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had a fixed dynamic range and a fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure at different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System starting in 1940. It predates the trillions of consumer photos of the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk describes the chemical mechanisms of the Zone System in the parlance of digital image processing, including the Zone System's chemical techniques for image synthesis. It also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.

  16. Digital radiography image quality: image processing and display. (United States)

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony


    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and to make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  17. Image processing for cameras with fiber bundle image relay. (United States)

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E


    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  18. A General Range-Velocity Processing Scheme for Discontinuous Spectrum FMCW Signal in HFSWR Applications

    Directory of Open Access Journals (Sweden)

    Mengguan Pan


    Discontinuous spectrum signals, which have separate subbands distributed over a wide spectrum band, are a solution for synthesizing a wideband waveform in a highly congested spectrum environment. In this paper, we present a general range-velocity processing scheme specifically for the discontinuous spectrum-frequency modulated continuous wave (DS-FMCW) signal. In the range domain, we propose a simple time rearrangement operation which converts the range transform problem of the DS-FMCW signal into a general spectral estimation problem for nonuniformly sampled data. The conventional periodogram results in a dirty range spectrum with high sidelobes which cannot be suppressed by traditional spectral weighting. In this paper, we introduce the iterative adaptive approach (IAA) for estimating the range spectrum; IAA is shown to provide a clean range spectrum. On the other hand, the discontinuity of the signal spectrum has little impact on the velocity processing. However, with the range resolution improved, the influence of target motion becomes non-negligible. We present a velocity compensation strategy which includes inter-sweep and in-sweep compensation. Our processing scheme with velocity compensation is shown to provide an accurate and clean range-velocity image which benefits the subsequent detection process.
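    The reduction to spectral estimation of nonuniformly sampled data can be illustrated with the classical periodogram the paper criticizes (not IAA itself). The sampling instants and test frequencies below are arbitrary illustrative values:

```python
import math

def nonuniform_periodogram(times, samples, freqs):
    """Classical periodogram for nonuniformly sampled data: correlate the
    samples, taken at irregular instants, against sinusoids at candidate
    frequencies and report the power at each. Strong frequencies correspond,
    after the time rearrangement, to target ranges."""
    n = len(samples)
    powers = []
    for f in freqs:
        re = sum(s * math.cos(2 * math.pi * f * t) for t, s in zip(times, samples))
        im = sum(s * math.sin(2 * math.pi * f * t) for t, s in zip(times, samples))
        powers.append((re * re + im * im) / n)
    return powers
```

    A single tone buried in jittered sampling instants still produces a clear peak at its frequency, though with the high sidelobes that motivate IAA.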

  19. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang


    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  20. On some applications of diffusion processes for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Morfu, S., E-mail: smorfu@u-bourgogne.f [Laboratoire d'Electronique, Informatique et Image (LE2i), UMR Cnrs 5158, Aile des Sciences de l'Ingenieur, BP 47870, 21078 Dijon Cedex (France)


    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes governed by the Fisher equation allow contrast enhancement and noise filtering, but produce a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, filters noise while preserving edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement, and edge preservation.
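    The edge-preserving behaviour of Perona-Malik diffusion can be sketched in one dimension: the diffusivity g(s) = exp(-(s/kappa)^2) shuts diffusion off across large jumps while smoothing small fluctuations. Step count, time step, and kappa below are illustrative, not the paper's values:

```python
import math

def perona_malik_1d(signal, steps=20, dt=0.2, kappa=2.0):
    """One-dimensional Perona-Malik anisotropic diffusion: explicit scheme
    u_i <- u_i + dt * (g(|dr|)*dr + g(|dl|)*dl), with g(s) = exp(-(s/kappa)^2).
    Boundary samples are held fixed."""
    g = lambda s: math.exp(-(s / kappa) ** 2)
    u = list(signal)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            dr = u[i + 1] - u[i]     # flux toward right neighbour
            dl = u[i - 1] - u[i]     # flux toward left neighbour
            new[i] = u[i] + dt * (g(abs(dr)) * dr + g(abs(dl)) * dl)
        u = new
    return u
```

    On a noisy step signal, the small fluctuations on each plateau are smoothed away while the large jump between plateaus is essentially untouched, since g is vanishingly small there.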

  1. ARTIP: Automated Radio Telescope Image Processing Pipeline (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh


    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts, and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  2. Image Processing Language. Phase 1 (United States)


    the domain of f ⊕ g is A ∪ B and (f ⊕ g)(x,y) = f(x,y) for (x,y) ∈ A − B; g(x,y) for (x,y) ∈ B − A; f(x,y) + g(x,y) for (x,y) ∈ A ∩ B. b. Multiplication (Range Induced). ... (f ∨ g)(x,y) = f(x,y) for (x,y) ∈ A − B; g(x,y) for (x,y) ∈ B − A; f(x,y) ∨ g(x,y) for (x,y) ∈ A ∩ B. d. Division (Range Induced). Each grey value z which is not zero has a

  3. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana


    It is often noted that celebrity endorsement advertising has been recognized as "a ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable consumer reactions, that is, a higher level of attention to the advertising messages, better recall of the message and the brand name, and more favorable evaluation and purchase intentions toward the brand, compared with the engagement of non-celebrity endorsers. A positive influence on a firm's profitability and stock prices has also been shown. Therefore marketers, led by the belief that celebrities are effective ambassadors for building a positive brand or company image and for improving competitive position, invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, along with recommendations for more effective use of this kind of advertising.

  4. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune


    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  5. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  6. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R


    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  7. Lung Cancer Detection Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mokhled S. AL-TARAWNEH


    Recently, image processing techniques have been widely used in several medical areas for image improvement in early detection and treatment stages, where the time factor is very important for discovering abnormalities in target images, especially for various cancer tumours such as lung cancer and breast cancer. Image quality and accuracy are the core factors of this research: image quality assessment and improvement depend on the enhancement stage, where a low-level pre-processing technique based on a Gabor filter within Gaussian rules is used. Following segmentation principles, an enhanced region of the object of interest is obtained and used as the basic foundation of feature extraction. Relying on general features, a normality comparison is made. In this research, the main features detected for accurate image comparison are pixel percentage and mask labelling.
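    The Gabor enhancement filter referred to above has a standard closed form: a Gaussian envelope multiplied by a sinusoidal carrier. A minimal kernel generator is sketched here with illustrative default parameters; the paper's actual filter-bank settings are not specified in the abstract:

```python
import math

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a Gabor filter kernel: Gaussian envelope of width sigma
    times a cosine carrier of wavelength lam, rotated by angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            row.append(envelope * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

    Convolving the image with a bank of such kernels at several orientations theta enhances oriented structures (e.g. lesion boundaries) before segmentation.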

  8. The NPS Virtual Thermal Image processing model


    Kenter, Yucel.


    A new virtual thermal image-processing model that has been developed at the Naval Postgraduate School is introduced in this thesis. This visualization program is based on an earlier work, the Visibility MRTD model, which is focused on predicting the minimum resolvable temperature difference (MRTD). The MRTD is a standard performance measure for forward-looking infrared (FLIR) imaging systems. It takes into account thermal imaging system modeling concerns, such as modulation transfer functions...

  9. Long-range non-contact imaging photoplethysmography: cardiac pulse wave sensing at a distance (United States)

    Blackford, Ethan B.; Estepp, Justin R.; Piasecki, Alyssa M.; Bowers, Margaret A.; Klosterman, Samantha L.


    Non-contact, imaging photoplethysmography uses photo-optical sensors to measure variations in light absorption, caused by blood volume pulsations, to assess cardiopulmonary parameters including pulse rate, pulse rate variability, and respiration rate. Recently, researchers have studied the applications and methodology of imaging photoplethysmography. Basic research has examined some of the variables affecting data quality and accuracy of imaging photoplethysmography, including signal processing, imager parameters (e.g. frame rate and resolution), lighting conditions, subject motion, and subject skin tone. This technology may be beneficial for long-term or continuous monitoring where contact measurements may be harmful (e.g. skin sensitivities) or where imperceptible or unobtrusive measurements are desirable. Using previously validated signal processing methods, we examined the effects of imager-to-subject distance on one-minute, windowed estimates of pulse rate. High-resolution video of 22 stationary participants was collected using an enthusiast-grade, mirrorless, digital camera equipped with a fully-manual, super-telephoto lens at distances of 25, 50, and 100 meters, with simultaneous contact measurements of electrocardiography and fingertip photoplethysmography. By comparison, previous studies have usually been conducted with imager-to-subject distances of up to only a few meters. Mean absolute errors for one-minute, windowed pulse rate estimates (compared to those derived from gold-standard electrocardiography) were 2.0, 4.1, and 10.9 beats per minute at distances of 25, 50, and 100 meters, respectively. Long-range imaging presents several unique challenges, including decreased observed light reflectance and smaller regions of interest. Nevertheless, these results demonstrate that accurate pulse rate measurements can be obtained over long imager-to-participant distances despite these constraints.
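    A windowed pulse-rate estimate of the kind described can be sketched as a generic frequency-domain estimator: detrend the window and pick the dominant frequency within the plausible cardiac band. This is an illustrative method, not necessarily the authors' validated processing chain; the 0.7-4 Hz band is an assumed choice.

```python
import math

def estimate_pulse_rate(signal, fs):
    """Estimate pulse rate (beats/min) as the dominant DFT frequency of the
    mean-removed signal within an assumed cardiac band of 0.7-4 Hz."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # detrend (remove DC)
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of DFT bin k
        if not (0.7 <= f <= 4.0):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return 60.0 * best_f
```

    With a 30 fps imager and a one-minute window, the bin spacing is 1 bpm, consistent with the error magnitudes reported above.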

  10. In-Vivo High Dynamic Range Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt


    Current vector flow systems are limited in their detectable range of blood flow velocities. Previous work on phantoms has shown that the velocity range can be extended using synthetic aperture directional beamforming combined with an adaptive multi-lag approach. This paper presents a first in-vivo...


    Directory of Open Access Journals (Sweden)

    D. Heinemann


    Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct functioning of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close-range photogrammetry allows determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured which allows for single-image camera calibration.

  12. Digital Image Processing in Private Industry. (United States)

    Moore, Connie


    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  13. Mapping spatial patterns with morphological image processing (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham


    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
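    The core of the classification can be reduced to a morphological sketch: a foreground pixel whose full 3x3 neighbourhood is foreground survives erosion and is 'core'; other foreground pixels are boundary pixels. This simplified version lumps the paper's 'perforated', 'edge', and 'patch' classes together as 'edge':

```python
def classify_pixels(mask):
    """Minimal morphological pattern classification on a binary land-cover
    mask (list of lists of 0/1): 'core' if the full 3x3 neighbourhood is
    foreground (i.e. the pixel survives erosion), else 'edge' for remaining
    foreground, else 'background'."""
    h, w = len(mask), len(mask[0])
    out = [['background'] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [
                mask[ny][nx]
                for ny in range(y - 1, y + 2)
                for nx in range(x - 1, x + 2)
                if 0 <= ny < h and 0 <= nx < w
            ]
            # interior pixels have 9 neighbours; border pixels never qualify as core
            out[y][x] = 'core' if len(neighbours) == 9 and all(neighbours) else 'edge'
    return out
```

    The full method additionally distinguishes whether a non-core pixel borders an interior hole ('perforated') or the outer boundary ('edge'), and whether its component contains any core at all ('patch').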

  14. Adaptive optics instrument for long-range imaging. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, T.M.


    The science and history of imaging through a turbulent atmosphere is reviewed in detail. Traditional methods for reducing the effects of turbulence are presented. A simplified method for turbulence reduction called the Sheared Coherent Interferometric Photography (SCIP) method is presented. Implementation of SCIP is discussed along with experimental results. Limitations in the use of this method are discussed along with recommendations for future improvements.

  15. Checking Fits With Digital Image Processing (United States)

    Davis, R. M.; Geaslen, W. D.


    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  16. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus


    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  17. Prototype system for proton beam range measurement based on gamma electron vertex imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Han Rim [Neutron Utilization Technology Division, Korea Atomic Energy Research Institute, 111, Daedeok-daero 989beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Kim, Sung Hun; Park, Jong Hoon [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of); Jung, Won Gyun [Heavy-ion Clinical Research Division, Korean Institute of Radiological & Medical Sciences, Seoul 01812 (Korea, Republic of); Lim, Hansang [Department of Electronics Convergence Engineering, Kwangwoon University, Seoul 01897 (Korea, Republic of); Kim, Chan Hyeong, E-mail: [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of)


    In proton therapy, for both therapeutic effectiveness and patient safety, it is very important to accurately measure the proton dose distribution, especially the range of the proton beam. For this purpose, we recently proposed a new imaging method named gamma electron vertex imaging (GEVI), in which the prompt gammas emitted from the nuclear reactions of the proton beam in the patient are converted to electrons, and the converted electrons are then tracked to determine the vertices of the prompt gammas, thereby producing a 2D image of the vertices. In the present study, we developed a prototype GEVI system, including dedicated signal processing and data acquisition systems, which consists of a beryllium plate (the electron converter) to convert the prompt gammas to electrons, two double-sided silicon strip detectors (hodoscopes) to determine the trajectories of the converted electrons, and a plastic scintillation detector (calorimeter) to measure their kinetic energies. The system uses triple coincidence logic and multiple energy windows to select only the events from prompt gammas. The detectors of the prototype GEVI system were evaluated for electronic noise level, energy resolution, and time resolution. Finally, the imaging capability of the GEVI system was tested by imaging a {sup 90}Sr beta source, a {sup 60}Co gamma source, and a 45-MeV proton beam in a PMMA phantom. The overall results show that the prototype GEVI system can image the vertices of the prompt gammas produced by proton nuclear interactions.

  18. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo


    In order to effectively remove shadow disturbances and enhance the robustness of computer vision image processing, this paper studies the detection and removal of image shadows. It examines continual shadow-removal algorithms based on integration, on the illumination surface, and on texture, introducing their working principles and implementation methods, and shows through tests that shadows can be processed effectively.

  19. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different ...

  20. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders - from Optical Triangulation to the Automotive Field. (United States)

    Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air


    With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
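    The underlying triangulation relation is simple: with baseline B between source and sensor, focal length f, and lateral spot offset x on the sensor, the distance is approximately D = B·f/x (small-angle form). A sketch with illustrative parameter values, not the paper's hardware constants:

```python
def triangulation_distance(baseline_m, focal_mm, pixel_pitch_um, spot_offset_px):
    """Active triangulation range estimate D = B * f / x, where x is the
    lateral offset of the imaged laser spot on the sensor. All parameter
    names and units here are illustrative assumptions."""
    x_m = spot_offset_px * pixel_pitch_um * 1e-6   # spot offset on sensor, metres
    f_m = focal_mm * 1e-3                          # focal length, metres
    return baseline_m * f_m / x_m
```

    Because D varies as 1/x, a one-pixel localisation error costs more resolution at long range, which is why sub-pixel spot localisation and exposure-time control matter for the quoted 0.25-0.6% figures.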

  1. Challenges in 3DTV image processing (United States)

    Redert, André; Berretty, Robert-Paul; Varekamp, Chris; van Geest, Bart; Bruijns, Jan; Braspenning, Ralph; Wei, Qingqing


    Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specifically for 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.

  2. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.


    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
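
    The nightly difference-imaging and transient-detection step can be illustrated with a toy numpy sketch (illustrative only; the actual DES pipeline is far more involved): subtract a template exposure from the new science exposure and flag pixels that deviate by several times a robust noise estimate of the difference.

```python
import numpy as np

def detect_transients(template, science, threshold=5.0):
    """Toy difference imaging: subtract a template exposure from a new
    science exposure and flag pixels deviating by more than `threshold`
    times the robust (MAD-based) noise of the difference image."""
    diff = science.astype(float) - template.astype(float)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    sigma = max(sigma, 1e-12)  # guard against an identically-zero difference
    return np.argwhere(np.abs(diff) > threshold * sigma)
```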

  3. Fluorescence imaging of viscous materials in the ultraviolet-visible wavelength range

    Energy Technology Data Exchange (ETDEWEB)

    Murr, Patrik J., E-mail:; Rauscher, Markus S.; Tremmel, Anton; Schardt, Michael; Koch, Alexander W. [Institute for Measurement Systems and Sensor Technology, Technische Universität München, Theresienstraße 90, 80333 München (Germany)


    This paper presents an innovative measurement principle, based on fluorescence imaging, for the quality control of viscous materials during a manufacturing process. The main contribution to the state of the art provided by this measurement system is that three equal fluorescence images of a static or moving viscous object are available in different optical paths. The independent images are obtained by two beam splitters connected in series, making it possible to evaluate each image separately. In our case, three optical bandpass filters with different center wavelengths of 405 nm, 420 nm, and 440 nm were used to filter the separate fluorescence images. The developed system is usable for the detection of impurities in the micrometer range. Further, incorrect mixing ratios of particular components and wrong single components in the viscous materials can be detected with the setup. Moreover, both static and dynamic measurements are possible; in this case the maximum object speed was 0.2 m/s for the dynamic measurements. Advantages of this measurement setup are its universality due to the use of standard optical components, its small dimensions, and the ease with which it can be integrated into ongoing processes. In addition, the measurement system works on a non-contact basis, so the expense for maintenance is at a very low level compared to currently available measurement setups for the investigated application. Furthermore, the setup provides for the first time a simultaneous analysis of more than one component and the detection of impurities concerning their nature and size in a manufacturing process.

  4. Brain's tumor image processing using shearlet transform (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander


    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features that frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects the tumor location in MR images; its features are extracted by a new shearlet transform.
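
    A shearlet transform is too involved for a short snippet, but the gradient-based edge detection it improves upon can be sketched in a few lines. The example below is a plain Sobel operator, explicitly not the paper's shearlet method, shown only as the isotropic baseline:

```python
import numpy as np

def sobel_edges(img, thresh):
    """Binary edge map from the gradient magnitude of 3x3 Sobel
    kernels. Unlike shearlets, this isotropic baseline cannot adapt
    to edge orientation across multiple scales."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot(np.sum(kx * patch), np.sum(ky * patch))
    return mag > thresh
```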

  5. Detection of pitting corrosion in steel using image processing


    Ghosh, Bidisha; Pakrashi, Vikram; Schoefs, Franck


    This paper presents an image-processing-based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected in this paper using the proposed methodology. It is observed that the prop...

  6. Lookup Table Hough Transform for Real Time Range Image Segmentation and Featureless Co-Registration

    NARCIS (Netherlands)

    Gorte, B.G.H.; Sithole, G.


    The paper addresses range image segmentation, particularly of data recorded by range cameras such as the Microsoft Kinect and the Mesa Swissranger SR4000. These devices record range images at video frame rates and allow for acquisition of 3-dimensional measurement sequences that can be used for 3D

  7. Invariance principle, multifractional Gaussian processes and long-range dependence


    Cohen, Serge; Marty, Renaud


    This paper establishes an invariance principle in which the limit process is a multifractional Gaussian process with a multifractional function taking its values in $(1/2,1)$. Some properties of this process, such as regularity and local self-similarity, are studied. Moreover, the limit process is compared to the multifractional Brownian motion.

  8. Image processing of 2D resistivity data for imaging faults (United States)

    Nguyen, F.; Garambois, S.; Jongmans, D.; Pirard, E.; Loke, M. H.


    A methodology to automatically locate limits or boundaries between different geological bodies in 2D electrical tomography is proposed, using a crest-line extraction process in gradient images. This method is applied to several synthetic models and to field data sets acquired at three experimental sites during the European project PALEOSIS, where trenches were dug. The results presented in this work are valid for electrical tomography data collected with a Wenner-alpha array and computed with an l1-norm (blocky inversion) optimization method. For the synthetic cases, three geometric contexts are modelled: a vertical and a dipping fault juxtaposing two different geological formations, and a step-like structure. A superficial layer can cover each geological structure. In these three situations, the method locates the synthetic faults and layer boundaries, and determines fault displacement, but with several limitations. The estimated fault positions correlate exactly with the synthetic ones if a conductive (or no) superficial layer overlies the studied structure. When a resistive layer with a thickness of 6 m covers the model, faults are positioned with a maximum error of 1 m. Moreover, when a resistive and/or thick top layer is present, the resolution decreases significantly for the fault displacement estimation (error up to 150%). The tests with the synthetic models for surveys using the Wenner-alpha array indicate that the proposed methodology is best suited to vertical and horizontal contacts. Application of the methodology to real data sets shows that a lateral resistivity contrast of 1:5-1:10 leads to exact fault location. A fault contact with a resistivity contrast of 1:0.75 and overlaid by a resistive layer with a thickness of 1 m gives a location error ranging from 1 to 3 m. Moreover, no result is obtained for a contact with very low contrast (˜1:0.85) overlaid by a resistive soil. 
The method shows poor results when vertical gradients are greater than
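
    The crest-line idea reduces to one dimension for illustration (a deliberate simplification of the authors' 2D gradient-image method): a lateral contact between two formations shows up as a maximum of the horizontal gradient of the resistivity section.

```python
import numpy as np

def boundary_position(resistivity, spacing_m=1.0):
    """Locate a lateral contact as the position of maximum absolute
    horizontal gradient along a 1-D resistivity profile; the electrode
    spacing sets the physical scale."""
    grad = np.abs(np.gradient(resistivity, spacing_m))
    return float(np.argmax(grad) * spacing_m)
```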

  9. Design and implementation of range-gated underwater laser imaging system (United States)

    Ge, Wei-long; Zhang, Xiao-hui


    A range-gated underwater laser imaging system is designed and implemented in this article; it consists of a laser illumination subsystem, a photoelectric imaging subsystem and a control subsystem. An underwater target-drone detection experiment has been carried out: a target 40 m from the range-gated underwater laser imaging system could be imaged in a pool whose water attenuation coefficient is 0.159 m-1. Experimental results show that the range-gated underwater laser imaging system can detect underwater objects effectively.
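
    The gating principle itself is a timing calculation: the camera shutter opens only after the round-trip travel time of the laser pulse to the target range, so that near-field backscatter is rejected. A minimal sketch (function name and refractive-index value are assumptions, not from the article):

```python
def gate_delay_ns(distance_m, n_water=1.33):
    """Round-trip travel time, in nanoseconds, of a laser pulse to a
    target at `distance_m` in water (refractive index ~1.33). The
    camera gate opens after this delay so that only light returning
    from the selected range slice is imaged."""
    c_vacuum = 299_792_458.0  # speed of light in vacuum, m/s
    return 2.0 * distance_m * n_water / c_vacuum * 1e9
```

    For the 40 m pool experiment this gives a gate delay of roughly 355 ns.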

  10. Digital Image Processing application to spray and flammability studies (United States)

    Hernan, M. A.; Parikh, P.; Sarohia, V.


    Digital Image Processing has been integrated into a new technique for measuring fuel spray characteristics. The advantages of this technique are a wide dynamic range of droplet sizes and the ability to account for nonspherical droplet shapes, which is not possible with other spray assessment techniques. Finally, the technique has been applied to the study of turbojet engine fuel nozzle atomization performance with Jet A and antimisting fuel.

  11. Fundamental Concepts of Digital Image Processing (United States)

    Twogood, R. E.


    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  12. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.


    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  13. Traffic analysis and control using image processing (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.


    This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. The concept compares computational images with reference images of the street in order to determine the traffic level percentage and set the timing for the traffic signal accordingly, reducing stoppage at traffic lights. The concept proposes to solve real-life scenarios in the streets, enriching traffic lights by adding image receivers such as HD cameras and image processors. The input is then imported into MATLAB to be used as a method for calculating the traffic on roads. The results are computed in order to adjust the traffic-light timings on a particular street, also with respect to other similar proposals, but with the added value of solving a real, large instance.
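
    The comparison step can be sketched in a few lines. The paper works in MATLAB; the Python version below is an illustrative stand-in (the threshold and timing range are assumed values): the fraction of pixels that differ noticeably from an empty-street reference serves as the traffic level, which is then mapped onto a green-light duration.

```python
import numpy as np

def traffic_level_percent(empty_road, current, diff_thresh=30):
    """Percentage of road pixels that changed relative to a reference
    image of the empty street; used as a proxy for traffic density."""
    diff = np.abs(current.astype(int) - empty_road.astype(int))
    return 100.0 * float(np.mean(diff > diff_thresh))

def green_time_seconds(level_percent, t_min=10.0, t_max=60.0):
    """Linear mapping of traffic level (0-100%) onto a signal timing."""
    return t_min + (t_max - t_min) * level_percent / 100.0
```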

  14. Digital-image processing and image analysis of glacier ice (United States)

    Fitzpatrick, Joan J.


    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  15. Employing image processing techniques for cancer detection using microarray images. (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid


    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and the detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, from the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A brief review of digital image processing (United States)

    Billingsley, F. C.


    The review is presented with particular reference to Skylab S-192 and Landsat MSS imagery. Attention is given to rectification (calibration) processing with emphasis on geometric correction of image distortions. Image enhancement techniques (e.g., the use of high pass digital filters to eliminate gross shading to allow emphasis of the fine detail) are described along with data analysis and system considerations (software philosophy).

  17. Geographical ranges in macroecology: Processes, patterns and implications

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe

    , are distributed over the entire Earth. Species’ ranges are one of the basic units of the science of macroecology, which deals with patterns in the distribution of life on Earth. An example of such patterns is the large geographic variation in species richness between areas. These patterns are closely linked......, I draw upon a wide range of approaches, including statistical comparative analysis, computer simulations and null models. The core of the thesis is constituted by five independent scientific articles. These chapters fall naturally within two thematic groups: The first group consists of articles...... that investigate how ecology and evolution determine species’ ranges. The central paper in this group is a large review article about one of the best described patterns in ecology: That species with large ranges tend to also be very locally abundant within their range. In the article I review the potential causes...

  18. PCB Fault Detection Using Image Processing (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.


    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, must be considered to ensure the good image quality needed for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of it. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process costs 0% of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs. In this paper a method to identify the defects in natural PCB images and the associated practical issues are addressed using software tools; some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short, and nick. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
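
    The reference-comparison approach common to such inspection systems can be sketched as follows (an illustrative simplification; the paper additionally handles tilt, lighting, and scale issues in natural images): XOR a binarized defect-free template against the binarized test image, so that every mismatch marks a candidate defect.

```python
import numpy as np

def pcb_defects(reference, test_image):
    """Candidate defect pixels (pin holes, nicks, shorts, cuts) as the
    XOR of a defect-free binary template and the binarized test image."""
    mismatch = np.logical_xor(reference > 0, test_image > 0)
    return np.argwhere(mismatch)
```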

  19. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles


    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  20. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors (United States)

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi


    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, designed to minimize the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692
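
    For contrast with the paper's analog scheme, conventional digital gamma correction applied after a linear ADC looks like the sketch below; folding this curve into the VCO-based ADC is what lets the authors avoid the re-quantization that makes the digital version noisy.

```python
def gamma_correct(code, full_scale=1023, gamma=2.2):
    """Conventional post-ADC digital gamma correction of a 10-bit code:
    out = full_scale * (in / full_scale) ** (1 / gamma).
    Re-quantizing (rounding) the curved output is what adds
    quantization noise in the digital approach."""
    return round(full_scale * (code / full_scale) ** (1.0 / gamma))
```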

  1. Iterative elimination algorithm for thermal image processing

    Directory of Open Access Journals (Sweden)

    A. H. Alkali


    Segmentation is employed in everyday image processing in order to remove unwanted objects present in the image. There are scenarios where segmentation alone does not do the intended job automatically. In such cases, subjective means are required to eliminate the remnants, which is time consuming, especially when multiple images are involved, and is not feasible in real-time applications. This is compounded in thermal imaging, where foreground and background objects can have similar thermal distributions, making it impossible for straight segmentation to distinguish between the two. In this study, a real-time Iterative Elimination Algorithm (IEA) was developed, and it was shown that false foreground was removed in thermal images where segmentation failed to do so. The algorithm was tested on thermal images that were segmented using inter-variance thresholding. The thermal images contained human subjects as foreground, with some background objects having thermal distributions similar to the subject. Informed consent was obtained from the subjects who voluntarily took part in the study. The IEA was only tested on thermal images and failed when a false background object was connected to the foreground after segmentation.
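
    One simple elimination pass of this kind can be sketched with connected-component analysis (an illustrative reduction; the paper's IEA iterates and is tailored to thermal imagery): after thresholding, label the 4-connected foreground regions and keep only the largest, discarding false-foreground blobs.

```python
import numpy as np
from collections import deque

def keep_largest_region(mask):
    """Label 4-connected foreground regions of a binary mask with a
    breadth-first flood fill and keep only the largest one."""
    labels = np.zeros(mask.shape, int)
    sizes = {}
    current = 0
    for i, j in np.argwhere(mask):
        if labels[i, j]:
            continue
        current += 1
        queue = deque([(int(i), int(j))])
        labels[i, j] = current
        count = 0
        while queue:
            y, x = queue.popleft()
            count += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v, u = y + dy, x + dx
                if (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]
                        and mask[v, u] and not labels[v, u]):
                    labels[v, u] = current
                    queue.append((v, u))
        sizes[current] = count
    if not sizes:
        return mask.copy()
    largest = max(sizes, key=sizes.get)
    return labels == largest
```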

  2. Wide Output Range Power Processing Unit for Electric Propulsion Project (United States)

    National Aeronautics and Space Administration — Hall thrusters can be operated over a wide range of specific impulse while maintaining high efficiency. However S/C power system constraints on electric propulsion...

  3. Wide Output Range Power Processing Unit for Electric Propulsion Project (United States)

    National Aeronautics and Space Administration — A power supply concept capable of operation over 25:1 and 64:1 impedance ranges at full power has been successfully demonstrated in our Phase I effort at...

  4. Laser range scanning for image-guided neurosurgery: investigation of image-to-physical space registrations. (United States)

    Cao, Aize; Thompson, R C; Dumpuri, P; Dawant, B M; Galloway, R L; Ding, S; Miga, M I


    In this article a comprehensive set of registration methods is utilized to provide image-to-physical space registration for image-guided neurosurgery in a clinical study. Central to all methods is the use of textured point clouds as provided by laser range scanning technology. The objective is to perform a systematic comparison of registration methods that include both extracranial (skin marker point-based registration (PBR), and face-based surface registration) and intracranial methods (feature PBR, cortical vessel-contour registration, a combined geometry/intensity surface registration method, and a constrained form of that method to improve robustness). The platform facilitates the selection of discrete soft-tissue landmarks that appear on the patient's intraoperative cortical surface and the preoperative gadolinium-enhanced magnetic resonance (MR) image volume, i.e., true corresponding novel targets. In an 11 patient study, data were taken to allow statistical comparison among registration methods within the context of registration error. The results indicate that intraoperative face-based surface registration is statistically equivalent to traditional skin marker registration. The four intracranial registration methods were investigated and the results demonstrated a target registration error of 1.6 +/- 0.5 mm, 1.7 +/- 0.5 mm, 3.9 +/- 3.4 mm, and 2.0 +/- 0.9 mm, for feature PBR, cortical vessel-contour registration, unconstrained geometric/intensity registration, and constrained geometric/intensity registration, respectively. When analyzing the results on a per case basis, the constrained geometric/intensity registration performed best, followed by feature PBR, and finally cortical vessel-contour registration. Interestingly, the best target registration errors are similar to targeting errors reported using bone-implanted markers within the context of rigid targets. The experience in this study as with others is that brain shift can compromise extracranial
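
    The computation underlying point-based registration (PBR) is the classical least-squares rigid alignment of matched fiducials, sketched below with numpy (an illustrative implementation, not the authors' platform code); target registration error is then the residual distance at target points.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (SVD / Procrustes) of matched
    3-D point sets: returns R, t such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def target_registration_error(R, t, src_targets, dst_targets):
    """RMS distance between mapped and true target positions."""
    mapped = src_targets @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - dst_targets) ** 2, axis=1))))
```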

  5. A representation for mammographic image processing. (United States)

    Highnam, R; Brady, M; Shepstone, B


    Mammographic image analysis is typically performed using standard, general-purpose algorithms. We note the dangers of this approach and show that an alternative physics-model-based approach can be developed to calibrate the mammographic imaging process. This enables us to obtain, at each pixel, a quantitative measure of the breast tissue. The measure we use is h(int) and this represents the thickness of 'interesting' (non-fat) tissue between the pixel and the X-ray source. The thicknesses over the image constitute what we term the h(int) representation, and it can most usefully be regarded as a surface that conveys information about the anatomy of the breast. The representation allows image enhancement through removing the effects of degrading factors, and also effective image normalization since all changes in the image due to variations in the imaging conditions have been removed. Furthermore, the h(int) representation gives us a basis upon which to build object models and to reason about breast anatomy. We use this ability to choose features that are robust to breast compression and variations in breast composition. In this paper we describe the h(int) representation, show how it can be computed, and then illustrate how it can be applied to a variety of mammographic image processing tasks. The breast thickness turns out to be a key parameter in the computation of h(int), but it is not normally recorded. We show how the breast thickness can be estimated from an image, and examine the sensitivity of h(int) to this estimate. We then show how we can simulate any projective X-ray examination and can simulate the appearance of anatomical structures within the breast. We follow this with a comparison between the h(int) representation and conventional representations with respect to invariance to imaging conditions and the surrounding tissue. 
Initial results indicate that image analysis is far more robust when specific consideration is taken of the imaging process and

  6. Dictionary of computer vision and image processing

    CERN Document Server

    Fisher, Robert B; Dawson-Howe, Kenneth; Fitzgibbon, Andrew; Robertson, Craig; Trucco, Emanuele; Williams, Christopher K I


    Written by leading researchers, the 2nd Edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions with short examples or mathematical precision where necessary for clarity that ultimately makes it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, through to early career researchers to help build u

  7. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge


    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  8. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao


    Landslides are among the most destructive geo-hazards and pose great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research for obtaining critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding reduce the texture quality of the stereo images, making the image matching process difficult and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints-based robust image matching approach for poor-texture close-range images, particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images to generate scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first, feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. 
In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric

  9. A low-noise wide dynamic range CMOS image sensor with low and high temperatures resistance (United States)

    Mizobuchi, Koichi; Adachi, Satoru; Tejada, Jose; Oshikubo, Hiromichi; Akahane, Nana; Sugawa, Shigetoshi


    A temperature-resistant 1/3 inch SVGA (800×600 pixels) 5.6 μm pixel pitch wide-dynamic-range (WDR) CMOS image sensor has been developed using a lateral-overflow-integration-capacitor (LOFIC) in each pixel. The sensor chips are fabricated in a 0.18 μm 2P3M process with a totally optimized front-end-of-line (FEOL) and back-end-of-line (BEOL) for lower dark current. By implementing a low-electrical-field potential design for the photodiodes, reducing damage, recovering crystal defects and terminating interface states in the FEOL+BEOL, the dark current is improved to 12 e-/pixel·sec at 60 °C, a 50% reduction from the previous very-low-dark-current (VLDC) FEOL, and its contribution to the temporal noise is improved. Furthermore, design optimizations of the readout circuits, especially a signal-and-noise-hold circuit and a programmable-gain-amplifier (PGA), are also implemented. The measured temporal noise is 2.4 e- rms at 60 fps (36 MHz operation). The dynamic range (DR) is extended to 100 dB with a 237 ke- full-well capacity. In order to secure the temperature resistance, the sensor chip also receives both an inorganic cap on the micro lenses and a metal hermetically sealed package assembly. Image samples at low and high temperatures show significant improvement in image quality.
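
    The reported figures are mutually consistent: the linear dynamic range implied by the full-well capacity and the temporal noise is 20·log10(237 000 / 2.4) ≈ 100 dB. A one-line check using the standard definition (not code from the paper):

```python
import math

def dynamic_range_db(full_well_e, noise_e_rms):
    """Image sensor dynamic range in dB: 20*log10(full well / noise)."""
    return 20.0 * math.log10(full_well_e / noise_e_rms)
```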

  10. CT scan range estimation using multiple body parts detection: let PACS learn the CT image content. (United States)

    Wang, Chunliang; Lundström, Claes


    The aim of this study was to develop an efficient CT scan range estimation method that is based on the analysis of image data itself instead of metadata analysis. This makes it possible to quantitatively compare the scan range of two studies. In our study, 3D stacks are first projected to 2D coronal images via a ray casting-like process. Trained 2D body part classifiers are then used to recognize different body parts in the projected image. The detected candidate regions go into a structure grouping process to eliminate false-positive detections. Finally, the scale and position of the patient relative to the projected figure are estimated based on the detected body parts via a structural voting. The start and end lines of the CT scan are projected to a standard human figure. The position readout is normalized so that the bottom of the feet represents 0.0, and the top of the head is 1.0. Classifiers for 18 body parts were trained using 184 CT scans. The final application was tested on 136 randomly selected heterogeneous CT scans. Ground truth was generated by asking two human observers to mark the start and end positions of each scan on the standard human figure. When compared with the human observers, the mean absolute error of the proposed method is 1.2% (max: 3.5%) and 1.6% (max: 5.4%) for the start and end positions, respectively. We proposed a scan range estimation method using multiple body parts detection and relative structure position analysis. In our preliminary tests, the proposed method delivered promising results.
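The normalised position readout described above (bottom of the feet = 0.0, top of the head = 1.0) amounts to a linear mapping, and the evaluation is a mean absolute error against observer marks. A minimal sketch with hypothetical landmark coordinates (the paper's actual estimation uses body-part detection, not known landmarks):

```python
def normalized_position(z, feet_z, head_z):
    """Map a scanner table coordinate to the standard human figure:
    the bottom of the feet maps to 0.0 and the top of the head to 1.0."""
    return (z - feet_z) / (head_z - feet_z)

def mean_absolute_error(estimates, ground_truth):
    """Per-study evaluation as in the paper: mean |estimate - observer mark|."""
    return sum(abs(e - g) for e, g in zip(estimates, ground_truth)) / len(estimates)
```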

  11. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging (United States)

    Hardie, Russell C.; LeMaster, Daniel A.


    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance. This is in addition to comparing the long- and short-exposure PSFs, and isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet has excellent performance in comparison to state-of-the-art benchmark methods.
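The restoration stage of the BMWF pipeline — average the registered frames, then apply a Wiener filter — can be sketched in the frequency domain. The PSF and noise-to-signal ratio below are placeholders; the paper's actual PSF model depends on the level of geometric correction achieved, which this sketch does not attempt to reproduce:

```python
import numpy as np

def wiener_restore(avg_frame, psf, nsr=0.01):
    """Frequency-domain Wiener filter: G = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=avg_frame.shape)
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(avg_frame) * G))

def bmwf(frames, psf, nsr=0.01):
    """Average the (already registered) frames, then Wiener-filter the result."""
    return wiener_restore(np.mean(frames, axis=0), psf, nsr)
```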

  12. Processing Images of Craters for Spacecraft Navigation (United States)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.


    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
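Step 4 above (fitting an ellipse to each group of crater edges) is commonly done with an algebraic conic fit. A minimal least-squares sketch — not the flight algorithm — that recovers the ellipse centre from edge points:

```python
import numpy as np

def fit_conic_center(xs, ys):
    """Fit a general conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 to edge points
    by taking the smallest singular vector of the design matrix, then recover
    the centre from the gradient-equals-zero condition."""
    M = np.column_stack([xs*xs, xs*ys, ys*ys, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(M)
    A, B, C, D, E, F = Vt[-1]                      # conic coefficients, up to scale
    # Centre solves [2A B; B 2C] [xc yc]^T = [-D -E]^T
    xc, yc = np.linalg.solve(np.array([[2*A, B], [B, 2*C]]),
                             np.array([-D, -E]))
    return xc, yc
```

A production fitter would add the ellipse-specific constraint (e.g. Fitzgibbon's direct least squares) and the image-domain refinement the abstract mentions.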

  13. Hardware implementation of machine vision systems: image and video processing (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe


    This contribution focuses on different topics covered by the special issue titled `Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.

  14. Anomalous diffusion process applied to magnetic resonance image enhancement (United States)

    Senra Filho, A. C. da S.; Garrido Salmon, C. E.; Murta Junior, L. O.


    Diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filter when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or non-central χ noise distributions. AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches in qualitative and quantitative MRI enhancement.

  15. Anomalous diffusion process applied to magnetic resonance image enhancement. (United States)

    Senra Filho, A C da S; Salmon, C E Garrido; Murta Junior, L O


    Diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filter when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or non-central χ noise distributions. AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches in qualitative and quantitative MRI enhancement.
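The abstract does not give the discrete update used. As one common anomalous-diffusion formulation, a porous-media-type step I_t = ∇²(I^q) can illustrate the role of the parameter q (q = 1 recovers classical linear diffusion); the following NumPy sketch is written under that assumption and is not the paper's filter:

```python
import numpy as np

def anomalous_diffusion(img, q=1.4, dt=0.05, steps=20, eps=1e-6):
    """Porous-media-type anomalous diffusion: explicit updates of I_t = Laplacian(I^q)
    on a periodic grid. q = 1 gives classical (Gaussian-like) linear diffusion."""
    u = img.astype(float)
    for _ in range(steps):
        p = np.power(np.maximum(u, eps), q)   # nonlinearity controlled by q
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4*p)
        u = u + dt * lap
    return u
```

The explicit scheme is only stable for small dt; an anisotropic (edge-stopping) variant would additionally modulate the flux by a gradient-dependent diffusivity.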

  16. Onboard Image Processing System for Hyperspectral Sensor. (United States)

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun


    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry for the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve the image decorrelation and entropy coding performance of FELICS, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
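The Golomb-Rice entropy coder mentioned above codes prediction residuals as a unary quotient plus a k-bit remainder. A minimal fixed-k sketch (the zigzag mapping and bit layout are a common textbook form, not necessarily the flight implementation, which adapts k per context):

```python
def rice_encode(residuals, k):
    """Golomb-Rice code: zigzag-map each residual to a non-negative integer,
    then emit a unary quotient, a 0 terminator, and a k-bit remainder."""
    bits = []
    for r in residuals:
        m = 2*r if r >= 0 else -2*r - 1          # zigzag: 0,-1,1,-2,... -> 0,1,2,3,...
        q, rem = m >> k, m & ((1 << k) - 1)
        bits += [1]*q + [0] + [(rem >> i) & 1 for i in range(k-1, -1, -1)]
    return bits

def rice_decode(bits, n, k):
    """Inverse of rice_encode for n symbols."""
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == 1:                      # count the unary quotient
            q, i = q + 1, i + 1
        i += 1                                   # skip the 0 terminator
        rem = 0
        for _ in range(k):
            rem = (rem << 1) | bits[i]; i += 1
        m = (q << k) | rem
        out.append(m >> 1 if m % 2 == 0 else -((m + 1) >> 1))  # undo zigzag
    return out
```

Small residuals (good prediction) produce short codes, which is why the two-dimensional interpolation predictor matters.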




    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  18. Simplified labeling process for medical image segmentation. (United States)

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N


    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming, and sometimes unnecessary. We propose a robust logistic regression algorithm to handle label outliers, so that doctors do not need to waste time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms.
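The abstract does not specify the robust estimator used. One simple way to tolerate label outliers is to iteratively down-weight high-loss samples during fitting; the following sketch is a hypothetical illustration of that idea, not the paper's algorithm:

```python
import numpy as np

def robust_logreg(X, y, rounds=3, iters=500, lr=0.5):
    """Logistic regression that softly down-weights high-loss (likely mislabeled) samples."""
    Xb = np.column_stack([X, np.ones(len(X))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    sample_w = np.ones(len(y), dtype=float)
    for _ in range(rounds):
        for _ in range(iters):                   # weighted gradient descent
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * (Xb.T @ (sample_w * (p - y))) / len(y)
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        loss = -(y*np.log(p + 1e-12) + (1 - y)*np.log(1 - p + 1e-12))
        sample_w = 1.0 / (1.0 + loss)            # outliers get small influence
    return w

def predict(w, X):
    Xb = np.column_stack([X, np.ones(len(X))])
    return (Xb @ w > 0).astype(int)
```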

  19. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu


    Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by raising for discussion the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and operations of thinking at the other. The explicative possibilities of Tversky's notion of diagrammatic space are explored as an element necessary to understand the genesis of graphic behaviour and to define a new construct, graphic intelligence.

  20. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT)

    Directory of Open Access Journals (Sweden)

    Boris Jutzi


    Full Text Available The real and imaginary parts are proposed as an alternative to the usual Polar representation of complex-valued images. It is proven that the transformation from Polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT) detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: Range Imaging, Laser Scanning, and Structured Light Projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect.

  1. Digital image processing of vascular angiograms (United States)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.


    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  2. The Range of Microbial Risks in Food Processing

    NARCIS (Netherlands)

    Zwietering, M.H.; Straver, J.M.; Asselt, van E.D.


    Foodborne illnesses can be caused by a wide range of microorganisms. Data analysis can help to determine which microorganisms give the highest contribution to the number of foodborne illnesses. This helps to decide which pathogen(s) to focus on in order to reduce the number of illnesses. The same

  3. Northwest range-plant symbols adapted to automatic data processing. (United States)

    George A. Garrison; Jon M. Skovlin


    Many range technicians, agronomists, foresters, biologists, and botanists of various educational institutions and government agencies in the Northwest have been using a four-letter symbol list or code compiled 12 years ago from records of plants collected by the U.S. Forest Service in Oregon and Washington. This code has served well as a means of entering plant names...

  4. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor


    Full Text Available The method of speckle pattern processing based on digital image correlation is tested in the current work. Three of the most widely used correlation-coefficient formulas are tested. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacement are used. The optimal size of the speckle pattern template used for determining the correlation, and hence the speckle pattern displacement, is also considered in the work.
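One of the standard correlation-coefficient formulas used in digital image correlation is the zero-normalised cross-correlation (ZNCC). A minimal sketch of locating a speckle template's displacement by exhaustive ZNCC search (names and the ±5-pixel search window are illustrative):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a*b).sum() / np.sqrt((a*a).sum() * (b*b).sum()))

def find_shift(ref, cur, tpl_slice, search=5):
    """Locate the template taken from `ref` inside `cur` by exhaustive ZNCC search."""
    ys, xs = tpl_slice
    tpl = ref[ys, xs]
    h, w = tpl.shape
    best, best_dy_dx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = ys.start + dy, xs.start + dx
            win = cur[y0:y0 + h, x0:x0 + w]
            if win.shape != tpl.shape:
                continue                     # window fell off the image border
            c = zncc(tpl, win)
            if c > best:
                best, best_dy_dx = c, (dy, dx)
    return best_dy_dx
```

Template size trades off uniqueness of the match against spatial resolution of the displacement field, which is the trade-off the abstract refers to.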

  5. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick


    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  6. Image Processing in Amateur Astro-Photography

    Indian Academy of Sciences (India)

    Anurag Garg. Classroom, Resonance – Journal of Science Education, Volume 15, Issue 2, February 2010, pp. 170-175.

  7. Stochastic processes, estimation theory and image enhancement (United States)

    Assefi, T.


    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  8. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.


    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve a major problem with crime scene photos: images taken with standard equipment and without the benefit of enhancement software would be inconclusive, allowing guilty parties to go free for lack of evidence.

  9. An image-processing methodology for extracting bloodstain pattern features. (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G


    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Digital processing of stereoscopic image pairs. (United States)

    Levine, M. D.


    The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.
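The point-wise comparison of corresponding points in the left and right images can be sketched as classical SAD (sum of absolute differences) block matching along the epipolar line — a simple, non-adaptive stand-in for the adaptive heuristics the abstract describes:

```python
import numpy as np

def disparity_at(left, right, y, x, block=5, max_d=10):
    """SAD block matching: disparity of pixel (y, x) in the left image,
    searching leftwards along the same row in the right image."""
    r = block // 2
    tpl = left[y-r:y+r+1, x-r:x+r+1]
    best_d, best_sad = 0, np.inf
    for d in range(max_d + 1):
        if x - r - d < 0:
            break                                # search window left the image
        win = right[y-r:y+r+1, x-r-d:x+r+1-d]
        sad = np.abs(tpl - win).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d
```

Disparity is then converted to range via the camera baseline and focal length, yielding the "range picture" used for scene segmentation.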

  11. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)


    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in both the forward and backward projection. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
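The material-dependent blurring can be sketched as applying a per-material range kernel to the activity before projecting. This 1D separable toy (a single 0° view, hypothetical kernels, material labels as from a segmented CT) only illustrates where the blur enters the forward model, not the FIRST implementation:

```python
import numpy as np

def range_blurred_forward_projection(activity, material, kernels):
    """Blur the activity with a per-material positron-range kernel (1D along rows),
    then forward-project by summing along columns of the grid (one 0-degree view)."""
    blurred = np.zeros_like(activity, dtype=float)
    for m, k in kernels.items():
        mask = (material == m)
        # convolve each row of this material's activity with its range profile
        contrib = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'),
                                      1, np.where(mask, activity, 0.0))
        blurred += contrib
    return blurred.sum(axis=0)
```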

  12. Improving the effectiveness of part processing by dynamic control of machining over a high spindle-speed range

    Directory of Open Access Journals (Sweden)

    Yu.V. Shapoval


    Full Text Available This article analyzes the possibility of increasing the efficiency of machining parts with diameters of up to 20 mm, namely: the vibration resistance of the cutting process through cutting-speed control during machining; the forecasting and selection of rotational frequencies that ensure the stability of the processing system; and dynamic control of the process by displacement of an additional mass. A method for investigating vibration processes during turning is developed. Processing of the experimental data showed that when an oscillatory motion is superimposed on the spindle rotation, the overall level of oscillation decreases, which is reflected in the quality of the machined surface. Choosing in advance a spindle rotation frequency range in which the amplitude of tool oscillation in the radial direction to the workpiece is lowest makes it possible to increase processing efficiency, while maintaining the drawing requirements for roughness, by increasing the spindle rotational speed. Aligning a node of the natural modes of oscillation with the cutting zone, by dynamically controlling the oscillations of the lathe through increased inertial characteristics of the machine and a reduced tool oscillation amplitude, can improve machining accuracy and the roughness of the machined surface at higher spindle speeds.

  13. Subband/transform functions for image processing (United States)

    Glover, Daniel


    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
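The 2×2 block-transform subbanding described above (a low-resolution LL band plus edge-carrying detail bands, with perfect reconstruction) can be sketched in NumPy rather than MATLAB; the function names are illustrative, and the orthonormal 2-point Walsh-Hadamard butterfly is applied to each 2×2 block:

```python
import numpy as np

def subband4(img):
    """Split an image (even dimensions) into LL, LH, HL, HH subbands
    using the 2x2 Walsh-Hadamard block transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-pass: half-resolution version of the image
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def inverse4(ll, lh, hl, hh):
    """Perfect reconstruction: the orthonormal transform is its own inverse."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((2*ll.shape[0], 2*ll.shape[1]))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out
```

Cascading `subband4` on `ll` alone gives the octave (seven-band) structure; cascading it on all four bands gives the uniform sixteen-band structure.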

  14. Automation of data processing | G | African Journal of Range and ...

    African Journals Online (AJOL)

    Data processing can be time-consuming when experiments with advanced designs are employed. This, coupled with a shortage of research workers, necessitates automation. It is suggested that with automation the first step is to determine how the data must be analysed. The second step is to determine what programmes ...

  15. Driver drowsiness detection using ANN image processing (United States)

    Vesselenyi, T.; Moca, S.; Rus, A.; Mitran, T.; Tătaru, B.


    The paper presents a study regarding the possibility of developing a drowsiness detection system for car drivers based on three types of methods: EEG and EOG signal processing and driver image analysis. In previous works the authors have described research on the first two methods. In this paper the authors study the possibility of detecting the drowsy or alert state of the driver based on images taken during driving, by analyzing the state of the driver's eyes: opened, half-opened and closed. For this purpose two kinds of artificial neural networks were employed: a one-hidden-layer network and an autoencoder network.

  16. Study on enhancing dynamic range of CCD imaging based on digital micro-mirror device (United States)

    Zhou, Wang


    A design using a digital micro-mirror device (DMD) as a spatial light modulator (SLM) for an area-array CCD is proposed in this paper. It addresses the problem of exposing high-contrast scenes with an ordinary CCD camera, where images appear over-exposed or under-exposed and details of the photo are lost. The method adopts a forecast of the imaged scene, and the CCD exposure is deliberately designed with multiple exposure regions and exposure times. Through the modulation function of the DMD micro-mirrors, the CCD is exposed by sub-region and by time-sharing, while a purposely designed image data structure enhances the dynamic range of the area CCD. Experiments show that this method not only improves the visible quality of an image, with clear details in backlighting or highlights, but also enhances the dynamic range of the image data. High-quality images and high-dynamic-range data are captured in real time, and "fusion" software is no longer required.

  17. Research on Image processing in laser triangulation system

    Energy Technology Data Exchange (ETDEWEB)

    Liu Kai; Wang Qianqian; Wang Yang; Liu Chenrui, E-mail: [School of Optoelectronics, Beijing Institute of Technology, 100081 Beijing (China)


    Laser triangulation ranging is a displacement and distance measurement method based on the principle of optical triangulation, using a laser as the light source. It is superior in structural simplicity, speed, accuracy, anti-jamming capability, and adaptability. Therefore it is widely used in various fields such as industrial production, road testing, three-dimensional face detection, and so on. In the current study the features of the spot images acquired by the CCD in a laser triangulation system were analyzed, and appropriate algorithms for the spot images were discussed. Experimental results showed that the precision and stability of the spot location were enhanced significantly after applying these image processing algorithms.
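A common spot-location algorithm in such systems is an intensity-weighted centroid above a threshold, which gives sub-pixel precision; the abstract does not name its algorithms, so this is a generic sketch:

```python
import numpy as np

def spot_centroid(img, threshold=0.0):
    """Sub-pixel laser-spot location: intensity-weighted centre of mass
    of all pixels brighter than the threshold."""
    w = np.where(img > threshold, img, 0.0)   # suppress background below threshold
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```

The recovered spot position on the CCD is then converted to distance through the triangulation geometry (baseline, lens focal length, and incidence angle).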

  18. Illuminating magma shearing processes via synchrotron imaging (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.


    Our understanding of geomaterial behaviour and processes has long fallen short due to inaccessibility into material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes to the use of seismic tomography to image magmatic bodies in the crust, and most recently, we have added synchrotron-based x-ray tomography to image the inside of material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  19. Automatic image analysis of multicellular apoptosis process. (United States)

    Ziraldo, Riccardo; Link, Nichole; Abrams, John; Ma, Lan


    Apoptotic programmed cell death (PCD) is a common and fundamental aspect of developmental maturation. Image processing techniques have been developed to detect apoptosis at the single-cell level in a single still image, while an efficient algorithm to automatically analyze the temporal progression of apoptosis in a large population of cells is unavailable. In this work, we have developed an ImageJ-based program that can quantitatively analyze time-lapse microscopy movies of live tissues undergoing apoptosis with a fluorescent cellular marker, and subsequently extract the temporospatial pattern of multicellular response. The protocol is applied to characterize apoptosis of Drosophila wing epithelium cells at eclosion. Using natural anatomic structures as reference, we identify dynamic patterns in the progression of apoptosis within the wing tissue, which not only confirms the previously observed collective cell behavior from a quantitative perspective for the first time, but also reveals a plausible role played by the anatomic structures in Drosophila apoptosis.

  20. Efficient processing of 3-sided range queries with probabilistic guarantees

    DEFF Research Database (Denmark)

    Kaporis, Alexis; Papadopoulos, Apostolos; Sioutas, Spyros


This work studies the problem of 2-dimensional searching for the 3-sided range query of the form [a, b] x (-∞, c] in both main and external memory, by considering a variety of input distributions. A dynamic linear main memory solution is proposed, which answers 3-sided queries in O(log n + t) worst... over the O(log n) update time bound achieved by the classic Priority Search Tree of McCreight [23], as well as over the Fusion Priority Search Tree of Willard [30], which requires O(log n/log log n) time for all operations. Moreover, we externalize this solution, gaining O(logB n + t/B) worst case...... and O(logB log n) amortized expected with high probability I/Os for query and update operations respectively, where B is the disk block size. Then, combining the Modified Priority Search Tree [27] with the Priority Search Tree [23], we achieve a query time of O(log log n + t) expected with high......

  1. Sorting Olive Batches for the Milling Process Using Image Processing. (United States)

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan


The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically the different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing techniques have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results.
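A minimal sketch of the histogram-feature pipeline described above, using a nearest-centroid rule as a simple stand-in for the paper's discriminant-analysis classifier (the bin count, sample sizes, and intensity distributions are invented for illustration; ground samples are made darker than tree samples here):

```python
import numpy as np

def histogram_features(img, bins=8):
    """Normalised grey-level histogram used as the feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def nearest_centroid(train_feats, train_labels, feat):
    """Classify by distance to per-class mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                            axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(feat - centroids[c]))

rng = np.random.default_rng(0)
ground = [rng.normal(80, 10, (16, 16)).clip(0, 255) for _ in range(5)]   # darker
tree = [rng.normal(170, 10, (16, 16)).clip(0, 255) for _ in range(5)]    # brighter
feats = [histogram_features(i) for i in ground + tree]
labels = ["ground"] * 5 + ["tree"] * 5
sample = rng.normal(165, 10, (16, 16)).clip(0, 255)
pred = nearest_centroid(feats, labels, histogram_features(sample))
```

Linear discriminant analysis or a neural network, as in the paper, would replace the nearest-centroid rule while keeping the same histogram features.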

  2. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley


Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning, and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially related to image processing, across different areas of application. Computer vision helps scholars to analyze images and video to obtain necessary information, to understand information about events or descriptions, and to recognize scenic patterns. It uses methods spanning a multi-range application domain with massive data analysis. This paper reviews recent developments in computer vision, image processing, and their related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also provide brief, up-to-date information about the techniques and their performance.

  3. Processing images with programming language Halide




    The thesis contains a presentation of a recently created programming language Halide and its comparison to an already established image processing library OpenCV. We compare the execution times of the implementations with the same functionality and their length (in terms of number of lines). The implementations consist of morphological operations and template matching. Operations are implemented in four versions. The first version is made in C++ and only uses OpenCV’s objects. The second ...

  4. Digital image processing for information extraction. (United States)

    Billingsley, F. C.


    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  5. Phase Superposition Processing for Ultrasonic Imaging (United States)

    Tao, L.; Ma, X. R.; Tian, H.; Guo, Z. X.


    In order to improve the resolution of defect reconstruction for non-destructive evaluation, a new phase superposition processing (PSP) method has been developed on the basis of a synthetic aperture focusing technique (SAFT). The proposed method synthesizes the magnitudes of phase-superposed delayed signal groups. A satisfactory image can be obtained by a simple algorithm processing time domain radio frequency signals directly. In this paper, the theory of PSP is introduced and some simulation and experimental results illustrating the advantage of PSP are given.

  6. Electro-optic modulation methods in range-gated active imaging. (United States)

    Chen, Zhen; Liu, Bo; Liu, Enhai; Peng, Zhangxian


A time-resolved imaging method based on electro-optic modulation is proposed in this paper. To implement range resolution, two kinds of polarization-modulated methods are designed, and high spatial and range resolution can be achieved by the active imaging system. In the system, the incident light is split into two parts by polarization beam splitting; one part is modulated with a cos² function and the other with a sin² function. Afterward, a depth map can be obtained from the two images received simultaneously by dual electron-multiplying charge-coupled devices. Furthermore, an intensity image can also be obtained from the two images. Comparison of the two polarization-modulated methods indicates that range accuracy is improved when the polarized light is modulated before beam splitting.
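Assuming the modulator sweeps the polarization phase linearly across the range gate, which is our reading of the scheme rather than a detail given in the abstract, depth and intensity can be recovered from the cos²/sin² image pair like this:

```python
import numpy as np

def range_from_pair(i_cos2, i_sin2, gate_length_m):
    """Recover intensity and depth from a cos^2/sin^2 modulated image pair.

    Assumes the modulation phase ramps linearly from 0 to pi/2 across the
    range gate (an illustrative model, not taken from the paper).
    """
    total = i_cos2 + i_sin2               # intensity image
    ratio = i_sin2 / total                # = sin^2(phi)
    phi = np.arcsin(np.sqrt(ratio))       # modulation phase in [0, pi/2]
    return total, gate_length_m * phi / (np.pi / 2)

# Synthetic target 30% of the way into a 15 m range gate
true_frac = 0.3
phi = (np.pi / 2) * true_frac
intensity = 100.0
i1 = intensity * np.cos(phi) ** 2
i2 = intensity * np.sin(phi) ** 2
total, depth = range_from_pair(i1, i2, gate_length_m=15.0)
```

The sum of the two channels cancels the modulation and yields the intensity image, exactly as the abstract notes, while the normalized ratio encodes time of flight.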

  7. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design (United States)

    Riza, Nabeel A.


Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing sufficient imaging spatial resolution and pixel counts. Engineering the CAOS camera platform using, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.

  8. PROVE GOES-8 Images of Jornada Experimental Range, New Mexico, 1997 (United States)

    National Aeronautics and Space Administration — As part of the Prototype Validation Experiment (PROVE) at the Jornada Experimental Range, GOES-8 images were collected every 30 minutes for 15 days overlapping the...

  9. Fusing range and intensity images for generating dense models of three-dimensional environments

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Miró, Jaime Valls; Dissanayake., Gamini

This paper presents a novel strategy for the construction of dense three-dimensional environment models by combining images from a conventional camera and a range imager. Robust data association is first accomplished by exploiting the Scale Invariant Feature Transformation (SIFT) technique on th...

  10. Fast Hue and Range Preserving Histogram: Specification: Theory and New Algorithms for Color Image Enhancement. (United States)

    Nikolova, Mila; Steidl, Gabriele


    Color image enhancement is a complex and challenging task in digital imaging with abundant applications. Preserving the hue of the input image is crucial in a wide range of situations. We propose simple image enhancement algorithms which conserve the hue and preserve the range (gamut) of the R, G, B channels in an optimal way. In our setup, the intensity input image is transformed into a target intensity image whose histogram matches a specified, well-behaved histogram. We derive a new color assignment methodology where the resulting enhanced image fits the target intensity image. We analyse the obtained algorithms in terms of chromaticity improvement and compare them with the unique and quite popular histogram based hue and range preserving algorithm of Naik and Murthy. Numerical tests confirm our theoretical results and show that our algorithms perform much better than the Naik-Murthy algorithm. In spite of their simplicity, they compete with well-established alternative methods for images where hue-preservation is desired.
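A hue- and range-preserving color assignment in the style of the Naik-Murthy algorithm that the paper compares against can be sketched as follows (this is our illustration of that baseline, not the authors' new method): when the target intensity is lower, scale the channels; when it is higher, apply the affine map in the complement space, so all channels stay within [0, 255].

```python
import numpy as np

def hue_preserving_enhance(rgb, f):
    """Hue- and range-preserving intensity remapping (Naik-Murthy-style sketch).

    rgb: float array (..., 3) with values in [0, 255].
    f:   mapping applied to the intensity image.
    """
    l = rgb.mean(axis=-1, keepdims=True)                # intensity image
    lp = np.clip(f(l), 0.0, 255.0)                      # target intensity
    alpha = np.divide(lp, l, out=np.ones_like(l), where=l > 0)
    scaled = alpha * rgb                                # pure scaling keeps hue
    beta = np.divide(255.0 - lp, 255.0 - l,
                     out=np.zeros_like(l), where=l < 255)
    shifted = 255.0 - beta * (255.0 - rgb)              # complement-space branch
    return np.where(alpha <= 1.0, scaled, shifted)

px = np.array([[60.0, 90.0, 120.0]])                    # intensity l = 90
out = hue_preserving_enhance(px, lambda l: 1.5 * l)     # target intensity 135
```

Because alpha > 1 here, the complement-space branch is used: the output intensity hits the target 135 while every channel remains in gamut, which is the range-preservation property the abstract emphasizes.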

  11. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula


Full Text Available This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user to easily modify images, while the Image Quality Adjustment Application enables the creation of series of pictures with different quality. The Image Quality Assessment Application contains objective full-reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to subjectively compare the quality of distorted images with a reference image. Results of these subjective tests can be processed using the Results Processing Application. All applications provide a Graphical User Interface (GUI) for intuitive usage.

12. Raw image processing for Structure-from-Motion surveying (United States)

    O'Connor, James; Smith, Mike; James, Mike R.


Consumer-grade cameras are now commonly used within geoscientific topographic surveys and, combined with modern photogrammetric techniques such as Structure-from-Motion (SfM), provide accurate 3-D products for use in a range of research applications. However, the workflows deployed are often treated as "black box" techniques, and the image inputs (quality, exposure conditions, and pre-processing thereof) can go under-reported. Differences in how raw sensor data are converted into an image format (that is then used in an SfM workflow) can affect the quality of SfM products. Within this contribution we present results generated from sets of photographs, initially captured as RAW images, of two cliffs in Norfolk, UK, where complex topography provides challenging conditions for accurate 3-D reconstruction using SfM. These RAW image sets were pre-processed in several ways, including the generation of 8 bit-per-channel JPEG and 16 bit-per-channel TIFF files, prior to SfM processing. The resulting point cloud products were compared against a high-resolution Terrestrial Laser Scan (TLS) reference. Results show slight differences in benchmark tests for each image block against the TLS reference data, but metrics within the bundle adjustment suggest a higher internal precision (in terms of RMS reprojection error within the sparse cloud) and a more stable solution with the 16 bit-per-channel data.

  13. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez


Full Text Available The main objective of facial edema evaluation is providing the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained by using image processing and the combination of different devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms from facial edema images at different times. Finally, temperature changes are measured using a thermographic camera. Some tests using controlled measurements of every variable are presented in this paper. The results allow evaluating the measurement system before its use in a real test, using the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate the facial edema.

  14. Experimental Study of High-Range-Resolution Medical Acoustic Imaging for Multiple Target Detection by Frequency Domain Interferometry (United States)

    Kimura, Tomoki; Taki, Hirofumi; Sakamoto, Takuya; Sato, Toru


    We employed frequency domain interferometry (FDI) for use as a medical acoustic imager to detect multiple targets with high range resolution. The phase of each frequency component of an echo varies with the frequency, and target intervals can be estimated from the phase variance. This processing technique is generally used in radar imaging. When the interference within a range gate is coherent, the cross correlation between the desired signal and the coherent interference signal is nonzero. The Capon method works under the guiding principle that output power minimization cancels the desired signal with a coherent interference signal. Therefore, we utilize frequency averaging to suppress the correlation of the coherent interference. The results of computational simulations using a pseudoecho signal show that the Capon method with adaptive frequency averaging (AFA) provides a higher range resolution than a conventional method. These techniques were experimentally investigated and we confirmed the effectiveness of the proposed method of processing by FDI.
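A much-simplified numerical sketch of FDI range imaging with frequency averaging: two coherent scatterers share one range gate, sub-band covariance averaging decorrelates their echoes, and a Capon spectrum is scanned over candidate ranges. All parameters (sound speed, frequency step, band and sub-band sizes, diagonal loading) are illustrative choices, not the paper's, and plain sliding-window averaging stands in for the adaptive frequency averaging the paper proposes.

```python
import numpy as np

c = 1540.0                        # m/s, nominal speed of sound in tissue
df = 50e3                         # frequency step between echo components
K, L = 32, 16                     # total components, sub-band length
ranges_true = [5e-3, 6e-3]        # two targets inside the range gate (m)

# Synthetic frequency-domain echo from two coherent scatterers
k = np.arange(K)
x = sum(np.exp(-1j * 4 * np.pi * (k * df) * r / c) for r in ranges_true)

# Frequency averaging: covariance matrix from sliding length-L sub-bands
subs = np.stack([x[m:m + L] for m in range(K - L + 1)])
R = (subs[:, :, None] * subs[:, None, :].conj()).mean(axis=0)
R += 1e-6 * np.trace(R).real / L * np.eye(L)   # diagonal loading for stability
Rinv = np.linalg.inv(R)

def capon(r):
    """Capon power estimate for a candidate range r."""
    a = np.exp(-1j * 4 * np.pi * np.arange(L) * df * r / c)
    return 1.0 / np.real(a.conj() @ Rinv @ a)

grid = np.linspace(0, c / (2 * df), 616, endpoint=False)   # unambiguous range
spectrum = np.array([capon(r) for r in grid])
```

Without the sub-band averaging step, the coherent interference keeps the covariance matrix rank-one and the Capon minimization cancels the desired signal, which is exactly the failure mode the abstract describes.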

  15. Luminescence imaging of water during proton-beam irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi, E-mail:; Okumura, Satoshi; Komori, Masataka [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Nagoya 461-8673 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Nagoya 462-8508 (Japan)


    Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.
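Range estimation from a measured depth profile can be illustrated with a toy example: locate the Bragg peak and interpolate the distal position where intensity falls to a chosen fraction of the peak. The 80% level and the synthetic profile below are our assumptions; the paper instead compares the imaged ranges against ionization-chamber data.

```python
import numpy as np

def estimate_range(depth_mm, profile, level=0.8):
    """Depth at which the distal edge falls to `level` of the Bragg peak."""
    peak = int(np.argmax(profile))
    distal = profile[peak:]
    target = level * profile[peak]
    idx = int(np.argmax(distal < target))    # first distal sample below level
    # linear interpolation between the bracketing samples
    d0, d1 = depth_mm[peak + idx - 1], depth_mm[peak + idx]
    p0, p1 = distal[idx - 1], distal[idx]
    return d0 + (p0 - target) / (p0 - p1) * (d1 - d0)

depth = np.linspace(0, 200, 401)                        # mm, 0.5 mm sampling
bragg = 1 + 3 * np.exp(-((depth - 150) / 4.0) ** 2)     # crude Bragg-like curve
r = estimate_range(depth, bragg)
```

Whatever fall-off criterion is chosen, the same estimator can be applied to a luminescence depth profile and to the reference dosimeter profile, so the two range readings are directly comparable.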

  16. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images (United States)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.


    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  17. Luminescence imaging of water during carbon-ion irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

Yamamoto, Seiichi, E-mail:; Komori, Masataka; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Higashi-ku, Nagoya, Aichi 461-8673 (Japan); Akagi, Takashi; Yamashita, Tomohiro [Hyogo Ion Beam Medical Center, Hyogo 679-5165 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Aichi 462-8508 (Japan)


    Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure-water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the measured carbon-ion ranges from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom’s luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator showed ∼15 000 times higher light than that of water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.

  18. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto


Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
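One common imprecise-addition scheme in this spirit, not necessarily the circuit the authors propose, replaces the addition of the k low-order bits with a carry-free OR, so no carry chain is needed in the lower part:

```python
def sloppy_add(a, b, k):
    """Approximate addition: exact upper part, OR-ed (carry-free) lower k bits."""
    mask = (1 << k) - 1
    high = (a & ~mask) + (b & ~mask)   # exact addition of the upper parts
    low = (a | b) & mask               # cheap lower part, no carry propagation
    return high + low

exact = 1234 + 5678
approx = sloppy_add(1234, 5678, 4)     # error bounded by the truncated carries
```

The worst-case error is bounded by the weight of the lower part (here 2^4), which is why the resulting pixel error is often invisible in image-processing workloads while the adder's critical path shortens considerably.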

  19. Development of the SOFIA Image Processing Tool (United States)

    Adams, Alexander N.


The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that can both extract and process data from the archive files was developed.

  20. HYMOSS signal processing for pushbroom spectral imaging (United States)

    Ludwig, David E.


The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics that compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane, which allows for offset and linear gain correction. The key on-focal-plane features that made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated these innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
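The two-point correction described above can be sketched per detector channel as corrected = gain × (raw − offset), with gain and offset derived from two uniform reference exposures (the reference levels and per-pixel responses below are invented for illustration):

```python
import numpy as np

def two_point_calibration(cold, hot, target_cold, target_hot):
    """Per-pixel gain/offset from responses to two uniform reference levels."""
    gain = (target_hot - target_cold) / (hot - cold)
    offset = cold - target_cold / gain
    return gain, offset

def correct(raw, gain, offset):
    """Offset subtraction followed by linear gain scaling, per channel."""
    return gain * (raw - offset)

cold = np.array([10.0, 12.0])    # two pixels' responses to a low uniform level
hot = np.array([110.0, 92.0])    # responses to a high uniform level
gain, offset = two_point_calibration(cold, hot, target_cold=0.0, target_hot=100.0)
```

After calibration, both pixels report identical values for identical illumination, which is exactly the non-uniformity removal the on-focal-plane hardware performs with its offset register and variable TIA feedback capacitance.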

  1. Advanced Color Image Processing and Analysis

    CERN Document Server


    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  2. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement (United States)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.


This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which estimates the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, involving iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration. The proposed method has been evaluated using a Hopkinson bar loading system: in comparison to the blurry image, the proposed method successfully restored the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.
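The Lucy-Richardson baseline mentioned above can be sketched in 1-D with plain numpy (the PSF and signal are synthetic; the paper's contribution is estimating the PSF from the loading dynamics and running the deconvolution on the GPU, neither of which this sketch attempts):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iters=100):
    """Plain Richardson-Lucy deconvolution in 1-D.

    Iteratively rescales the estimate by the back-projected ratio of the
    observed data to the re-blurred estimate.
    """
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # avoid division by zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])                   # symmetric blur kernel
sharp = np.zeros(32)
sharp[10], sharp[20] = 1.0, 0.5                     # two point features
blurred = np.convolve(sharp, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf, iters=200)
```

Each iteration involves two convolutions over all pixels, which is why the per-pixel matrix formulation parallelizes well on a GPU, as the paper exploits.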

  3. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark


    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  4. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet , Gérard


    The most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications.   More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject.  Following on from the first volume, this second installation takes a more practical stance, provi

  5. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet , Gérard


    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLABÒ language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  6. Radar image processing for rock-type discrimination (United States)

    Blom, R. G.; Daily, M.


Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques include mean and variance correction on a range or azimuth line-by-line basis to provide uniformly illuminated swaths, median value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of these preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to distort the radar picture to fit the Landsat image of a 90 x 90 km grid, using Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types in known areas. Adding Seasat data to the Landsat data improved rock identification by 7%.
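Two of the preprocessing steps, line-by-line mean/variance normalization and median filtering for speckle, can be sketched as follows (the 3x3 window and the normalization to zero mean and unit variance are our choices):

```python
import numpy as np

def normalize_lines(img, axis=1):
    """Per-line mean/variance correction: each line to zero mean, unit variance."""
    mean = img.mean(axis=axis, keepdims=True)
    std = img.std(axis=axis, keepdims=True)
    return (img - mean) / np.where(std > 0, std, 1.0)

def median3x3(img):
    """3x3 median filter with edge replication, to suppress speckle spikes."""
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0)

rng = np.random.default_rng(1)
swath = rng.normal(100, 10, (8, 8))
flat = normalize_lines(swath)      # each line equalized across the swath

speckled = np.zeros((5, 5))
speckled[2, 2] = 100.0             # an isolated speckle spike
clean = median3x3(speckled)        # the spike is removed
```

The median filter removes isolated spikes without blurring edges the way a mean filter would, which is why it is the standard choice for speckle in multi-look SAR imagery.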

  7. Image processing to optimize wave energy converters (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently are they becoming a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
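    The core idea, locating the maximum-energy 2D frequency component, can be illustrated with a plain 2D FFT standing in for the paper's complex modulated lapped orthogonal transform; the function name and synthetic wave image below are ours, not the paper's implementation.

```python
import numpy as np

def dominant_frequency(image):
    """Return the (fy, fx) pair, in cycles per pixel, with maximum spectral
    energy. A simple stand-in for the paper's lapped-transform subband search."""
    spec = np.fft.fft2(image - image.mean())   # remove DC so the peak is the wave
    mag = np.abs(spec)
    idx = np.unravel_index(np.argmax(mag), mag.shape)
    fy = np.fft.fftfreq(image.shape[0])[idx[0]]
    fx = np.fft.fftfreq(image.shape[1])[idx[1]]
    return fy, fx

# Synthetic ocean-wave image: 8 cycles across 64 pixels horizontally.
y, x = np.mgrid[0:64, 0:64]
waves = np.sin(2 * np.pi * (8 / 64) * x)
fy, fx = dominant_frequency(waves)
```

From the recovered (fy, fx) pair, the wave frequency along any WEC heading follows by the trigonometric scaling the abstract mentions.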

  8. Supplementary Golay pair for range side lobe suppression in dual-frequency tissue harmonic imaging. (United States)

    Shen, Che-Chou; Wu, Chi; Peng, Jun-Kai


    In dual-frequency (DF) harmonic imaging, the second harmonic signal at the second harmonic (2f0) frequency and the inter-modulation harmonic signal at the fundamental (f0) frequency are simultaneously imaged for spectral compounding. When the phase-encoded Golay pair is utilized to improve the harmonic signal-to-noise ratio (SNR), however, DF imaging suffers from range side lobe artifacts due to spectral cross-talk with other harmonic components at DC and the third harmonic (3f0) frequency. In this study, a supplementary Golay pair is developed to suppress the range side lobes in combination with the original Golay pair. Since the phase code of the DC interference cannot be manipulated, the supplementary Golay is designed to reverse the polarity of the 3f0 interference and the f0 signal while keeping the 2f0 signal unchanged. For 2f0 imaging, the echo summation of the supplementary and the original Golay cancels the 3f0 interference. Conversely, the echo difference between the two Golay pairs eliminates the DC interference for f0 imaging. Hydrophone measurements indicate that the range side lobe level (RSLL) increases with the signal bandwidth of DF harmonic imaging. By using the combination of the two Golay pairs, the achievable suppression of the RSLL is 3 and 14 dB for the f0 and 2f0 harmonic signals, respectively. B-mode phantom imaging also verifies the presence of range side lobe artifacts when only the original Golay pair is utilized. In combination with the supplementary Golay pair, the artifacts are effectively suppressed. The corresponding range side lobe magnitude is reduced by about 8 dB in 2f0 imaging but remains unchanged in f0 imaging. Meanwhile, the harmonic SNR improves by 8-10 dB and the contrast-to-noise ratio of the harmonic image increases from about 1 to 1.2 by spectral compounding. For DF tissue harmonic imaging, the spectral cross-talk in Golay excitation results in severe range side lobe artifacts. To restore the image quality, two particular

  9. Platform for distributed image processing and image retrieval (United States)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.


    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  10. Discrimination between Sedimentary Rocks from Close-Range Visible and Very-Near-Infrared Images.

    Directory of Open Access Journals (Sweden)

    Susana Del Pozo

    Full Text Available Variation in the mineral composition of rocks results in a change of their spectral response capable of being studied by imaging spectroscopy. This paper proposes the use of a low-cost handy sensor, a calibrated visible-very near infrared (VIS-VNIR) multispectral camera for the recognition of different geological formations. The spectral data was recorded by a Tetracam Mini-MCA-6 camera mounted on a field-based platform covering six bands in the spectral range of 0.530-0.801 µm. Twelve sedimentary formations were selected in the Rhône-Alpes region (France) to analyse the discrimination potential of this camera for rock types and close-range mapping applications. After proper corrections and data processing, a supervised classification of the multispectral data was performed trying to distinguish four classes: limestones, marlstones, vegetation and shadows. After a maximum-likelihood classification, results confirmed that this camera can be efficiently exploited to map limestone-marlstone alternations in geological formations with this mineral composition.

  11. Discrimination between Sedimentary Rocks from Close-Range Visible and Very-Near-Infrared Images. (United States)

    Del Pozo, Susana; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo; Kees Blom, Jan; González-Aguilera, Diego


    Variation in the mineral composition of rocks results in a change of their spectral response capable of being studied by imaging spectroscopy. This paper proposes the use of a low-cost handy sensor, a calibrated visible-very near infrared (VIS-VNIR) multispectral camera for the recognition of different geological formations. The spectral data was recorded by a Tetracam Mini-MCA-6 camera mounted on a field-based platform covering six bands in the spectral range of 0.530-0.801 µm. Twelve sedimentary formations were selected in the Rhône-Alpes region (France) to analyse the discrimination potential of this camera for rock types and close-range mapping applications. After proper corrections and data processing, a supervised classification of the multispectral data was performed trying to distinguish four classes: limestones, marlstones, vegetation and shadows. After a maximum-likelihood classification, results confirmed that this camera can be efficiently exploited to map limestone-marlstone alternations in geological formations with this mineral composition.

  12. Prediction of object detection, recognition, and identification [DRI] ranges at color scene images based on quantifying human color contrast perception (United States)

    Pinsky, Ephi; Levin, Ilia; Yaron, Ofer


    We propose a novel approach to predict, for a specified color imaging system and for objects with known characteristics, their detection, recognition, and identification (DRI) ranges in a colored dynamic scene, based on quantifying human color contrast perception. The method refers to the well-established three-dimensional L*a*b* color space. The nonlinear relations of this space are intended to mimic the nonlinear response of the human eye. The metric of the L*a*b* color space is such that the Euclidean distance between any two colors in this space is approximately proportional to the color contrast as perceived by the human eye and brain. A consequence of this metric is that the color contrast of any two points is always greater than or equal to their equivalent grayscale contrast. This matches our experience that a color image appears higher in contrast than the grayscale version of the same image. Yet, color loss by scattering at very long ranges should be considered as well. The color contrast derived from the L*a*b* distance between the colored object pixels and the nearby colored background pixels is expressed in terms of grayscale contrast. This contrast replaces the original standard grayscale contrast component of the image. As expected, the resulting DRI ranges are, in most cases, larger than those predicted from the standard grayscale image. Upon further elaboration and validation of this method, it may be combined with the next versions of the well-accepted TRM codes for DRI predictions. Consistent prediction of DRI ranges implies a careful evaluation of the reduction of object-background color contrast along the range. Clearly, additional processing to reconstruct the true colors of the objects and background, and hence the color contrast along the range, will further increase the DRI ranges.
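    The L*a*b* Euclidean distance underlying this approach is standard colorimetry and can be sketched directly. The conversion below assumes sRGB input and a D65 white point, which the abstract does not specify; it is an illustration of the color-space metric, not the authors' DRI code.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> CIE XYZ (D65 primaries).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])       # normalize by the white point
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb1, rgb2):
    """Euclidean distance in L*a*b*: the perceptual color contrast used above."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))
```

For example, pure red and pure green are nearly metameric in luminance yet far apart in L*a*b*, which is exactly the extra contrast the DRI prediction exploits.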

  13. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais


    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  14. A new compact, cost-efficient concept for underwater range-gated imaging: the UTOFIA project (United States)

    Mariani, Patrizio; Quincoces, Iñaki; Galparsoro, Ibon; Bald, Juan; Gabiña, Gorka; Visser, Andy; Jónasdóttir, Sigrun; Haugholt, Karl Henrik; Thorstensen, Jostein; Risholm, Petter; Thielemann, Jens


    Underwater Time Of Flight Image Acquisition system (UTOFIA) is a recently launched H2020 project (H2020-633098) to develop a compact and cost-effective underwater imaging system especially suited for observations in turbid environments. The UTOFIA project targets technology that can overcome the limitations created by scattering by introducing cost-efficient range-gated imaging for underwater applications. This technology relies on an image acquisition principle that can extend the imaging range of the camera by 2-3 times with respect to other cameras. Moreover, the system simultaneously captures 3D information about the observed objects. Today range-gated imaging is not widely used, as it relies on specialised optical components that make systems large and costly. Recent technology developments have made possible a significant (2-3 times) reduction in the size, complexity and cost of underwater imaging systems, while addressing the scattering issues at the same time. By acquiring simultaneous 3D data, the system makes it possible to accurately measure the absolute size of marine life and its spatial relationship to its habitat, enhancing the precision of fish-stock monitoring and ecology assessment and hence supporting proper management of marine resources. Additionally, the larger observed volume and the improved image quality make the system suitable for cost-effective underwater surveillance operations in, e.g., fish farms and underwater infrastructures. The system can be integrated into existing ocean observatories for real-time acquisition and can greatly advance present efforts in developing species-recognition algorithms, given the additional features provided, the improved image quality and the independent laser-based illumination source. First applications of the most recent prototype of the imaging system will be presented, including inspection of underwater infrastructures and observations of marine life under different environmental conditions.
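    Range gating itself reduces to a timing calculation: the shutter opens only for photons that have made the round trip to the range of interest, rejecting near-field backscatter. A minimal sketch, with constants and function names that are illustrative rather than UTOFIA specifications:

```python
# Range gating: open the camera shutter only when photons returning from
# range R arrive, so near-field backscatter is rejected.
C_VACUUM = 299_792_458.0          # speed of light in vacuum, m/s
N_WATER = 1.33                    # approximate refractive index of water

def gate_delay_ns(range_m):
    """Round-trip travel time to a target at range_m metres underwater, in ns."""
    v = C_VACUUM / N_WATER        # light is slower in water
    return 2 * range_m * 1e9 / v

def gate_range_m(delay_ns):
    """Inverse mapping: the target range corresponding to a gate delay."""
    v = C_VACUUM / N_WATER
    return delay_ns * 1e-9 * v / 2
```

At 10 m range the gate delay is on the order of 90 ns, which is why range gating needs fast, specialised optics; simultaneous 3D comes from reading the per-pixel delay.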

  15. Context-dependent JPEG backward-compatible high-dynamic range image compression (United States)

    Korshunov, Pavel; Ebrahimi, Touradj


    High-dynamic-range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic-range (LDR) displays that are unable to render HDR. To facilitate widespread adoption of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner so that it can also accommodate HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state of the art in HDR image compression.
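    As a concrete instance of the tone-mapping step discussed above, here is the global Reinhard operator L/(1+L), one common choice among the many algorithms whose context dependence the paper studies; it is a sketch, not the paper's method.

```python
import numpy as np

def reinhard_tonemap(hdr_luminance):
    """Global Reinhard operator L / (1 + L): compresses unbounded HDR
    luminance into [0, 1) while preserving order (monotonicity)."""
    L = np.asarray(hdr_luminance, dtype=float)
    return L / (1.0 + L)

# Luminances spanning roughly six decades of dynamic range.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = reinhard_tonemap(hdr)
```

The operator is monotonic, so relative brightness ordering survives, but its fixed shape is exactly why perceived quality varies with display and viewing conditions, motivating the subjective tests above.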

  16. A comparative analysis of dynamic range compression techniques in IR images for maritime applications (United States)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Luison, Cristian; Olivieri, Monica; Barani, Gianni


    Modern thermal cameras acquire IR images with a high dynamic range because they have to sense, with high thermal resolution, the great temperature changes of monitored scenarios in specific surveillance applications. Initially developed for visible-light images and recently extended to the display of IR images, high dynamic range compression (HDRC) techniques aim at furnishing plain images to human operators for a first intuitive comprehension of the sensed scenario without altering the features of IR images. In this context, the maritime scenario represents a challenging case for testing and developing HDRC strategies, since images collected for surveillance at sea are typically characterized by high thermal gradients between the background scene and classes of objects at different temperatures. In the development of a new IRST system, Selex ES assembled a demonstrator equipped with modern thermal cameras and planned a measurement campaign on a maritime scenario so as to collect IR sequences in different operating conditions. This has made it possible to build up a case record of situations suitable for testing HDRC techniques. In this work, a survey of HDRC approaches is introduced, pointing out advantages and drawbacks with a focus on strategies specifically designed to display IR images. A detailed analysis of the performance is discussed in order to address the task of visualization with reference to typical issues of IR maritime images, such as robustness to the horizon effect and the display of very warm objects and flat areas.
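    A baseline against which such HDRC strategies are usually compared is a simple percentile stretch: clip the tails of the IR histogram and rescale to 8 bits. The sketch below is a generic illustration of that baseline, not one of the surveyed operators, and the synthetic 14-bit frame is invented.

```python
import numpy as np

def percentile_stretch(ir, lo=1.0, hi=99.0):
    """Baseline HDRC: clip to the [lo, hi] percentiles, then rescale to 8 bits.
    Robust to a few very hot pixels, but flattens detail in flat areas."""
    a, b = np.percentile(ir, [lo, hi])
    out = np.clip((ir - a) / (b - a), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

# Synthetic 14-bit IR frame spanning the full sensor range.
ir = np.linspace(0, 16383, 10000).reshape(100, 100)
ldr8 = percentile_stretch(ir)
```

Its weaknesses (losing very warm objects to clipping, washing out flat sea backgrounds) are precisely the maritime issues the survey uses to discriminate between more elaborate operators.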

  17. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field (United States)

    Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air


    With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field. PMID:27879789
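    The triangulation underlying such range finders reduces to R = f·b/x, where f is the focal length, b the baseline between light source and sensor, and x the image-plane offset of the detected spot. A minimal sketch with illustrative numbers, not the paper's hardware parameters:

```python
# Active triangulation: a laser spot imaged at offset x from the optical
# axis gives range R = f * b / x (f: focal length, b: source-camera baseline).
def triangulation_range(focal_mm, baseline_mm, offset_mm):
    """Range in mm from the spot's image-plane offset (all lengths in mm)."""
    if offset_mm <= 0:
        raise ValueError("spot must be offset from the optical axis")
    return focal_mm * baseline_mm / offset_mm
```

Because dR/dx scales as R²/(f·b), the relative resolution degrades with range, which is consistent with the sub-percent resolutions quoted over bounded measurement ranges.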

  18. Design flow for implementing image processing in FPGAs (United States)

    Trakalo, M.; Giles, G.


    A design flow for implementing a dynamic gamma algorithm in an FPGA is described. Real-time video processing makes enormous demands on processing resources. An FPGA solution offers some advantages over commercial video chip and DSP implementation alternatives. The traditional approach to FPGA development involves a system engineer designing, modeling and verifying an algorithm and writing a specification. A hardware engineer uses the specification as a basis for coding in VHDL and testing the algorithm in the FPGA with supporting electronics. This process is work intensive and the verification of the image processing algorithm executing on the FPGA does not occur until late in the program. The described design process allows the system engineer to design and verify a true VHDL version of the algorithm, executing in an FPGA. This process yields reduced risk and development time. The process is achieved by using Xilinx System Generator in conjunction with Simulink® from The MathWorks. System Generator is a tool that bridges the gap between the high level modeling environment and the digital world of the FPGA. System Generator is used to develop the dynamic gamma algorithm for the contrast enhancement of a candidate display product. The results of this effort are to increase the dynamic range of the displayed video, resulting in a more useful image for the user.

  19. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    Directory of Open Access Journals (Sweden)

    M. Hess


    Full Text Available An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, a domain largely characterized by closed-source software and the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control by a quantitative assessment of 3D imaging sensors. It will enable users to give precise specifications of the spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, and a possible winner will emerge.

  20. High-dynamic-range microscope imaging based on exposure bracketing in full-field optical coherence tomography. (United States)

    Leong-Hoi, Audrey; Montgomery, Paul C; Serio, Bruno; Twardowski, Patrice; Uhring, Wilfried


    By applying the proposed high-dynamic-range (HDR) technique based on exposure bracketing, we demonstrate a meaningful reduction in the spatial noise in image frames acquired with a CCD camera so as to improve the fringe contrast in full-field optical coherence tomography (FF-OCT). This new signal processing method thus allows improved probing within transparent or semitransparent samples. The proposed method is demonstrated on 3 μm thick transparent polymer films of Mylar, which, due to their transparency, produce low contrast fringe patterns in white-light interference microscopy. High-resolution tomographic analysis is performed using the technique. After performing appropriate signal processing, resulting XZ sections are observed. Submicrometer-sized defects can be lost in the noise that is present in the CCD images. With the proposed method, we show that by increasing the signal-to-noise ratio of the images, submicrometer-sized defect structures can thus be detected.
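    The exposure-bracketing principle, averaging exposure-normalized frames to suppress sensor noise, can be sketched as follows. The linear-response assumption and the toy noise model are ours; this is not the paper's full FF-OCT processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def hdr_from_brackets(frames, exposure_times):
    """Average exposure-normalized frames, assuming a linear sensor response.
    Each frame contributes an independent noise sample, so the combined
    estimate has lower variance than any single exposure."""
    frames = np.asarray(frames, dtype=float)
    t = np.asarray(exposure_times, dtype=float).reshape(-1, 1, 1)
    return (frames / t).mean(axis=0)

# Simulated scene: constant radiance 5.0, additive read noise (sigma = 1) per frame.
truth = np.full((32, 32), 5.0)
times = np.array([0.5, 1.0, 2.0, 4.0])
frames = [truth * t + rng.normal(0.0, 1.0, truth.shape) for t in times]
hdr = hdr_from_brackets(frames, times)
single = frames[1] / times[1]   # a single mid-exposure frame for comparison
```

The residual noise of the combined frame is visibly below that of any single exposure, which is what raises the fringe contrast in the low-contrast interferograms described above.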

  1. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing


    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  2. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    National Research Council Canada - National Science Library

    J. Manikandan; C.S. Celin; V.M. Gayathri


    ...), research fields, crime investigation fields and military fields. In this paper, we proposed a document image processing technique, for establishing electronic loan approval process (E-LAP) [2...

  3. Computational ghost imaging of hot objects in long-wave infrared range (United States)

    Liu, Hong-Chao; Zhang, Shuang


    Ghost imaging (GI) is an intriguing imaging modality to obtain the object information from the correlation calculations of spatial intensity fluctuations. In this letter, we report the computational GI of hot objects in the long-wave infrared range both in experiment and simulation. Without employing an independent light source, we reconstruct thermal images of objects only based on the intensity correlations of their thermal radiation at room temperature. By comparing different GI reconstruction algorithms, we demonstrate that GI with compressive sensing can efficiently obtain the thermal object information only with a single-pixel infrared camera, which might be applied to night-vision, environmental sensing, military detection, etc.
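    Computational GI with a single-pixel detector can be sketched in a few lines: correlate the known illumination (or, here, modulation) patterns with the scalar "bucket" signal each one produces. This toy reconstruction uses plain second-order correlation rather than the compressive-sensing algorithm the letter favors, and the object and pattern count are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simple "hot object" to be recovered from single-pixel measurements.
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0

# Known random patterns and the bucket (total intensity) each one yields.
n_patterns = 4000
patterns = rng.random((n_patterns, 16, 16))
bucket = (patterns * obj).sum(axis=(1, 2))

# Second-order correlation <B*P> - <B><P>: pixels belonging to the object
# covary with the bucket signal, background pixels do not.
ghost = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```

With a few thousand patterns the correlation image already resembles the object; compressive sensing reaches comparable quality from far fewer measurements, which is the letter's practical point.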

  4. Effects of image processing on the detective quantum efficiency (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na


    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as the MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modifications considerably influenced the evaluation of the SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.

  5. [Influence of human body target's spectral characteristics on visual range of low light level image intensifiers]. (United States)

    Zhang, Jun-Ju; Yang, Wen-Bin; Xu, Hui; Liu, Lei; Tao, Yuan-Yaun


    To study the effect of different human targets' spectral reflective characteristics on the visual range of low-light-level (LLL) image intensifiers, based on the spectral characteristics of night-sky radiation and the spectral reflective coefficients of common clothes, we established an equation for the spectral reflective distribution of a human body target and analyzed the spectral reflective characteristics of different human targets wearing clothes of different colors and materials; starting from the actual detection-range equation of the LLL image intensifier, we then discussed its detection capability for different human targets. The study shows that the effect of a human target's spectral reflective characteristics on LLL image intensifier range is mainly reflected in the average reflectivity ρ̄ and the initial contrast C0 between the target and the background. The reflective coefficient and spectral reflection intensity of cotton clothes are higher than those of polyester clothes, and the detection capability of the LLL image intensifier is stronger for a human target wearing cotton clothes. Experimental results show that LLL image intensifiers have longer visual ranges for targets wearing cotton clothes than for targets wearing polyester clothes of the same color, and longer visual ranges for targets wearing light-colored clothes than for targets wearing dark-colored clothes. Under full-moon illumination conditions, LLL image intensifiers are more sensitive to the clothes' material.

  6. Intelligent elevator management system using image processing (United States)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath


    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems. Thus there is an increased need for an effective control system to manage the elevator system. This paper is aimed at introducing an effective method to control the movement of the elevators by considering various cases wherein the location of the person is found and the elevators are controlled based on various conditions such as load, proximity, etc. This method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on the respective floors. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and locates the correct elevator to service the respective persons waiting on different floors.

  7. Simulink Component Recognition Using Image Processing

    Directory of Open Access Journals (Sweden)

    Ramya R


    Full Text Available ABSTRACT In the early stages of engineering design, pen-and-paper sketches are often used to quickly convey concepts and ideas. Free-form drawing is often preferable to using computer interfaces due to its ease of use, fluidity, and lack of constraints. The objective of this project is to create a trainable sketched Simulink component recognizer that classifies the individual Simulink components from the input block diagram. The recognized components are placed on a new Simulink model window, after which operations can be performed on them. Noise from the input image is removed by a median filter, the segmentation process is done by the K-means clustering algorithm, and recognition of individual Simulink components from the input block diagram is done by Euclidean distance. The project aims to devise an efficient way to segment a control-system block diagram into individual components for recognition.

  8. Measurement of smaller colon polyp in CT colonography images using morphological image processing. (United States)

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K


    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyp based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to 6-9 mm range, polyps of even processing. It takes [Formula: see text] min for measuring the smaller polyp in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].
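    The morphological operators referred to above can be illustrated with a generic binary opening (erosion followed by dilation), which removes structures smaller than the structuring element. This is a from-scratch sketch of the operator itself, not the paper's domain-knowledge pipeline, and the toy image is invented.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode="constant")
    out = np.zeros_like(img)
    for r in range(k):
        for c in range(k):
            out |= padded[r:r + img.shape[0], c:c + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion, via dilation of the complement (duality)."""
    return 1 - dilate(1 - img, k)

def opening(img, k=3):
    """Erosion then dilation: removes features smaller than the element
    while restoring the shape of the features that survive."""
    return dilate(erode(img, k), k)

img = np.zeros((10, 10), dtype=int)
img[1:6, 1:6] = 1      # a 5x5 candidate structure
img[8, 8] = 1          # single-pixel noise
opened = opening(img)
```

The isolated pixel vanishes while the 5x5 block is returned intact, which is the behavior that lets morphology separate polyp-sized structures from noise.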

  9. Knowledge-based approach to medical image processing monitoring (United States)

    Chameroy, Virginie; Aubry, Florent; Di Paola, Robert


    The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. We have designed a knowledge-based interactive quantification support system (IQSS) to help the medical user in the use and evaluation of medical image processing and in the development of specific protocols. As the user proceeds according to a heuristic and intuitive approach, our system is meant to work according to a similar behavior. At the basis of the reasoning of our monitoring system are the semantic features of an image and of image processing. These semantic features describe their intrinsic properties and are not a symbolic description of the image content. Obtaining them requires modeling of the medical image and of the image processing procedures. A semantic interpretation function gives rules for obtaining the values of the semantic features extracted from these models. Commonsense compatibility rules then yield compatibility criteria, which are based on a partial order (a subsumption relationship) on images and image processing, enabling a comparison to be made between the data available to be processed and appropriate image processing procedures. This knowledge-based approach makes IQSS modular, flexible and consequently well adapted to aid in the development and utilization of image processing methods for multidimensional and multimodality medical image quantification.

  10. Registration of partially overlapping surfaces for range image based augmented reality on mobile devices (United States)

    Kilgus, T.; Franz, A. M.; Seitel, A.; Marz, K.; Bartha, L.; Fangerau, M.; Mersmann, S.; Groch, A.; Meinzer, H.-P.; Maier-Hein, L.


    Visualization of anatomical data for disease diagnosis, surgical planning, or orientation during interventional therapy is an integral part of modern health care. However, as anatomical information is typically shown on monitors provided by a radiological work station, the physician has to mentally transfer internal structures shown on the screen to the patient. To address this issue, we recently presented a new approach to on-patient visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive interaction scheme. Our method requires mounting a range imaging device, such as a Time-of-Flight (ToF) camera, to a portable display (e.g. a tablet PC). During the visualization process, the pose of the camera and thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving the device along the body of the patient, the physician is given the impression of looking directly into the human body. In this paper, we present and evaluate a new method for camera pose estimation based on an anisotropic trimmed variant of the well-known iterative closest point (ICP) algorithm. According to in-silico and in-vivo experiments performed with computed tomography (CT) and ToF data of human faces, knees and abdomens, our new method is better suited for surface registration with ToF data than the established trimmed variant of the ICP, reducing the target registration error (TRE) by more than 60%. The TRE obtained (approx. 4-5 mm) is promising for AR visualization, but clinical applications require maximization of robustness and run-time.
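    The trimmed-ICP idea at the core of this pose estimation can be sketched compactly. The snippet below is a simplified, isotropic illustration, not the authors' anisotropic variant: each iteration matches points to nearest neighbours, discards the worst-fitting fraction to cope with partial overlap, and solves for the rigid transform with the Kabsch/SVD method. The toy data and parameter values are illustrative.

```python
import numpy as np

def trimmed_icp_step(src, dst, keep=0.6):
    """One trimmed-ICP iteration (simplified, isotropic sketch): match each
    source point to its nearest destination point, keep only the best `keep`
    fraction of matches, then solve for the rigid transform via Kabsch/SVD."""
    # brute-force nearest-neighbour correspondences
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)
    res = d2[np.arange(len(src)), nn]
    # trimming step: discard the worst residuals (outliers, non-overlap)
    kept = np.argsort(res)[: int(keep * len(src))]
    p, q = src[kept], dst[nn[kept]]
    # Kabsch: optimal rotation R and translation t for the kept pairs
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q.mean(axis=0) - R @ p.mean(axis=0)
    return R, t
```

In a full registration loop this step would be iterated until the trimmed residual converges; the anisotropic variant of the paper additionally weights residuals by the direction-dependent ToF noise.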

  11. Hyperspectral image representation and processing with binary partition trees


    Valero Valbuena, Silvia


    Extraordinary doctorate award, 2011-2012 academic year, field of ICT Engineering. The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral Image Representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarc...

  12. Polarized Imaging Lidar Using Underwater Range Gating in a Multifunctional Remote Sensing System. (United States)

    Fournier, G.; Trees, C.


    This work describes the design of a compact underwater polarized imaging LIDAR system using a new modular laser beam-shaping technology, which ensures eye-safe operation at optical power levels previously unattainable in an eye-safe mode. The system is based on an existing battery-powered, high-efficiency, compact range-gated system that can be operated from a variety of underwater vehicles, including AUVs. A detailed analysis is presented of the procedure required to successfully extract information on the depth distribution of the inherent optical properties, along with the shape of the phase function in the near-forward direction. The effect of polarization in helping to constrain and improve the retrieval of these fundamental optical properties of the water column is also discussed. The LIDAR mode is shown to be only one of many functionalities useful to oceanographic research that can be implemented using the beam-shaping technology described above. Beyond the improvement in range and image quality of gated imaging over conventional imaging in turbid waters, the application of gated structured imaging can significantly improve the range and precision of 3D bottom mapping near the turbid seabed environment. We show that the available spatial precision is sufficient for seabed habitat mapping and the litter identification required for an environmental impact evaluation.

  13. Spot restoration for GPR image post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, David W; Beer, N. Reginald


    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across, and travels along, the surface. The system pre-processes the return signal to suppress certain undesirable effects. It then generates synthetic aperture radar images from real aperture radar images formed from the pre-processed return signal, and post-processes the synthetic aperture radar images to improve detection of subsurface objects. Finally, it identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of subsurface objects.
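    The final peak-identification step can be sketched as a simple local-maximum search over the post-processed energy image. This is a minimal stand-in for illustration, not the patented detection logic:

```python
import numpy as np

def detect_peaks(energy, thresh):
    """Flag pixels that are strict local maxima of a (float-valued)
    post-processed energy image over the 8-neighbourhood and that
    exceed a detection threshold. Returns (row, col) peak indices."""
    h, w = energy.shape
    # pad with -inf so border pixels compare only against real neighbours
    e = np.pad(energy, 1, mode="constant", constant_values=-np.inf)
    core = e[1:-1, 1:-1]
    is_max = np.ones((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = e[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
            is_max &= core > shifted
    return np.argwhere(is_max & (energy > thresh))
```

A real detector would also suppress peaks closer together than the expected object size and estimate depth from the peak's row in the depth-migrated image.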

  14. Investigation of the impact of water absorption on retinal OCT imaging in the 1060 nm range

    DEFF Research Database (Denmark)

    Marschall, Sebastian; Pedersen, Christian; Andersen, Peter E.


    Recently, the wavelength range around 1060 nm has become attractive for retinal imaging with optical coherence tomography (OCT), promising deep penetration into the retina and the choroid. The adjacent water absorption bands limit the useful bandwidth of broadband light sources, but until now...... sources for OCT....

  15. Short-Range Ultra-Wideband Imaging with Multiple-Input Multiple-Output Arrays

    NARCIS (Netherlands)

    Zhuge, X.


    Compact, cost-efficient and high-resolution imaging sensors are especially desirable in the field of short-range observation and surveillance. Such sensors are of great value in fields of security, rescue and medical applications. Systems can be formed for various practical purposes, such as

  16. Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging (United States)

    Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang


    Moving target detection is important for target tracking and remote surveillance applications of active range-gated laser imaging. This technique has two operation modes that differ in the number of pulses per frame: stroboscopic mode, which accumulates multiple laser pulses per frame, and flash mode, which uses a single laser pulse per frame. In this paper, we establish a range-gated laser imaging system in which two types of lasers with different repetition frequencies were chosen for the two modes. An electric fan and a horizontal sliding track were selected as the moving targets to compare motion blurring between the two modes. The system working in flash mode shows better performance against motion blurring than stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we show that images acquired in stroboscopic mode have a higher signal-to-noise ratio than those acquired in flash mode, in both indoor and underwater environments.
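    The SNR advantage of pulse accumulation can be illustrated with a toy simulation: averaging N pulses per frame reduces the noise standard deviation by roughly √N for a static scene. The signal level, noise level and pulse count below are made-up numbers, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
signal, noise_sigma, n_pulses = 100.0, 10.0, 64

# flash mode: a single pulse per frame -> one noisy sample per frame
flash = signal + rng.normal(0.0, noise_sigma, size=100_000)

# stroboscopic mode: accumulate (here, average) n_pulses per frame
strobe = signal + rng.normal(0.0, noise_sigma, size=(100_000, n_pulses)).mean(axis=1)

# averaging 64 pulses cuts the noise std by ~sqrt(64) = 8 on a static scene;
# a target moving during the accumulation would instead smear across pixels
snr_flash = signal / flash.std()
snr_strobe = signal / strobe.std()
```

This is exactly the trade-off the paper reports: stroboscopic accumulation buys SNR, flash mode avoids the motion blur incurred during the accumulation window.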

  17. A comparison of interest point and region detectors on structured, range and texture images

    DEFF Research Database (Denmark)

    Kazmi, Wajahat; Andersen, Hans Jørgen


    This article presents an evaluation of the image retrieval and classification potential of local features. Several affine-invariant region and scale-invariant interest point detectors in combination with well-known descriptors were evaluated. Tests on building, range and texture databases were...... corner-based detectors (such as Hessian and Harris with both Affine/Laplace variants, SURF with determinant-of-Hessian based corners, and SIFT with difference of Gaussians) acquired more than 90% mean average precision, whereas on range images the homogeneous region detector did not work well. TLR offered...... and textured images. It is also shown that in a bi-channel approach, combining surface and edge regions (MSER and TLR) boosts the overall performance. Among the descriptors, SIFT and SURF generally offer higher performance, but low-dimensional descriptors such as Steerable Filters follow closely.

  18. New segmentation-based tone mapping algorithm for high dynamic range image (United States)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong


    Traditional tone mapping algorithms for displaying high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into regions of different exposure. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance, in both visual quality and an objective contrast criterion, than other algorithms.
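    A minimal region-wise tone mapping can be sketched as follows. This is a hypothetical stand-in: it segments by luminance percentiles rather than the paper's Local Binary Pattern and histogram criteria, and applies a different compression curve to each exposure region.

```python
import numpy as np

def segmented_tone_map(hdr, lo_pct=5, hi_pct=95):
    """Illustrative region-wise tone mapping: split pixels into under-,
    normal- and over-exposed sets by luminance percentiles, then map
    each region with its own curve (lift shadows, keep mid-tones,
    roll off highlights). Output is in [0, 1]."""
    L = hdr / hdr.max()                        # normalised luminance
    lo, hi = np.percentile(L, [lo_pct, hi_pct])
    under, over = L < lo, L > hi
    normal = ~(under | over)
    out = np.empty_like(L)
    out[under] = np.sqrt(L[under] / max(lo, 1e-9)) * lo        # lift shadows
    out[normal] = L[normal]                                    # keep mid-tones
    out[over] = hi + ((L[over] - hi) ** 2) / max(1.0 - hi, 1e-9)  # roll off
    return out
```

Each per-region curve is continuous at the region boundaries and monotone, so the merged result introduces no banding between regions.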

  19. Polymer-free optode nanosensors for dynamic, reversible, and ratiometric sodium imaging in the physiological range. (United States)

    Ruckh, Timothy T; Mehta, Ankeeta A; Dubach, J Matthew; Clark, Heather A


    This work introduces a polymer-free optode nanosensor for ratiometric sodium imaging. Transmembrane ion dynamics are often captured by electrophysiology and calcium imaging, but sodium dyes suffer from short excitation wavelengths and poor selectivity. Optodes, optical sensors composed of a polymer matrix with embedded sensing chemistry, have been translated into nanosensors that selectively image ion concentrations. Polymer-free nanosensors were fabricated by emulsification and were stable in diameter and sensitivity for at least one week. Ratiometric fluorescent measurements demonstrated that the nanosensors are selective for sodium over potassium by ~1.4 orders of magnitude, have a dynamic range centered at 20 mM, and are fully reversible. The ratiometric signal changes by 70% between 10 and 100 mM sodium, showing that the sensors are sensitive to changes in sodium concentration. These nanosensors will provide a new tool for sensitive and quantitative ion imaging.
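    A hedged sketch of how such a ratiometric readout is typically inverted to a concentration: the sigmoid-in-log-concentration form and the slope parameter below are illustrative assumptions, and only the 20 mM midpoint comes from the abstract.

```python
import numpy as np

def ratio_response(na_mM, center_mM=20.0, slope=1.0):
    """Hypothetical ratiometric response curve: a sigmoid in log10 sodium
    concentration centred on the reported 20 mM midpoint (the form and
    slope are assumptions, not the measured calibration)."""
    x = np.log10(np.asarray(na_mM, dtype=float) / center_mM)
    return 1.0 / (1.0 + 10.0 ** (-slope * x))

def concentration(ratio, center_mM=20.0, slope=1.0):
    """Invert the response curve: recover [Na+] in mM from a ratio in (0, 1)."""
    ratio = np.asarray(ratio, dtype=float)
    return center_mM * 10.0 ** (np.log10(ratio / (1.0 - ratio)) / slope)
```

In practice the calibration curve is fitted to measured ratios at known standards, and the inversion is applied pixel-wise to the ratio image.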

  20. Quantitative analysis of velopharyngeal movement using a stereoendoscope: accuracy and reliability of range images. (United States)

    Nakano, Asuka; Mishima, Katsuaki; Shiraishi, Ruriko; Ueyama, Yoshiya


    We developed a novel method of producing accurate range images of the velopharynx using a three-dimensional (3D) endoscope to obtain detailed measurements of velopharyngeal movements. The purpose of the present study was to determine the appropriate distance from the endoscope to an object, elucidate the measurement accuracy along the temporal axes, and determine the degree of blurring when using a jig to fix the endoscope. An endoscopic measuring system was developed in which a pattern projection system was incorporated into a commercially available 3D endoscope. After correcting the distortion of the camera images, range images were produced using pattern projection to achieve stereo matching. Graph paper was used to measure the appropriate distance from the camera to an object, the mesial buccal cusp of the right maxillary first molar was measured to clarify range image stability, and an electric actuator was used to evaluate the measurement accuracy along the temporal axes. The measurement error was substantial when the distance from the camera to the subject was >6.5 cm. The standard error of the 3D coordinate value produced from 30 frames was within 0.1 mm (range, 0.01-0.08 mm). The measurement error of the temporal axes was 9.16% in the horizontal direction and 9.27% in the vertical direction. The optimal distance from the camera to an object is <6.5 cm. The present endoscopic measuring system can provide stable range images of the velopharynx when an appropriate fixation method is used, and enables quantitative analysis of velopharyngeal movements.

  1. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J


    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  2. Sub-image data processing in Astro-WISE

    NARCIS (Netherlands)

    Mwebaze, Johnson; Boxhoorn, Danny; McFarland, John; Valentijn, Edwin A.

    Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is the processing of the entire image or set of images even when the sole source of

  3. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert


    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  4. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan


    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of the image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medicine, computer-aided design (CAD), research, crime investigation and the military. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been tedious; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via E-LAP to the requesting customer with the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All interaction between customer and bank takes place through the E-LAP system.

  5. Positron range in PET imaging: an alternative approach for assessing and correcting the blurring

    DEFF Research Database (Denmark)

    Jødal, Lars; Le Loirec, Cindy; Champion, Christophe


    Background: Positron range impairs resolution in PET imaging, especially for high-energy emitters and for small-animal PET. De-blurring in image reconstruction is possible if the blurring distribution is known. Further, the percentage of annihilation events within a given distance from the point...... of positron emission is relevant for assessing statistical noise. Aims: The paper aims to determine positron range distribution relevant for blurring for seven medically relevant PET isotopes, 18F, 11C, 13N, 15O, 68Ga, 62Cu, and 82Rb, and derive empirical formulas for the distributions. The paper focuses...... on allowed-decay isotopes. Methods: It is argued that blurring at the detection level should not be described by positron range r, but instead the 2D-projected distance δ (equal to the closest distance between decay and line-of-response). To determine these 2D distributions, results from a dedicated positron...
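    The paper's central point, that detection-level blurring is governed by the 2D-projected distance δ (closest distance between decay and line-of-response) rather than the 3D positron range r, can be illustrated with a small Monte Carlo sketch. The exponential range distribution and its scale below are illustrative assumptions, not the isotope-specific distributions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# hypothetical 3D positron range r in mm (an exponential is only an
# illustration, not an isotope-specific distribution)
r = rng.exponential(scale=0.6, size=n)

# the line-of-response direction is essentially independent of the
# positron flight direction, so take it isotropic: delta = r * sin(theta)
cos_t = rng.uniform(-1.0, 1.0, size=n)
delta = r * np.sqrt(1.0 - cos_t ** 2)

# projecting to 2D shrinks the effective blurring: E[delta] / E[r] = pi/4
ratio = delta.mean() / r.mean()
```

Whatever the range distribution, the isotropic projection multiplies the mean by π/4 ≈ 0.785, which is why describing blurring by r alone overstates the resolution loss.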

  6. Stochastic calculus analysis of optical time-of-flight range imaging and estimation of radial motion. (United States)

    Streeter, Lee


    Time-of-flight range imaging is analyzed using stochastic calculus. Through a series of interpretations and simplifications, the stochastic model leads to two methods for estimating linear radial velocity: maximum likelihood estimation on the transition probability distribution between measurements, and a new method based on analyzing the measured correlation waveform and its first derivative. The methods are tested in a simulated motion experiment from −40 to +40 m/s, with data from a camera imaging an object on a translation stage. In tests, maximum likelihood is slow and unreliable, but when it works it estimates the linear velocity with a standard deviation of 1 m/s or better. In comparison, the new method is fast and reliable but works in a reduced velocity range of −20 to +20 m/s, with standard deviations ranging from 3.5 m/s to 10 m/s.

  7. Optimizing Single Sweep Range and Doppler Processing for FMCW Radar using Inverse Filtering

    NARCIS (Netherlands)

    Jong, A.J. de; Dorp, Ph. van


    We discuss range and Doppler processing for FMCW radar using only a single pulse or frequency sweep. The first step is correlation processing, for which the range and Doppler resolution are limited by the ambiguity function. We show that this resolution can be optimized with an additional inverse

  8. Image simulation and a model of noise power spectra across a range of mammographic beam qualities. (United States)

    Mackenzie, Alistair; Dance, David R; Diaz, Oliver; Young, Kenneth C


    The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor which was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise in CR. The use of the
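    The separation of electronic, quantum and structure noise by fitting a quadratic in E at each spatial frequency can be sketched on synthetic data. All coefficients and the noise level below are made up; in the paper the fit is repeated for every spatial frequency of the measured NPS.

```python
import numpy as np

rng = np.random.default_rng(7)

# exposures (arbitrary units of absorbed energy per unit area, E)
E = np.linspace(1.0, 20.0, 15)

# synthetic NPS at one spatial frequency:
#   NPS(E) = a + b*E + c*E**2
# a -> electronic noise, b*E -> quantum noise, c*E**2 -> structure noise
a_true, b_true, c_true = 2.0, 0.8, 0.05            # made-up coefficients
nps_meas = (a_true + b_true * E + c_true * E ** 2) \
           * (1.0 + rng.normal(0.0, 0.005, E.size))  # 0.5% measurement noise

# the quadratic fit separates the three noise contributions
c_fit, b_fit, a_fit = np.polyfit(E, nps_meas, 2)
electronic = a_fit              # flat term, independent of exposure
quantum = b_fit * E             # grows linearly with exposure
structure = c_fit * E ** 2      # grows quadratically with exposure
```

With the three components in hand, images from one detector can be degraded by adding just enough of each noise component to match a target detector, which is the simulation use-case described in the abstract.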

  9. Image simulation and a model of noise power spectra across a range of mammographic beam qualities

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Alistair; Dance, David R.; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); Diaz, Oliver [Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom and Computer Vision and Robotics Research Institute, University of Girona, Girona 17071 (Spain)


    Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor which was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise

  10. Interactive image processing for mobile devices (United States)

    Shaw, Rodney


    As the number of consumer digital images escalates by tens of billions each year, an increasing proportion of these images are being acquired using the latest generations of sophisticated mobile devices. The characteristics of the cameras embedded in these devices now yield image-quality outcomes that approach those of the parallel generations of conventional digital cameras, and all aspects of the management and optimization of these vast new image populations become of utmost importance in providing ultimate consumer satisfaction. However, this satisfaction is still limited by the fact that a substantial proportion of all images are perceived to have inadequate image quality, and a lesser proportion of these to be completely unacceptable (for sharing, archiving, printing, etc.). In past years at this same conference, the author has described various aspects of a consumer digital-image interface based entirely on an intuitive image-choice-only operation. Demonstrations have been given of this facility in operation, essentially allowing critical-path navigation through approximately a million possible image-quality states within a matter of seconds. This was made possible by the definition of a set of orthogonal image vectors, and defining all excursions in terms of a fixed linear visual-pixel model, independent of the image attribute. During recent months this methodology has been extended to yield specific user-interactive image-quality solutions in the form of custom software, which at less than 100 kb is readily embedded in the latest generations of unlocked portable devices. This has also necessitated the design of new user interfaces and controls, as well as streamlined and more intuitive versions of the user quality-choice hierarchy. The technical challenges and details will be described for these modified versions of the enhancement methodology, and initial practical experience with typical images will be described.

  11. Multiscale image processing and antiscatter grids in digital radiography. (United States)

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D


    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio, and thus decreased image quality, in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  12. Forward and backward tone mapping of high dynamic range images based on subband architecture (United States)

    Bouzidi, Ines; Ouled Zaid, Azza


    This paper presents a novel high dynamic range (HDR) tone mapping (TM) system based on a sub-band architecture. Standard wavelet filters (Daubechies, Symlets, Coiflets and biorthogonal) were used to evaluate the proposed system's performance in terms of low dynamic range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank, and the transform coefficients are then rescaled using a predefined gain map. The inverse tone mapping (iTM) stage is straightforward: the LDR image passes through the same sub-band architecture, but instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module that selects the gain map components minimizing the reconstruction error, resulting in high-fidelity HDR content. Comparisons with recent state-of-the-art methods, using objective and subjective evaluations, have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity.

  13. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz


    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture that senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA), which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time, adaptive manner by exploiting information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, traditional coded apertures and GCA are compared with respect to noise tolerance.

  14. Image processing and enhancement provided by commercial dental software programs

    National Research Council Canada - National Science Library

    Lehmann, T M; Troeltsch, E; Spitzer, K


    To identify and analyse methods/algorithms for image processing provided by various commercial software programs used in direct digital dental imaging and to map them onto a standardized nomenclature...

  15. Video image processing to create a speed sensor (United States)


    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information from a sequence of real-time traffic images. We extract moving edges ...
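    Once moving edges have been tracked across frames, converting pixel displacement to speed is straightforward. The helper below is a hypothetical illustration; the scale factor, frame rate and positions are made up, not taken from the report.

```python
import numpy as np

def speed_from_frames(pos_px, m_per_px, fps):
    """Toy speed estimate: convert the per-frame pixel displacement of a
    tracked vehicle into metres per second, given the ground-plane scale
    (metres per pixel) and the camera frame rate."""
    disp_px = np.diff(np.asarray(pos_px, dtype=float))  # pixels per frame
    return disp_px * m_per_px * fps                     # m/s per interval

# a vehicle advancing 4 px/frame, at 0.05 m/px and 25 frames/s -> 5 m/s
v = speed_from_frames([100, 104, 108, 112], m_per_px=0.05, fps=25)
```

In practice the metres-per-pixel scale varies with position in the image, so a real system would apply a perspective (homography) correction before this conversion.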

  16. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato


    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  17. Viewpoints on Medical Image Processing: From Science to Application (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas


    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  18. Influence of the particle size on polarization-based range-gated imaging in turbid media

    Directory of Open Access Journals (Sweden)

    Heng Tian


    The influence of scatterer size on image contrast in polarization-based range-gated imaging through turbid media is investigated here by the Monte Carlo method. Compared with linearly polarized light, circularly polarized light is more efficient at eliminating noise photons in both isotropic and anisotropic media. The improvement in contrast is pronounced for an isotropic medium using either linear or circular polarization. Plausible explanations for these observations are also presented.
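
    The simulation style the abstract relies on can be illustrated with a deliberately minimal Monte Carlo sketch (a toy example of mine, not the authors' code): photons travel exponentially distributed free paths, and any scattering event removes them from the ballistic, image-forming beam. Polarization tracking, which the study depends on, is omitted here.

```python
import numpy as np

# Minimal Monte Carlo sketch of photon transport in a turbid slab:
# photons travel exponential free paths (mean 1/mu_s); any scattering
# event removes them from the ballistic (image-forming) beam.
# Illustrative only -- the study's polarized Monte Carlo is far richer.
def ballistic_fraction(mu_s, thickness, n_photons=200_000, seed=0):
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(1.0 / mu_s, n_photons)
    return np.mean(free_paths > thickness)   # ~ exp(-mu_s * thickness)

frac = ballistic_fraction(mu_s=2.0, thickness=1.0)
```

    For mu_s·d = 2 the surviving ballistic fraction matches Beer-Lambert attenuation, exp(-2) ≈ 0.135 — a quick sanity check for simulations of this kind.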

  19. Method development for verification of the completeness of ancient statues by image processing


    Natthariya Laopracha; Umaporn Saisangjan; Rapeeporn Chamchong


    Ancient statues are cultural heritage that should be preserved and maintained. Nevertheless, such invaluable statues may be targeted by vandalism or burglary. In order to guard these statues using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts using digital image processing. This paper proposes an effective feature extraction method for detecting images of damaged statues or statues with missing parts based on the Hi...

  20. Histopathological Image Analysis Using Image Processing Techniques: An Overview


    A. D. Belsare; M.M. Mushrif


    This paper reviews computer-assisted histopathology image analysis for cancer detection and classification. Histopathology refers to the examination of an invasive or less invasive biopsy sample by a pathologist under a microscope for locating, analyzing and classifying most diseases, such as cancer. The analysis of histopathological images is done manually by the pathologist to detect disease, which leads to subjective diagnosis of the sample and varies with the level of expertise of the examine...

  1. Multiple-wavelength range-gated active imaging principle in the accumulation mode for three-dimensional imaging. (United States)

    Matwyschuk, Alexis


    Having laid the foundations of the multiple-wavelength range-gated active imaging principle in flash mode in a previous paper, we have been studying its use in accumulation mode. Whatever the mode, the principle consists of restoring the 3D scene directly in a single image at the moment of recording with a camera. Each emitted light pulse with a different wavelength corresponds to a visualized zone with a different distance in the scene. So each of these visualized zones is identified by a different wavelength. In flash mode, the camera shutter opens just once during the emission of light pulses with the different wavelengths. However, the energy constraints to restore scenes in three dimensions can lead to a change in the recording mode when moving from the flash mode to the accumulation mode. In this mode, the cycle, including a series of light pulses with the used wavelengths and an aperture of the camera shutter, is repeated several times for a given image recorded with the intensified camera. Each wavelength always corresponds to a visualized slice with a different distance in the scene. So, the accumulation enables increasing the illumination of every visualized slice. The modeling conducted in the previous paper must be completed to adapt it to this mode. The tests with a multiple-wavelength laser source confirmed the quality improvement of the recorded images for more remote scenes and validated the principle of restoring, directly in a color image, the three dimensions of a scene.
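
    The geometry behind the principle is simple: each gate delay selects a visualized slice whose distance follows from the round-trip time of light, R = c·t/2. A small sketch with illustrative numbers (not the paper's parameters):

```python
# Range-gated imaging: a gate delay t maps to a slice distance
# R = c * t / 2 (round trip). Values below are illustrative only.
C = 299_792_458.0  # speed of light, m/s

def slice_distance(gate_delay_s):
    """Distance of the visualized slice for a given gate delay (m)."""
    return C * gate_delay_s / 2.0

def slice_thickness(pulse_width_s, gate_width_s):
    """Approximate depth of the visualized slice (m)."""
    return C * (pulse_width_s + gate_width_s) / 2.0

# Example: three wavelengths with gate delays staggered by 100 ns
# visualize slices roughly 15 m apart.
delays = [1e-6, 1.1e-6, 1.2e-6]
distances = [slice_distance(t) for t in delays]
```

    Staggering the gate delay per wavelength is exactly what lets each wavelength tag a different depth slice of the scene.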

  2. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)


    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the image modifications introduced by the processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
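
    The three figures of merit are linked by the standard spatial-frequency relation DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence and NNPS the noise power spectrum normalized by the squared mean signal. A numerical sketch with synthetic values (not measured data):

```python
import numpy as np

# Hedged sketch of the standard frequency-dependent DQE relation:
#   DQE(f) = MTF(f)^2 / (q * NNPS(f))
# q = incident photon fluence, NNPS = noise power spectrum normalized
# by the squared mean signal. All values below are synthetic.
f = np.linspace(0.0, 2.5, 26)      # spatial frequency, cycles/mm
mtf = np.sinc(f / 2.5)             # synthetic presampling MTF
q = 30000.0                        # photons / mm^2 (illustrative)
nnps = np.full_like(f, 1.0 / q)    # quantum-limited detector: NNPS = 1/q

dqe = mtf**2 / (q * nnps)          # equals MTF^2 here: DQE(0)=1, falls with f
```

    For a quantum-limited detector DQE(0) is 1 and the curve falls with frequency as MTF²; any nonlinear processing (such as MUSICA amplification) alters MTF and NNPS and therefore the apparent DQE, which is the paper's point.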

  3. Viking image processing. [digital stereo imagery and computer mosaicking] (United States)

    Green, W. B.


    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  4. Image processing and analysis with graphs theory and practice

    CERN Document Server

    Lézoray, Olivier


    Covering the theoretical aspects of image processing and analysis through the use of graphs in the representation and analysis of objects, Image Processing and Analysis with Graphs: Theory and Practice also demonstrates how these concepts are indispensable for the design of cutting-edge solutions for real-world applications. Explores new applications in computational photography, image and video processing, computer graphics, recognition, medical and biomedical imaging. With the explosive growth in image production, in everything from digital photographs to medical scans, there has been a drast

  5. FunImageJ: a Lisp framework for scientific image processing. (United States)

    Harrington, Kyle I S; Rueden, Curtis T; Eliceiri, Kevin W


    FunImageJ is a Lisp framework for scientific image processing built upon the ImageJ software ecosystem. The framework provides a natural functional-style for programming, while accounting for the performance requirements necessary in big data processing commonly encountered in biological image analysis. Freely available plugin to Fiji ( Installation and use instructions available at ( Supplementary data are available at Bioinformatics online.

  6. Survey on Neural Networks Used for Medical Image Processing. (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori


    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are discussed. Problematic issues of neural network application to medical image processing and an outlook for future research are also presented. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this will be very helpful to researchers who are involved in medical image processing with neural network techniques.


    Directory of Open Access Journals (Sweden)

    T. H. Kurz


    Compact and lightweight hyperspectral imagers allow the application of close-range hyperspectral imaging with a ground-based scanning setup for geological fieldwork. Using such a scanning setup, steep cliff sections and quarry walls can be scanned with a more appropriate viewing direction and a higher image resolution than from airborne and spaceborne platforms. Integration of the hyperspectral imagery with terrestrial lidar scanning provides the hyperspectral information in a georeferenced framework and enables measurement at centimetre scale. In this paper, three geological case studies are used to demonstrate the potential of this method for rock characterisation. Two case studies are applied to carbonate quarries where mapping of different limestone and dolomite types was required, as well as measurements of faults and layer thicknesses from inaccessible parts of the quarries. The third case study demonstrates the method using artificial lighting, applied in a subsurface scanning scenario where solar radiation cannot be utilised.

  9. Application of image processing technology in yarn hairiness detection


    Zhang, Guohong; Xin, Binjie


    Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this new developed method. Compared with the traditional methods, the image processing technology based method is...

  10. Optimizing signal and image processing applications using Intel libraries (United States)

    Landré, Jérôme; Truchetet, Frédéric


    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  11. Range image segmentation using Zernike moment-based generalized edge detector (United States)

    Ghosal, S.; Mehrotra, R.


    The authors propose a novel Zernike moment-based generalized step-edge detection method that can be used for segmenting range and intensity images. A generalized step-edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, the step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all the parameters of this edge model. Theoretical noise analysis shows that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.
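
    As a hedged illustration of moment-based edge parameter estimation (the classical 1-D moment-preserving scheme in the style of Tabatabai and Mitchell, not the paper's Zernike formulation), the first three sample moments of a window already determine the two gray levels and the location of an ideal step edge:

```python
import numpy as np

# 1-D moment-preserving step-edge estimation: from the first three sample
# moments of a window, recover the two gray levels (h1, h2) and the edge
# location of an ideal step. A stand-in for the idea of moment operators,
# NOT the paper's 2-D Zernike-mask method.
def moment_edge(window):
    x = np.asarray(window, dtype=float)
    n = x.size
    m1, m2, m3 = (np.mean(x**k) for k in (1, 2, 3))
    sigma2 = m2 - m1**2
    s = (m3 - 3 * m1 * m2 + 2 * m1**3) / sigma2**1.5  # skewness
    p = 0.5 * (1 + s / np.sqrt(4 + s**2))             # fraction below the edge
    h1 = m1 - np.sqrt(sigma2 * (1 - p) / p)           # lower gray level
    h2 = m1 + np.sqrt(sigma2 * p / (1 - p))           # upper gray level
    return p * n, h1, h2                              # edge location, levels

loc, lo, hi = moment_edge([10] * 6 + [50] * 4)        # ideal step at sample 6
```

    On a noise-free step the recovery is exact; with noise the moments average it out, which is why moment operators tolerate noise well.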

  12. High throughput holographic imaging-in-flow for the analysis of a wide plankton size range. (United States)

    Yourassowsky, Catherine; Dubois, Frank


    We developed a Digital Holographic Microscope (DHM) working with a partially coherent source, specifically adapted to perform high-throughput recording of holograms of plankton organisms in-flow, over a size range of 3 µm-300 µm, which is important for this kind of application. This wide size range is achieved with the same flow cell and the same microscope magnification. The DHM configuration combines high magnification with a large field of view and provides high-resolution intensity and quantitative phase images with refocusing at high sample flow rates. Specific algorithms were developed to automatically detect and extract the particles and organisms present in the samples in order to build holograms of each one, which are used for holographic refocusing and quantitative phase contrast imaging. Experimental results are shown and discussed.
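
    Holographic refocusing of a recorded hologram is commonly done by angular-spectrum propagation; a minimal numpy sketch with illustrative parameters (the paper's specific detection and refocusing algorithms are not reproduced):

```python
import numpy as np

# Hedged sketch of numerical refocusing by angular-spectrum propagation,
# a standard way to refocus a complex hologram field to another plane.
# Wavelength, pixel pitch and distance below are illustrative only.
def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square complex field a distance z (units of wavelength/dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz))

field = np.ones((64, 64), dtype=complex)             # unit plane wave
out = angular_spectrum(field, 0.5e-6, 3e-6, 100e-6)  # refocus by 100 um
```

    A plane wave only picks up a global phase under propagation, so its amplitude is preserved — a convenient correctness check before refocusing real holograms.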

  13. Fusion of Building Information and Range Imaging for Autonomous Location Estimation in Indoor Environments

    Directory of Open Access Journals (Sweden)

    Tobias K. Kohoutek


    Full Text Available We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML. What makes this task challenging is the arbitrary relative spatial relation between GIS and Time-of-Flight (ToF range camera further complicated by a markerless configuration. We propose to estimate the camera’s pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance.

  14. Image analysis of gunshot residue on entry wounds. II--A statistical estimation of firing range. (United States)

    Brown, H; Cauchi, D M; Holden, J L; Allen, F C; Cordner, S; Thatcher, P


    A statistical investigation of the relationship between firing range and the amount and distribution of gunshot residue (GSR) used automated image analysis (IA) to quantify the GSR deposit resulting from firings into pig skin, from distances ranging between contact and 45 cm. Overall, for a Ruger .22 semi-automatic rifle using CCI solid-point, high-velocity ammunition, the total area of the GSR deposit on the skin sections decreased in a non-linear fashion with firing range. More specifically, there were significant differences in the amount of GSR deposited from shots fired at contact compared with shots fired from distances between 2.5 and 45 cm, and between shots fired from a distance of 20 cm or less and shots fired from a distance of 30 cm or more. In addition, GSR particles were heavily concentrated in the wound tract only for contact and close-range shots at 2.5 cm, while the particle distribution was more uniform between the wound tract and the skin surfaces for shots fired from distances greater than 2.5 cm. Consequently, for future scientific investigations of gunshot fatalities, once standards have been established for the weapon and ammunition type in question, image analysis quantification of GSR deposited in and around the gunshot wound may be capable of providing a reliable, statistical basis for estimating firing range.
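
    The core IA measurement, thresholding an image and summing the particle area, can be sketched as follows (the threshold and pixel scale are illustrative, not the study's calibration):

```python
import numpy as np

# Hedged sketch of the kind of measurement automated image analysis makes:
# threshold a grayscale image and sum the area of (dark) GSR particles.
# Threshold and mm^2-per-pixel scale are illustrative assumptions.
def gsr_area_mm2(image, threshold, mm2_per_pixel):
    """Total area of pixels darker than `threshold`."""
    mask = np.asarray(image) < threshold
    return mask.sum() * mm2_per_pixel

# Synthetic 10x10 image: background gray 200, a 3x3 particle cluster at 50
img = np.full((10, 10), 200)
img[2:5, 2:5] = 50
area = gsr_area_mm2(img, threshold=100, mm2_per_pixel=0.01)  # 9 px -> 0.09 mm^2
```

    Summing such areas per annular zone around the wound gives the distance-dependent deposit curves the study fits against firing range.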

  15. TRM4: Range performance model for electro-optical imaging systems (United States)

    Keßler, Stefan; Gal, Raanan; Wittenstein, Wolfgang


    TRM4 is a commonly used model for assessing device and range performance of electro-optical imagers. The latest version, TRM4.v2, has been released by Fraunhofer IOSB of Germany in June 2016. While its predecessor, TRM3, was developed for thermal imagers, assuming blackbody targets and backgrounds, TRM4 extends the TRM approach to assess three imager categories: imagers that exploit emitted radiation (TRM4 category Thermal), reflected radiation (TRM4 category Visible/NIR/SWIR), and both emitted and reflected radiation (TRM4 category General). Performance assessment in TRM3 and TRM4 is based on the perception of standard four-bar test patterns, whether distorted by under-sampling or not. Spatial and sampling characteristics are taken into account by the Average Modulation at Optimum Phase (AMOP), which replaces the system MTF used in previous models. The Minimum Temperature Difference Perceived (MTDP) figure of merit was introduced in TRM3 for assessing the range performance of thermal imagers. In TRM4, this concept is generalized to the MDSP (Minimum Difference Signal Perceived), which can be applied to all imager categories. In this paper, we outline and discuss the TRM approach and pinpoint differences between TRM4 and TRM3. In addition, an overview of the TRM4 software and its functionality is given. Features newly introduced in TRM4, such as atmospheric turbulence, irradiation sources, and libraries are addressed. We conclude with an outlook on future work and the new module for intensified CCD cameras that is currently under development.

  16. GStreamer as a framework for image processing applications in image fusion (United States)

    Burks, Stephen D.; Doe, Joshua M.


    Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using this framework, engineers at NVESD have produced a software package that allows for real time manipulation of processing steps for rapid prototyping in image fusion.

  17. A Very Low Dark Current Temperature-Resistant, Wide Dynamic Range, Complementary Metal Oxide Semiconductor Image Sensor (United States)

    Mizobuchi, Koichi; Adachi, Satoru; Tejada, Jose; Oshikubo, Hiromichi; Akahane, Nana; Sugawa, Shigetoshi


    A very low dark current (VLDC) temperature-resistant approach which best suits a wide dynamic range (WDR) complementary metal oxide semiconductor (CMOS) image sensor with a lateral over-flow integration capacitor (LOFIC) has been developed. By implementing a low electric field photodiode without a trade-off of full well-capacity, reduced plasma damage, re-crystallization, and termination of silicon-silicon dioxide interface states in the front end of line and back end of line (FEOL and BEOL) in a 0.18 µm, two polycrystalline silicon, three metal (2P3M) process, the dark current is reduced to 11 e-/s/pixel (0.35 e-/s/µm2: pixel area normalized) at 60 °C, which is the lowest value ever reported. For further robustness at low and high temperatures, 1/3-in., 5.6-µm pitch, 800×600 pixel sensor chips with low noise readout circuits designed for a signal and noise hold circuit and a programmable gain amplifier (PGA) have also been deposited with an inorganic cap layer on a micro-lens and covered with a metal hermetically sealed package assembly. Image sensing performance results in 2.4 e-rms temporal noise and 100 dB dynamic range (DR) with 237 ke- full well-capacity. The operating temperature range is extended from -40 to 85 °C while retaining good image quality.
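
    The reported 100 dB dynamic range is consistent with the quoted temporal noise and full-well capacity, since DR = 20·log10(Nsat / Nnoise):

```python
import math

# Cross-check of the reported figures: dynamic range in dB from full-well
# capacity and temporal read noise, DR = 20 * log10(Nsat / Nnoise).
full_well_e = 237_000   # e-, from the abstract
read_noise_e = 2.4      # e- rms, from the abstract

dr_db = 20 * math.log10(full_well_e / read_noise_e)  # ~99.9 dB, i.e. ~100 dB
```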

  18. Octave-spanning hyperspectral coherent diffractive imaging in the extreme ultraviolet range. (United States)

    Meng, Yijian; Zhang, Chunmei; Marceau, Claude; Naumov, A Yu; Corkum, P B; Villeneuve, D M


    Soft x-ray microscopy is a powerful imaging technique that provides sub-micron spatial resolution, as well as chemical specificity using core-level near-edge x-ray absorption fine structure (NEXAFS). Near the carbon K-edge (280-300 eV) biological samples exhibit high contrast, and the detailed spectrum contains information about the local chemical environment of the atoms. Most soft x-ray imaging takes place on dedicated beamlines at synchrotron facilities or at x-ray free electron laser facilities. Tabletop femtosecond laser systems are now able to produce coherent radiation at the carbon K-edge and beyond through the process of high harmonic generation (HHG). The broad bandwidth of HHG is seemingly a limitation to imaging, since x-ray optical elements such as Fresnel zone plates require monochromatic sources. Counter-intuitively, the broad bandwidth of HHG sources can be beneficial as it permits chemically-specific hyperspectral imaging. We apply two separate techniques - Fourier transform spectroscopy, and lensless holographic imaging - to obtain images of an object simultaneously at multiple wavelengths using an octave-spanning high harmonic source with photon energies up to 30 eV. We use an interferometric delay reference to correct for nanometer-scale fluctuations between the two HHG sources.

  19. Subinteger Range-Bin Alignment Method for ISAR Imaging of Noncooperative Targets

    Directory of Open Access Journals (Sweden)

    F. Pérez-Martínez


    Inverse Synthetic Aperture Radar (ISAR) is a coherent radar technique capable of generating images of noncooperative targets. ISAR may perform better in adverse meteorological conditions than traditional imaging sensors. Unfortunately, ISAR images are usually blurred because of the relative motion between radar and target. To improve the quality of ISAR products, motion compensation is necessary. In this context, range-bin alignment is the first step of translational motion compensation. In this paper, we propose a subinteger range-bin alignment method based on envelope correlation and reference profiles. The technique, which makes use of a carefully designed optimization stage, is robust against noise, clutter, target scintillation, and error accumulation. It provides very fine translational motion compensation. Comparisons with state-of-the-art range-bin alignment methods are included and the advantages of the proposal are highlighted. Simulated and live data from a high-resolution linear-frequency-modulated continuous-wave radar are used to perform the pertinent comparisons.
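
    The core idea, envelope correlation with a sub-bin (subinteger) peak estimate, can be sketched as follows; parabolic peak interpolation stands in for the paper's optimization stage, which is not reproduced here:

```python
import numpy as np

# Hedged sketch of envelope-correlation range-bin alignment with a
# sub-bin shift estimate via parabolic interpolation of the correlation
# peak. Illustrates the principle only; the paper adds reference profiles
# and a carefully designed optimization stage.
def subbin_shift(reference, profile):
    """Circular shift (in bins) to apply to `profile` to align it with `reference`."""
    n = reference.size
    corr = np.real(np.fft.ifft(np.fft.fft(reference) *
                               np.conj(np.fft.fft(profile))))
    k = int(np.argmax(corr))
    # Parabolic interpolation around the peak (circular indexing)
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    shift = k + delta
    return shift - n if shift > n // 2 else shift  # map to signed shift

x = np.exp(-0.5 * ((np.arange(128) - 64.0) / 3.0) ** 2)  # synthetic envelope
y = np.roll(x, 5)                                        # misaligned profile
est = subbin_shift(x, y)                                 # ~ -5 bins to realign
```

    Accumulating such shifts bin by bin over the profile sequence is what keeps a scatterer in the same range bin across the coherent processing interval.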


    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly


    In this paper, we present a novel approach to three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed components of the SVD, the non-negative values of the ‘S’ part are ranked and used as the feature vector. In this proposed method, two pair-wise curvature computations are done: one uses the Mean and Maximum curvature pair, and the other the Gaussian and Mean curvature pair. These are compared to determine the better recognition rate. This automated 3D face recognition system addresses different scenarios: frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and faces registered from different pose orientations about the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Facial images with pose variation are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
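
    A minimal sketch of the curvature-map and SVD-feature steps (finite-difference curvature on a synthetic range image; smoothing, registration, and the neural-network classifier are omitted):

```python
import numpy as np

# Hedged sketch: Gaussian (K) and mean (H) curvature maps from a range
# image via finite differences, with singular values (the 'S' of the SVD)
# as a compact feature vector, in the spirit of the approach above.
def curvature_maps(z):
    zy, zx = np.gradient(z.astype(float))   # first derivatives
    zxy, zxx = np.gradient(zx)              # second derivatives
    zyy, _ = np.gradient(zy)
    denom = 1 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2                     # Gaussian curvature
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * denom**1.5)            # mean curvature
    return K, H

def svd_features(curvature_map, k=8):
    s = np.linalg.svd(curvature_map, compute_uv=False)      # non-negative values
    return s[:k]

# Synthetic range image: cap of a sphere of radius 50 sampled on a grid
xs = np.arange(-10, 11, dtype=float)
X, Y = np.meshgrid(xs, xs)
Z = np.sqrt(50.0**2 - X**2 - Y**2)
K, H = curvature_maps(Z)
feat = svd_features(K)
```

    For a sphere of radius R the center of the cap should give K ≈ 1/R² and |H| ≈ 1/R, which makes the synthetic surface a convenient correctness check.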

  1. Sliding mean edge estimation. [in digital image processing] (United States)

    Ford, G. E.


    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.
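
    Under white Gaussian noise, the maximum-likelihood location of a step edge in a line-scan interval is the split point that minimizes the pooled residual sum of squares around the two segment means, i.e. a sliding-mean comparison. A minimal sketch of that idea (my own toy version, not the paper's algorithm):

```python
import numpy as np

# Maximum-likelihood step-edge location on a line scan under white
# Gaussian noise: pick the split point minimizing the pooled residual
# sum of squares of the two segment means (a sliding-mean comparison).
# Illustrative sketch, not the paper's exact algorithm.
def ml_edge_location(scan):
    x = np.asarray(scan, dtype=float)
    n = x.size
    best_k, best_rss = None, np.inf
    for k in range(1, n):                 # candidate edge between k-1 and k
        left, right = x[:k], x[k:]
        rss = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

rng = np.random.default_rng(0)
scan = np.r_[np.full(30, 10.0), np.full(20, 14.0)] + 0.3 * rng.standard_normal(50)
edge = ml_edge_location(scan)             # expected near sample 30
```

    Comparing best_rss against the no-edge residual gives the likelihood-ratio test for whether an edge exists in the interval at all, which is the other half of the method described above.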

  2. Experiences with digital processing of images at INPE (United States)

    Mascarenhas, N. D. A. (Principal Investigator)


    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.
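
    Translational registration, experiment (4), is commonly done today by FFT phase correlation; a hedged sketch of that standard technique (the INPE work used sequential tests of hypotheses, which are not reproduced here):

```python
import numpy as np

# Standard FFT phase-correlation sketch for translational registration:
# the phase of the normalized cross-power spectrum encodes the shift,
# which appears as a sharp peak after the inverse FFT.
def phase_correlation(a, b):
    """Integer (dy, dx) translation that maps image `b` onto image `a`."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.maximum(np.abs(F), 1e-12)            # keep phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(F).real), a.shape)
    # Map peak indices to signed shifts
    return tuple(int(p - s if p > s // 2 else p) for p, s in zip(peak, a.shape))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -7), axis=(0, 1))
dy, dx = phase_correlation(shifted, img)         # recovers (3, -7)
```

    Because only the spectral phase is kept, the peak stays sharp even under global intensity changes, which is why phase correlation is a robust baseline for this task.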

  3. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer (United States)

    Masuoka, E.; Rose, J.; Quattromani, M.


    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  4. Photogrammetric Processing of Rover Images by example of NASA's MER Mission Data


    Peters, O.; Scholten, F.; Oberst, J.


    We have developed a photogrammetric processing scheme for planetary rover image data which involves several main steps: dense image matching, improvement of orientation, and 3D reconstruction. The first step uses DLR matching software that was originally built for matching orbital imagery [1]. The main problem with close-range imagery is the wide range of disparities caused by the varying distances to the surface in the foreground and in the background. If not specifically dealt with, th...

  5. Breast image pre-processing for mammographic tissue segmentation. (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer


    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Using quantum filters to process images of diffuse axonal injury (United States)

    Pineda Osorio, Mateo


    Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as the Hermite, Weibull and Morse filters. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing the cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators derived from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum image processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make its use simpler for medical personnel, is proposed as a future research line.
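
    As a generic stand-in for the Laplacian operators mentioned (the quantum-derived kernels themselves are not given in the abstract), a plain discrete Laplacian already marks fiber-like edges:

```python
import numpy as np

# Generic discrete-Laplacian edge detector as a hedged stand-in for the
# Laplacian operators derived from the quantum filters (whose kernels
# are not given here): large |Laplacian| responses mark edges.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_filter(img):
    # Circular convolution via shifted copies (wraps at image borders)
    out = np.zeros(img.shape, dtype=float)
    for (di, dj), w in np.ndenumerate(LAPLACIAN):
        out += w * np.roll(img.astype(float), (di - 1, dj - 1), axis=(0, 1))
    return out

img = np.zeros((16, 16))
img[:, 8:] = 1.0                                  # vertical step edge
edges = np.abs(laplacian_filter(img)) > 0.5       # True along the step
```

    Zero-crossings of the Laplacian (rather than a magnitude threshold) are the other common way to localize such edges, and a dedicated kernel would simply replace the 3x3 stencil above.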

  7. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan


    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  8. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato


      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  9. Product/Process (P/P) Models For The Defense Waste Processing Facility (DWPF): Model Ranges And Validation Ranges For Future Processing

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)


    Radioactive high level waste (HLW) at the Savannah River Site (SRS) has successfully been vitrified into borosilicate glass in the Defense Waste Processing Facility (DWPF) since 1996. Vitrification requires stringent product/process (P/P) constraints since the glass cannot be reworked once it is poured into ten foot tall by two foot diameter canisters. A unique “feed forward” statistical process control (SPC) was developed for this control rather than statistical quality control (SQC). In SPC, the feed composition to the DWPF melter is controlled prior to vitrification. In SQC, the glass product would be sampled after it is vitrified. Individual glass property-composition models form the basis for the “feed forward” SPC. The models transform constraints on the melt and glass properties into constraints on the feed composition going to the melter in order to guarantee, at the 95% confidence level, that the feed will be processable and that the durability of the resulting waste form will be acceptable to a geologic repository.

  10. Image Processing on Morphological Traits of Grape Germplasm


    Shiraishi, Mikio; Shiraishi, Shinichi; Kurushima, Takashi


    Methods of image processing of grape plants were developed to make the description of morphological traits more accurate and effective. A plant image was taken with a still video camera and displayed through digital-to-analog conversion. A high-quality image was obtained at a horizontal resolution of 500 TV lines, resolving in particular the degree of density of prostrate hairs between mature leaf veins (lower surface). The analog image was stored on an optical disk to preserve semipermanentl...

  11. [A novel image processing and analysis system for medical images based on IDL language]. (United States)

    Tang, Min


    Medical image processing and analysis systems, which are of great value in medical research and clinical diagnosis, have been a focal field in recent years. Interactive Data Language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines; it has therefore become ideal software for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. A methodology is proposed to design a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and that it has the advantages of wide applicability, friendly interaction, convenient extension and favorable portability.

  12. Pyramidal Image-Processing Code For Hexagonal Grid (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.


    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. Lowest level contains fine-resolution elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.
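The factor-of-seven reduction described above can be sketched in code. The following is a minimal illustration (not the NASA implementation), assuming pixels addressed by axial hex coordinates (q, r) and hypothetical helper names; coarse centers sit on a sublattice of index 7, so each level has one-seventh the elements of the level below.

```python
# Sketch of one level of a factor-7 hexagonal image pyramid (hypothetical
# helper names; the NASA code itself is not reproduced here).
# Pixels live on a hex grid addressed by axial coordinates (q, r).

# The 7-cell neighborhood: a center hexagon plus its six neighbors.
NEIGHBORHOOD = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def reduce_level(image):
    """Build the next (coarser) pyramid level.

    `image` maps axial coords (q, r) -> intensity. Coarse centers sit on a
    sublattice generated by (2, -1) and (1, 3), whose determinant is
    2*3 - (-1)*1 = 7, so each level holds one-seventh as many elements.
    """
    coarse = {}
    for (q, r) in image:
        # Solve (q, r) = i*(2, -1) + j*(1, 3) for integers i, j.
        i, j = (3 * q - r) / 7, (q + 2 * r) / 7
        if i != int(i) or j != int(j):
            continue  # not a coarse-lattice center
        cells = [(q + dq, r + dr) for dq, dr in NEIGHBORHOOD]
        vals = [image[c] for c in cells if c in image]
        coarse[(int(i), int(j))] = sum(vals) / len(vals)
    return coarse

# A tiny flat test image: every cell of a small hex patch has intensity 5.0.
fine = {(q, r): 5.0 for q in range(-2, 3) for r in range(-2, 3)
        if abs(q + r) <= 2}
coarse = reduce_level(fine)
```

Averaging the seven-cell neighborhood is the simplest choice of reduction kernel; a weighted kernel would mimic the low-pass behavior of the original scheme more closely.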

  13. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok


    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, is selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance realtime image processing platform, is systematized. Image flow example programs for running the MaxSparc system are studied and analyzed, and the state of the art in Datacube system utilization is surveyed. In the next phase, an advanced realtime image processing platform for robot vision applications is to be developed. (author). 19 refs., 71 figs., 11 tabs.

  14. Vacuum Switches Arc Images Pre–processing Based on MATLAB

    Directory of Open Access Journals (Sweden)

    Huajun Dong


    Full Text Available In order to filter out noise in vacuum switch arc (VSA) images, enhance their characteristic details, and improve their visual quality, the VSA images were pre-processed in MATLAB with noise removal, edge detection, pseudo-color and false-color processing, and morphological processing. Furthermore, morphological characteristics of the VSA images were extracted, including isopleths of the gray value, arc area, and perimeter.
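The final measurement step (arc area and perimeter from a segmented image) can be sketched as follows; this is a pure-Python stand-in for the MATLAB morphology and measurement calls, assuming the pre-processing has already produced a binary mask.

```python
# Minimal sketch of the measurement step: after noise removal and
# segmentation, estimate arc area and perimeter from a binary mask.

def area_and_perimeter(mask):
    """mask: list of rows of 0/1. Area = foreground pixel count; perimeter =
    number of foreground pixels touching background (4-connectivity) or the
    image border."""
    h, w = len(mask), len(mask[0])
    area = perimeter = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            on_boundary = False
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    on_boundary = True
            if on_boundary:
                perimeter += 1
    return area, perimeter

# A 4x4 solid square inside a 6x6 frame: area 16, boundary ring of 12 pixels.
mask = [[0] * 6] + [[0, 1, 1, 1, 1, 0] for _ in range(4)] + [[0] * 6]
area, perim = area_and_perimeter(mask)
```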

  15. IPL Processing of the Viking Orbiter Images of Mars (United States)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.


    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  16. Monitoring Car Drivers' Condition Using Image Processing (United States)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by drowsiness. The system consists of three subsystems: an image capturing system with a pulsed infrared CCD camera; a system for detecting the blinking waveform from the images using a neural network, with which we can extract images of face and eye areas; and a system for measuring drivers' consciousness by analyzing the waveform with a fuzzy inference technique and others. The third subsystem first extracts three factors from the waveform and analyzes them with a statistical method, whereas our previous system used only one factor. Our experiments showed that the three-factor method used here was more effective for measuring drivers' consciousness than the one-factor method described in the previous paper. Moreover, the method is more suitable for fitting the system's parameters to each individual driver.

  17. Measurements of pulse rate using long-range imaging photoplethysmography and sunlight illumination outdoors (United States)

    Blackford, Ethan B.; Estepp, Justin R.


    Imaging photoplethysmography, a method using imagers to record absorption variations caused by microvascular blood volume pulsations, shows promise as a non-contact cardiovascular sensing technology. The first long-range imaging photoplethysmography measurements at distances of 25, 50, and 100 meters from the participant were recently demonstrated. Degraded signal quality was observed with increasing imager-to-subject distances. The degradation in signal quality was hypothesized to be largely attributable to inadequate light return to the image sensor with increasing lens focal length. To test this hypothesis, a follow-up evaluation with 27 participants was conducted outdoors with natural sunlight illumination providing 5-33 times the illumination intensity. Video was recorded from cameras equipped with ultra-telephoto lenses and positioned at distances of 25, 50, 100, and 150 meters. The brighter illumination allowed high-definition video recordings at increased frame rates of 60 fps, shorter exposure times, and lower ISO settings, leading to higher quality image formation than the previous indoor evaluation. Results were compared to simultaneous reference measurements from electrocardiography. Compared to the previous indoor study, we observed lower overall error in pulse rate measurement, with the same pattern of degradation in signal quality with respect to increasing distance. This effect was corroborated by the signal-to-noise ratio of the blood volume pulse signal, which also showed decreasing quality with increasing distance. Finally, a popular chrominance-based method was compared to a blind source separation approach; while comparable in signal-to-noise ratio, the chrominance method showed higher overall error in pulse rate measurement on these data.
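As a rough illustration of how a pulse rate is read out of an extracted blood-volume-pulse waveform (not the paper's actual pipeline), the dominant spectral peak of the signal gives the rate; the frame rate and pulse frequency below are synthetic.

```python
# Illustrative sketch: pulse rate as the dominant frequency of a
# blood-volume-pulse waveform, via a brute-force DFT on a synthetic signal.
import math

FPS = 60.0        # frame rate, matching the outdoor recordings
PULSE_HZ = 1.2    # synthetic ground truth: 72 beats per minute
N = 600           # ten seconds of video

signal = [math.sin(2 * math.pi * PULSE_HZ * n / FPS) for n in range(N)]

def dominant_frequency(x, fs):
    """Return the frequency (Hz) of the largest DFT magnitude, ignoring DC."""
    n = len(x)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

bpm = 60.0 * dominant_frequency(signal, FPS)
```

A real pipeline would band-limit the search to physiological pulse rates and use an FFT rather than this O(N²) loop.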

  18. Quantitative immunocytochemistry using an image analyzer. I. Hardware evaluation, image processing, and data analysis. (United States)

    Mize, R R; Holdefer, R N; Nabors, L B


    In this review we describe how video-based image analysis systems are used to measure immunocytochemically labeled tissue. The general principles underlying hardware and software procedures are emphasized. First, the characteristics of image analyzers are described, including the densitometric measure, spatial resolution, gray scale resolution, dynamic range, and acquisition and processing speed. The errors produced by these instruments are described and methods for correcting or reducing the errors are discussed. Methods for evaluating image analyzers are also presented, including spatial resolution, photometric transfer function, short- and long-term temporal variability, and measurement error. The procedures used to measure immunocytochemically labeled cells and fibers are then described. Immunoreactive profiles are imaged and enhanced using an edge sharpening operator and then extracted using segmentation, a procedure which captures all labeled profiles above a threshold gray level. Binary operators, including erosion and dilation, are applied to separate objects and to remove artifacts. The software then automatically measures the geometry and optical density of the extracted profiles. The procedures are rapid and efficient methods for measuring simultaneously the position, geometry, and labeling intensity of immunocytochemically labeled tissue, including cells, fibers, and whole fields. A companion paper describes non-biological standards we have developed to estimate antigen concentration from the optical density produced by antibody labeling (Nabors et al., 1988).
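The densitometric measure and threshold segmentation described above can be sketched as follows; the threshold and the 8-bit incident intensity are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of the densitometric measure: relative optical density of
# segmented pixels from transmitted intensity, with a simple gray-level
# threshold standing in for the segmentation step.
import math

def optical_density(gray, incident=255.0):
    """OD = -log10(I / I0) for an 8-bit transmitted-light intensity."""
    return -math.log10(max(gray, 1) / incident)

def measure_labeled(image, threshold):
    """Mean OD and pixel count of all pixels darker than `threshold`
    (labeled profiles absorb light, so label = low gray value)."""
    ods = [optical_density(g) for row in image for g in row if g < threshold]
    return (sum(ods) / len(ods), len(ods)) if ods else (0.0, 0)

image = [
    [255, 255, 255, 255],
    [255,  25,  25, 255],   # a small darkly labeled profile
    [255,  25,  25, 255],
    [255, 255, 255, 255],
]
mean_od, n_pixels = measure_labeled(image, threshold=128)
```

The erosion/dilation clean-up mentioned in the abstract would run on the thresholded mask before this measurement.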

  19. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities (United States)

    Grinstead, Jay H.; Wilder, Michael C.; Reda, Daniel C.; Cruden, Brett A.; Bogdanoff, David W.


    The Electric Arc Shock Tube (EAST) facility and Hypervelocity Free Flight Aerodynamic Facility (HFFAF, an aeroballistic range) at NASA Ames support basic research in aerothermodynamic phenomena of atmospheric entry, specifically shock layer radiation spectroscopy, convective and radiative heat transfer, and transition to turbulence. Innovative optical instrumentation has been developed and implemented to meet the challenges posed by obtaining such data in these impulse facilities. Spatially and spectrally resolved measurements of absolute radiance of a travelling shock wave in EAST are acquired using multiplexed, time-gated imaging spectrographs. Nearly complete spectral coverage from the vacuum ultraviolet to the near infrared is possible in a single experiment. Time-gated thermal imaging of ballistic range models in flight enables quantitative, global measurements of surface temperature. These images can be interpreted to determine convective heat transfer rates and reveal transition to turbulence due to isolated and distributed surface roughness at hypersonic velocities. The focus of this paper is a detailed description of the optical instrumentation currently in use in the EAST and HFFAF.

  20. Interactive Digital Image Processing Investigation. Phase II. (United States)


    [Abstract not recoverable; only fragments of the report's front matter and source-code documentation survive extraction: Section 7.7.2 "ITRES Control Flow", Section 7.7.3 "Program Subroutine Description", and descriptions of subroutines including ACUSTS (accumulates field statistics; called per field and for the total image) and DSPMAPP.]

  1. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R



  2. The vision guidance and image processing of AGV (United States)

    Feng, Tongqing; Jiao, Bin


    Firstly, the principle of AGV vision guidance is introduced, and the lateral deviation and deflection angle are measured in the image coordinate system. The vision guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed by statistical sorting. Since the images sampled during AGV guidance have differing optimal segmentation thresholds, two-dimensional maximum entropy image segmentation is used to address this problem. We extract the foreground in the target band by calculating contour areas and obtain the centre line with a least-squares fitting algorithm. With the mapping between image and physical coordinates, the guidance information is obtained.
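The least-squares centre-line fit described above can be sketched as follows, assuming (hypothetically) one foreground centroid per image row and a known image centre column.

```python
# Sketch of the final guidance step: least-squares fit of the centre line
# through per-row centroids, then lateral deviation and deflection angle.
import math

def centreline(points):
    """Fit x = a*y + b through (y, x) centroids by least squares."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Synthetic centroids of a guide line: x = 0.5*y + 10 (pixels).
pts = [(y, 0.5 * y + 10.0) for y in range(0, 100, 10)]
a, b = centreline(pts)
image_centre_x = 64.0
deviation = b - image_centre_x            # lateral offset at the top row
deflection = math.degrees(math.atan(a))   # line angle relative to the y-axis
```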

  3. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    ... green colorations of the maize leaves at maturity was used. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classify different levels of maturity. Keywords: Maize, Maturity, CCD Camera, Image Processing, Artificial Neural Network ...

  4. Image Processing In Laser-Beam-Steering Subsystem (United States)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.


    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  5. [Filing and processing systems of ultrasonic images in personal computers]. (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V


    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  6. Digital image processing for two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Young; Lim, Jae Yun [Cheju National University, Cheju (Korea, Republic of); No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)


    A photographic method to measure the key parameters of two-phase flow is realized using a digital image processing technique. An 8-bit gray level and 256 x 256 pixels are used to generate the image data, which are processed to obtain the parameters of two-phase flow. It is observed that the key parameters can be identified from the data obtained by the digital image processing technique.

  7. Surface Distresses Detection of Pavement Based on Digital Image Processing


    Ouyang, Aiguo; Luo, Chagen; Zhou, Chao


    Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to detect pavement cracks for its advantages of large information content and automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...



    Sanjay B Patil; Dr Shrikant K Bodhe


    In order to increase the average sugarcane yield per acre at minimum cost, farmers are adopting precision farming techniques. This paper presents area measurement of sugarcane leaves based on an image processing method, which is useful for monitoring plant growth, analyzing fertilizer deficiency and environmental stress, and measuring disease severity. In the image processing method, leaf area is calculated through pixel counting. Unit pixels in the same digital images represent the same size...
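The pixel-count statistic described above reduces to a few lines; the threshold and the per-pixel calibration factor below are illustrative assumptions.

```python
# Minimal sketch of the pixel-count method: leaf area = number of leaf
# pixels times the area one pixel covers (calibrated with an object of
# known size photographed at the same distance).

def leaf_area_cm2(image, green_threshold, cm2_per_pixel):
    """image: rows of gray values; pixels above threshold count as leaf."""
    leaf_pixels = sum(1 for row in image for g in row if g > green_threshold)
    return leaf_pixels * cm2_per_pixel

# 3 of 9 pixels are "leaf"; calibration says one pixel covers 0.02 cm^2.
image = [[10, 200, 10], [10, 220, 10], [10, 180, 10]]
area = leaf_area_cm2(image, green_threshold=100, cm2_per_pixel=0.02)
```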

  9. Future trends in image processing software and hardware (United States)

    Green, W. B.


    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  10. Range-Gated LADAR Coherent Imaging Using Parametric Up-Conversion of IR and NIR Light for Imaging with a Visible-Range Fast-Shuttered Intensified Digital CCD Camera

    Energy Technology Data Exchange (ETDEWEB)



    Research is presented on infrared (IR) and near infrared (NIR) sensitive sensor technologies for use in a high speed shuttered/intensified digital video camera system for range-gated imaging at "eye-safe" wavelengths in the region of 1.5 microns. The study is based upon nonlinear crystals used for second harmonic generation (SHG) in optical parametric oscillators (OPOs) for conversion of NIR and IR laser light to visible range light for detection with generic S-20 photocathodes. The intensifiers are "stripline"-geometry 18-mm diameter microchannel plate intensifiers (MCPIIs), designed by Los Alamos National Laboratory and manufactured by Philips Photonics. The MCPIIs are designed for fast optical shuttering with exposures in the 100-200 ps range, and are coupled to a fast readout CCD camera. Conversion efficiency and resolution for the wavelength conversion process are reported. Experimental set-ups for the wavelength shifting and the optical configurations for producing and transporting laser reflectance images are discussed.

  11. Tilt-pair analysis of images from a range of different specimens in single-particle electron cryomicroscopy. (United States)

    Henderson, Richard; Chen, Shaoxia; Chen, James Z; Grigorieff, Nikolaus; Passmore, Lori A; Ciccarelli, Luciano; Rubinstein, John L; Crowther, R Anthony; Stewart, Phoebe L; Rosenthal, Peter B


    The comparison of a pair of electron microscope images recorded at different specimen tilt angles provides a powerful approach for evaluating the quality of images, image-processing procedures, or three-dimensional structures. Here, we analyze tilt-pair images recorded from a range of specimens with different symmetries and molecular masses and show how the analysis can produce valuable information not easily obtained otherwise. We show that the accuracy of orientation determination of individual single particles depends on molecular mass, as expected theoretically since the information in each particle image increases with molecular mass. The angular uncertainty is less than 1° for particles of high molecular mass (~50 MDa), several degrees for particles in the range 1-5 MDa, and tens of degrees for particles below 1 MDa. Orientational uncertainty may be the major contributor to the effective temperature factor (B-factor) describing contrast loss and therefore the maximum resolution of a structure determination. We also made two unexpected observations. Single particles that are known to be flexible showed a wider spread in orientation accuracy, and the orientations of the largest particles examined changed by several degrees during typical low-dose exposures. Smaller particles presumably also reorient during the exposure; hence, specimen movement is a second major factor that limits resolution. Tilt pairs thus enable assessment of orientation accuracy, map quality, specimen motion, and conformational heterogeneity. A convincing tilt-pair parameter plot, where 60% of the particles show a single cluster around the expected tilt axis and tilt angle, provides confidence in a structure determined using electron cryomicroscopy. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Modified Range-Doppler Processing for FM-CW Synthetic Aperture Radar

    NARCIS (Netherlands)

    Wit, J.J.M. de; Meta, A.; Hoogeboom, P.


    The combination of compact frequency-modulated continuous-wave (FM-CW) technology and high-resolution synthetic aperture radar (SAR) processing techniques should pave the way for the development of a lightweight, cost-effective, high-resolution, airborne imaging radar. Regarding FM-CW SAR signal

  13. NASA Computational Case Study SAR Data Processing: Ground-Range Projection (United States)

    Memarsadeghi, Nargess; Rincon, Rafael


    Radar technology is used extensively by NASA for remote sensing of the Earth and other planetary bodies. In this case study, we learn about different computational concepts for processing radar data. In particular, we learn how to correct a slanted radar image by projecting it onto the surface that was sensed by the radar instrument.
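A minimal sketch of the slant-to-ground-range correction, under a flat-Earth assumption (the case study itself may use a more complete geometry): a sample at slant range R seen from platform height h lies at ground range sqrt(R^2 - h^2). The values below are illustrative, not from the NASA exercise.

```python
# Flat-Earth slant-to-ground-range projection of one row of radar samples.
import math

def ground_range(slant_range, height):
    """Project a slant-range distance onto the ground (flat-Earth model)."""
    return math.sqrt(slant_range ** 2 - height ** 2)

def project_row(slant_pixels, r0, dr, height):
    """Map one row of slant-range samples (first sample at range r0,
    spacing dr, in meters) to (ground_range, value) pairs."""
    return [(ground_range(r0 + i * dr, height), v)
            for i, v in enumerate(slant_pixels)]

row = project_row([1.0, 2.0, 3.0], r0=5000.0, dr=10.0, height=3000.0)
```

The projected positions are non-uniformly spaced, so a real processor would follow this with resampling onto a regular ground-range grid.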

  14. Large scale parallel document image processing

    NARCIS (Netherlands)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K


    Building a system which allows to search a very large database of document images. requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience dealing with large data sets due to an increasing number of measurement instruments. The

  15. 8th International Image Processing and Communications Conference

    CERN Document Server


    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 8th International Image Processing and Communications Conference (IP&C 2016) held in Bydgoszcz, Poland, September 7-9, 2016. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered.

  16. Implementing full backtracking facilities for Prolog-based image processing (United States)

    Jones, Andrew C.; Batchelor, Bruce G.


    PIP (Prolog image processing) is a system currently under development at UWCC, designed to support interactive image processing using the Prolog programming language. In this paper we discuss Prolog-based image processing paradigms and present a meta-interpreter developed by the first author, designed to support an approach to image processing in PIP which is more in the spirit of Prolog than was previously possible. This meta-interpreter allows backtracking over image processing operations in a manner transparent to the programmer. Currently, for space efficiency, the programmer needs to indicate the operations over which the system may backtrack in a program; however, a number of extensions to the present work, including a more intelligent approach intended to obviate this need, are mentioned at the end of this paper, and the present meta-interpreter will provide a basis for investigating them in the future.

  17. 6th International Image Processing and Communications Conference

    CERN Document Server


    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 6th International Image Processing and Communications Conference (IP&C 2014) held in Bydgoszcz, September 10-12, 2014. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered.

  18. Automatic grading of appearance retention of carpets using intensity and range images (United States)

    Orjuela Vargas, Sergio Alejandro; Ortiz-Jaramillo, Benhur; Vansteenkiste, Ewout; Rooms, Filip; De Meulemeester, Simon; de Keyser, Robain; Van Langenhove, Lieva; Philips, Wilfried


    Textiles are mainly used for decoration and protection. In both cases, their original appearance and its retention are important factors for customers. Therefore, evaluation of appearance parameters are critical for quality assurance purposes, during and after manufacturing, to determine the lifetime and/or beauty of textile products. In particular, appearance retention of textile products is commonly certified with grades, which are currently assigned by human experts. However, manufacturers would prefer a more objective system. We present an objective system for grading appearance retention, particularly, for textile floor coverings. Changes in appearance are quantified by using linear regression models on texture features extracted from intensity and range images. Range images are obtained by our own laser scanner, reconstructing the carpet surface using two methods that have been previously presented. We extract texture features using a variant of the local binary pattern technique based on detecting those patterns whose frequencies are related to the appearance retention grades. We test models for eight types of carpets. Results show that the proposed approach describes the degree of wear with a precision within the range allowed to human inspectors by international standards. The methodology followed in this experiment has been designed to be general for evaluating global deviation of texture in other types of textiles, as well as other surface materials.
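The paper uses its own local binary pattern variant; for orientation, the classic 3x3 LBP computation that such variants build on can be sketched as:

```python
# Classic 3x3 local binary pattern (LBP): each pixel gets an 8-bit code
# encoding which neighbors are at least as bright as the center; texture is
# summarized by the histogram of codes.

def lbp_code(image, y, x):
    """8-bit LBP code, clockwise from the top-left neighbor."""
    c = image[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(image) - 1):
        for x in range(1, len(image[0]) - 1):
            hist[lbp_code(image, y, x)] += 1
    return hist

flat = [[7] * 4 for _ in range(4)]   # uniform patch: every code is 255
hist = lbp_histogram(flat)
```

In the paper's setting, histograms like this (from intensity and range images of worn versus unworn carpet) feed the linear regression models that predict the appearance retention grade.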

  19. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves (United States)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.


    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
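The core of time-series differential photometry, which AIJ streamlines, can be sketched as follows (AIJ's implementation is far more complete; the fluxes below are synthetic): dividing the target's flux by the summed flux of comparison stars cancels brightness variations shared across the field.

```python
# Sketch of differential photometry: per-image ratio of target flux to the
# summed flux of comparison stars.

def differential_light_curve(target_fluxes, comparison_fluxes):
    """target_fluxes: one background-subtracted flux per image.
    comparison_fluxes: per image, a list of comparison-star fluxes."""
    return [t / sum(comps)
            for t, comps in zip(target_fluxes, comparison_fluxes)]

# Clouds dim every star by the same factor per frame; the ratio stays flat.
transparency = [1.0, 0.8, 0.9]
target = [1000.0 * a for a in transparency]
comps = [[5000.0 * a, 3000.0 * a] for a in transparency]
curve = differential_light_curve(target, comps)
```

A transiting exoplanet would appear as a shallow dip in `curve` even though the raw fluxes vary far more due to atmospheric transparency.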

  20. Condensation transition in a conserved generalized interacting zero-range process. (United States)

    Khaleque, Abdul; Sen, Parongama


    A conserved generalized zero-range process is considered in which two sites interact such that particles hop from the more populated site to the other with a probability p. The steady-state particle distribution function P(n) is obtained using both analytical and numerical methods. The system goes through several phases as p is varied. In particular, a condensate phase appears for p_{l} < p < p_{c}. A study of the condensate phase using a known scaling form shows there is universal behavior in the short-range process, while the infinite-range process displays nonuniversality. In the noncondensate phase above p_{c}, two distinct regions are identified: p_{c} < p < 0.5 and p > 0.5; a scale emerges in the system in the latter, and this feature is present for all ranges of interaction.
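The hopping rule as stated can be simulated directly; the sketch below (illustrative parameters, nearest-neighbor interaction only) measures the empirical occupation distribution P(n) on a ring.

```python
# Illustrative simulation of the model as described: on a ring of sites, a
# randomly chosen pair of neighboring sites interacts, and one particle hops
# from the more populated site to the other with probability p. The total
# particle number is conserved.
import random

def simulate(L=20, n_per_site=3, p=0.7, steps=20000, seed=1):
    random.seed(seed)
    occ = [n_per_site] * L           # conserved total: L * n_per_site
    for _ in range(steps):
        i = random.randrange(L)
        j = (i + 1) % L              # nearest-neighbor (short-range) pair
        if occ[i] == occ[j]:
            continue
        src, dst = (i, j) if occ[i] > occ[j] else (j, i)
        if random.random() < p:
            occ[src] -= 1
            occ[dst] += 1
    return occ

def distribution(occ):
    """Empirical P(n): fraction of sites holding n particles."""
    pn = {}
    for n in occ:
        pn[n] = pn.get(n, 0) + 1 / len(occ)
    return pn

occ = simulate()
pn = distribution(occ)
```

Sweeping p and watching whether P(n) develops a bump at large n is how the condensate phase would show up in such a toy run.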

  1. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee


    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data. Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique. Research type – general review.

  2. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized as one of the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.
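
    The tiling idea behind SlideJ can be sketched in a few lines: split the slide into fixed-size tiles, apply a single-field operation to each tile, and stitch the results back together. This is an illustrative NumPy sketch, not the plugin's code; the real plugin streams tiles from disk rather than holding the whole slide in memory, and the function name `process_in_tiles` is invented here.

```python
import numpy as np

def process_in_tiles(img, tile=256, func=lambda t: t):
    """Apply a per-tile operation `func` to an image too large to
    process in one piece, writing each processed tile back into
    place.  Edge tiles are simply smaller, courtesy of slicing."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y+tile, x:x+tile] = func(img[y:y+tile, x:x+tile])
    return out

slide = np.arange(1000 * 700, dtype=np.float64).reshape(1000, 700)
halved = process_in_tiles(slide, tile=256, func=lambda t: t / 2)
```

    For a purely pixelwise operation the tiled result equals the whole-image result; operations with spatial context additionally need overlapping tile borders.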

  3. Enabling customer self service through image processing on mobile devices (United States)

    Kliche, Ingmar; Hellmann, Sascha; Kreutel, Jörn


    Our paper will outline the results of a research project that employs image processing for the automatic diagnosis of technical devices whose internal state is communicated through visual displays. In particular, we developed a method for detecting exceptional states of retail wireless routers, analysing the state and blinking behaviour of the LEDs that make up most routers' user interface. The method was made configurable by means of abstracting away from a particular device's display properties, thus being able to analyse a whole range of different devices whose displays are covered by our abstraction. The method of analysis and its configuration mechanism were implemented as a native mobile application for the Android Platform. It employs the local camera of mobile devices for capturing a router's state, and uses overlaid visual hints for guiding the user toward that perspective from where an analysis is possible.

  4. Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing. (United States)

    Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong


    Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low-frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A conceptual radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radars on operation bandwidth and processing speed, and we hope it will be used in future radars for real-time and high-resolution target detection and imaging.
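
    The de-chirp step lends itself to a small numerical sketch: mixing the received echo with the transmitted chirp yields a low-frequency beat tone whose frequency is proportional to range. Only the 8 GHz bandwidth and 500 MSa/s sampling rate below are taken from the abstract; the sweep time, target range, and signal model are assumptions for illustration.

```python
import numpy as np

c = 3e8
B, T = 8e9, 100e-6             # 8 GHz sweep (from the abstract), assumed 100 us duration
fs = 500e6                     # 500 MSa/s digitiser, as quoted
k = B / T                      # chirp rate (Hz/s)
R_true = 3.0                   # assumed target range in metres
tau = 2 * R_true / c           # round-trip delay

t = np.arange(0, T, 1 / fs)
tx_phase = np.pi * k * t**2                # transmitted LFM phase
rx_phase = np.pi * k * (t - tau)**2        # delayed echo phase
beat = np.cos(tx_phase - rx_phase)         # mixing -> low-frequency beat tone

spec = np.abs(np.fft.rfft(beat))
peak = np.argmax(spec[1:]) + 1             # skip the DC bin
f_beat = np.fft.rfftfreq(t.size, 1 / fs)[peak]
R_est = c * f_beat / (2 * k)               # beat frequency maps linearly to range
```

    Note that the theoretical range resolution c/(2B) for an 8 GHz sweep is 1.875 cm, which matches the figure quoted in the abstract.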

  5. Deep architecture neural network-based real-time image processing for image-guided radiotherapy. (United States)

    Mori, Shinichiro


    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input images. Network models were trained to produce, from unprocessed input images, output images close in quality to the ground-truth images. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of the suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. High-speed image processing systems in non-destructive testing (United States)

    Shashev, D. V.; Shidlovskiy, S. V.


    Digital imaging systems are used in most industrial and scientific fields, and effectively solve a wide range of tasks in non-destructive testing. For decades, digital image processing has faced problems with the operating speed of such systems, which must be sufficient to process and analyze video streams in real time, ideally in mobile small-sized devices. In this paper, we consider the use of parallel-pipeline computing architectures in image processing problems, using the example of an algorithm for calculating the area of an object in a binary image. The approach used allows us to achieve high-speed performance in digital image processing tasks.
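
    The example task mentioned above, computing the area of an object in a binary image, reduces to counting foreground pixels; the independence of the per-row sums is what makes the computation amenable to parallel-pipeline hardware. A trivial reference sketch:

```python
def object_area(binary_image):
    """Area of the object = number of foreground (1) pixels.  Each
    row sum is independent, so rows can be summed in parallel and the
    partial sums combined in a pipeline."""
    return sum(sum(row) for row in binary_image)

img = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
area = object_area(img)
```
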

  7. A compact, short-pulse laser for near-field, range-gated imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zutavern, F.J.; Helgeson, W.D.; Loubriel, G.M. [Sandia National Labs., Albuquerque, NM (United States); Yates, G.J.; Gallegos, R.A.; McDonald, T.E. [Los Alamos National Lab., NM (United States)


    This paper describes a compact laser, which produces high power, wide-angle emission for a near-field, range-gated, imaging system. The optical pulses are produced by a 100 element laser diode array (LDA) which is pulsed with a GaAs, photoconductive semiconductor switch (PCSS). The LDA generates 100 ps long, gain-switched, optical pulses at 904 nm when it is driven with 3 ns, 400 A, electrical pulses from a high gain PCSS. Gain switching is facilitated with this many lasers by using a low impedance circuit to drive an array of lasers, which are connected electrically in series. The total optical energy produced per pulse is 10 microjoules corresponding to a total peak power of 100 kW. The entire laser system, including prime power (a nine volt battery), pulse charging, PCSS, and LDA, is the size of a small, hand-held flashlight. System lifetime, which is presently limited by the high gain PCSS, is an active area of research and development. Present limitations and potential improvements will be discussed. The complete range-gated imaging system is based on complementary technologies: high speed optical gating with intensified charge coupled devices (ICCD) developed at Los Alamos National Laboratory (LANL) and high gain, PCSS-driven LDAs developed at Sandia National Laboratories (SNL). The system is designed for use in highly scattering media such as turbid water or extremely dense fog or smoke. The short optical pulses from the laser and high speed gating of the ICCD are synchronized to eliminate the back-scattered light from outside the depth of the field of view (FOV) which may be as short as a few centimeters. A high speed photodiode can be used to trigger the intensifier gate and set the range-gated FOV precisely on the target. The ICCD and other aspects of the imaging system are discussed in a separate paper.

  8. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition (United States)

    Downie, John D.; Tucker, Deanne (Technical Monitor)


    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
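
    The effect of the pointwise logarithm (the operation the bacteriorhodopsin film implements optically) can be checked numerically. The sketch below assumes fully developed speckle modeled as exponentially distributed multiplicative noise; after the log, the noise term is additive and its variance no longer depends on the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(10.0, 100.0, size=100_000)      # clean image intensities
speckle = rng.exponential(1.0, size=signal.size)     # fully developed speckle model
noisy = signal * speckle                             # multiplicative noise

log_img = np.log(noisy)                              # the pointwise log transform
noise_term = log_img - np.log(signal)                # = log(speckle): additive,
                                                     #   independent of the signal
```

    The variance of `noise_term` is the constant pi^2/6 (variance of the log of an Exp(1) variable), regardless of the signal level, which is the signal-independence property the abstract describes.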

  9. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi


    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: the first is a statistical model adopted to mine underlying information, and the second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of both image visualization and quantitative measures.
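
    The statistical-model half of the algorithm can be sketched as plain Gaussian process (kriging) interpolation with an RBF kernel; the energy computation from the paper is not reproduced here, and the kernel choice, length scale, and 1D setting are illustrative assumptions.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query, length=1.0, noise=1e-6):
    """GP posterior mean at x_query given samples (x_train, y_train),
    using an RBF kernel.  A 1D sketch of per-pixel GP prediction."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(x_train, x_train) + noise * np.eye(x_train.size)
    return k(x_query, x_train) @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])           # samples of f(x) = x^2
y_mid = gp_interpolate(x, y, np.array([1.5]))  # predict between samples
```

    With a small noise term the predictor nearly interpolates the training samples, and between samples it produces a smooth estimate; in the 2D image case each missing pixel is predicted from its observed neighbours in the same way.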

  10. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee


    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
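
    The block-entropy measure such a method relies on can be computed directly from a block's grey-level histogram; high-entropy blocks are the ones carrying structure worth matching. A minimal pure-Python sketch (block size and grey-level count are arbitrary here, and the selection rule is a generic illustration, not the paper's exact criterion):

```python
import math

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of a block's grey-level histogram."""
    hist = [0] * levels
    n = 0
    for row in block:
        for v in row:
            hist[v] += 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

flat = [[128] * 8 for _ in range(8)]                       # uniform block: no structure
textured = [[(i * 8 + j) * 4 % 256 for j in range(8)] for i in range(8)]

# Registration then works only on the high-entropy blocks,
# skipping featureless areas and saving processing time.
```
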

  11. Gaussian process interpolation for uncertainty estimation in image registration. (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William


    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.

  12. 3D imaging by fast deconvolution algorithm in short-range UWB radar for concealed weapon detection

    NARCIS (Netherlands)

    Savelyev, T.; Yarovoy, A.


    A fast imaging algorithm for real-time use in short-range (ultra-wideband) radar with synthetic or real-array aperture is proposed. The reflected field is presented here as a convolution of the target reflectivity and point spread function (PSF) of the imaging system. To obtain a focused 3D image,

  13. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan


    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large...... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...

  14. Image processing based detection of lung cancer on CT scan images (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi


    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, an intermediate-level task in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phases are image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach: the best approach for main feature detection is watershed with masking, which has high accuracy and is robust.
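
    The region-growing component can be illustrated with a naive flood-fill from a seed pixel; this is a toy stand-in for the marker-controlled watershed and region-growing segmentation used in the paper, with an invented intensity tolerance parameter:

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Collect 4-connected pixels whose intensity is within `tol`
    of the seed pixel's value, breadth-first from the seed."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [
    [10, 12, 200, 202],
    [11, 13, 201, 199],
    [90, 91, 203, 198],
]
grown = region_grow(img, (0, 0))
```
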


    Directory of Open Access Journals (Sweden)

    K. Sujatha


    Full Text Available Combustion quality in power station boilers plays an important role in minimizing flue gas emissions. In the present work, various intelligent schemes to infer the flue gas emissions by monitoring the flame colour at the furnace of the boiler are proposed. Flame image monitoring involves capturing the flame video over a period of time together with the measurement of various parameters: carbon dioxide (CO2), excess oxygen (O2), nitrogen dioxide (NOx), sulphur dioxide (SOx) and carbon monoxide (CO) emissions, plus the flame temperature at the core of the fireball, the air/fuel ratio and the combustion quality. The higher the quality of combustion, the lower the flue gas emissions at the exhaust. The flame video was captured using an infrared camera and then split into frames for further analysis; a video splitter is used for progressive extraction of the flame images from the video. The images of the flame are then pre-processed to reduce noise. The conventional classification and clustering techniques include the Euclidean distance (L2 norm) classifier. The intelligent classifiers include the Radial Basis Function network (RBF), the Back Propagation Algorithm (BPA) and a parallel architecture with RBF and BPA (PRBFBPA). The results of the validation are supported with the above-mentioned performance measures, whose values are in the optimal range. The values of the temperatures, combustion quality, SOx, NOx, CO and CO2 concentrations, and air and fuel supplied corresponding to the images were obtained, thereby indicating the necessary control action to increase or decrease the air supply so as to ensure complete combustion. In this work, by continuously monitoring the flame images, combustion quality was inferred (complete/partial/incomplete combustion) and the air/fuel ratio can be automatically varied. Moreover, in the existing set-up, measurements like NOx, CO and CO2 are inferred from samples that are collected periodically or by

  16. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao


    Full Text Available This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate; the conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4 bit improvement over the uncalibrated SNDR and ENOB.
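
    The coarse/fine split of a two-step TDC is simple to sketch arithmetically: a counter resolves whole clock periods and an interpolator quantises the residue. The clock period, fine resolution, and function name below are arbitrary illustrative values, not taken from the chip described above.

```python
def two_step_tdc(t, t_clk=1.0, fine_bits=4):
    """Two-step time-to-digital conversion: a coarse counter measures
    whole clock periods; a fine interpolator quantises the remaining
    sub-period residue into 2**fine_bits steps."""
    coarse = int(t // t_clk)                  # whole clock periods
    lsb = t_clk / (1 << fine_bits)            # fine step size
    fine = int((t - coarse * t_clk) / lsb)    # quantised residue
    return coarse, fine, coarse * t_clk + fine * lsb

coarse, fine, quantized = two_step_tdc(3.3)
```

    The reconstruction error is bounded by one fine LSB, which is why the calibration of the coarse/fine hand-off (the propagation delay skew mentioned above) matters.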

  17. Leveraging the Cloud for Robust and Efficient Lunar Image Processing (United States)

    Chang, George; Malhotra, Shan; Wolgast, Paul


    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. 
It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use

  18. Color error in the digital camera image capture process. (United States)

    Penczek, John; Boynton, Paul A; Splett, Jolene D


    The color error in images taken by digital cameras is evaluated with respect to its sensitivity to the image capture conditions. A parametric study was conducted to investigate the dependence of image color error on camera technology, illumination spectra, and lighting uniformity. The measurement conditions were selected to simulate the variation that might be expected in typical telemedicine situations. Substantial color errors were observed, depending on the measurement conditions. Several image post-processing methods were also investigated for their effectiveness in reducing the color errors. The results of this study quantify the level of color error that may occur in the digital camera image capture process, and provide guidance for improving the color accuracy through appropriate changes in that process and in post-processing.

  19. Imaging Heat and Mass Transfer Processes Visualization and Analysis

    CERN Document Server

    Panigrahi, Pradipta Kumar


    Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three dimensional. The interpretation and analysis of images recorded are discussed in the text.

  20. Optical image processing by using a photorefractive spatial soliton waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bao-Lai, E-mail: [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Simmonds, Paul J. [Department of Physics and Micron School of Materials Science & Engineering, Boise State University, Boise, ID 83725 (United States); Wang, Zhao-Qi [Institute of Modern Optics, Nankai University, Tianjin 300071 (China)


    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons. - Highlights: • A coherent 4-f system with the spatial soliton waveguide as spatial frequency filter. • Manipulate the spatial frequencies of an input optical image. • Achieve edge-enhancement and direct component enhancement operations of an optical image.

  1. A low-power high dynamic range front-end ASIC for imaging calorimeters

    CERN Document Server

    Bagliesi, M G; Marrocchesi, P S; Meucci, M; Millucci, V; Morsani, F; Paoletti, R; Pilo, F; Scribano, A; Turini, N; Valle, G D


    High granularity calorimeters with shower imaging capabilities require dedicated front-end electronics. The ICON 4CH and VA4 PMT chip-set is suitable for very high dynamic range systems with strict noise requirements. The ICON 4CH is a 4-channel input, 12-channel output ASIC designed for use in a multi-anode photomultiplier system with very large dynamic range and low-noise requirements. Each of the four input signals to the ASIC is split equally into three branches by a current conveyor. Each of the three branches is scaled differently: 1:1, 1:8 and 1:80. The signal is read out by a 12-channel low-noise/low-power high dynamic range charge sensitive preamplifier-shaper circuit (VA4-PMT chip), with simultaneous sample-and-hold, multiplexed analog read-out, and calibration facilities. Tests performed in our lab with a PMT are reported in terms of linearity, dynamic range and cross-talk of the system. (5 refs).

  2. Application of lidar techniques to time-of-flight range imaging. (United States)

    Whyte, Refael; Streeter, Lee; Cree, Michael J; Dorrington, Adrian A


    Amplitude-modulated continuous wave (AMCW) time-of-flight (ToF) range imaging cameras measure distance by illuminating the scene with amplitude-modulated light and measuring the phase difference between the transmitted and reflected modulation envelope. This method of optical range measurement suffers from errors caused by multiple propagation paths, motion, phase wrapping, and nonideal amplitude modulation. In this paper a ToF camera is modified to operate in modes analogous to continuous wave (CW) and stepped frequency continuous wave (SFCW) lidar. In CW operation the velocity of objects can be measured. CW measurement of velocity was linear with true velocity (R2 = 0.9969). Qualitative analysis of a complex scene confirms that range measured by SFCW is resilient to errors caused by multiple propagation paths, phase wrapping, and nonideal amplitude modulation, which plague AMCW operation. In viewing a complicated scene through a translucent sheet, quantitative comparison of AMCW with SFCW demonstrated a reduction in the median error from -1.3 m to -0.06 m, with the interquartile range of the error reduced from 4.0 m to 0.18 m.
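
    The basic AMCW range equation underlying the discussion can be written down directly: d = c * phi / (4 * pi * f_mod), where phi is the measured phase shift of the modulation envelope. The modulation frequency below is a typical assumed value, not one from the paper; ranges beyond c/(2*f_mod) wrap around, which is the phase-wrapping error the SFCW mode mitigates.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_range(phase_rad, f_mod):
    """Distance from the measured modulation-envelope phase shift.
    The factor 4*pi (not 2*pi) accounts for the round trip."""
    return C * phase_rad / (4 * math.pi * f_mod)

f = 30e6                               # assumed modulation frequency (30 MHz)
unambiguous = C / (2 * f)              # ~5 m: ranges beyond this wrap around
half_way = amcw_range(math.pi, f)      # a pi phase shift lands mid-interval
```
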

  3. Surface wave effects on long range IR imaging in the marine surface layer (United States)

    Francius, M. J.; Kunz, G. J.; van Eijk, A. M. J.


    The quality of long range infrared (IR) imaging depends on the effects of atmospheric refraction and other path-integrated effects (e.g., transmission losses, scintillation and blurring), which are strongly related to the prevailing meteorological conditions. EOSTAR is a PC-based computer program to quantify these strongly nonlinear effects in the marine atmospheric surface layer and to present a spectrally resolved target image influenced by atmospheric effects, using ray tracing techniques for the individual camera pixels. Presently, the propagation is predicted with bulk atmospheric models and the sea surface is idealized by steady, regular, periodic Stokes waves. Dynamical wind-wave interactions are not taken into account in this approach, although they may strongly modify the refractive index in the near-surface layer. Nonetheless, the inclusion of the sea surface in the ray tracer module already has a great impact on the near-surface grazing rays and thus influences the images, especially in situations of super-refraction and mirage. This work aims at improving the description of the sea surface in EOSTAR by taking into account the non-uniformity of spatially resolved wind-generated waves and swell. A new surface module is developed to model surface wind waves and swell in EOSTAR on the basis of meteorological observations and spectral wave modeling. Effects due to these new surfaces are analyzed and presented.

  4. Range imaging observations of PMSE using the EISCAT VHF radar: Phase calibration and first results

    Directory of Open Access Journals (Sweden)

    J. R. Fernandez


    Full Text Available A novel phase calibration technique for use with the multiple-frequency Range IMaging (RIM) technique is introduced, based on genetic algorithms. The method is used on data collected with the European Incoherent SCATter (EISCAT) VHF radar during a 2002 experiment with the goal of characterizing the vertical structure of Polar Mesosphere Summer Echoes (PMSE) over northern Norway. For typical Doppler measurements, the initial phases of the transmitter and receiver are not required to be the same. The EISCAT receiver systems exploit this fact, allowing a multi-static configuration. However, the RIM method relies on the small phase differences between closely spaced frequencies. As a result, the high-resolution images produced by the RIM method can be significantly degraded if not properly calibrated. Using an enhanced numerical radar simulator, in which data from multiple sampling volumes are simultaneously generated, the proposed calibration method is validated. Subsequently, the method is applied to preliminary data from the EISCAT radar, providing the first RIM images of PMSE. Data using conventional analysis techniques, confirmed by RIM, reveal an often-observed double-layer structure with higher stability in the lower layer. Moreover, vertical velocity oscillations exhibit a clear correlation with the apparent motion of the layers shown in the echo power plots.

  5. Realization of High Dynamic Range Imaging in the GLORIA Network and Its Effect on Astronomical Measurement

    Directory of Open Access Journals (Sweden)

    Stanislav Vítek


    Full Text Available The citizen science project GLORIA (GLObal Robotic-telescopes Intelligent Array) is the first free- and open-access network of robotic telescopes in the world. It provides a web-based environment where users can do research in astronomy by observing with robotic telescopes and/or by analyzing data that other users have acquired with GLORIA or from other free-access databases. The network of 17 telescopes allows users to control selected telescopes in real time or to schedule more demanding observations. This paper deals with the new opportunities the GLORIA project provides to teachers and students at various levels of education. At the moment, educational materials are prepared for events like a solar eclipse (measuring local atmosphere changes), Aurora Borealis (calculating the height of the Northern Lights), or a transit of Venus (measuring the Earth-Sun distance). Students should be able to learn the principles of CCD imaging, spectral analysis, and basic calibration such as dark frame subtraction, as well as advanced methods of noise suppression. Every user of the network can design his or her own experiment. We propose an advanced experiment aimed at obtaining astronomical image data with high dynamic range. We also introduce methods of objective image quality evaluation in order to discover how HDR methods affect astronomical measurements.
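
    A minimal HDR merge of bracketed exposures can be sketched as exposure-time-normalised averaging with a hat weight; this assumes a linear sensor response and 8-bit frames, and is only a sketch of one common approach, not the GLORIA pipeline.

```python
import numpy as np

def hdr_merge(frames, exposure_times):
    """Estimate scene radiance from differently exposed frames:
    divide each frame by its exposure time, then average the
    estimates with a hat weight that de-emphasises pixels near the
    dark and saturated ends of the 8-bit range."""
    frames = np.asarray(frames, dtype=np.float64)
    w = 1.0 - np.abs(frames / 255.0 - 0.5) * 2.0         # hat weighting
    radiance = frames / np.asarray(exposure_times)[:, None, None]
    return (w * radiance).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)

# A pixel seeing constant radiance 100 units, exposed for 0.5 s and 1.0 s:
merged = hdr_merge([[[50.0]], [[100.0]]], [0.5, 1.0])
```
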

  6. Monitoring glacier variations in the Urubamba and Vilcabamba Mountain Ranges, Peru, using "Landsat 5" images (United States)

    Suarez, Wilson; Cerna, Marcos; Ordoñez, Julio; Frey, Holger; Giráldez, Claudia; Huggel, Christian


    The Urubamba and Vilcabamba mountain ranges are two geological structures belonging to the Andes in the southern, tropical part of Peru. These mountain ranges lie within the transition area between the Amazon region (altitudes close to 1'000 m a.s.l.) and the Andes. The mountains, with a maximum height of 6'280 m a.s.l. (Salkantay Snow Peak in the Vilcabamba range), are characterized by glaciers mainly above 5'000 m a.s.l. Here we present a study on the evolution of the ice cover based on "Landsat 5" images from 1991 and 2011. These data are freely available from the USGS in a georeferenced format and cover a time span of more than 25 years. The glacier mapping is based on the Normalized Difference Snow Index (NDSI). In 1991 the Vilcabamba mountain range had 221 km2 of glacier cover, which was reduced to 116.4 km2 in 2011, a loss of 48%. In the Urubamba mountain range, the total glacier area was 64.9 km2 in 1991 and 29.4 km2 in 2011, a loss of 54.7%. The glacier area was thus halved during the past two decades, although precipitation patterns show an increase in recent years (the wet season lasts from September to April, with precipitation peaks in February and March). Glacier changes in these two tropical mountain ranges also have an economic impact, since the small-scale local farming common in this region uses water from glacier melt. Furthermore, potential glacier-related hazards can pose a threat to people and infrastructure in the valleys below these glaciers, where the access routes to the Machu Picchu Inca City, Peru's main tourist destination, are also located.
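
    The NDSI-based glacier mapping step reduces to a per-pixel band ratio followed by a threshold. The commonly used ~0.4 threshold and the reflectance values below are illustrative, not taken from this study; for Landsat 5 TM the green band is band 2 and the shortwave-infrared band is band 5.

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index for one pixel: snow and ice
    are bright in the green band and dark in the SWIR band, so
    (green - swir) / (green + swir) is high over snow/ice."""
    return (green - swir) / (green + swir)

SNOW_THRESHOLD = 0.4          # a commonly used cut-off (assumed here)

snow_pixel = ndsi(0.6, 0.1)   # bright green, dark SWIR: glacier surface
rock_pixel = ndsi(0.2, 0.25)  # bare rock / vegetation
```

    Counting the pixels above the threshold in each scene, then multiplying by the pixel area, gives the glacier-cover figures compared between 1991 and 2011.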

  7. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)]


    This talk describes computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP; SPRINT-3 will be 10 times faster. The parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers are described.
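
    Filtered back-projection itself is compact enough to sketch. The following is a minimal serial numpy illustration (ramp filter in the frequency domain, then back-projection over angles) on a synthetic disk phantom; it is not the SPRINT parallel implementation, and the phantom and grid sizes are invented for the example.

    ```python
    import numpy as np

    def ramp_filter(sinogram):
        """Apply the ramp filter |f| to each projection row via the FFT."""
        freqs = np.abs(np.fft.fftfreq(sinogram.shape[1]))
        return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

    def backproject(sinogram, thetas, size):
        """Smear each filtered projection back across the image plane."""
        n_det = sinogram.shape[1]
        xs = np.arange(size) - size // 2
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((size, size))
        for proj, theta in zip(sinogram, thetas):
            s = X * np.cos(theta) + Y * np.sin(theta)          # detector coordinate
            idx = np.clip(np.round(s).astype(int) + n_det // 2, 0, n_det - 1)
            recon += proj[idx]                                  # nearest-neighbour lookup
        return recon * np.pi / len(thetas)

    # synthetic sinogram of a centered disk of radius 10, whose projection
    # is angle-independent: p(s) = 2 * sqrt(R^2 - s^2)
    R, n_det, n_ang = 10.0, 64, 90
    s_bins = np.arange(n_det) - n_det // 2
    proj = 2.0 * np.sqrt(np.clip(R * R - s_bins ** 2, 0.0, None))
    thetas = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    sinogram = np.tile(proj, (n_ang, 1))

    recon = backproject(ramp_filter(sinogram), thetas, 64)
    ```

    The per-angle loop in `backproject` is the part that parallelizes naturally, which is what makes the algorithm a good fit for machines like SPRINT.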

  8. Detecting jaundice by using digital image processing (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.


    When severe jaundice presents, newborns or adults are usually subjected to clinical tests such as serum bilirubin measurement, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters that characterize patients with or without jaundice, and we correlate these parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
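
    The classification step can be sketched as follows. This is a toy illustration with invented RGB feature values and a minimal linear SVM trained by Pegasos-style subgradient descent; the paper's actual features (measured RGB attributes and diffuse reflectance spectra) and its SVM configuration are not reproduced here.

    ```python
    import numpy as np

    def train_linear_svm(X, y, lam=0.05, epochs=1000, seed=0):
        """Pegasos-style subgradient descent for a linear SVM.
        The bias is folded in as a constant feature; labels are +/-1."""
        rng = np.random.default_rng(seed)
        Xa = np.hstack([X, np.ones((len(X), 1))])   # append bias feature
        w = np.zeros(Xa.shape[1])
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(len(Xa)):
                t += 1
                eta = 1.0 / (lam * t)               # decaying step size
                if y[i] * (Xa[i] @ w) < 1.0:        # hinge-loss violation
                    w = (1 - eta * lam) * w + eta * y[i] * Xa[i]
                else:
                    w = (1 - eta * lam) * w
        return w

    def predict(w, X):
        Xa = np.hstack([X, np.ones((len(X), 1))])
        return np.sign(Xa @ w)

    # invented mean-RGB features; jaundiced skin is shifted toward yellow (low blue)
    healthy = np.array([[0.80, 0.60, 0.55], [0.75, 0.58, 0.52], [0.82, 0.63, 0.57]])
    jaundiced = np.array([[0.85, 0.75, 0.30], [0.88, 0.78, 0.28], [0.83, 0.72, 0.33]])
    X = np.vstack([healthy, jaundiced])
    y = np.array([-1, -1, -1, 1, 1, 1])
    w = train_linear_svm(X, y)
    ```

    In practice a library SVM with cross-validated hyperparameters would replace this hand-rolled loop; the sketch only shows where the RGB features enter the classifier.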

  9. Full Waveform Analysis for Long-Range 3D Imaging Laser Radar

    Directory of Open Access Journals (Sweden)

    Wallace, Andrew M.


    Full Text Available The new generation of 3D imaging systems based on laser radar (ladar offers significant advantages in defense and security applications. In particular, it is possible to retrieve 3D shape information directly from the scene and separate a target from background or foreground clutter by extracting a narrow depth range from the field of view by range gating, either in the sensor or by postprocessing. We discuss and demonstrate the applicability of full-waveform ladar to produce multilayer 3D imagery, in which each pixel produces a complex temporal response that describes the scene structure. Such complexity caused by multiple and distributed reflection arises in many relevant scenarios, for example in viewing partially occluded targets, through semitransparent materials (e.g., windows and through distributed reflective media such as foliage. We demonstrate our methodology on 3D image data acquired by a scanning time-of-flight system, developed in our own laboratories, which uses the time-correlated single-photon counting technique.

  10. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L


    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
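
    A non-homogeneous Poisson point process of the kind treated in this book can be simulated by Lewis-Shedler thinning: draw candidates from a homogeneous process at the intensity's upper bound and keep each candidate with probability proportional to the local intensity. The intensity function below is an arbitrary example, not one from the book.

    ```python
    import math
    import random

    def simulate_nhpp(lam, lam_max, T, rng):
        """Lewis-Shedler thinning on [0, T]: candidates arrive at rate lam_max,
        and candidate t is kept with probability lam(t) / lam_max."""
        points, t = [], 0.0
        while True:
            t += rng.expovariate(lam_max)        # next candidate arrival
            if t > T:
                return points
            if rng.random() < lam(t) / lam_max:  # thinning step
                points.append(t)

    rng = random.Random(42)
    lam = lambda t: 2.0 + 2.0 * math.sin(t)      # intensity, bounded above by 4
    pts = simulate_nhpp(lam, 4.0, 1000.0, rng)
    ```

    The expected count is the integral of the intensity over the window (about 2000 here), which is the basic fact the tomography and tracking applications build on.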

  11. Automatic construction of image inspection algorithm by using image processing network programming (United States)

    Yoshimura, Yuichiro; Aoki, Kimiya


    In this paper, we discuss a method for the automatic programming of inspection image processing. In the industrial field, automatic program generators and expert systems are expected to shorten the period required for developing a new appearance inspection system. So-called "image processing expert systems" have been studied for nearly 30 years. We are convinced of the need to adopt a new idea. Recently, a novel type of evolutionary algorithm, called genetic network programming (GNP), has been proposed. In this study, we use GNP as a method to create inspection image processing logic. GNP develops many directed graph structures and shows an excellent ability to formulate complex problems. We have converted this network program model to Image Processing Network Programming (IPNP). IPNP selects an appropriate image processing command based on characteristics of the input image data and the processing log, and generates visual inspection software as a series of image processing commands. Experiments verify that the proposed method is able to create inspection image processing programs. In a basic experiment with 200 test images, the success rate of detecting the target region was 93.5%.

  12. Imaging and Controlling Ultrafast Ionization Processes (United States)

    Schafer, Kenneth


    We describe how the combination of an attosecond pulse train (APT) and a synchronized infrared (IR) laser field can be used to image and control ionization dynamics in atomic systems. In two recent experiments, attosecond pulses were used to create a sequence of electron wave packets (EWPs) near the ionization threshold in helium. In the first experiment [1], the EWPs were created just below the ionization threshold, and the ionization probability was found to vary strongly with the IR/APT delay. Calculations that reproduce the experimental results demonstrate that this ionization control results from interference between transiently bound EWPs created by different pulses in the train. In the second experiment [2], the APT was tailored to produce a sequence of identical EWPs just above the ionization threshold exactly once per laser cycle, allowing us to study a single ionization event stroboscopically. This technique has enabled us to image the coherent electron scattering that takes place when the IR field is sufficiently strong to reverse the initial direction of the electron motion, causing it to re-scatter from its parent ion. [1] P. Johnsson, et al., PRL 99, 233001 (2007). [2] J. Mauritsson, et al., PRL, to appear (2008). In collaboration with A. L'Huillier, J. Mauritsson, P. Johnsson, T. Remetter, E. Mantsen, M. Swoboda, and T. Ruchon.

  13. Towards a comprehensive eye model for zebrafish retinal imaging using full range spectral domain optical coherence tomography (United States)

    Gaertner, Maria; Weber, Anke; Cimalla, Peter; Köttig, Felix; Brand, Michael; Koch, Edmund


    In regenerative medicine, the zebrafish is a prominent animal model for studying degeneration and regeneration processes, e.g. of photoreceptor cells in the retina. By means of optical coherence tomography (OCT), these studies can be conducted over weeks using the same individual, hence reducing the variability of the results. To allow an improvement of zebrafish retinal OCT imaging by suitable optics, we developed a zebrafish eye model using geometrical data obtained by in vivo dispersion encoded full range OCT as well as a dispersion-comprising gradient index (GRIN) lens model based on refractive index data found in the literature. Using non-sequential ray tracing, the focal length of the spherical GRIN lens (diameter of 0.96 mm) was determined to be 1.22 mm at 800 nm wavelength, giving a Matthiessen's ratio (ratio of focal length to radius of the lens) of 2.54, which fits well within the range of 2.19 to 2.82 found for various fish lenses. Additionally, a mean refractive index of 1.64 at 800 nm could be retrieved for the lens to yield the same focal position as found for the GRIN condition. With the aid of the zebrafish eye model, the optics of the OCT scanner head were adjusted to provide high-resolution retinal images with a field of view of 30° x 30°. The introduced model therefore provides the basis for improved retinal imaging with OCT and can be further used to study the image formation within the zebrafish eye.
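
    The quoted ratio can be checked directly from the numbers in the abstract:

    ```python
    # Matthiessen's ratio is focal length divided by lens radius.
    focal_length_mm = 1.22
    lens_diameter_mm = 0.96
    lens_radius_mm = lens_diameter_mm / 2        # 0.48 mm
    matthiessen_ratio = focal_length_mm / lens_radius_mm
    # 1.22 / 0.48 ≈ 2.54, inside the 2.19-2.82 range reported for fish lenses
    ```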

  14. Processing, analysis, recognition, and automatic understanding of medical images (United States)

    Tadeusiewicz, Ryszard; Ogiela, Marek R.


    The paper presents new ideas for the automatic understanding of the semantic content of medical images. The idea under consideration can be seen as the next step on a path that starts with capturing images in digital form as two-dimensional data structures, continues through image processing as a tool for enhancing image visibility and readability, applies image analysis algorithms to extract selected features of the images (or of parts of images, e.g. objects), and ends with algorithms devoted to image classification and recognition. In the paper we try to explain why all the procedures mentioned above cannot give full satisfaction in many important medical problems, where we need to understand the semantic sense of the image, not merely describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, along with some remarks on successful applications of these ideas for increasing the potential and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by means of a linguistic description of the merit content of the picture. We then try to use new AI methods to undertake the automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and the expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern.

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal


    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. Image Harvest: an open-source platform for high-throughput plant image processing and analysis. (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal


    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  17. Evaluation of clinical image processing algorithms used in digital mammography. (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde


    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51): image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the same six pairs of

  18. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG


    Full Text Available Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image-processing-based method is more objective, fast and accurate, and represents an important development trend in yarn appearance evaluation.

  19. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A


    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  20. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis


    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  1. An image-processing analysis of skin textures. (United States)

    Sparavigna, A; Marazzato, R


    This paper discusses an image-processing method applied to skin texture analysis. Considering that the characterisation of human skin texture is a task approached only recently by image processing, our goal is to lay out the benefits of this technique for quantitative evaluations of skin features and localisation of defects. We propose a method based on a statistical approach to image pattern recognition. The results of our statistical calculations on the grey-tone distributions of the images are presented in specific diagrams, the coherence length diagrams. Using the coherence length diagrams, we were able to determine the grain size and anisotropy of skin textures. Maps showing the localisation of defects are also presented. Depending on the chosen statistical parameters of the grey-tone distribution, several procedures for defect detection can be devised. Here, we compare the local coherence lengths with their average values. More sophisticated procedures, suggested by clinical experience, can be used to improve the image processing.

  2. Image and Sensor Data Processing for Target Acquisition and Recognition. (United States)


    ...a representative set of training images for which the ground truth is known. For each of the targets in these images, the computer will calculate the n parameters... the object, with sliding limited to its width. From the results obtained so far, we have not observed any significant sliding... ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT (ORGANISATION DU TRAITE DE L'ATLANTIQUE NORD), AGARD Conference Proceedings No. 290: IMAGE AND SENSOR DATA PROCESSING FOR TARGET ACQUISITION AND RECOGNITION

  3. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert


    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  4. Fast Transforms in Image Processing: Compression, Restoration, and Resampling

    Directory of Open Access Journals (Sweden)

    Leonid P. Yaroslavsky


    Full Text Available Transform image processing methods are methods that work in the domains of image transforms, such as the Discrete Fourier, Discrete Cosine, and Wavelet transforms. They have proved to be very efficient in image compression, restoration, resampling, and geometrical transformations, and can be traced back to the early 1970s. The paper reviews these methods, with emphasis on their comparison and relationships, from the very first transform image compression methods, through adaptive and locally adaptive filters for image restoration, up to the "compressive sensing" methods that have gained popularity in the last few years. References are made both to the first publications of the corresponding results and to more recent and more easily available ones. The review has a tutorial character and purpose.
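
    The transform-domain idea the review surveys can be sketched in a few lines: transform an image block, discard small coefficients, and invert. The block below builds an orthonormal DCT-II matrix explicitly; it illustrates the general principle only, not any particular method from the review, and the toy block and `keep` count are invented.

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix: C @ x computes the 1-D DCT of x."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        C[0, :] /= np.sqrt(2.0)                  # scale the DC row for orthonormality
        return C

    def compress(block, keep):
        """Keep only the `keep` largest-magnitude 2-D DCT coefficients."""
        C = dct_matrix(block.shape[0])
        coeffs = C @ block @ C.T                 # forward 2-D DCT
        thresh = np.sort(np.abs(coeffs).ravel())[-keep]
        coeffs[np.abs(coeffs) < thresh] = 0.0    # discard small coefficients
        return C.T @ coeffs @ C                  # inverse 2-D DCT

    x = np.outer(np.linspace(0, 1, 8), np.ones(8))   # smooth toy 8x8 block
    approx = compress(x, keep=8)
    ```

    For smooth blocks the energy concentrates in a few low-frequency coefficients, which is why discarding the rest loses little; this is the same mechanism JPEG-style coders exploit.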

  5. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab. (United States)

    Koprowski, Robert


    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Matrix product formula for $U_q(A_2^{(1)})$-zero range process (United States)

    Kuniba, Atsuo; Okado, Masato


    The $U_q(A_n^{(1)})$-zero range processes introduced recently by Mangazeev, Maruyama and the authors are integrable discrete and continuous time Markov processes associated with the stochastic R matrix derived from the well-known $U_q(A_n^{(1)})$ quantum R matrix. By constructing a representation of the relevant Zamolodchikov-Faddeev algebra, we present, for n  =  2, a matrix product formula for the steady state probabilities in terms of q-boson operators.

  7. Digital image processing and analysis for activated sludge wastewater treatment. (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed


    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These measurements require laboratory tests that take many hours to deliver a final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge. In the latter part, additional preprocessing procedures such as z-stacking and image stitching are introduced, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  8. A new method of SC image processing for confluence estimation. (United States)

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina


    Stem cell images are a strong instrument for estimating confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and consequently the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm couples a novel image denoising method based on the BM3D filter with an adaptive thresholding technique that compensates for the uneven background. The algorithm provides a faster, easier and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The scheme proves valid for predicting the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations, and therefore provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
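
    The adaptive-thresholding stage can be sketched as below, comparing each pixel against the mean of its local window, computed in O(1) per pixel with an integral image. The BM3D denoising step, the other half of the proposed algorithm, is out of scope for a short sketch, and the window radius and offset here are illustrative values, not the paper's.

    ```python
    import numpy as np

    def local_mean(img, radius):
        """Mean over a (2*radius+1)^2 window, with edge-replicated padding,
        computed via an integral image (summed-area table)."""
        w = 2 * radius + 1
        p = np.pad(img.astype(float), radius, mode="edge")
        ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        ii[1:, 1:] = p.cumsum(0).cumsum(1)
        s = ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]
        return s / (w * w)

    def adaptive_threshold(img, radius=7, offset=0.02):
        """Foreground = pixel brighter than its local mean plus an offset,
        which tolerates a slowly varying (uneven) background."""
        return img > local_mean(img, radius) + offset

    # toy scene: left-to-right background shading plus two bright "cells"
    H = W = 32
    yy, xx = np.mgrid[0:H, 0:W]
    img = xx / (2.0 * W)
    img[8:12, 8:12] += 0.3       # cell on the dark side
    img[20:24, 24:28] += 0.3     # cell on the bright side
    mask = adaptive_threshold(img, radius=5, offset=0.1)
    ```

    A single global threshold would miss one of the two cells on this gradient; the local comparison finds both, which is the point of the adaptive step.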

  9. VICAR-DIGITAL image processing system (United States)

    Billingsley, F.; Bressler, S.; Friden, H.; Morecroft, J.; Nathan, R.; Rindfleisch, T.; Selzer, R.


    Computer program corrects various photometric, geometric and frequency response distortions in pictures. The program converts pictures to a number of elements, with each element's optical density quantized to a numerical value. The translated picture is recorded on magnetic tape in digital form for subsequent processing and enhancement by computer.

  10. Natural image statistics and visual processing

    NARCIS (Netherlands)

    van der Schaaf, Arjen


    The visual system of a human or animal that functions in its natural environment receives huge amounts of visual information. This information is vital for the survival of the organism. In this thesis I follow the hypothesis that evolution has optimised the biological visual system to process the

  11. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    The pyramid algorithm is potentially a powerful tool for advanced television image processing and for pattern recognition. An attempt is made to design and develop both hardware and software for a system which performs decomposition and reconstruction of digitized images by implementing the Burt pyramid algorithm.
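
    The Burt pyramid decomposition and reconstruction can be sketched in 1-D for clarity (the system described implements the 2-D version in hardware and software); the generating kernel [1, 4, 6, 4, 1]/16 is Burt's standard choice, and the signal below is invented.

    ```python
    import numpy as np

    KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # Burt's generating kernel

    def smooth(x):
        return np.convolve(np.pad(x, 2, mode="edge"), KERNEL, mode="valid")

    def down(x):
        return smooth(x)[::2]                  # blur, then drop every other sample

    def up(x, n):
        y = np.zeros(n)
        y[::2] = x                             # zero-fill to length n
        return 2.0 * smooth(y)                 # factor 2 restores amplitude

    def build_pyramid(x, levels):
        """Gaussian pyramid + Laplacian (difference) levels."""
        gaussians = [x]
        for _ in range(levels):
            gaussians.append(down(gaussians[-1]))
        laplacians = [g - up(gn, len(g)) for g, gn in zip(gaussians, gaussians[1:])]
        return laplacians, gaussians[-1]

    def reconstruct(laplacians, top):
        """Exact inverse: add back each Laplacian level while upsampling."""
        x = top
        for lap in reversed(laplacians):
            x = lap + up(x, len(lap))
        return x

    signal = np.sin(np.linspace(0, 3, 64))
    laps, top = build_pyramid(signal, 3)
    recon = reconstruct(laps, top)
    ```

    Reconstruction is exact by construction, since each Laplacian level stores precisely what the downsampling discarded; this is what makes the pyramid attractive for both coding and pattern recognition.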

  12. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel


    Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/r...

  13. Digital passband processing of wideband-modulated optical signals for enhanced underwater imaging. (United States)

    Mullen, Linda; Lee, Robert; Nash, Justin


    Radar modulation, demodulation, and signal processing techniques have been merged with laser imaging to enhance visibility in murky underwater environments. The modulation provides a way to reject multiple scattered light that would otherwise reduce image contrast and resolution. Recent work has focused on the use of wideband modulation schemes and digital passband processing to resolve range details of an underwater scene. Use of the CLEAN algorithm has also been investigated to extract object features that are obscured by scattered light. Results from controlled laboratory experiments show an improvement in the range resolution and accuracy of underwater imagery relative to data collected with a conventional short pulse system.
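
    The CLEAN step can be sketched in 1-D: repeatedly locate the residual peak, subtract a scaled copy of the point-spread function (psf) there, and record the component. The gain, iteration count, and triangular psf below are illustrative, not the system's parameters.

    ```python
    import numpy as np

    def clean(dirty, psf, gain=0.2, iters=300):
        """Hogbom-style CLEAN loop: peel off scaled psf copies at residual peaks."""
        residual = dirty.astype(float).copy()
        components = np.zeros_like(residual)
        center = np.argmax(psf)                    # psf peak position
        for _ in range(iters):
            peak = int(np.argmax(np.abs(residual)))
            amp = gain * residual[peak]
            components[peak] += amp
            for i, p in enumerate(psf):            # subtract shifted, scaled psf
                j = peak + i - center
                if 0 <= j < len(residual):
                    residual[j] -= amp * p
        return components, residual

    # toy scene: two point targets blurred by a triangular psf
    psf = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
    truth = np.zeros(64)
    truth[20] = 1.0
    truth[40] = 0.6
    dirty = np.convolve(truth, psf, mode="same")
    components, residual = clean(dirty, psf)
    ```

    The recovered component amplitudes approach the true target strengths as the residual is driven toward zero, which is how CLEAN separates object features from scattering-induced blur.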

  14. A novel data processing technique for image reconstruction of penumbral imaging (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin


    The CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded aperture processing method was used for the first time independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacles were overcome that arise in traditional coded pinhole image processing from the uncertainty of the point spread function of the imaging diagnostic system. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In a visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  15. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail:; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)


    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, while only marginal improvements are seen in the perceptual and cognitive performance of the expert radiologists.

  16. Characterization of Periodically Poled Nonlinear Materials Using Digital Image Processing

    National Research Council Canada - National Science Library

    Alverson, James R


    .... A new approach based on image processing across an entire z+ or z- surface of a poled crystal allows for better quantification of the underlying domain structure and directly relates to device performance...

  17. Application of digital image processing techniques to astronomical imagery 1977 (United States)

    Lorre, J. J.; Lynn, D. J.


    Nine specific techniques, or combinations of techniques, developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  18. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J


    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  19. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven


    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  20. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco


    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  1. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi


    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of those objects. The results showed that the digital microscope and its image processing system were capable of enhancing the image of the observed object and performing other operations in accordance with the user's needs. The digital microscope has eliminated the need for direct observation by the human eye as with the traditional microscope.
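
Of the utilities listed, histogram equalization has the most compact core: map each gray level through the normalized cumulative histogram. A sketch on a flat list of 8-bit values (illustrative, not the authors' code):

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit gray values."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    # Cumulative histogram, then map it back onto [0, levels - 1]
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    return [round((cdf[v] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
            for v in pixels]
```

The mapping stretches the occupied gray levels over the full dynamic range, which is what makes faint microscope detail visible.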

  2. Digital processing considerations for extraction of ocean wave image spectra from raw synthetic aperture radar data (United States)

    Lahaie, I. J.; Dias, A. R.; Darling, G. D.


    The digital processing requirements of several algorithms for extracting the spectrum of a detected synthetic aperture radar (SAR) image from the raw SAR data are described and compared. The most efficient algorithms for image spectrum extraction from raw SAR data appear to be those containing an intermediate image formation step. It is shown that a recently developed compact formulation of the image spectrum in terms of the raw data is computationally inefficient when evaluated directly, in comparison with the classical method where matched-filter image formation is an intermediate result. It is also shown that a proposed indirect procedure for digitally implementing the same compact formulation is somewhat more efficient than the classical matched-filtering approach. However, this indirect procedure includes the image formation process as part of the total algorithm. Indeed, the computational savings afforded by the indirect implementation are identical to those obtained in SAR image formation processing when the matched-filtering algorithm is replaced by the well-known 'dechirp-Fourier transform' technique. Furthermore, corrections to account for slant-to-ground range conversion, spherical earth, etc., are often best implemented in the image domain, making intermediate image formation a valuable processing feature.
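
The "dechirp-Fourier transform" step can be illustrated in one dimension: multiplying the sampled return by the conjugate of the reference chirp turns a delayed echo into a constant-frequency tone, so a single DFT replaces explicit matched filtering. A toy sketch (parameter values are illustrative; a plain O(N²) DFT is used for clarity):

```python
import cmath
import math

def dechirp_peak_bin(signal, chirp_rate, dt):
    """Dechirp-Fourier compression of a sampled linear-FM return.

    Multiplying by the conjugate reference chirp leaves a tone whose
    DFT peak bin encodes the target's position along the chirp axis.
    """
    n = len(signal)
    # Remove the quadratic phase of the reference chirp
    deramped = [s * cmath.exp(-1j * math.pi * chirp_rate * (i * dt) ** 2)
                for i, s in enumerate(signal)]
    # Naive DFT; the peak magnitude bin locates the beat tone
    spectrum = [sum(deramped[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n)) for k in range(n)]
    return max(range(n), key=lambda k: abs(spectrum[k]))
```

A delayed copy of the chirp dechirps to a tone at frequency proportional to the delay, which is why the computational savings quoted above mirror those of dechirp processing in image formation.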

  3. A laboratory-based Laue X-ray diffraction system for enhanced imaging range and surface grain mapping. (United States)

    Whitley, William; Stock, Chris; Huxley, Andrew D


    Although CCD X-ray detectors can be faster to use, their large-area versions can be much more expensive than similarly sized photographic plate detectors. When indexing X-ray diffraction patterns, large-area detectors can prove very advantageous as they provide more spots, which makes fitting an orientation easier. On the other hand, when looking for single crystals in a polycrystalline sample, the speed of CCD detectors is more useful. A new setup is described here which overcomes some of the limitations of limited-range CCD detectors to make them more useful for indexing, whilst at the same time making it much quicker to find single crystals within a larger polycrystalline structure. This was done by combining a CCD detector with a six-axis goniometer, allowing the compilation of images from different angles into a wide-angled image. Automated scans along the sample were coupled with image processing techniques to produce grain maps, which can then be used to provide a strategy to extract single crystals from a polycrystal.

  4. Wide range local resistance imaging on fragile materials by conducting probe atomic force microscopy in intermittent contact mode

    Energy Technology Data Exchange (ETDEWEB)

    Vecchiola, Aymeric [Laboratoire de Génie électrique et électronique de Paris (GeePs), UMR 8507 CNRS-CentraleSupélec, Paris-Sud and UPMC Universities, 11 rue Joliot-Curie, Plateau de Moulon, 91192 Gif-sur-Yvette (France); Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis (France); Unité Mixte de Physique CNRS-Thales UMR 137, 1 avenue Augustin Fresnel, 91767 Palaiseau (France); Chrétien, Pascal; Schneegans, Olivier; Mencaraglia, Denis; Houzé, Frédéric, E-mail: [Laboratoire de Génie électrique et électronique de Paris (GeePs), UMR 8507 CNRS-CentraleSupélec, Paris-Sud and UPMC Universities, 11 rue Joliot-Curie, Plateau de Moulon, 91192 Gif-sur-Yvette (France); Delprat, Sophie [Unité Mixte de Physique CNRS-Thales UMR 137, 1 avenue Augustin Fresnel, 91767 Palaiseau (France); UPMC, Université Paris 06, 4 place Jussieu, 75005 Paris (France); Bouzehouane, Karim; Seneor, Pierre; Mattana, Richard [Unité Mixte de Physique CNRS-Thales UMR 137, 1 avenue Augustin Fresnel, 91767 Palaiseau (France); Tatay, Sergio [Molecular Science Institute, University of Valencia, 46980 Paterna (Spain); Geffroy, Bernard [Lab. Physique des Interfaces et Couches minces (PICM), UMR 7647 CNRS-École polytechnique, 91128 Palaiseau (France); Lab. d' Innovation en Chimie des Surfaces et Nanosciences (LICSEN), NIMBE UMR 3685 CNRS-CEA Saclay, 91191 Gif-sur-Yvette (France); and others


    An imaging technique associating a slowly intermittent contact mode of atomic force microscopy (AFM) with a home-made multi-purpose resistance sensing device is presented. It aims at extending the widespread resistance measurements classically operated in contact mode AFM to broaden their application fields to soft materials (molecular electronics, biology) and fragile or weakly anchored nano-objects, for which nanoscale electrical characterization is highly demanded and often proves to be a challenging task in contact mode. Compared with the state of the art concerning less aggressive solutions for AFM electrical imaging, our technique brings a significantly wider range of resistance measurement (over 10 decades) without any manual switching, which is a major advantage for the characterization of materials with large on-sample resistance variations. After describing the basics of the set-up, we report on preliminary investigations focused on academic samples of self-assembled monolayers with various thicknesses as a demonstrator of the imaging capabilities of our instrument, from qualitative and semi-quantitative viewpoints. Then two application examples are presented, regarding an organic photovoltaic thin film and an array of individual vertical carbon nanotubes. Both attest to the relevance of the technique for the control and optimization of technological processes.

  5. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang


    This paper aims to provide a method to represent the virtual Arabidopsis plant at each growth stage. It includes simulating the shape and providing growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background with the chromatic coordinates. With the segmentation result, the outer boundary series are obtained by using a boundary tracking algorithm. The elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf. This is achieved by using the wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stage of the Arabidopsis plant with less data and provide a visual platform for plant growth research.
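
The leaf-counting step can be approximated without the wavelet machinery: compute the distance from each contour point to the region centroid and count local maxima of that circular signal. A simplified sketch (a plain peak test stands in for the wavelet-based detection used in the paper):

```python
import math

def count_apexes(contour):
    """Count local maxima of the contour-to-centroid distance signal.

    `contour` is a list of (x, y) points ordered along the boundary.
    """
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    n = len(d)
    # Circular comparison against both neighbours
    return sum(1 for i in range(n)
               if d[i] > d[i - 1] and d[i] > d[(i + 1) % n])
```

On real contours some smoothing of the distance signal would precede the peak test; the wavelet transform in the paper serves exactly that noise-robustness role.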

  6. Hydrodynamics of the Zero-Range Process in the Condensation Regime (United States)

    Schütz, G. M.; Harris, R. J.


    We argue that the coarse-grained dynamics of the zero-range process in the condensation regime can be described by an extension of the standard hydrodynamic equation obtained from Eulerian scaling even though the system is not locally stationary. Our result is supported by Monte Carlo simulations.

  7. Parallel Computers for Region-Level Image Processing. (United States)


    It is well known that parallel computers can be used very effectively for image processing at the pixel level, by assigning a processor to each pixel or block of pixels, and passing information as necessary between processors whose blocks are adjacent. This paper discusses the use of parallel computers for processing images at the region level, assigning a processor to each region and passing information between processors whose regions are adjacent.

  8. Digital image processing for the earth resources technology satellite data. (United States)

    Will, P. M.; Bakis, R.; Wesley, M. A.


    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67, and show that a processing throughput of 1000 image sets per week is feasible.

  9. The Digital Microscope and Its Image Processing Utility


    Tri Wahyu Supardi; Agus Harjoko; Sri Hartati


    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images o...

  10. Techniques and software architectures for medical visualisation and image processing


    Botha, C.P.


    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use of visualisation techniques to assist the shoulder replacement process. This motivated the need for a flexible environment within which to test and develop new visualisation and also image processin...

  11. Automated measurement of pressure injury through image processing. (United States)

    Li, Dan; Mathews, Carol


    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability with complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries.
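
The first step of that pipeline, moving from RGB to YCbCr so that luminance and chrominance separate, is a fixed linear transform. The JPEG/BT.601 full-range variant is shown below; the paper does not state which variant it used, so take the constants as one standard choice:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (ITU-R BT.601 full-range, JPEG-style)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Gray pixels map to (v, 128, 128), which is why skin-colour modelling in the Cb/Cr plane is largely insensitive to illumination level.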

  12. Survey: interpolation methods for whole slide image processing. (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T


    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
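
The round-trip protocol in the survey — scale down, scale back up with the same algorithm, then compare against the original — can be sketched with the simplest interpolator, nearest neighbour (the test image below is synthetic):

```python
def resize_nearest(img, w, h):
    """Nearest-neighbour resize of a 2-D list `img` to h rows x w cols."""
    H, W = len(img), len(img[0])
    return [[img[min(int(r * H / h), H - 1)][min(int(c * W / w), W - 1)]
             for c in range(w)] for r in range(h)]

def mse(a, b):
    """Mean squared error between two same-sized 2-D lists."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

# Round-trip evaluation: shrink, restore, compare with the original
img = [[(r * 16 + c * 8) % 256 for c in range(8)] for r in range(8)]
small = resize_nearest(img, 4, 4)
restored = resize_nearest(small, 8, 8)
err = mse(img, restored)
```

Replacing `resize_nearest` with a bilinear or bicubic variant and comparing `err` (or the survey's quantification metrics) across methods reproduces the evaluation design described above.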

  13. Initial Comparison of the Lightning Imaging Sensor (LIS) with Lightning Detection and Ranging (LDAR) (United States)

    Ushio, Tomoo; Driscoll, Kevin; Heckman, Stan; Boccippio, Dennis; Koshak, William; Christian, Hugh


    The mapping of the lightning optical pulses detected by the Lightning Imaging Sensor (LIS) is compared with the radiation sources located by Lightning Detection and Ranging (LDAR) and the National Lightning Detection Network (NLDN) for three thunderstorms observed during overpasses on 15 August 1998. The comparison involves 122 flashes, including 42 ground and 80 cloud flashes. For ground flashes, LIS recorded the subsequent strokes and changes inside the cloud. For cloud flashes, LIS recorded those with higher source altitudes and larger numbers of sources. The discrepancies between the LIS and LDAR flash locations are about 4.3 km for cloud flashes and 12.2 km for ground flashes. The reason for these differences remains a mystery.

  14. Study of gray image pseudo-color processing algorithms (United States)

    Hu, Jinlong; Peng, Xianrong; Xu, Zhiyong

    In gray images, which contain abundant information, the required information cannot be extracted by humans if the differences between adjacent pixels' intensities are small, since humans are more sensitive to color images than to gray images. If gray images are transformed to pseudo-color images, the details of the images become more explicit, and the target can be recognized more easily. There are two classes of methods (in the frequency domain and in the spatial domain) for realizing pseudo-color enhancement of gray images. The first is mainly filtering in the frequency domain; the second comprises the equal-density pseudo-color coding methods, which mainly include density segmentation coding, function transformation and complementary pseudo-color coding. Moreover, there are many other methods to realize pseudo-color enhancement, such as pixel self-transformation based on the RGB tri-primaries, pseudo-color coding of phase-modulated images based on the RGB color model, pseudo-color coding of high gray-resolution images, etc. However, the above methods are tailored to particular situations, and the transformations are based on the RGB color space. In order to improve the visual effect, the method based on the RGB color space and pixel self-transformation is improved in this paper by working in the HSI color space. Compared with other methods, gray images in ordinary formats can be processed, and many gray images can be transformed to pseudo-color images with 24 bits. The experiment shows that the processed image has abundant levels, consistent with human perception.
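
The intensity-to-hue idea behind this family of methods fits in a few lines. Here the standard library's HSV conversion stands in for the HSI model discussed in the paper (an approximation, not the authors' transform), sweeping dark pixels to blue and bright ones to red:

```python
import colorsys

def pseudo_color(gray):
    """Map an 8-bit gray value to a 24-bit RGB pseudo-color.

    Hue runs from 2/3 (blue, dark) down to 0 (red, bright); HSV is
    used as a stand-in for the HSI model in the paper.
    """
    hue = (1.0 - gray / 255.0) * 2.0 / 3.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)
```

Because the mapping is bijective on hue, small intensity differences that are invisible in gray become distinct colors.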

  15. Design and fabrication of the New Horizons Long-Range Reconnaissance Imager (United States)

    Conard, S. J.; Azad, F.; Boldt, J. D.; Cheng, A.; Cooper, K. A.; Darlington, E. H.; Grey, M. P.; Hayes, J. R.; Hogue, P.; Kosakowski, K. E.; Magee, T.; Morgan, M. F.; Rossano, E.; Sampath, D.; Schlemm, C.; Weaver, H. A.


    The LOng-Range Reconnaissance Imager (LORRI) is an instrument that was designed, fabricated, and qualified for the New Horizons mission to the outermost planet Pluto, its giant satellite Charon, and the Kuiper Belt, which is the vast belt of icy bodies extending roughly from Neptune's orbit out to 50 astronomical units (AU). New Horizons is being prepared for launch in January 2006 as the inaugural mission in NASA's New Frontiers program. This paper provides an overview of the efforts to produce LORRI. LORRI is a narrow angle (field of view=0.29°), high resolution (instantaneous field of view = 4.94 μrad), Ritchey-Chretien telescope with a 20.8 cm diameter primary mirror, a focal length of 263 cm, and a three lens field-flattening assembly. A 1024 x 1024 pixel (optically active region), back-thinned, backside-illuminated charge-coupled device (CCD) detector (model CCD 47-20 from E2V Technologies) is located at the telescope focal plane and is operated in standard frame-transfer mode. LORRI does not have any color filters; it provides panchromatic imaging over a wide bandpass that extends approximately from 350 nm to 850 nm. A unique aspect of LORRI is the extreme thermal environment, as the instrument is situated inside a near room temperature spacecraft, while pointing primarily at cold space. This environment forced the use of a silicon carbide optical system, which is designed to maintain focus over the operating temperature range without a focus adjustment mechanism. Another challenging aspect of the design is that the spacecraft will be thruster stabilized (no reaction wheels), which places stringent limits on the available exposure time and the optical throughput needed to accomplish the high-resolution observations required. LORRI was designed and fabricated by a combined effort of The Johns Hopkins University Applied Physics Laboratory (APL) and SSG Precision Optronics Incorporated (SSG).
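
The quoted optical parameters are mutually consistent, which is easy to check with the small-angle relation x = f·θ: a 4.94 μrad IFOV at a 263 cm focal length implies a pixel pitch of about 13 μm, and 1024 such pixels span the stated 0.29° field of view. A quick arithmetic check using only figures from the abstract:

```python
import math

# Figures quoted in the abstract
focal_length_m = 2.63    # 263 cm
ifov_rad = 4.94e-6       # instantaneous field of view per pixel
n_pixels = 1024          # optically active pixels across the CCD

# Pixel pitch implied by the optics: x = f * theta (small-angle)
pixel_pitch_um = focal_length_m * ifov_rad * 1e6

# Full field of view implied by the detector width
fov_deg = math.degrees(n_pixels * ifov_rad)
```

The ≈13 μm result matches the pitch commonly quoted for back-illuminated CCDs of this class, though the abstract itself does not state the pitch.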

  16. Phases' characteristics of poultry litter hydrothermal carbonization under a range of process parameters. (United States)

    Mau, Vivian; Quance, Julie; Posmanik, Roy; Gross, Amit


    The aim of this work was to study the hydrothermal carbonization (HTC) of poultry litter under a range of process parameters. Experiments were conducted to investigate the effect of the operational parameters (temperature, reaction time, and solids concentration) on the formation and characteristics of the product phases. Results showed production of a hydrochar with a calorific value of 24.4 MJ/kg, similar to sub-bituminous coal. The gaseous phase consisted mainly of CO2; however, significant amounts of H2S dictate the need for further treatment. The process also produced an aqueous phase with chemical characteristics suggesting its possible use as a liquid fertilizer. Temperature had the most significant effect on the processes and product formation. Solids concentration was not a significant factor once dilution effects were considered. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Research on signal processing techniques for a chirped amplitude modulation imaging laser radar (United States)

    Wang, Yang; Wang, Qianqian; Wang, Haiwei


    Owing to significant advantages such as high spatial resolution and the acquisition of three-dimensional imagery (both intensity and range images), an imaging laser radar can improve the correct-recognition ratio when used as a sensor in a target recognition system. A chirped amplitude modulation imaging ladar is based on the frequency modulation/continuous wave (FM/cw) technique. The target range is calculated by measuring the frequency difference between the projected and returned laser signals. The design of a signal processing system for an FM/cw imaging ladar is introduced in this paper, which includes an acquisition block, a memory block, a communication block, and an FFT processor. The performance of this system is analyzed in detail.
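
The range equation behind the FM/cw measurement is compact: a chirp of bandwidth B swept over time T has slope S = B/T, and an echo delayed by 2R/c beats against the transmitted signal at f_b = S·2R/c, so R = c·f_b·T/(2B). A sketch with illustrative values (the ladar's actual parameters are not given in the abstract):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_beat_hz, bandwidth_hz, sweep_time_s):
    """Target range for a linear-FM chirp: R = c * f_b * T / (2 * B)."""
    return C * f_beat_hz * sweep_time_s / (2.0 * bandwidth_hz)
```

In the described system the FFT processor supplies f_b per pixel; this formula is then evaluated to build the range image.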

  18. Signal processing and analysis for copper layer thickness measurement within a large variation range in the CMP process (United States)

    Li, Hongkai; Zhao, Qian; Lu, Xinchun; Luo, Jianbin


    In the copper (Cu) chemical mechanical planarization (CMP) process, accurate determination of when the process reaches its end point is of great importance. Based on eddy current technology, in situ thickness measurement of the Cu layer is feasible. Previous research has focused on the application of the eddy current method to metal layer thickness measurement or endpoint detection. In this paper, an in situ measurement system, independently developed using the eddy current method, is applied to the actual Cu CMP process. A series of experiments were performed to further analyze the dynamic response characteristics of the output signal within different thickness variation ranges. In this study, the voltage difference of the output signal is used to represent the thickness of the Cu layer, and the voltage difference variations can be extracted from the output signal quickly using the proposed data processing algorithm. The results show that, in the conventional measurement range, the voltage difference decreases as thickness decreases while the sensitivity increases. However, there exists a thickness threshold, and the correlation becomes negative when the thickness exceeds the threshold. Furthermore, by creating two calibration tables, the in situ measurement system can be used within a larger Cu layer thickness variation range.
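
Converting a measured voltage difference back to thickness through a calibration table amounts to interpolation along one monotonic branch; with two tables (one per side of the threshold the authors describe), each branch can be handled like this (a sketch with invented table values):

```python
import bisect

def thickness_from_voltage(table, v):
    """Linearly interpolate thickness from a monotonic calibration table.

    `table` is a list of (voltage, thickness) pairs sorted by voltage,
    covering one monotonic branch (below or above the threshold).
    Out-of-range voltages clamp to the nearest table entry.
    """
    volts = [p[0] for p in table]
    i = bisect.bisect_left(volts, v)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (v0, t0), (v1, t1) = table[i - 1], table[i]
    return t0 + (t1 - t0) * (v - v0) / (v1 - v0)
```

Selecting which of the two tables to query is the step that resolves the non-monotonic overall response noted in the results.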

  19. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren


    ... Then we compress each image by a JPEG baseline encoder and include the residual image bit stream in the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image in the header. Experimental results show that compared with JPEG-XT Part 6 with 'global Reinhard' tone-mapping...

  20. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images. (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue


    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
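
Step (3) of the pipeline, forming the correlation matrix of image parameters across superpixels, is the pivot of the construction: its clusters become the "signatures." A minimal Pearson-correlation sketch over a data matrix X (rows = superpixels, columns = parameters; the matrix below is synthetic):

```python
def correlation_matrix(X):
    """Pearson correlations between the columns of X (rows = observations)."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    # Column deviations and their Euclidean norms
    dev = [[row[j] - means[j] for j in range(p)] for row in X]
    norms = [sum(d[j] ** 2 for d in dev) ** 0.5 for j in range(p)]
    return [[sum(d[i] * d[j] for d in dev) / (norms[i] * norms[j])
             for j in range(p)] for i in range(p)]
```

Clustering the rows/columns of this p × p matrix groups image parameters that co-vary across the tumor, which is exactly what defines a multimodality "signature."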

  1. Digital image processing of bone - Problems and potentials (United States)

    Morey, E. R.; Wronski, T. J.


    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
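
The perimeter and total-area measurements described can be sketched for a binary (thresholded) image: count foreground pixels for area, and exposed 4-neighbour edges for boundary length (an illustrative stand-in for the VICAR-based routines, not the system's code):

```python
def area_and_perimeter(mask):
    """Pixel area and 4-connected boundary length of a binary mask.

    `mask` is a 2-D list of 0/1; a bone pixel contributes one unit of
    perimeter for each edge it shares with background or the border.
    """
    H, W = len(mask), len(mask[0])
    area = perim = 0
    for r in range(H):
        for c in range(W):
            if not mask[r][c]:
                continue
            area += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < H and 0 <= cc < W) or not mask[rr][cc]:
                    perim += 1
    return area, perim
```

Multiplying by the calibrated pixel size (and a perimeter correction factor for diagonal boundaries) converts these counts into physical units.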

  2. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images (United States)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk


    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
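
The layering metaphor — independently scaled, colorized data sets combined into one color image — can be sketched with screen blending, one common combine mode in layer-based editors (the choice of screen rather than simple addition is an assumption here, not prescribed by the paper):

```python
def composite(layers):
    """Screen-blend colorized layers into one RGB image.

    `layers` is a list of 2-D grids of (r, g, b) floats in [0, 1];
    screen blending, out = 1 - prod(1 - x), keeps bright detail from
    every layer without hard clipping.
    """
    H, W = len(layers[0]), len(layers[0][0])
    out = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            px = [1.0, 1.0, 1.0]
            for layer in layers:
                for k in range(3):
                    px[k] *= 1.0 - layer[r][c][k]
            out[r][c] = tuple(1.0 - v for v in px)
    return out
```

Each input grid would be one intensity-scaled, colorized data set; adding an eighth layer is just appending to the list, which is what makes the metaphor scale to many-band images.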

  3. Image processing for improved eye-tracking accuracy (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)


    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
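The off-line gain in resolution described above comes largely from averaging over many pupil pixels rather than relying on single-pixel features. A minimal sketch of that idea, with hypothetical parameter values rather than the authors' actual pipeline, might look like:

```python
# Sketch of off-line pupil-center estimation by thresholding and centroid
# computation. The threshold value and image geometry are illustrative
# assumptions, not taken from the paper.
import numpy as np

def pupil_center(frame, threshold):
    """Estimate the pupil center as the centroid of dark (sub-threshold) pixels."""
    mask = frame < threshold          # the pupil is darker than iris/sclera
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # Averaging over many pixels yields sub-pixel resolution, which is how
    # off-line processing can beat the resolution of per-pixel hardware methods.
    return xs.mean(), ys.mean()

# Synthetic frame: bright background with a dark pupil-like disk at (30, 20)
frame = np.full((64, 64), 200.0)
yy, xx = np.mgrid[0:64, 0:64]
frame[(xx - 30.0) ** 2 + (yy - 20.0) ** 2 < 8 ** 2] = 20.0

cx, cy = pupil_center(frame, threshold=100.0)
```

Because the disk is symmetric, the centroid recovers the center to well under a pixel, illustrating the order-of-magnitude resolution gain available from off-line analysis.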

  4. Integrating digital topology in image-processing libraries. (United States)

    Lamy, Julien


    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper deals mainly with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.

  5. Study for online range monitoring with the interaction vertex imaging method (United States)

    Finck, Ch; Karakaya, Y.; Reithinger, V.; Rescigno, R.; Baudot, J.; Constanzo, J.; Juliani, D.; Krimmer, J.; Rinaldi, I.; Rousseau, M.; Testa, E.; Vanstalle, M.; Ray, C.


    Ion beam therapy enables a highly accurate dose conformation delivery to the tumor due to the finite range of charged ions in matter (i.e. the Bragg peak (BP)). Consequently, the dose profile is very sensitive to patients' anatomical changes as well as minor mispositioning, and so it requires improved dose control techniques. Proton interaction vertex imaging (IVI) could offer an online range control in carbon ion therapy. In this paper, a statistical method was used to study the sensitivity of the IVI technique using experimental data obtained from the Heidelberg Ion-Beam Therapy Center. The vertices of secondary protons were reconstructed with pixelized silicon detectors. The statistical study used the χ2 test of the reconstructed vertex distributions for a given displacement of the BP position as a function of the number of impinging carbon ions. Different phantom configurations were used with or without bone equivalent tissue and air inserts. The inflection points in the fall-off region of the longitudinal vertex distribution were computed using different methods, and the relation with the BP position was established. In the present setup, the resolution of the BP position was about 4–5 mm in the homogeneous phantom under clinical conditions (10⁶ incident carbon ions). Our results show that the IVI method could therefore monitor the BP position with a promising resolution in clinical conditions.
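The core statistical idea above, comparing a measured longitudinal vertex histogram against a reference one with a chi-square statistic to flag a Bragg-peak displacement, can be sketched as follows. The histogram values and shift size here are illustrative, not the authors' data or their exact test:

```python
# Illustrative chi-square comparison of two vertex histograms.
import numpy as np

def chi2_statistic(observed, reference):
    """Pearson chi-square between two histograms (bins with reference > 0)."""
    obs = np.asarray(observed, dtype=float)
    ref = np.asarray(reference, dtype=float)
    valid = ref > 0
    return np.sum((obs[valid] - ref[valid]) ** 2 / ref[valid])

# Hypothetical fall-off profile and a copy displaced by one bin,
# standing in for a shifted Bragg peak.
reference = np.array([100, 98, 95, 80, 40, 10, 2, 1], dtype=float)
shifted = np.roll(reference, 1)

chi2_same = chi2_statistic(reference, reference)   # identical profiles
chi2_shift = chi2_statistic(shifted, reference)    # clearly larger
```

A displacement of the fall-off region drives the statistic up sharply, which is what makes such a test sensitive to millimetre-scale range shifts once enough vertices are accumulated.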

  6. Effects of processing conditions on mammographic image quality. (United States)

    Braeuning, M P; Cooper, H W; O'Brien, S; Burns, C B; Washburn, D B; Schell, M J; Pisano, E D


    Any given mammographic film will exhibit changes in sensitometric response and image resolution as processing variables are altered. Developer type, immersion time, and temperature have been shown to affect the contrast of the mammographic image and thus lesion visibility. The authors evaluated the effect of altering processing variables, including film type, developer type, and immersion time, on the visibility of masses, fibers, and specks in a standard mammographic phantom. Images of a phantom obtained with two screen types (Kodak Min-R and Fuji) and five film types (Kodak Min-R M, Min-R E, and Min-R H; Fuji UM-MA HC; and DuPont Microvision-C) were processed with five different developer chemicals (Autex SE, DuPont HSD, Kodak RP, Picker 3-7-90, and White Mountain) at four different immersion times (24, 30, 36, and 46 seconds). Processor chemical activity was monitored with sensitometric strips, and developer temperatures were continuously measured. The film images were reviewed by two board-certified radiologists and two physicists with expertise in mammography quality control and were scored based on the visibility of calcifications, masses, and fibers. Although the differences in the absolute scores were not large, the Kodak Min-R M and Fuji films exhibited the highest scores, and images developed in White Mountain and Autex chemicals exhibited the highest scores. For any film, several processing chemicals may be used to produce images of similar quality. Extended processing may no longer be necessary.

  7. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S


    This book describes medical imaging systems, such as X-ray, computed tomography, and MRI, from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transform, image reconstruction, image rendering, image enhancement and restoration, and more. The book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners who are doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. The book acts as a "starter kit" for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make content accessible, particularly with techniques such as the Radon transform and image rendering; and includes discussion of the basic principles behind the various medical imaging techniques.

  8. First-order convex feasibility algorithms for iterative image reconstruction in limited angular-range X-ray CT

    CERN Document Server

    Sidky, Emil Y; Pan, Xiaochuan


    Iterative image reconstruction (IIR) algorithms in Computed Tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this article, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for efficient algorithms for their solution -- thereby facilitating the IIR algorithm design process. An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems.


  9. Network Design in Close-Range Photogrammetry with Short Baseline Images

    Directory of Open Access Journals (Sweden)

    L. Barazzetti


    Full Text Available The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, obtaining more images than those necessary for “old fashioned” photogrammetric projects based on manual measurements. This paper describes some considerations on network design for short baseline image sequences, especially on precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.

  10. Network Design in Close-Range Photogrammetry with Short Baseline Images (United States)

    Barazzetti, L.


    The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, obtaining more images than those necessary for "old fashioned" photogrammetric projects based on manual measurements. This paper describes some considerations on network design for short baseline image sequences, especially on precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.

  11. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images. (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong


    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
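The first step of the pipeline above, Otsu thresholding, picks the gray level that maximizes the between-class variance of foreground and background. The paper applies it locally; the following NumPy-only sketch shows the core global criterion on a synthetic bimodal image, as an illustration rather than the authors' implementation:

```python
# Minimal global Otsu threshold: choose the histogram bin that maximizes
# the between-class variance of the two resulting pixel classes.
import numpy as np

def otsu_threshold(image, nbins=256):
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # probability of class 0 (background)
    w1 = 1.0 - w0                        # probability of class 1 (nuclei)
    mu = np.cumsum(p * centers)          # cumulative mean gray level
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

# Synthetic bimodal data: dark background (~30) and bright nuclei (~200)
img = np.concatenate([np.full(900, 30.0), np.full(100, 200.0)])
t = otsu_threshold(img)   # lands between the two modes
```

A local variant would apply the same criterion within a sliding window or per tile, which is what makes the method robust to uneven illumination across the microscope field.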

  12. Manycore processing of repeated range queries over massive moving objects observations

    DEFF Research Database (Denmark)

    Lettich, Francesco; Orlando, Salvatore; Silvestri, Claudio


    The ability to timely process significant amounts of continuously updated spatial data is mandatory for an increasing number of applications. Parallelism enables such applications to face this data-intensive challenge and allows the devised systems to feature low latency and high scalability. In this paper we focus on a specific data-intensive problem, concerning the repeated processing of huge amounts of range queries over massive sets of moving objects, where the spatial extents of queries and objects are continuously modified over time. To tackle this problem and significantly accelerate query processing we devise a hybrid CPU/GPU pipeline that compresses data output and saves query processing work. The devised system relies on an ad-hoc spatial index leading to a problem decomposition that results in a set of independent data-parallel tasks. The index is based on a point-region quadtree space...

  13. Digital image processing for photo-reconnaissance applications (United States)

    Billingsley, F. C.


    Digital image-processing techniques developed for processing pictures from NASA space vehicles are analyzed in terms of enhancement, quantitative restoration, and information extraction. Digital filtering and the action of a high-frequency filter in the real and Fourier domains are discussed, along with color and brightness.

  14. Image processing system performance prediction and product quality evaluation (United States)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)


    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  15. Digital image processing using parallel computing based on CUDA technology (United States)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.


    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography. It provides a comparison of the performance with and without a GPU, as well as with different percentages of CPU and GPU usage.

  16. Computer image processing - The Viking experience. [digital enhancement techniques (United States)

    Green, W. B.


    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
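Contrast stretching of the kind mentioned above remaps a narrow band of recorded gray levels onto the full display range. A minimal linear version, with hypothetical percentile parameters rather than the optimal-stretch algorithms the Viking team developed, is:

```python
# Linear percentile contrast stretch: map the [low_pct, high_pct]
# percentile range of the input onto the full 0..255 display range.
import numpy as np

def contrast_stretch(image, low_pct=1, high_pct=99):
    lo, hi = np.percentile(image, [low_pct, high_pct])
    out = (image.astype(float) - lo) / max(hi - lo, 1e-12)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# Low-contrast synthetic image occupying only gray levels 100..140
img = np.random.default_rng(0).uniform(100, 140, size=(64, 64))
stretched = contrast_stretch(img)
```

After the stretch the output spans the entire 0..255 range, which is the subjective-enhancement effect; a high-pass filter would then be applied on top of this to sharpen fine detail.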

  17. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)


    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable with a defined set of operators and functions in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  18. Comparing ecohydrological processes in alien vs. native ranges: perspectives from the endangered shrub Myricaria germanica (United States)

    Michielon, Bruno; Campagnaro, Thomas; Porté, Annabel; Hoyle, Jo; Picco, Lorenzo; Sitzia, Tommaso


    Comparing the ecology of woody species in their alien and native ranges may provide interesting insights for theoretical ecology, invasion biology, restoration ecology and forestry. The literature which describes the biological evolution of successful plant invaders is rich and increasing. However, no general theories have been developed about the geomorphic settings which may limit or favour the expansion of alien woody species along rivers. The aim of this contribution is to explore the research opportunities in the comparison of ecohydrological processes occurring in the alien vs. the native ranges of invasive tree and shrub species along the riverine corridor. We use the endangered shrub Myricaria germanica as an example. Myricaria germanica is a Euro-Asiatic pioneer species that, in the native range, develops along natural rivers, wide and dynamic. These conditions are increasingly limited by anthropogenic constraints in most European rivers. This species has been recently introduced in New Zealand, where it is spreading in some natural rivers of the Canterbury region (South Island). We present the current knowledge about the natural and anthropogenic factors influencing this species in its native range. We compare this information with the current knowledge about the same factors influencing M. germanica invasiveness and the invasibility of riparian habitats in New Zealand. We stress the need to identify potential factors that could drive divergence in life traits and growth strategies, which may hinder the application to the alien ranges of existing ecohydrological knowledge from native ranges. Moreover, the pattern of expansion of the alien range of species endangered in their native ranges opens new windows for research.

  19. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela


    Full Text Available This paper extends the view of image processing performance measure, presenting the use of this measure as an actual value in a feedback structure. The idea behind it is that the control loop, which is built in that way, drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented on the example of optical character recognition in an industrial application. Metrics for quantification of performance at different image processing levels are discussed. The issues that those metrics should address from both image processing and control points of view are considered. The performance measures of individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.

  20. Comparison of Multichannel Wide Dynamic Range Compression and ChannelFree Processing Strategies on Consonant Recognition. (United States)

    Plyler, Patrick; Hedrick, Mark; Rinehart, Brittany; Tripp, Rebekah


    Both wide dynamic range compression (WDRC) and ChannelFree (CF) processing strategies in hearing aids were designed to improve listener comfort and consonant identification, yet few studies have actually compared them. To determine whether CF processing provides equal or better consonant identification and subjective preference than WDRC. A repeated-measures randomized design was used in which each participant identified consonants from prerecorded nonsense vowel-consonant-vowel syllables in three conditions: unaided, aided using CF processing, and aided using WDRC processing. For each of the three conditions, syllables were presented in quiet and in a speech-noise background. Participants were also asked to rate the two processing schemes according to overall preference, preference in quiet and noise, and sound quality. Twenty adults (seven females; mean age 69.7 yr) with ≥1 yr of hearing aid use participated. Ten participants had previous experience wearing aids with WDRC, and 10 had previous experience with CF processing. Participants were tested with both WDRC and CF processing. The number of consonants correct was measured and used as the dependent variable in analyses of variance with subsequent post hoc testing. For subjective preference, a listener rating form was employed with subsequent χ² analysis. Overall results showed that signal-processing strategy did not significantly affect consonant identification or subjective preference, nor did previous hearing aid use influence results. Listeners with audiometric slopes exceeding 11 dB per octave, however, preferred CF processing and performed better in noise with CF processing. CF processing is a viable alternative to WDRC for listeners with more severely sloping audiometric contours. American Academy of Audiology.

  1. Cloud cover detection combining high dynamic range sky images and ceilometer measurements (United States)

    Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.


    This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect the obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied in two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun conditions (obstructed or unobstructed) is analyzed in detail using reference pyranometer measurements at Granada. CPC retrievals are in agreement with those derived from the reference pyranometer in 85% of the cases (it seems that this agreement does not depend on aerosol size or optical depth). The agreement percentage goes down to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover is in agreement with the reference, showing a slight overestimation and a mean absolute error around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties. The RBR algorithm overestimates cloud cover for coarse aerosols and high loads. Cloud cover obtained from the ceilometer alone shows results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, it has been observed that under rapid and strong changes in cloud cover, the ceilometer-derived cloud cover fits the real cloud cover less well.
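The RBR baseline that CPC is compared against exploits the fact that clear sky scatters blue light strongly, so a high red-to-blue ratio at a pixel suggests cloud. A minimal sketch of such a per-pixel classifier follows; the 0.6 threshold is an illustrative assumption, not the value used in the paper:

```python
# Red-blue-ratio (RBR) cloud mask: per pixel, flag cloud where the
# red/blue channel ratio exceeds a threshold.
import numpy as np

def rbr_cloud_mask(red, blue, threshold=0.6):
    ratio = red.astype(float) / np.maximum(blue.astype(float), 1.0)
    return ratio > threshold

# Synthetic sky patch: left half clear (blue-dominated), right half cloudy
red = np.hstack([np.full((8, 8), 40.0), np.full((8, 8), 200.0)])
blue = np.hstack([np.full((8, 8), 180.0), np.full((8, 8), 210.0)])
mask = rbr_cloud_mask(red, blue)
```

Because the ratio shifts with aerosol load and particle size, a fixed threshold misclassifies turbid clear skies, which is exactly the dependence on aerosol properties that the CPC symmetry criterion avoids.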

  2. Enhancement of structure images of interstellar diamond microcrystals by image processing (United States)

    O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann


    Image-processed high-resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed due to particle-particle collisions behind supernova shock waves. The TEM images of the diamond presented support the high-pressure model.

  3. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing. (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han


    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has shown to successfully measure cracks thicker than 0.1 mm with the maximum length estimation error of 7.3%.
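The role of the working distance above is to fix the physical size of a pixel, which is what turns a pixel count from the binarized image into a crack width. A hypothetical sketch of that conversion under a simple pinhole-camera model follows; the function names and the scale factor are assumptions, not the paper's exact model:

```python
# Convert a binarized crack profile plus the ultrasonic working distance
# into a physical crack width, assuming a pinhole-camera scale:
# mm-per-pixel = distance (mm) / focal length (px).
import numpy as np

def crack_width_mm(binary_row, distance_mm, focal_px):
    """Width of the crack crossing one image row, in millimetres."""
    mm_per_px = distance_mm / focal_px
    return np.count_nonzero(binary_row) * mm_per_px

# A row where 3 pixels are crack, imaged from 1000 mm with f = 3000 px
row = np.array([0, 0, 1, 1, 1, 0, 0], dtype=bool)
width = crack_width_mm(row, distance_mm=1000.0, focal_px=3000.0)  # 1.0 mm
```

This is why the on-demand distance measurement matters: without it, the same 3-pixel crack could correspond to very different physical widths depending on how far the UAV hovers from the surface.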

  4. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    Directory of Open Access Journals (Sweden)

    Hyunjun Kim


    Full Text Available Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has shown to successfully measure cracks thicker than 0.1 mm with the maximum length estimation error of 7.3%.

  5. Digital image processing on a small computer system (United States)

    Danielson, R.


    A minicomputer-based image processing facility provides a relatively low-cost entry point for education about image analysis applications in remote sensing. While a minicomputer has sufficient processing power to produce results quite rapidly for low volumes of small images, it does not have sufficient power to perform CPU- or I/O-bound tasks on large images. A system equipped with a display terminal is ideally suited for interactive tasks. Software procurement is a limiting factor for most end users, and software availability may well be the overriding consideration in selecting a particular hardware configuration. The hardware chosen should be selected to be compatible with the software and with concern for future expansion.

  6. Image processing techniques in 3-D foot shape measurement system (United States)

    Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi


    The 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, and 3-D foot-shape measurements without blind areas and the automatic extraction of foot parameters were achieved. The paper focuses on the system structure and principle and on image processing techniques. The key image processing techniques for the 3-D foot shape measurement system include laser stripe extraction, laser stripe coordinate transformation from the CCD camera image coordinate system to the laser plane coordinate system, laser stripe assembly of eight CCD cameras, and elimination of image noise and disturbance. 3-D foot shape measurement makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization and the establishment of a feet database for consumers.
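The laser stripe extraction step named above amounts to finding, in each image column, the row where the projected laser line is brightest. A common approach is a per-column peak (or intensity-weighted centroid); the sketch below uses the simple argmax variant, as an assumed illustration rather than the system's actual algorithm:

```python
# Per-column laser stripe extraction: for each image column, take the
# row index of maximum intensity as the stripe position.
import numpy as np

def extract_stripe(image):
    """Return, for each column, the row index of peak brightness."""
    return image.argmax(axis=0)

# Synthetic camera frame with a bright diagonal stripe
img = np.zeros((40, 5))
truth = np.array([10, 12, 14, 16, 18])
img[truth, np.arange(5)] = 255.0
rows = extract_stripe(img)
```

In the real system these per-camera stripe coordinates would then be transformed into the laser plane coordinate system and assembled across the eight cameras into one cross-sectional profile of the foot.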

  7. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David


    Nanoimprint Lithography (NIL) is a promising technology for low cost and large scale nanostructure fabrication. This technique is based on a contact molding-demolding process, that can produce number of defects such as incomplete filling, negative patterns, sticking. In this paper, microscopic imaging combined to a specific processing algorithm is used to detect numerically defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. Results are independent on the device which captures the image (optical, confocal or electron microscope). The use of numerical images allows the possibility to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  8. Modular Scanning Confocal Microscope with Digital Image Processing. (United States)

    Ye, Xianjun; McCluskey, Matthew D


    In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved by using a 20 × / NA = 0.4 microscope objective at 532 nm laser wavelength.

  9. Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data (United States)

    Rector, T. A.; Levay, Z.; Frattare, L.; English, J.; Pu'uohau-Pummill, K.


    The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.
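The layering metaphor described above can be reduced to a small numeric recipe: intensity-scale each dataset independently, tint it with its own color, and sum the colorized layers. The following sketch uses illustrative random data and a simple additive blend; the color assignments and min-max scaling are assumptions, not a fixed recipe from the paper:

```python
# Layering metaphor: scale each dataset to 0..1, tint it with an RGB
# color, and combine the colorized layers additively.
import numpy as np

def colorize(data, rgb):
    """Scale a dataset to 0..1 and tint it with an (r, g, b) color."""
    lo, hi = data.min(), data.max()
    norm = (data - lo) / max(hi - lo, 1e-12)
    return norm[..., None] * np.asarray(rgb, dtype=float)

# Three hypothetical narrowband datasets mapped to red, green, and blue
rng = np.random.default_rng(1)
layers = [rng.uniform(size=(32, 32)) for _ in range(3)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

composite = np.clip(sum(colorize(d, c) for d, c in zip(layers, colors)), 0, 1)
```

Because each layer's scaling and color are chosen independently, any number of datasets, not just three, can be folded into the same RGB composite, which is the parameter space the image-maker explores iteratively.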

  10. A MURI Center for Intelligent Biomimetic Image Processing and Classification (United States)


    paradoxically both inconsistent and accurate. A new ARTMAP neural network system derives hierarchical knowledge structures from nominally inconsistent...derive knowledge of relationship rules, confidence estimates, equivalence classes, and hierarchical structures . Image analysis: Long-range object...figures: New cases and hypotheses. Technical Report CAS/CNS TR-2005-008, Boston University. Psychofenia: Ricerca ed Analisi Psicologica, IX(15), 93

  11. Establishing an international reference image database for research and development in medical image processing

    NARCIS (Netherlands)

    Horsch, A.D.; Prinz, M.; Schneider, S.; Sipilä, O.; Spinnler, K.; Vallée, J.-P.; Verdonck-de Leeuw, I.; Vogl, R.; Wittenberg, T.; Zahlmann, G.


    INTRODUCTION: The lack of comparability of evaluation results is one of the major obstacles of research and development in Medical Image Processing (MIP). The main reason for that is the usage of different image datasets with different quality, size and Gold standard. OBJECTIVES: Therefore, one of

  12. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results

    Directory of Open Access Journals (Sweden)

    L. Krasula


    The paper provides an overview of some possible uses of the software described in Part I. It contains real examples of image quality improvement, distortion simulation, objective and subjective quality assessment, and other kinds of image processing that can be performed with the individual applications.

  13. Image processing tool for automatic feature recognition and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xing; Stoddard, Ryan J.


    A system for defining structures within an image is described. The system reads an input file, preprocesses it while preserving metadata such as scale information, and then detects features. In one version, detection first applies an edge detector and then identifies features using a Hough transform. The output of the process is the set of identified elements within the image.
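    The edge-detection-plus-Hough pipeline mentioned above can be sketched with NumPy alone: a gradient-magnitude threshold stands in for the edge detector, and each edge pixel votes in a (rho, theta) accumulator whose peaks correspond to lines. A minimal illustration, not the described system; the synthetic image and thresholds are assumptions.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for every (rho, theta)
    line that could pass through it; peaks in the accumulator are lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Gradient-magnitude threshold as a stand-in edge detector
img = np.zeros((64, 64))
img[:, 30:] = 1.0                     # vertical step edge near column 30
gy, gx = np.gradient(img)
edges = np.hypot(gx, gy) > 0.25

acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta = rho_idx - diag, thetas[theta_idx]   # strongest detected line
```

    For the vertical edge, the accumulator peak recovers theta near 0 and rho near column 30, i.e. the line x = 30.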

  14. Assessment of banana fruit maturity by image processing technique


    Surya Prabha, D.; J. Satheesh Kumar


    Maturity stage of fresh banana fruit is an important factor that affects the fruit quality during ripening and marketability after ripening. The ability to identify maturity of fresh banana fruit will be a great support for farmers to optimize harvesting phase which helps to avoid harvesting either under-matured or over-matured banana. This study attempted to use image processing technique to detect the maturity stage of fresh banana fruit by its color and size value of their images precisely...
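    A color-based maturity check of the kind described can be sketched by classifying the mean hue of fruit pixels. This is a hedged illustration only: the hue thresholds below are assumptions, not the calibrated values from the study, and a real pipeline would first segment the fruit and also use the size measurements the study mentions.

```python
import colorsys

def maturity_stage(pixels):
    """Classify banana maturity from the mean hue (degrees) of fruit pixels
    given as (r, g, b) tuples in 0..1. Thresholds are illustrative only."""
    hues = [colorsys.rgb_to_hsv(r, g, b)[0] * 360 for r, g, b in pixels]
    mean_hue = sum(hues) / len(hues)
    if mean_hue > 80:       # green-dominant peel
        return "under-mature"
    elif mean_hue > 45:     # yellow-green to yellow
        return "mature"
    else:                   # yellow-brown, spotted peel
        return "over-mature"
```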

  15. Method development for verification of complete ancient statues by image processing

    Directory of Open Access Journals (Sweden)

    Natthariya Laopracha


    Ancient statues are cultural heritage that should be preserved and maintained. Nevertheless, such invaluable statues may be targeted by vandalism or burglary. In order to guard these statues using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts using digital image processing. This paper proposes an effective feature extraction method for detecting images of damaged statues or statues with missing parts, based on the Histogram of Oriented Gradients (HOG) technique, a popular method for object detection. Unlike the original HOG technique, the proposed method improves the area-scanning strategy so that it effectively extracts important features of statues. Results obtained from the proposed method were compared with those of the HOG method. The test image dataset consisted of 500 images of intact statues and 500 images of statues with missing parts. The experimental results show that the proposed method yields 99.88% accuracy, while the original HOG method gives an accuracy of only 84.86%.
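    HOG-style features like those used above can be sketched as per-cell histograms of gradient orientation weighted by gradient magnitude. This minimal NumPy version omits block normalization and the paper's modified scanning strategy; the cell size and bin count are assumptions.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Per-cell histograms of unsigned gradient orientation, weighted by
    gradient magnitude (block normalization omitted for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                  # vertical edge -> horizontal gradients
feats = hog_descriptor(img)       # 2x2 cells x 9 bins = 36 values
```

    A classifier trained on such descriptors can then separate intact statues from those with missing parts.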


    Directory of Open Access Journals (Sweden)

    S. J. Baillarin


    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission, devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellite constellation deployed in a polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements, such as a unique combination of global coverage with a wide field of view (290 km), a high revisit rate (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA in defining the system image products and prototyping the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing. The Level-0 and Level-1A are system products corresponding to raw compressed and uncompressed data, respectively (limited to internal calibration purposes). The Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands), with an enhanced physical geometric model appended to the product but not applied. The Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100×100 km², based on the UTM/WGS84 reference frame.
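    The 100×100 km² tiling on a UTM grid can be illustrated by a one-line index computation. The grid origin below is a placeholder assumption; the actual Sentinel-2 Level-1C grid is the predefined MGRS-based tiling, not this simplified scheme.

```python
def tile_index(easting, northing, origin_e=0.0, origin_n=0.0, tile=100_000.0):
    """Index of the 100 km x 100 km tile containing a UTM point (meters).
    The grid origin here is a placeholder; the real Level-1C grid is the
    predefined MGRS-based tiling, not this simplified scheme."""
    return (int((easting - origin_e) // tile),
            int((northing - origin_n) // tile))

# A point at easting 250 km, northing 4350 km falls in tile (2, 43)
idx = tile_index(250_000.0, 4_350_000.0)
```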

  17. Lessons from the masters current concepts in astronomical image processing

    CERN Document Server


    There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same and their contributions both technically and artistically are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today.   The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...

  18. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program (United States)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.


    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  19. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix


    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
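    The end-to-end idea, one objective over the raw measurements rather than a cascade of modules, can be sketched on a 1-D toy problem: denoise by minimizing a data-fidelity term plus a smoothness prior with gradient descent. This is a hedged analogue of the FlexISP formulation, not the paper's solver; the signal, noise level, and regularization weight are made-up values.

```python
import numpy as np

def joint_restore(y, lam=1.0, iters=300, step=0.05):
    """Solve one objective over the raw samples, ||x - y||^2 + lam*||diff x||^2,
    by gradient descent -- the prior and the data term are handled jointly
    instead of as separate cascaded modules."""
    x = y.copy()
    for _ in range(iters):
        grad = 2.0 * (x - y)              # data-fidelity gradient
        dx = np.diff(x)
        grad[:-1] -= 2.0 * lam * dx       # smoothness-prior gradient ...
        grad[1:] += 2.0 * lam * dx        # ... (discrete Laplacian form)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 50)    # piecewise-constant 1-D "scene"
noisy = clean + 0.2 * rng.standard_normal(clean.size)
restored = joint_restore(noisy)
```

    In the full system, the data term models the actual sensor (e.g. Bayer sampling and blur) and the prior is a natural-image prior, so demosaicking, denoising, and deconvolution all see the original sensor data rather than each other's outputs.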

  20. Triple Bioluminescence Imaging for In Vivo Monitoring of Cellular Processes

    Directory of Open Access Journals (Sweden)

    Casey A Maguire


    Bioluminescence imaging (BLI) has been shown to be crucial for monitoring in vivo biological processes. So far, only dual bioluminescence imaging using firefly (Fluc) and Renilla or Gaussia (Gluc) luciferase had been achieved, due to the lack of other efficiently expressed luciferases using different substrates. Here, we characterized a codon-optimized luciferase from Vargula hilgendorfii (Vluc) as a reporter for mammalian gene expression. We showed that Vluc can be multiplexed with Gluc and Fluc for sequential imaging of three distinct cellular phenomena in the same biological system using vargulin, coelenterazine, and D-luciferin substrates, respectively. We applied this triple imaging system to monitor the effect of soluble tumor necrosis factor-related apoptosis-inducing ligand (sTRAIL), delivered using an adeno-associated viral vector (AAV), on brain tumors in mice. Vluc imaging showed efficient sTRAIL gene delivery to the brain, while Fluc imaging revealed a robust antiglioma therapy. Further, nuclear factor-κB (NF-κB) activation in response to sTRAIL binding to glioma cell death receptors was monitored by Gluc imaging. This work is the first demonstration of trimodal in vivo bioluminescence imaging and will have broad applicability in many different fields, including immunology, oncology, virology, and neuroscience.