WorldWideScience

Sample records for range image processing

  1. Dense range images from sparse point clouds using multi-scale processing

    NARCIS (Netherlands)

    Do, Q.L.; Ma, L.; With, de P.H.N.

    2013-01-01

Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling and robot navigation. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such

  2. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We include an adaptive exposure estimation (AEE) method to fully automate the capture process. We also propose a pre-processing method that can be applied for the registration of HDR images after they have been built by combining different low dynamic range (LDR) images; this ensures correct alignment of the polarization HDR images for each spectral band. We focus on two main applications: object segmentation and classification into metal and dielectric classes. We simplify the segmentation using mean shift combined with cluster averaging and region merging techniques, and compare its performance with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only from the highlight regions but also from their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results showing that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
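The DoLP maps the abstract leans on for classification follow from the standard Stokes-parameter formulation, computed from four captures through a linear polarizer. A minimal sketch (function and variable names are mine, not the paper's):

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP map from four linear-polarizer captures at 0, 45, 90, 135 degrees.

    Inputs are registered intensity images at the same exposure.
    Standard Stokes construction: DoLP = sqrt(S1^2 + S2^2) / S0.
    """
    i0, i45, i90, i135 = (np.asarray(a, dtype=float) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return np.clip(dolp, 0.0, 1.0)
```

Fully polarized light yields DoLP near 1 (typical of metallic highlights), unpolarized light near 0, which is what makes the metal/dielectric separation possible.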

  3. Real-time image processing of TOF range images using a reconfigurable processor system

    Science.gov (United States)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

In recent years, time-of-flight (TOF) sensors have had a significant impact on machine vision research. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, which provide accurate distance measurements, and camera-based systems, which record a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets such as consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. A novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
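For context, the textbook form of the 4-phase-shift computation the paper accelerates can be sketched as follows; the arctangent of the sample differences is the step that dominates per-pixel cost (names and sample convention are illustrative, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from the four correlation samples of a continuous-wave TOF pixel.

    a0..a3 are samples taken at 0, 90, 180 and 270 degrees of the modulation
    signal; f_mod is the modulation frequency in Hz.
    """
    phase = math.atan2(a3 - a1, a0 - a2)   # time-critical arctangent, [-pi, pi]
    if phase < 0.0:
        phase += 2.0 * math.pi             # wrap to [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod)
```

At the full phase of 2*pi this reduces to the unambiguous range C / (2 * f_mod), e.g. about 7.5 m at 20 MHz modulation.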

  4. Local contrast-enhanced MR images via high dynamic range processing.

    Science.gov (United States)

    Chandra, Shekhar S; Engstrom, Craig; Fripp, Jurgen; Neubert, Ales; Jin, Jin; Walker, Duncan; Salvado, Olivier; Ho, Charles; Crozier, Stuart

    2018-09-01

To develop a local contrast-enhancing and feature-preserving high dynamic range (HDR) image processing algorithm for multichannel and multisequence MR images of multiple body regions and tissues, and to evaluate its performance for structure visualization, bias field (correction) mitigation, and automated tissue segmentation. A multiscale-shape and detail-enhancement HDR-MRI algorithm is applied to data sets of multichannel and multisequence MR images of the brain, knee, breast, and hip. In multisequence 3T hip images, agreement between automatic cartilage segmentations and corresponding synthesized HDR-MRI series were computed for mean voxel overlap established from manual segmentations for a series of cases. Qualitative comparisons between the developed HDR-MRI and standard synthesis methods were performed on multichannel 7T brain and knee data, and multisequence 3T breast and knee data. The synthesized HDR-MRI series provided excellent enhancement of fine-scale structure from multiple scales and contrasts, while substantially reducing bias field effects in 7T brain gradient echo, T1 and T2 breast images and 7T knee multichannel images. Evaluation of the HDR-MRI approach on 3T hip multisequence images showed superior outcomes for automatic cartilage segmentations with respect to manual segmentation, particularly around regions with hyperintense synovial fluid, across a set of 3D sequences. The successful combination of multichannel/sequence MR images into a single-fused HDR-MR image format provided consolidated visualization of tissues within 1 omnibus image, enhanced definition of thin, complex anatomical structures in the presence of variable or hyperintense signals, and improved tissue (cartilage) segmentation outcomes. © 2018 International Society for Magnetic Resonance in Medicine.

  5. Image enhancement circuit using nonlinear processing curve and constrained histogram range equalization

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.; Panchanathan, S.; Vasudev, B.

    2004-01-01

For real-time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. The obtained image fidelity results from, amongst other factors, dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation, because the

  6. Heterodyne range imaging as an alternative to photogrammetry

    Science.gov (United States)

    Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard

    2007-01-01

Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high-precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.

  7. Characteristics of different frequency ranges in scanning electron microscope images

    International Nuclear Information System (INIS)

    Sim, K. S.; Nia, M. E.; Tan, T. L.; Tso, C. P.; Ee, C. S.

    2015-01-01

We demonstrate a new approach to characterize the frequency ranges in general scanning electron microscope (SEM) images. First, pure frequency images are generated from low frequency to high frequency; then, magnification is applied to each type of frequency image. By comparing the edge percentage of the SEM image with the self-generated frequency images, we can define the frequency ranges of the SEM images. Characterizing the frequency ranges of SEM images benefits their further processing and analysis, such as noise filtering and contrast enhancement.

  8. Characteristics of different frequency ranges in scanning electron microscope images

    Energy Technology Data Exchange (ETDEWEB)

    Sim, K. S., E-mail: kssim@mmu.edu.my; Nia, M. E.; Tan, T. L.; Tso, C. P.; Ee, C. S. [Faculty of Engineering and Technology, Multimedia University, 75450 Melaka (Malaysia)

    2015-07-22

We demonstrate a new approach to characterize the frequency ranges in general scanning electron microscope (SEM) images. First, pure frequency images are generated from low frequency to high frequency; then, magnification is applied to each type of frequency image. By comparing the edge percentage of the SEM image with the self-generated frequency images, we can define the frequency ranges of the SEM images. Characterizing the frequency ranges of SEM images benefits their further processing and analysis, such as noise filtering and contrast enhancement.

  9. Unsynchronized scanning with a low-cost laser range finder for real-time range imaging

    Science.gov (United States)

    Hatipoglu, Isa; Nakhmani, Arie

    2017-06-01

Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, and reverse engineering. One of the most popular range-measuring technologies is the laser scanner, owing to its several advantages: long range, high precision, real-time measurement capability, and independence from lighting conditions. However, laser scanners are very costly, which prevents their widespread use. Thanks to recent developments, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF with a scanning mechanism that steers the laser beam across additional dimensions makes it possible to capture a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to decrease the scanning period and reduce the vibrations caused by the stop-scan motion of synchronized scanning. Moreover, we developed an algorithm for the alignment of the unsynchronized raw data and propose a range image post-processing framework. The proposed technique enables a range imaging system for a fraction of the price of its counterparts. The results show that the proposed method can fulfill the need for low-cost laser scanning of static environments; the most significant limitation of the method is the scanning period, which is about 2 minutes for 55,000 range points (a 250×220 image), compared with around 4 minutes for synchronized scanning of the same image. Once faster, longer-range, and narrower-beam LRFs are available, the methods proposed in this work can produce better results.
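The core of aligning unsynchronized raw data is assigning a beam angle to each free-running range sample by its timestamp. One plausible sketch, assuming the scan mirror's angle is logged by an encoder and interpolated linearly (the data layout and names are mine, not the paper's):

```python
import bisect

def align_unsynchronized(samples, angle_log):
    """Assign a beam angle to each unsynchronized LRF sample.

    samples:   list of (t, r) range measurements from the free-running LRF
    angle_log: time-sorted list of (t, theta) encoder readings of the
               scan mirror; angles between readings are linearly interpolated.
    Returns a list of (theta, r) pairs.
    """
    times = [t for t, _ in angle_log]
    out = []
    for t, r in samples:
        i = bisect.bisect_right(times, t)
        if i == 0:                         # before first reading: clamp
            theta = angle_log[0][1]
        elif i == len(times):              # after last reading: clamp
            theta = angle_log[-1][1]
        else:                              # interpolate between neighbors
            (t0, th0), (t1, th1) = angle_log[i - 1], angle_log[i]
            w = (t - t0) / (t1 - t0)
            theta = th0 + w * (th1 - th0)
        out.append((theta, r))
    return out
```

Because the mirror never stops, samples land at irregular angles; a subsequent resampling onto a regular grid would produce the final range image.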

  10. High-dynamic-range imaging for cloud segmentation

    Science.gov (United States)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a larger dynamic range in terms of luminance than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed and the regions near the horizon are underexposed, which makes cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg, an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first to use HDR radiance maps for cloud segmentation and achieves very good results.
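The multi-exposure fusion underlying HDR generation can be sketched in its simplest form: weight each pixel by how well exposed it is, then take the normalized weighted mean across the exposure stack. This is a generic sketch (Mertens-style well-exposedness weight only), not HDRCloudSeg's actual pipeline:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Naive multi-exposure fusion.

    images: list of float arrays in [0, 1], all the same shape.
    Each pixel is weighted by a Gaussian around mid-gray (0.5), so
    well-exposed pixels dominate over blown-out or crushed ones.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= np.maximum(weights.sum(axis=0), 1e-12)  # normalize per pixel
    return (weights * stack).sum(axis=0)
```

A full radiance-map approach would instead invert the camera response and divide by exposure time, but the weighting idea is the same.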

  11. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    OpenAIRE

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability fo...

  12. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is conducted by applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k is used to support the IQR filter; each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The noisy pixels are then estimated by local averaging. The essential...
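The window-based IQR rule reads directly as code. A sketch under the usual 1.5×IQR outlier fence (the abstract does not state the exact fence, so that factor is an assumption):

```python
import numpy as np

def iqr_denoise(img, k=3):
    """IQR denoising with local averaging.

    Inside each k x k window, pixels outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    are treated as noise and replaced by the mean of the remaining
    (inlier) pixels in the window.
    """
    img = np.asarray(img, dtype=float)
    out = img.copy()
    r = k // 2
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if img[y, x] < lo or img[y, x] > hi:
                inliers = win[(win >= lo) & (win <= hi)]
                if inliers.size:
                    out[y, x] = inliers.mean()
    return out
```

Unlike a median filter, this only rewrites pixels flagged as outliers, so uncorrupted detail passes through untouched.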

  13. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  14. Research on range-gated laser active imaging seeker

    Science.gov (United States)

    You, Mu; Wang, PengHui; Tan, DongJie

    2013-09-01

Compared with other imaging methods such as millimeter-wave imaging, infrared imaging and visible-light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the latter being the most important data for autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is the stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on a diode-pumped solid-state laser that is passively Q-switched at a 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths below 900 nm. The receiver couples the image intensifier tube's microchannel plate into a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame frequency can be changed according to the requirements of guidance, with a modifiable range gate. The instantaneous field of view of the system was found to be 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field.
Two different kinds of buildings, located at ranges from 200 m up to 1000 m, have been chosen as targets. To simulate the dynamic process of range change between missile and target, the seeker system has

  15. PROCESSING OF UAV BASED RANGE IMAGING DATA TO GENERATE DETAILED ELEVATION MODELS OF COMPLEX NATURAL STRUCTURES

    Directory of Open Access Journals (Sweden)

    T. K. Kohoutek

    2012-07-01

Unmanned Aerial Vehicles (UAVs) are increasingly used in civil areas such as geomatics. Autonomously navigated platforms have great flexibility in flying and manoeuvring in complex environments to collect remote sensing data. In contrast to standard technologies such as manned aerial platforms (airplanes and helicopters), UAVs are able to fly closer to the object and in small-scale areas of high-risk situations such as landslides, volcano and earthquake areas, and floodplains. Thus, UAVs are sometimes the only practical alternative in areas where access is difficult and where no manned aircraft is available or no flight permission is given. Furthermore, compared to terrestrial platforms, UAVs are not limited to specific view directions and can overcome occlusions from trees, houses and terrain structures. Equipped with image sensors and/or laser scanners, they are able to provide elevation models, rectified images, textured 3D models and maps. In this paper we describe a UAV platform that can carry a range imaging (RIM) camera, including power supply and data storage, for the detailed mapping and monitoring of complex structures such as alpine riverbed areas. The UAV platform NEO from Swiss UAV was equipped with the RIM camera CamCube 2.0 by PMD Technologies GmbH to capture the surface structures. Its navigation system includes an autopilot. To validate the UAV trajectory, a 360° prism was installed and tracked by a total station. In the paper, a workflow for the processing of UAV-RIM data is proposed, based on the processing of differential GNSS data in combination with the acquired range images. Subsequently, the obtained trajectory results are compared and verified against a track of a UAV (Falcon 8, Ascending Technologies) recorded with a total station simultaneously to the GNSS data acquisition. The results showed that the UAV's position using differential GNSS could be determined in the centimetre to the decimetre

  16. Automatic Generation of Wide Dynamic Range Image without Pseudo-Edge Using Integration of Multi-Steps Exposure Images

    Science.gov (United States)

    Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi

Digital cameras have been advancing rapidly. However, the shot image differs from the sight image generated when the same scenery is seen with the naked eye. An image that captures a scene of wide dynamic range suffers from blown-out highlights and crushed blacks, problems that hardly arise in the sight image; these are a contributory cause of the difference between the shot image and the sight image. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the shot image is narrower than that of the sight image. To solve this problem, we propose an automatic method to decide an effective exposure range from the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we erase pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.

  17. Improvement of range spatial resolution of medical ultrasound imaging by element-domain signal processing

    Science.gov (United States)

    Hasegawa, Hideyuki

    2017-07-01

The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).

  18. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  19. Enhancing the dynamic range of Ultrasound Imaging Velocimetry using interleaved imaging

    NARCIS (Netherlands)

    Poelma, C.; Fraser, K.H.

    2013-01-01

    In recent years, non-invasive velocity field measurement based on correlation of ultrasound images has been introduced as a promising technique for fundamental research into disease processes, as well as a diagnostic tool. A major drawback of the method is the relatively limited dynamic range when

  20. using fuzzy logic in image processing

    International Nuclear Information System (INIS)

    Ashabrawy, M.A.F.

    2002-01-01

Due to the unavoidable merge between computers and mathematics, signal processing in general and image processing in particular have greatly improved and advanced. Signal processing deals with the processing of any signal data for use by a computer, while image processing deals specifically with images of all kinds. Image processing involves the manipulation of image data for better appearance and viewing by people; consequently, it is a rapidly growing and exciting field to be involved in today. This work takes an applications-oriented approach to image processing. The applications are the maps and documents of the first Egyptian research reactor (ETRR-1), X-ray medical images, and fingerprint images. Since filters generally work on continuous ranges rather than discrete values, fuzzy logic techniques are convenient; these techniques are powerful in image processing and can deal with both one-dimensional (1-D) and two-dimensional (2-D) images.

  1. [Research on the range of motion measurement system for spine based on LabVIEW image processing technology].

    Science.gov (United States)

    Li, Xiaofang; Deng, Linhong; Lu, Hu; He, Bin

    2014-08-01

A measurement system based on image processing technology and developed in LabVIEW was designed to quickly obtain the range of motion (ROM) of the spine. The NI Vision module was used to pre-process the original images and calculate the angles of marked needles in order to obtain ROM data. Six human cadaveric thoracic spine segments (T7-T10) were subjected to 6 kinds of loads, including left/right lateral bending, flexion, extension, and clockwise/counterclockwise torsion. The system was used to measure the ROM of segment T8-T9 under loads from 1 Nm to 5 Nm. The experimental results showed that the system is able to measure the ROM of the spine accurately and quickly, providing a simple and reliable tool for spine biomechanics investigators.

  2. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min

    2015-01-01

In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. The results show that 2D image quality did not depend strongly on the number of sampling images, but rather on how well effective RGI images were extracted. The number of RGI images was, however, important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems need extra functions, such as active illumination through the disturbing airborne particles. One such powerful active vision system is the range-gated imaging system, which can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images: the illuminant flashes for an ultra-short time and the highly sensitive image sensor is gated with an ultra-short exposure time so that only that illumination light is captured, even through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.
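One simple way to turn a stack of time-sliced RGI images into a range image is to treat each pixel's round-trip time as the intensity-weighted centroid of the gate delays. This is an illustrative combination rule, not necessarily the one used in the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_gated_slices(slices, delays):
    """Estimate a range image from a stack of range-gated images.

    slices: list of intensity images, one per gate delay
    delays: gate-opening delays in seconds, one per slice
    Each pixel's round-trip time is the intensity-weighted centroid of
    its gate delays; range = c * t / 2.
    """
    stack = np.stack([np.asarray(s, dtype=float) for s in slices])
    d = np.asarray(delays, dtype=float).reshape(-1, 1, 1)
    total = np.maximum(stack.sum(axis=0), 1e-12)   # avoid divide-by-zero
    t = (stack * d).sum(axis=0) / total            # per-pixel round-trip time
    return C * t / 2.0
```

The centroid interpolates between gate positions, which is consistent with the abstract's observation that range image quality improves with the number of RGI slices.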

  3. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

In this paper, we analyzed the effect of the number of sampling images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. The results show that 2D image quality did not depend strongly on the number of sampling images, but rather on how well effective RGI images were extracted. The number of RGI images was, however, important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and for preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems need extra functions, such as active illumination through the disturbing airborne particles. One such powerful active vision system is the range-gated imaging system, which can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images: the illuminant flashes for an ultra-short time and the highly sensitive image sensor is gated with an ultra-short exposure time so that only that illumination light is captured, even through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  4. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used: optical density scan, histogram-equalized scan, linear histogram-based scan, linear histogram-independent scan, linear optical density scan, logarithmic scan, and power square-root scan. The histogram distributions of the scanned images were plotted and the gray-scale ranges were compared among the scan types. The scanned images were then transformed to the gray window by the palette-fitting method, and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and processed by the Gray Scale Expansion (GSE) method; the patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distributions showed that minimum and maximum gray-scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic and square-root methods, respectively. Out of 256 gray-scale steps, only 7 to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images was improved. Experiments performed with patient images showed that improved identification of organs was achieved by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improves the image contrast of a linacgram, indicating that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale.
As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
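Expanding an occupied gray-scale range to the full output scale is a linear contrast stretch. A minimal sketch in the spirit of the GSE idea (the paper's exact mapping and window handling may differ):

```python
import numpy as np

def gray_scale_expansion(img, out_max=255):
    """Map the occupied gray range [min, max] of a low-contrast image
    onto the full output scale [0, out_max]."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img)
    return (img - lo) / (hi - lo) * out_max
```

If a portal image occupies only 7 to 30% of the 256 gray steps, as the abstract reports, this mapping multiplies the usable contrast by roughly 3 to 14 times.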

  5. Introduction to sensors for ranging and imaging

    CERN Document Server

    Brooker, Graham

    2009-01-01

"This comprehensive text-reference provides a solid background in active sensing technology. It is concerned with active sensing, starting with the basics of time-of-flight sensors (operational principles, components), and going through the derivation of the radar range equation and the detection of echo signals, both fundamental to the understanding of radar, sonar and lidar imaging. Several chapters cover signal propagation of both electromagnetic and acoustic energy, target characteristics, stealth, and clutter. The remainder of the book introduces the range measurement process, active ima

  6. Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images

    Science.gov (United States)

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.

    2015-01-01

    Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
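    Histogram matching of the kind described, shaping a source histogram into a reference histogram via their cumulative distributions, can be sketched as follows (a generic implementation, not the authors' code; integer-valued images are assumed):

```python
import numpy as np

def histogram_match(source, reference, levels=256):
    """Shape the histogram of `source` into that of `reference` (integer images)."""
    src_hist, _ = np.histogram(source, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # for each source level, find the reference level with the closest CDF value
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return mapping[source]
```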

  7. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on a coded aperture technique. This scheme can help us obtain HDR images with an extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.

  8. Long range image enhancement

    CSIR Research Space (South Africa)

    Duvenhage, B

    2015-11-01

    Full Text Available the surveillance system performance. This paper discusses an image processing method that tracks the behaviour of the PSF and then de-warps the image to reduce the disruptive effects of turbulence. Optical flow, an average image filter and a simple unsharp mask...

  9. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system structured around the Digital Equipment Corporation's PDP-8 minicomputer which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image processing techniques. The image processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views

  10. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    Science.gov (United States)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of the next satellites, we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average, and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. The same statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate each camera's dynamic range consistency, and for each latitudinal overlap of PMS1 and PMS2 to evaluate the consistency between the two cameras. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the consistency within a single camera is better than that between the dual cameras.
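    The four statistics used in this evaluation are straightforward to compute per image; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def dynamic_range_stats(img):
    """Basic statistics used to characterize the dynamic range of a DN image."""
    return {
        "min": int(img.min()),
        "max": int(img.max()),
        "mean": float(img.mean()),
        "std": float(img.std()),
    }
```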

  11. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and the personal computer, then covers the classification of personal computer image processing systems, digital image processing, the development of personal computers and image processing, image processing systems, basic methods of image processing such as color image processing and video processing, software and interfaces, computer graphics, video images and video processing, and application cases of high-speed image processing such as satellite image processing, color transformation, and a portrait work system.

  12. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of a one-pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the multi-resolution CP transformation of the actual image, which takes the role of Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). 
The application to

  13. Digital image processing for radiography in nuclear power plants

    International Nuclear Information System (INIS)

    Heidt, H.; Rose, P.; Raabe, P.; Daum, W.

    1985-01-01

    With the help of digital processing of radiographic images of reactor components it is possible to increase the security and objectiveness of the evaluation. Several examples of image processing procedures (contrast enhancement, density profiles, shading correction, digital filtering, superposition of images etc.) show the advantages for the visualization and evaluation of radiographs. Digital image processing can reduce some of the restrictions of radiography in nuclear power plants. In addition, a higher degree of automation can be cost-saving and increase the quality of radiographic evaluation. The aim of the work performed was to improve the readability of radiographs for the human observer. The main problems are lack of contrast and the presence of disturbing structures like weld seams. Digital image processing of film radiographs starts with the digitization of the image. Conventional systems use TV cameras or scanners and provide a dynamic range of 1.5 to 3 density units, which is digitized to 256 grey levels. For the enhancement process it is necessary that the grey-level range covers the density range of the important regions of the presented film. On the other hand, the grey-level coverage should not be wider than necessary, to minimize the width of the digitization steps. Poor digitization makes flaws and cracks invisible and spoils all further image processing

  14. Multi-exposure high dynamic range image synthesis with camera shake correction

    Science.gov (United States)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illuminance conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit the application. At present, the widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high quality HDR image without the ghost effect, we come up with an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without the ghost effect by registering and fusing four multi-exposure images.

  15. RADIANCE DOMAIN COMPOSITING FOR HIGH DYNAMIC RANGE IMAGING

    Directory of Open Access Journals (Sweden)

    M.R. Renu

    2013-02-01

    Full Text Available High dynamic range imaging aims at creating an image with a range of intensity variations larger than the range supported by a camera sensor. Most commonly used methods combine multiple-exposure low dynamic range (LDR) images to obtain the high dynamic range (HDR) image. Available methods typically neglect the noise term while finding appropriate weighting functions to estimate the camera response function as well as the radiance map. We look at the HDR imaging problem in a denoising framework and aim at reconstructing a low-noise radiance map from noisy low dynamic range images, which is tone mapped to get the LDR equivalent of the HDR image. We propose a maximum a posteriori probability (MAP) based reconstruction of the HDR image using a Gibbs prior to model the radiance map, with total variation (TV) as the prior to avoid unnecessary smoothing of the radiance field. To make the computation with the TV prior efficient, we extend the majorize-minimize method of upper bounding the total variation by a quadratic function to our case, which has a nonlinear term arising from the camera response function. A theoretical justification for doing radiance domain denoising as opposed to image domain denoising is also provided.

  16. Characterization of modulated time-of-flight range image sensors

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2009-01-01

    A number of full-field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a picosecond laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post-processing of the acquired range measurements.
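    In the standard four-phase homodyne scheme these sensors rely on, the envelope phase, and hence the range, is recovered from four samples taken 90° apart. A minimal sketch of that conversion (this illustrates the measurement principle, not the paper's characterization procedure):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(a0, a1, a2, a3, mod_freq_hz):
    """Estimate range per pixel from four homodyne samples 0/90/180/270 deg apart."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)  # modulation envelope phase
    return phase * C / (4 * np.pi * mod_freq_hz)        # phase -> one-way distance
```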

  17. Image processing for medical diagnosis using CNN

    International Nuclear Information System (INIS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving the accuracy of both the diagnosis procedure and the surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the bio-medical field real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore techniques able to speed up the processing play an important role. From this point of view, in this work a novel approach to image processing has been developed. The new idea is to use Cellular Neural Networks to investigate diagnostic images, such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images

  18. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    Full Text Available This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image processing input are produced by this imaging system with the same parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different cores. The image quality assessment method used is the just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental data sets can be cross-validated. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, though image quality with a higher compression ratio improves less than with a lower ratio; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  19. Model-based restoration using light vein for range-gated imaging systems.

    Science.gov (United States)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method especially for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory, and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the fuzzy parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieves better performance in a range-gated imaging system.

  20. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of the facilities. 2D and range image data acquired in low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images, and moreover provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualizing otherwise invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied to target recognition and to harsh environments, such as fog and underwater vision. Also, this technology has been

  1. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of the facilities. 2D and range image data acquired in low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images, and moreover provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualizing otherwise invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied to target recognition and to harsh environments, such as fog and underwater vision. Also, this technology has been

  2. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, differently exposed image sequences are captured with the camera array, and the deviation between images is obtained using a derivative optical flow method based on color gradients, after which the images are aligned. Then, a high dynamic range image fusion weighting function is established by combining the inverse camera response function and the deviation between images, and is applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes, and achieves good results.

  3. Quantitative Analysis of Range Image Patches by NEB Method

    Directory of Open Access Journals (Sweden)

    Wang Wen

    2017-01-01

    Full Text Available In this paper we analyze sampled high-dimensional data from a range image database with the NEB method. We select a large random sample of log-valued, high-contrast, normalized 8×8 range image patches from the Brown database. We construct a density estimator and establish 1-dimensional cell complexes from the range image patch data. We find topological properties of 8×8 range image patches, and show that there exist two types of subsets of 8×8 range image patches modelled as a circle.

  4. A high-resolution full-field range imaging system

    Science.gov (United States)

    Carnegie, D. A.; Cree, M. J.; Dorrington, A. A.

    2005-08-01

    There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can typically be characterized as either laser scanning systems, where a laser point is sequentially scanned over a scene, or full-field acquisition systems, where the range to every point in the image is obtained simultaneously. The former offer advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and with the range resolution of some point laser scanners. Initial results have a centimeter range resolution for a 10 second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.

  5. Robust image registration for multiple exposure high dynamic range image synthesis

    Science.gov (United States)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross-correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high quality HDR images.
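    The phase cross-correlation step can be sketched in NumPy: normalizing the cross-power spectrum to unit magnitude keeps only phase information, which is what makes the method robust to brightness changes. This is a generic integer-shift version; the paper additionally uses phase congruency and evolutionary programming to reach sub-pixel accuracy:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative to `ref`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```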

  6. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.

  7. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
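    Of the corrections listed, flatfield correction is easily illustrated: the image is divided by a normalized image of the illumination pattern, optionally after dark-frame subtraction. A minimal sketch (generic, not from the chapter):

```python
import numpy as np

def flatfield_correct(img, flat, dark=None):
    """Correct uneven illumination by dividing by the normalized flatfield image."""
    img = img.astype(np.float64)
    flat = flat.astype(np.float64)
    if dark is not None:
        img = img - dark
        flat = flat - dark
    gain = flat / flat.mean()               # normalized illumination pattern
    return img / np.maximum(gain, 1e-12)    # avoid division by zero
```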

  8. Positron range in PET imaging: non-conventional isotopes

    International Nuclear Information System (INIS)

    Jødal, L; Le Loirec, C; Champion, C

    2014-01-01

    In addition to conventional short-lived radionuclides, longer-lived isotopes are becoming increasingly important to positron emission tomography (PET). The longer half-life both allows for circumvention of the in-house production of radionuclides, and expands the spectrum of physiological processes amenable to PET imaging, including processes with prohibitively slow kinetics for investigation with short-lived radiotracers. However, many of these radionuclides emit ‘high-energy’ positrons and gamma rays which affect the spatial resolution and quantitative accuracy of PET images. The objective of the present work is to investigate the positron range distribution for some of these long-lived isotopes. Based on existing Monte Carlo simulations of positron interactions in water, the probability distribution of the line of response displacement has been empirically described by means of analytic displacement functions. Relevant distributions have been derived for the isotopes 22 Na, 52 Mn, 89 Zr, 45 Ti, 51 Mn, 94m Tc, 52m Mn, 38 K, 64 Cu, 86 Y, 124 I, and 120 I. It was found that the distribution functions previously found for a series of conventional isotopes (Jødal et al 2012 Phys. Med. Biol. 57 3931–43) were also applicable to these non-conventional isotopes, except that for 120 I, 124 I, 89 Zr, 52 Mn, and 64 Cu, parameters in the formulae were less well predicted by mean positron energy alone. Both conventional and non-conventional range distributions can be described by relatively simple analytic expressions. The results will be applicable to image-reconstruction software to improve the resolution. (paper)

  9. High Dynamic Range Imaging Using Multiple Exposures

    Science.gov (United States)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of the histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of the method, which outperforms other methods in terms of imaging quality.
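The weighted-summation step the abstract describes can be sketched as follows. For simplicity this assumes a linear camera response (the paper instead recovers the CRF from a high-order polynomial in the exposure ratios), and the hat-shaped weight is a common generic choice, not necessarily the authors':

```python
import numpy as np

def merge_hdr(images, exposures, inv_crf=lambda z: z):
    """Weighted average of per-exposure radiance estimates.

    images: arrays scaled to [0, 1]; exposures: shutter times.
    inv_crf maps pixel values back to relative sensor irradiance
    (identity here, i.e. a linear camera response is assumed)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)     # hat weight: distrust under/over-exposure
        num += w * inv_crf(img) / t           # radiance estimate from this exposure
        den += w
    return num / np.clip(den, 1e-12, None)

# Synthetic scene: radiance spanning decades, linear camera, three exposures
radiance = np.logspace(-2, 0.5, 64)
exposures = [0.1, 1.0, 10.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]
hdr = merge_hdr(shots, exposures)
```

With a linear camera, each unclipped exposure recovers the radiance exactly, so the merged result matches the true radiance wherever at least one exposure is usable.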

  10. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

    Digital images are used as an information carrier in different sciences and technologies, and there is a continuing drive to increase the number of bits per image pixel in order to obtain more information. Increasing the bit depth allows fine object details to be resolved more precisely, but it significantly complicates image processing. In this paper, some methods of compression and contour detection based on two-dimensional Markov chains are offered. The methods are comparable in efficiency to well-known analogues but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so the processing time does not grow as the number of bits per pixel increases. A further advantage of the methods is their low consumption of energy resources: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.
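The separation of a multi-bit image into binary images that can be processed independently (and, on suitable hardware, in parallel) can be illustrated by bit-plane decomposition. This is a generic sketch of the decomposition itself, not the paper's Markov-chain methods:

```python
import numpy as np

def bit_planes(image, bits=8):
    """Split a grayscale image into its binary bit-plane images (MSB first)."""
    return [(image >> b) & 1 for b in range(bits - 1, -1, -1)]

def from_bit_planes(planes):
    """Reassemble the image; each binary plane could be processed
    independently and in parallel before recombination."""
    bits = len(planes)
    out = np.zeros_like(planes[0], dtype=np.uint16)
    for b, p in zip(range(bits - 1, -1, -1), planes):
        out |= p.astype(np.uint16) << b
    return out

img = np.arange(256, dtype=np.uint16).reshape(16, 16)
planes = bit_planes(img)
restored = from_bit_planes(planes)
```

Because each plane is binary, per-plane processing only needs logical operations, which is the property the paper exploits for speed and low energy consumption.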

  11. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial descriptions to fulfill this demand. This paper proposes a general framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  12. Image processing techniques for thermal, x-rays and nuclear radiations

    International Nuclear Information System (INIS)

    Chadda, V.K.

    1998-01-01

    The paper describes image acquisition techniques for the non-visible range of the electromagnetic spectrum, especially thermal, x-ray and nuclear radiations. Thermal imaging systems are valuable tools used for applications ranging from PCB inspection, hot-spot studies, fire identification and satellite imaging to defense applications. Penetrating radiations like x-rays and gamma rays are used in NDT, baggage inspection, CAT scans, cardiology, radiography, nuclear medicine etc. Neutron radiography complements conventional x-ray and gamma radiography. For these applications, image processing and computed tomography are employed for 2-D and 3-D image interpretation, respectively. The paper also covers the main features of image processing systems for quantitative evaluation of gray-level and binary images. (author)

  13. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    Directory of Open Access Journals (Sweden)

    Tadhg S. O’Donovan

    2010-12-01

    Full Text Available The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.

  14. High dynamic velocity range particle image velocimetry using multiple pulse separation imaging.

    Science.gov (United States)

    Persoons, Tim; O'Donovan, Tadhg S

    2011-01-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods.
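The MPS compositing idea, picking a locally optimal pulse separation per vector, can be sketched as below. The displacement validity window used for selection is a simplified stand-in for the paper's correlation-strength and uncertainty criterion, and the units (velocity in pixels per time unit) are assumptions of this sketch:

```python
import numpy as np

def composite_velocity(fields, separations, d_min=2.0, d_max=8.0):
    """Per vector, keep the estimate from the largest pulse separation dt
    whose particle displacement |v|*dt stays within the resolvable window
    [d_min, d_max] pixels (longer dt -> lower relative uncertainty).
    Vectors with no valid dt fall back to the shortest separation."""
    fields = np.asarray(fields, dtype=np.float64)   # shape (n_dt, H, W)
    seps = np.asarray(separations, dtype=np.float64)
    disp = np.abs(fields) * seps[:, None, None]     # displacement in pixels
    ok = (disp >= d_min) & (disp <= d_max)
    rank = np.where(ok, seps[:, None, None], -np.inf)
    best = np.argmax(rank, axis=0)                  # index of chosen dt per vector
    composite = np.take_along_axis(fields, best[None], axis=0)[0]
    return composite, best

# Noise-free toy field with a wide velocity range, three pulse separations
true = np.array([[0.5, 2.0], [5.0, 10.0]])
separations = [1.0, 4.0, 16.0]
fields = [true.copy() for _ in separations]
composite, best = composite_velocity(fields, separations)
```

Slow vectors get the long separation, fast vectors the short one, which is how the composite field covers a dynamic range no single pulse separation could.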

  15. INFLUENCE OF RAW IMAGE PREPROCESSING AND OTHER SELECTED PROCESSES ON ACCURACY OF CLOSE-RANGE PHOTOGRAMMETRIC SYSTEMS ACCORDING TO VDI 2634

    Directory of Open Access Journals (Sweden)

    J. Reznicek

    2016-06-01

    Full Text Available This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity, and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error LME, which characterizes the accuracy of the validated photogrammetric system. For higher reliability, the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to assess the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.
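Once the measured and calibrated scale-bar lengths are available, the LME figure from VDI 2634/1 reduces to a one-line computation. The lengths below are hypothetical values for illustration:

```python
import numpy as np

def length_measurement_error(measured, calibrated):
    """Maximum length measurement error (LME), VDI/VDE 2634 part 1 style:
    the largest absolute deviation of the photogrammetrically measured
    lengths from the calibrated reference lengths."""
    deviations = np.asarray(measured, float) - np.asarray(calibrated, float)
    return np.max(np.abs(deviations))

# Hypothetical lengths in mm: calibrated scale-bar values vs. measured values
calibrated = np.array([100.000, 250.000, 500.000, 1000.000])
measured = np.array([100.004, 249.997, 500.009, 999.995])
lme = length_measurement_error(measured, calibrated)   # largest deviation, in mm
```

In the paper's setup this evaluation is repeated over the ten independent measurement sets to assess repeatability as well.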

  16. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA², by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  17. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  18. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  19. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
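The encode/decode round trip the abstract describes, reduce the resolution to compress, then restore it by interpolation when the range data are needed, can be sketched as follows. Bilinear interpolation is used here as a generic choice; the paper's specific interpolation algorithm and virtual fringe-projection encoding are not reproduced:

```python
import numpy as np

def downsample(img, factor):
    """Encoding step: keep every factor-th sample to shrink the data."""
    return img[::factor, ::factor]

def upscale_bilinear(small, shape, factor):
    """Decoding step: recovered pixel (Y, X) samples the low-resolution
    image at (Y/factor, X/factor) with bilinear weights (edges replicate)."""
    h, w = small.shape
    ys = np.minimum(np.arange(shape[0]) / factor, h - 1)
    xs = np.minimum(np.arange(shape[1]) / factor, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = small[np.ix_(y0, x0)] * (1 - wx) + small[np.ix_(y0, x1)] * wx
    bot = small[np.ix_(y1, x0)] * (1 - wx) + small[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Smooth synthetic range map (mm): gentle depth variation keeps the
# interpolation error small, the regime where this compression works well
y, x = np.mgrid[0:64, 0:64]
depth = 1000 + 50 * np.sin(x / 20.0) + 30 * np.cos(y / 25.0)
recovered = upscale_bilinear(downsample(depth, 2), depth.shape, 2)
err = np.abs(recovered - depth)
```

The residual error is tiny where the surface is smooth and grows where detail is lost, which is the trade-off between data size and error rate the paper measures.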

  20. High-speed image processing systems in non-destructive testing

    Science.gov (United States)

    Shashev, D. V.; Shidlovskiy, S. V.

    2017-08-01

    Digital imaging systems are used in most industrial and scientific fields, where they effectively solve a wide range of tasks in non-destructive testing. For decades, digital image processing has faced the problem of achieving operating speeds sufficient to process and analyze video streams in real time, ideally in small mobile devices. In this paper, we consider the use of parallel-pipeline computing architectures in image processing problems, using the example of an algorithm for calculating the area of an object in a binary image. This approach allows us to achieve high-speed performance in digital image processing tasks.
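The example algorithm mentioned, the area of an object in a binary image, reduces to a pixel count. The row-wise reduction below mirrors the parallel-then-pipeline structure in spirit only (a sequential Python sketch, not an FPGA implementation):

```python
import numpy as np

def object_area(binary, pixel_size=1.0):
    """Area of the foreground object: count of set pixels times pixel area.
    In a parallel-pipeline scheme the per-row partial sums would be formed
    concurrently as rows stream in, then accumulated in a final stage."""
    row_counts = binary.astype(np.int64).sum(axis=1)  # per-row partial sums
    return row_counts.sum() * pixel_size ** 2

# 10x10 binary image with a 4x6 rectangular object
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 2:8] = 1
area = object_area(img)
```

Because each row's partial sum is independent, the computation maps naturally onto the parallel-pipeline hardware the paper discusses.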

  1. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA{sup 2} by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  2. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  3. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and the data must then be systematically analyzed to learn about the objects within them. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry

  4. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  5. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images using digital image processing tools is presented in this work. When fingerprints have been taken without care and are blurred or, as in the case presented here, mostly illegible, their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.
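The abstract does not give the DHP algorithm itself. A related, standard two-threshold operation is hysteresis thresholding, sketched below; this illustrates hysteresis in general, not the paper's differential variant:

```python
import numpy as np

def hysteresis_threshold(img, low, high):
    """Keep pixels above `low` only if they are connected (4-neighbourhood)
    to a pixel above `high`. Weak ridge fragments attached to strong ridge
    pixels survive; isolated weak responses (noise) are suppressed."""
    weak = img >= low
    strong = img >= high
    out = strong.copy()
    while True:
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]     # dilate one step in each direction
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        grown &= weak                   # only grow into weak pixels
        if np.array_equal(grown, out):
            return out
        out = grown

# Tiny example: a strong ridge pixel (90) with weak neighbours (40)
ridge = np.array([[0, 40, 40, 40, 0],
                  [0,  0, 90,  0, 0],
                  [0, 40,  0, 40, 0]])
mask = hysteresis_threshold(ridge, low=30, high=80)
```

Here the weak pixels in the top row survive because they connect to the strong pixel, while the disconnected weak pixels in the bottom row are discarded.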

  6. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are among the most destructive geo-hazards and pose great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experiments are an effective tool in landslide research for obtaining critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding reduce the texture quality of the stereo images, making the image matching process difficult and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images, particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images to generate scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first, feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle.
In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric
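The fundamental-matrix filtering step can be illustrated with the Sampson distance, a standard first-order approximation of the geometric epipolar error. The sketch below assumes the FM is already estimated and skips the SIFT detection, robust checking, and triangulation stages:

```python
import numpy as np

def epipolar_filter(pts1, pts2, F, tol=1.0):
    """Keep correspondences consistent with the fundamental matrix F:
    Sampson distance, |x2^T F x1|^2 normalised by the epipolar-line
    gradients, must fall below tol^2 (tol in pixels)."""
    x1 = np.column_stack([pts1, np.ones(len(pts1))])   # homogeneous coords
    x2 = np.column_stack([pts2, np.ones(len(pts2))])
    Fx1 = x1 @ F.T                  # epipolar lines in image 2
    Ftx2 = x2 @ F                   # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den < tol**2

# Pure horizontal-translation stereo: F encodes the constraint y2 == y1
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[10., 20.], [30., 40.], [50., 60.]])
pts2 = np.array([[15., 20.], [33., 40.], [55., 75.]])  # last match violates y2 == y1
inliers = epipolar_filter(pts1, pts2, F)
```

Matches that violate the epipolar geometry (here, a row mismatch) are rejected before the surviving matches seed the triangulated constraints.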

  7. Toward 1-mm depth precision with a solid state full-field range imaging system

    Science.gov (United States)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
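The per-pixel phase extraction from the beat-image sequence can be sketched as below. This assumes the beat is sampled uniformly over exactly one cycle and a cosine sign convention; the real system's calibration and sampling details differ:

```python
import numpy as np

def phase_to_distance(frames, f_mod, c=299792458.0):
    """Per-pixel phase of the beat signal via its first-harmonic DFT bin,
    then distance = c * phase / (4 * pi * f_mod), i.e. the round-trip
    delay at the modulation frequency (10 MHz in the paper).
    frames: N images sampled uniformly over one beat cycle."""
    frames = np.asarray(frames, dtype=np.float64)
    n = len(frames)
    k = np.arange(n)
    i = np.tensordot(np.cos(2 * np.pi * k / n), frames, axes=1)
    q = np.tensordot(np.sin(2 * np.pi * k / n), frames, axes=1)
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)
    return c * phase / (4 * np.pi * f_mod)

# Simulate a 4x4-pixel target at 3 m: expected phase is 4*pi*f*d/c
f_mod, d_true = 10e6, 3.0
phi = 4 * np.pi * f_mod * d_true / 299792458.0
t = np.arange(16) / 16.0
frames = 100 + 50 * np.cos(2 * np.pi * t - phi)[:, None, None] * np.ones((1, 4, 4))
d = phase_to_distance(frames, f_mod)
```

Using many samples per beat cycle (16 here, versus the minimum of 4) is what gives the phase estimate its robustness, consistent with the paper's emphasis on sampling.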

  8. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong

    2015-05-01

    Full Text Available A new method for the aerial triangulation of close-range images based on dual quaternions is presented. A dual quaternion is used to represent the spiral screw motion of each beam in space: the real part of the dual quaternion represents the angular elements of all the beams in the close-range area network, while the real and dual parts jointly represent the line elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and simulated images with large attitude angles are selected for the aerial triangulation experiments. The experimental results show that the new dual-quaternion-based method can obtain higher accuracy.

  9. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only the hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  10. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning, and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially as related to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand information about events or descriptions, and recognize scene patterns. It draws on methods from a broad range of application domains involving massive data analysis. This paper contributes a review of recent developments in computer vision, image processing, and their related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also provide brief, up-to-date information about the techniques and their performance.

  11. A Range Ambiguity Suppression Processing Method for Spaceborne SAR with Up and Down Chirp Modulation.

    Science.gov (United States)

    Wen, Xuejiao; Qiu, Xiaolan; Han, Bing; Ding, Chibiao; Lei, Bin; Chen, Qi

    2018-05-07

    Range ambiguity is one of the factors that affect SAR image quality. Alternately transmitting up- and down-chirp modulated pulses is one method used to suppress range ambiguity. However, the defocused range-ambiguous signal can still have stronger backscattering intensity than the mainlobe imaging area in some cases, which severely degrades visual quality and subsequent applications. In this paper, a novel hybrid range ambiguity suppression method for up and down chirp modulation is proposed. The method can obtain an image of the ambiguity area and appropriately reduce the ambiguous signal power by applying pulse compression with the contrary modulation rate together with a CFAR detection method. The effectiveness and correctness of the approach are demonstrated by processing archive images acquired by the Chinese Gaofen-3 SAR sensor in full-polarization mode.
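The key property exploited, that a matched-rate reference compresses the intended echo while the opposite-rate (range-ambiguous) echo stays defocused, can be demonstrated with synthetic chirps:

```python
import numpy as np

def chirp(n, rate):
    """Unit-amplitude complex linear-FM pulse with the given rate sign."""
    t = np.arange(n) - n / 2
    return np.exp(1j * np.pi * rate * t**2 / n)

n = 256
up, down = chirp(n, +1.0), chirp(n, -1.0)

def compress(echo, ref):
    """Pulse compression: circular correlation with the conjugate reference."""
    return np.abs(np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref))))

focused = compress(up, up)      # matched rate: sharp peak at zero lag
defocused = compress(down, up)  # opposite rate (ambiguous echo): energy smeared
```

With up/down alternation, the ambiguous echo arrives with the opposite chirp rate, so its compressed energy stays spread roughly an order of magnitude below the mainlobe peak, which is what the paper's contrary-rate compression and CFAR detection build on.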

  12. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one frame every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
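
    The ranging principle rests on estimating the beat-signal phase from N equally spaced samples per beat cycle and converting that phase to distance. A minimal sketch of this computation follows; the modulation frequency and sample count are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    c = 3e8             # speed of light (m/s)
    f_mod = 40e6        # modulation frequency (Hz), illustrative
    N = 9               # samples per beat cycle (more than four, per the paper)

    def range_from_samples(samples):
        # Recover the phase of the fundamental via a single DFT bin; with
        # more than four samples per cycle, signal harmonics alias onto the
        # fundamental less severely than in four-sample quadrature detection.
        n = np.arange(len(samples))
        bin1 = np.sum(samples * np.exp(-2j * np.pi * n / len(samples)))
        phase = (-np.angle(bin1)) % (2 * np.pi)
        # Round-trip: phase = 4*pi*f_mod*range/c, so invert for range.
        return phase * c / (4 * np.pi * f_mod)

    # Simulate a sampled beat signal for a target at 2.0 m.
    true_range = 2.0
    phi = 4 * np.pi * f_mod * true_range / c
    n = np.arange(N)
    samples = 1.0 + 0.5 * np.cos(2 * np.pi * n / N - phi)
    print(range_from_samples(samples))  # ≈ 2.0
    ```

    The unambiguous range interval for this configuration is c / (2 f_mod) = 3.75 m, so the 2.0 m example wraps correctly.
    
    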

  13. Target recognition of log-polar ladar range images using moment invariants

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly on log-polar range images, the translation, rotation and scaling invariance of the combined moments becomes invalid; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes of the field of view; (iii) as the object position changes from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that in real applications it is better to divide the field of view into a recognition area and a searching area.
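
    A log-polar image is obtained by resampling the Cartesian image on an exponential-radius, uniform-angle grid. The plain-NumPy sketch below (grid sizes are illustrative) shows why rotation about the center becomes a circular shift along the angle axis — which is also why moments computed directly on the resampled image lose their usual rotation invariance, as conclusion (i) states:

    ```python
    import numpy as np

    def log_polar(img, n_rho=64, n_theta=64):
        """Resample a square image onto a log-polar grid centred on the
        image: rotation about the centre becomes a circular shift along the
        theta axis, and uniform scaling a shift along the rho axis."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        rho = np.exp(np.linspace(0, np.log(min(cx, cy)), n_rho))
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        # Nearest-neighbour sampling of the Cartesian image.
        ys = (cy + rho[:, None] * np.sin(theta)).round().astype(int)
        xs = (cx + rho[:, None] * np.cos(theta)).round().astype(int)
        ys = np.clip(ys, 0, h - 1)
        xs = np.clip(xs, 0, w - 1)
        return img[ys, xs]
    ```

    Rotating the input by 90 degrees shifts the log-polar image by a quarter of the theta axis rather than leaving it (and its moments) unchanged.
    
    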

  14. Real-time image processing II; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Juday, Richard D. (Editor)

    1990-01-01

    The present conference discusses topics in the fields of feature extraction and implementation, filter and correlation algorithms, optical correlators, high-level algorithms, and digital image processing for ranging and remote driving. Attention is given to a nonlinear filter derived from topological image features, IR image segmentation through iterative thresholding, orthogonal subspaces for correlation masking, composite filter trees and image recognition via binary search, and features of matrix-coherent optical image processing. Also discussed are multitarget tracking via hybrid joint transform correlator, binary joint Fourier transform correlator considerations, global image processing operations on parallel architectures, real-time implementation of a differential range finder, and real-time binocular stereo range and motion detection.

  16. High dynamic range imaging sensors and architectures

    CERN Document Server

    Darmont, Arnaud

    2013-01-01

    Illumination is a crucial element in many applications, matching the luminance of the scene with the operational range of a camera. When luminance cannot be adequately controlled, a high dynamic range (HDR) imaging system may be necessary. These systems are being increasingly used in automotive on-board systems, road traffic monitoring, and other industrial, security, and military applications. This book provides readers with an intermediate discussion of HDR image sensors and techniques for industrial and non-industrial applications. It describes various sensor and pixel architectures capable

  17. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    This research provides an appropriate detection technique for skin tumors, implemented using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they form a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done on the cellular scale for images of skin. This research establishes several checks for the early detection of skin tumors using microscopic images, after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient techniques appropriate for tumor detection; the algorithm applied provides promising results with accuracy in less time. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly. (author)

  18. The Study of Image Processing Method for AIDS PA Test

    International Nuclear Information System (INIS)

    Zhang, H J; Wang, Q G

    2006-01-01

    At present, the main test technique for AIDS in China is the PA test. Because the judgment of the PA test image still depends on the operator, the error ratio is high. To resolve this problem, we present a new image processing technique: first, many samples are processed to obtain reference data, including the coordinates of the center and the range of each image class; the image is then segmented using these data; finally, the result is exported after the data are judged. This technique is simple and accurate, and it also turns out to be suitable for processing and analyzing the PA test images of other infectious diseases.
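
    The abstract describes its segmentation-and-judgment step only loosely. One plausible reading — classify a detected spot by its distance to per-class reference centers learned from the training samples, accepting it only within each class's learned range — can be sketched as follows. The class names, centers, and radii are all hypothetical:

    ```python
    import numpy as np

    def judge(sample_center, class_centers, class_radii):
        """Assign a test spot to the class whose learned reference center is
        nearest, accepting the match only if the spot falls inside that
        class's learned range (radius); otherwise report no decision."""
        c = np.asarray(sample_center, dtype=float)
        dists = {name: np.linalg.norm(c - np.asarray(ctr, dtype=float))
                 for name, ctr in class_centers.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] <= class_radii[best] else None

    # Hypothetical reference data obtained from many processed samples.
    centers = {"positive": (40.0, 40.0), "negative": (10.0, 10.0)}
    radii = {"positive": 5.0, "negative": 5.0}
    print(judge((38, 41), centers, radii))    # → positive
    print(judge((100, 100), centers, radii))  # → None (outside every range)
    ```
    
    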

  19. The Ansel Adams zone system: HDR capture and range compression by chemical processing

    Science.gov (United States)

    McCann, John J.

    2010-02-01

    We tend to think of digital imaging and the tools of Photoshop™ as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday, consumer, silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had fixed dynamic range and a fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure on different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System starting in 1940. It predates the trillions of consumer photos of the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk describes the chemical mechanisms of the Zone System in the parlance of digital image processing, and the Zone System's chemical techniques for image synthesis. It also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.
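
    The Zone System's core bookkeeping — one zone per photographic stop (a factor of two in luminance), with Zone V anchored at metered middle gray — translates directly into digital terms. A tiny sketch; the clamped 0–X (0–10) scale follows common descriptions of the system:

    ```python
    import math

    def zone(luminance, middle_gray):
        """Ansel Adams zone for a scene luminance, with Zone V anchored at
        the metered middle gray: each zone is one photographic stop (2x)."""
        z = 5 + math.log2(luminance / middle_gray)
        return max(0, min(10, round(z)))

    # A surface four stops brighter than middle gray falls in Zone IX,
    # one three stops darker in Zone II.
    print(zone(16.0, 1.0))   # → 9
    print(zone(0.125, 1.0))  # → 2
    ```

    Dodging and burning then amount to locally shifting a region up or down a zone before the print's limited range clips it.
    
    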

  20. A Range Ambiguity Suppression Processing Method for Spaceborne SAR with Up and Down Chirp Modulation

    Directory of Open Access Journals (Sweden)

    Xuejiao Wen

    2018-05-01

    Full Text Available Range ambiguity is one of the factors that affect SAR image quality. Alternately transmitting up and down chirp modulation pulses is one of the methods used to suppress range ambiguity. However, the defocused range-ambiguous signal can still exhibit stronger backscattering intensity than the mainlobe imaging area in some cases, which has a severe impact on visual effects and subsequent applications. In this paper, a novel hybrid range ambiguity suppression method for up and down chirp modulation is proposed. The method obtains an image of the ambiguity area and appropriately reduces the ambiguous signal power by applying pulse compression with the contrary modulation rate together with a CFAR detection method. The effectiveness and correctness of the approach are demonstrated by processing archive images acquired by the Chinese Gaofen-3 SAR sensor in full-polarization mode.

  1. ISAR imaging using the instantaneous range instantaneous Doppler method

    CSIR Research Space (South Africa)

    Wazna, TM

    2015-10-01

    Full Text Available In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  2. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful for analyzing bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes and is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
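
    As a concrete instance of one task listed above — binarization — Otsu's classic method picks the threshold that maximizes the between-class variance of the foreground/background split. A self-contained sketch in plain NumPy (the review names the task but prescribes no particular algorithm):

    ```python
    import numpy as np

    def otsu_threshold(img):
        """Binarization threshold by Otsu's method: choose the gray level
        that maximizes the between-class variance of the two-class split."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        levels = np.arange(256)
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
            mu1 = (levels[t:] * p[t:]).sum() / w1
            var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
            if var > best_var:
                best_t, best_var = t, var
        return best_t

    # A synthetic bimodal image separates cleanly between its two modes.
    img = np.concatenate([np.full(500, 60), np.full(500, 190)])
    t = otsu_threshold(img)
    print(t)  # falls strictly between the two modes
    ```
    
    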

  3. Effects of Resolution, Range, and Image Contrast on Target Acquisition Performance.

    Science.gov (United States)

    Hollands, Justin G; Terhaar, Phil; Pavlovic, Nada J

    2018-05-01

    We sought to determine the joint influence of resolution, target range, and image contrast on the detection and identification of targets in simulated naturalistic scenes. Resolution requirements for target acquisition have been developed based on threshold values obtained using imaging systems, when target range was fixed, and image characteristics were determined by the system. Subsequent work has examined the influence of factors like target range and image contrast on target acquisition. We varied the resolution and contrast of static images in two experiments. Participants (soldiers) decided whether a human target was located in the scene (detection task) or whether a target was friendly or hostile (identification task). Target range was also varied (50-400 m). In Experiment 1, 30 participants saw color images with a single target exemplar. In Experiment 2, another 30 participants saw monochrome images containing different target exemplars. The effects of target range and image contrast were qualitatively different above and below 6 pixels per meter of target for both tasks in both experiments. Target detection and identification performance were a joint function of image resolution, range, and contrast for both color and monochrome images. The beneficial effects of increasing resolution for target acquisition performance are greater for closer (larger) targets.
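
    The 6 pixels-per-meter breakpoint reported above couples sensor resolution, field of view, and range through simple geometry, which can be sketched as follows. The sensor resolution and field of view below are illustrative assumptions, not values from the study:

    ```python
    import math

    def pixels_per_meter(horizontal_pixels, horizontal_fov_deg, range_m):
        """Pixels subtended per meter of target at a given range, for a
        camera with the given horizontal resolution and field of view."""
        scene_width = 2 * range_m * math.tan(math.radians(horizontal_fov_deg) / 2)
        return horizontal_pixels / scene_width

    # Illustrative numbers: a 1920-pixel sensor with a 30 degree FOV.
    # Doubling the range halves the pixels available per meter of target.
    for r in (50, 100, 200, 400):
        print(r, round(pixels_per_meter(1920, 30.0, r), 1))
    ```
    
    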

  4. Influence of range-gated intensifiers on underwater imaging system SNR

    Science.gov (United States)

    Wang, Xia; Hu, Ling; Zhi, Qiang; Chen, Zhen-yue; Jin, Wei-qi

    2013-08-01

    Range-gated technology has been a hot research field in recent years due to its effectiveness in eliminating backscattering. As a result, it can enhance the contrast between a target and its background and extend the working distance of the imaging system. An underwater imaging system is required to image in low-light-level conditions as well as to eliminate the backscattering effect, which means the receiver must offer a high-speed external trigger function, high resolution, high sensitivity, low noise, and a wide gain dynamic range. For an intensifier, the noise characteristics directly restrict the observation effect and range of the imaging system. The background noise may decrease image contrast and sharpness, even covering the signal and making it impossible to recognize the target, so it is quite important to investigate the noise characteristics of intensifiers. SNR is an important parameter reflecting the noise features of a system. Using an underwater laser range-gated imaging prediction model, and according to linear SNR system theory, the gated imaging noise performance of currently marketed super-second-generation and generation III intensifiers was analyzed theoretically. Based on the active laser underwater range-gated imaging model, the effect of gated intensifiers on the system and the relationship between the system SNR and MTF were studied. Through theoretical and simulation analysis of the image intensifier background noise and SNR, the different influences of super-second-generation and generation III ICCDs on system SNR were obtained. A range-gated system SNR formula was put forward, and the influences of the two kinds of ICCDs on the system were compared, with a detailed theoretical analysis carried out through MATLAB simulation. This work lays a theoretical foundation for further eliminating the backscattering effect and improving ...

  5. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full custom associative memory chip. The PU has been developed for real time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed to be used in accelerated pattern matching execution for Magnetic Resonance Fingerprinting (biomedical applications), in real time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potentiality of the PU for the new applications.

  6. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
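
    A short example of the library's documented API in action, chaining a filter, a threshold, and connected-component labelling on one of the bundled sample images:

    ```python
    from skimage import data, filters, measure

    # Load a built-in test image, smooth it, threshold it with Otsu's
    # method, and label the resulting connected regions.
    img = data.coins()
    smooth = filters.gaussian(img, sigma=2)
    binary = smooth > filters.threshold_otsu(smooth)
    labels = measure.label(binary)
    print(labels.max(), "connected regions found")
    ```

    Each stage is a plain function on NumPy arrays, which is what makes the library easy to compose with the rest of the scientific Python stack.
    
    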

  7. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  8. A Wide Spectral Range Reflectance and Luminescence Imaging System

    Directory of Open Access Journals (Sweden)

    Tapani Hirvonen

    2013-10-01

    Full Text Available In this study, we introduce a wide spectral range (200–2500 nm) imaging system with a 250 μm minimum spatial resolution, which can be freely modified for a wide range of resolutions and measurement geometries. The system has been tested for reflectance and luminescence measurements, but can also be customized for transmittance measurements. This study includes performance results for the developed system, as well as examples of spectral images, and a discussion relating the system to existing systems and methods. The developed wide-spectral-range imaging system is highly customizable and has great potential in many practical applications.

  9. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  10. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  11. High speed display algorithm for 3D medical images using Multi Layer Range Image

    International Nuclear Information System (INIS)

    Ban, Hideyuki; Suzuki, Ryuuichi

    1993-01-01

    We propose a high-speed algorithm for displaying 3D voxel images obtained from medical imaging systems such as MRI. The algorithm converts the voxel image data into 6 Multi Layer Range Image (MLRI) data structures, an augmentation of the range image. To avoid calculations for invisible voxels, the algorithm selects at most 3 of the 6 MLRI data sets in accordance with the view direction. The proposed algorithm displays 256 x 256 x 256 voxel data within 0.6 seconds on a 22 MIPS workstation, without special hardware such as a graphics engine. Real-time display will be possible on a 100 MIPS class workstation with our algorithm. (author)
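
    The view-dependent selection of at most 3 of the 6 MLRI stacks amounts to back-face culling on the faces of the voxel cube: a face can contribute visible voxels only if its outward normal points toward the viewer. A small sketch; the face bookkeeping is illustrative, while the selection rule is the one described:

    ```python
    import numpy as np

    # Outward normals of the six MLRI stacks, one per face of the voxel cube.
    FACE_NORMALS = {
        "+x": np.array([1, 0, 0]), "-x": np.array([-1, 0, 0]),
        "+y": np.array([0, 1, 0]), "-y": np.array([0, -1, 0]),
        "+z": np.array([0, 0, 1]), "-z": np.array([0, 0, -1]),
    }

    def visible_faces(view_dir):
        """Pick the (at most 3) MLRI stacks whose outward normal faces the
        viewer, i.e. points against the viewing direction."""
        v = np.asarray(view_dir, dtype=float)
        v /= np.linalg.norm(v)
        return [name for name, n in FACE_NORMALS.items() if n @ v < 0]

    print(visible_faces([1, 1, 1]))  # the three faces turned toward the viewer
    print(visible_faces([1, 0, 0]))  # an axis-aligned view needs only one
    ```
    
    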

  12. Color sensitivity of the multi-exposure HDR imaging process

    Science.gov (United States)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
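
    The recovery step described above — map LDR values through the inverse camera response, normalize by exposure time, and blend across exposures — is commonly implemented with a hat-shaped confidence weight. A minimal sketch in the spirit of Debevec–Malik merging, assuming an 8-bit sensor and a known inverse response table; it is not the paper's exact pipeline:

    ```python
    import numpy as np

    def recover_irradiance(ldr_stack, exposure_times, inv_response):
        """Merge 8-bit LDR exposures into one irradiance map: look up each
        pixel in the inverse camera response, divide by its exposure time,
        and average with a hat weight that distrusts clipped pixels."""
        num = np.zeros(ldr_stack[0].shape, dtype=float)
        den = np.zeros_like(num)
        for img, t in zip(ldr_stack, exposure_times):
            w = 1.0 - 2.0 * np.abs(img / 255.0 - 0.5)  # hat weight, max at mid-gray
            num += w * inv_response[img] / t
            den += w
        return num / np.maximum(den, 1e-8)

    # Simulated linear sensor viewing a uniform patch of irradiance 0.3.
    inv_response = np.arange(256) / 255.0              # linear response assumed
    times = [0.5, 1.0, 2.0]
    stack = [np.full((2, 2), round(0.3 * t * 255), dtype=np.uint8) for t in times]
    print(recover_irradiance(stack, times, inv_response))  # ≈ 0.3 everywhere
    ```

    In a real pipeline the inverse response would come from the estimation step the abstract mentions, and white balance would be applied per channel before merging.
    
    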

  13. POTENTIALS OF IMAGE BASED ACTIVE RANGING TO CAPTURE DYNAMIC SCENES

    Directory of Open Access Journals (Sweden)

    B. Jutzi

    2012-09-01

    Full Text Available Obtaining a 3D description of man-made and natural environments is a basic task in Computer Vision and Remote Sensing. To this end, laser scanning is currently one of the dominating techniques used to gather reliable 3D information. The scanning principle inherently needs a certain time interval to acquire the 3D point cloud. On the other hand, new active sensors provide the possibility of capturing range information in images with a single measurement. With this new technique, image-based active ranging is possible, which allows capturing dynamic scenes, e.g., walking pedestrians in a yard or moving vehicles. Unfortunately, most of these range imaging sensors have strong technical limitations and are not yet sufficient for airborne data acquisition. It can be seen from the recent development of highly specialized (far-)range imaging sensors — so-called flash-light lasers — that most of the limitations could be alleviated soon, so that future systems will be equipped with improved image size and a potentially expanded operating range. The presented work is a first step towards the development of methods capable of applying range images in outdoor environments. To this end, an experimental setup was assembled for investigating these proposed possibilities, a measurement campaign was carried out with it, and first results are presented in this paper.

  14. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    Science.gov (United States)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  16. Cellular Neural Network for Real Time Image Processing

    International Nuclear Information System (INIS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-01-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in a parallel way, providing fast image processing capabilities that have been applied to a wide range of fields, among which is nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET)
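
    The per-pixel parallel update that makes CNNs fast in hardware can be emulated serially to see the dynamics. Below is a sketch of one forward-Euler step of the standard CNN state equation; the 3x3 templates in the demo are illustrative, not ones used at FTU or JET:

    ```python
    import numpy as np

    def cnn_output(x):
        # Standard piecewise-linear CNN output nonlinearity, saturating at +/-1.
        return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

    def conv3(img, kernel):
        # 3x3 neighborhood correlation with zero padding, in plain NumPy.
        p = np.pad(img, 1)
        out = np.zeros(img.shape, dtype=float)
        h, w = img.shape
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
        return out

    def cnn_step(x, u, A, B, bias, dt=0.1):
        """One Euler step of dx/dt = -x + A*y(x) + B*u + bias, where * is
        the local template operation over each cell's 3x3 neighborhood."""
        return x + dt * (-x + conv3(cnn_output(x), A) + conv3(u, B) + bias)

    # Illustrative run: iterate to a steady state on a random input image.
    rng = np.random.default_rng(1)
    u = rng.uniform(-1, 1, (32, 32))
    x = np.zeros_like(u)
    A = np.zeros((3, 3)); A[1, 1] = 2.0   # self-feedback template (bistable)
    B = np.zeros((3, 3)); B[1, 1] = 1.0   # identity control template
    for _ in range(200):
        x = cnn_step(x, u, A, B, bias=0.0)
    y = cnn_output(x)  # each cell settles into a saturated binary state
    ```

    With a self-feedback gain above 1 each cell is bistable, so the network relaxes every pixel to a saturated decision in parallel — the property the hardware exploits for real-time frame processing.
    
    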

  17. Luminescence imaging of water during carbon-ion irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Komori, Masataka; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Higashi-ku, Nagoya, Aichi 461-8673 (Japan); Akagi, Takashi; Yamashita, Tomohiro [Hygo Ion Beam Medical Center, Hyogo 679-5165 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Aichi 462-8508 (Japan)

    2016-05-15

    Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the carbon-ion ranges measured from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom's luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator produced ∼15 000 times more light than water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.
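
    Range estimation from such images reduces to locating the Bragg peak along the beam axis in the luminescence depth profile. A sketch on synthetic data — the profile shape below is fabricated for illustration, not the authors' measurements:

    ```python
    import numpy as np

    def estimate_range(depth_mm, intensity, window=5):
        """Estimate the beam range as the depth of the Bragg peak in a
        luminescence depth profile, after moving-average smoothing."""
        kernel = np.ones(window) / window
        smooth = np.convolve(intensity, kernel, mode="same")
        return depth_mm[np.argmax(smooth)]

    # Synthetic profile: a slowly rising plateau with a Bragg-like peak
    # near 150 mm depth.
    depth = np.arange(0, 200.0, 1.0)
    profile = 1.0 + 0.005 * depth + np.exp(-((depth - 150.0) ** 2) / 20.0)
    print(estimate_range(depth, profile))  # close to 150.0
    ```

    On real camera data, background subtraction and averaging over the beam's lateral extent would precede this step.
    
    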

  18. Luminescence imaging of water during carbon-ion irradiation for range estimation

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Komori, Masataka; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri; Akagi, Takashi; Yamashita, Tomohiro; Toshito, Toshiyuki

    2016-01-01

    Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also imaged three other types of phantoms (tap water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of the pure-water phantom. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the carbon-ion ranges measured from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom's luminescence image showed seven times higher luminescence and a 13% shorter range than the water phantoms'; the range in the acrylic phantom generally matched the calculated value. The plastic scintillator produced ∼15 000 times more light than water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.

  19. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Three main image-based approaches are generally used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, no complete solution is available to create a full 3D city model from images alone, and these image-based methods also have limitations. This paper presents a new approach towards image-based virtual 3D city modeling using close range photogrammetry, divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required, suitable frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the larger area, and scaling and alignment of the 3D model were performed. After texturing and rendering, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries, which makes such ground-based, close range approaches attractive.

  20. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat.
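
Several of the listed video operations (contrast stretch, image ratioing, false-color compositing) are simple per-pixel transforms. A hedged NumPy sketch of three of them on synthetic bands (the band names and percentile limits are illustrative assumptions):

```python
import numpy as np

def contrast_stretch(band, lo_pct=2, hi_pct=98):
    """Linearly stretch a band so the given percentiles map to 0..255."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    out = (band.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

def band_ratio(a, b):
    """Image ratioing: per-pixel ratio of two co-registered bands."""
    return a.astype(float) / np.maximum(b.astype(float), 1e-9)

rng = np.random.default_rng(0)
nir = rng.integers(50, 200, size=(64, 64))   # stand-ins for two bands
red = rng.integers(20, 100, size=(64, 64))
stretched = contrast_stretch(nir)
ratio = band_ratio(nir, red)
# False-colour compositing: stack three (stretched) planes into RGB.
composite = np.dstack([contrast_stretch(ratio), stretched, contrast_stretch(red)])
```

Clipping at the chosen percentiles trades a little saturation at the tails for much better mid-tone contrast, which is the usual design choice for display-oriented stretches.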

  1. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are discussed. The most important are the almost constantly good quality of the images and the possibilities of image processing.
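
Two of the processing steps named here, unsharp masking and dynamic range reduction, can be sketched with a plain box blur standing in for the production filters. The kernel sizes and weights below are illustrative assumptions, not those of any commercial system such as MUSICA:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable k x k box blur (pure NumPy, edge-padded)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def unsharp_mask(img, amount=1.0, k=5):
    """Unsharp masking: add back the detail removed by a blur, which
    sharpens edges (the overshoot at edges is the classic halo)."""
    return img + amount * (img - box_blur(img, k))

def compress_dynamic_range(img, weight=0.5, k=31):
    """Dynamic range reduction: attenuate the low-frequency component so
    that dense and lucent regions fit a common display window."""
    low = box_blur(img, k)
    return img - weight * (low - low.mean())

step = np.zeros((32, 32)); step[:, 16:] = 100.0   # a hard edge
sharpened = unsharp_mask(step)
```

Note that both operations are built from the same blur: unsharp masking boosts what the blur removes, while dynamic range reduction suppresses what the blur keeps.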

  2. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from the simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  3. SEGMENTATION AND QUALITY ANALYSIS OF LONG RANGE CAPTURED IRIS IMAGE

    Directory of Open Access Journals (Sweden)

    Anand Deshpande

    2016-05-01

    Full Text Available The iris segmentation plays a major role in an iris recognition system, increasing the performance of the system. This paper proposes a novel method for segmenting iris images to extract the iris part of long-range captured eye images, and an approach to select the best iris frame from iris polar image sequences by analyzing the quality of the iris polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long-range captured iris image sequences. The proposed segmentation method is compared with Hough-transform-based segmentation and has been found to give higher segmentation accuracy than the Hough transform.
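
The frame-selection idea, scoring iris polar images by their frequency content, can be sketched with a 2-D FFT. This is an illustrative metric (the radius threshold and the test frames are assumptions), not the paper's exact quality measure:

```python
import numpy as np

def frequency_quality(polar_img):
    """Score a polar iris frame by the fraction of spectral energy
    outside a low-frequency core; in-focus frames keep relatively
    more high-frequency energy."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(polar_img.astype(float))))
    h, w = spec.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)
    high = spec[r > min(h, w) / 8].sum()
    return float(high / spec.sum())

rng = np.random.default_rng(1)
sharp_frame = rng.standard_normal((64, 256))
# A defocused frame, simulated by a 3-tap horizontal moving average.
blurred_frame = (sharp_frame + np.roll(sharp_frame, 1, axis=1)
                 + np.roll(sharp_frame, -1, axis=1)) / 3.0
best = max([blurred_frame, sharp_frame], key=frequency_quality)
```

Selecting the frame with the largest score then picks the sharpest member of a polar image sequence.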

  4. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  5. Study on super-resolution three-dimensional range-gated imaging technology

    Science.gov (United States)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its advantages of high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related (triangle) method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of the imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The paper analyzes the calculation of 3D point clouds based on the triangle method: a 15 m depth slice of the target 3D point cloud was obtained using two frames, with a range precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on range accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
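
The triangle (intensity-ratio) reconstruction mentioned above recovers range from two gated frames whose intensities vary complementarily across the shared gate. A hedged sketch with synthetic frames (the linear gate profiles and the 500 m / 15 m geometry are illustrative assumptions):

```python
import numpy as np

def triangle_range(i1, i2, gate_start_m, gate_depth_m):
    """Recover per-pixel range from two gated frames whose intensities
    vary linearly and complementarily across the shared gate: the ratio
    I2/(I1+I2) then grows linearly with distance into the gate."""
    i1 = i1.astype(float)
    i2 = i2.astype(float)
    return gate_start_m + gate_depth_m * i2 / np.maximum(i1 + i2, 1e-9)

# Synthetic scene: a tilted surface spanning 500-515 m (one gate depth).
true_range = np.linspace(500.0, 515.0, 100)
frac = (true_range - 500.0) / 15.0
frame2 = frac          # brightness rises with distance in gate 2...
frame1 = 1.0 - frac    # ...and falls in gate 1
recovered = triangle_range(frame1, frame2, 500.0, 15.0)
```

Normalizing by the frame sum cancels target reflectivity, which is why only two frames are needed per depth slice.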

  6. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    Science.gov (United States)

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain-knowledge-based method has been implemented, combining a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured; in addition to the 6-9 mm range, even smaller polyps could be delineated through this processing. It takes [Formula: see text] min to measure the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared to the ground truth at [Formula: see text].
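
The morphological operators referred to here can be illustrated with a pure-NumPy binary opening, which removes structures smaller than the structuring element while preserving larger blobs (a generic sketch, not the paper's clinical pipeline):

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion, implemented as the dual of dilation."""
    return ~dilate(~mask, k)

def opening(mask, k=3):
    """Morphological opening: erosion then dilation, which removes
    isolated specks smaller than the structuring element."""
    return dilate(erode(mask, k), k)

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True   # a 10 x 10 blob standing in for a polyp region
mask[2, 2] = True         # single-pixel noise
cleaned = opening(mask)
```

The opening drops the isolated pixel but returns the large blob at its original extent, which is the property that makes it useful before size measurement.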

  7. Range and Image Based Modelling: a way for Frescoed Vault Texturing Optimization

    Science.gov (United States)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-02-01

    In the restoration of frescoed vaults it is important not only to know the geometric shape of the painted surface, but also to document its chromatic characterization and conservation status. The new techniques of range-based and image-based modelling, each with its limitations and advantages, offer a wide range of methods to obtain the geometric shape. Several studies widely document that laser scanning enables obtaining three-dimensional models with high morphological precision. However, the colour quality obtained with built-in laser scanner cameras is not comparable to that obtained for the shape. It is possible to improve the texture quality by means of a dedicated photographic campaign; this procedure, however, requires calculating the external orientation of each image by identifying control points on it and on the model, through a costly post-processing step. With image-based modelling techniques it is possible to obtain models that maintain the colour quality of the original images, but with variable geometric precision, locally lower than that of the laser scanning model. This paper presents a methodology that uses the camera external orientation parameters, calculated by image-based modelling techniques, to project the same images onto the model obtained from the laser scan. The methodology is tested on an Italian frescoed mirror vault (a schifo). The paper presents the different models, the precision analysis, and the efficiency evaluation of the proposed methodology.

  8. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  9. Range Process Simulation Tool

    Science.gov (United States)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
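
The finite-capacity schedule analysis at the core of such tools can be illustrated with a minimal discrete-event simulation: jobs queue for a fixed pool of servers and the earliest-free server takes the next job (an illustrative sketch, not the RPST implementation):

```python
import heapq

def simulate(jobs, servers):
    """Finite-capacity discrete-event sketch: each (arrival, duration)
    job takes the earliest-free server, starting no sooner than its
    arrival; returns completion times in processing order."""
    free_at = [0.0] * servers        # when each server next frees up
    heapq.heapify(free_at)
    completions = []
    for arrival, duration in sorted(jobs):
        start = max(arrival, heapq.heappop(free_at))
        finish = start + duration
        heapq.heappush(free_at, finish)
        completions.append(finish)
    return completions

# Three jobs (arrival, duration) contend for two servers.
print(simulate([(0.0, 5.0), (0.0, 3.0), (1.0, 2.0)], servers=2))  # → [3.0, 5.0, 5.0]
```

A real tool layers resource calendars, routing, and stochastic durations on top of exactly this event-queue core.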

  10. Understanding synthesis imaging dynamic range

    Science.gov (United States)

    Braun, R.

    2013-03-01

    We develop a general framework for quantifying the many different contributions to the noise budget of an image made with an array of dishes or aperture array stations. Each noise contribution to the visibility data is associated with a relevant correlation timescale and frequency bandwidth so that the net impact on a complete observation can be assessed when a particular effect is not captured in the instrumental calibration. All quantities are parameterised as a function of observing frequency and visibility baseline length. We apply the resulting noise budget analysis to a wide range of existing and planned telescope systems that will operate between about 100 MHz and 5 GHz to ascertain the magnitude of the calibration challenges that they must overcome to achieve thermal-noise-limited performance. We conclude that the calibration challenges are increased in several respects by small dimensions of the dishes or aperture array stations. It will be more challenging to achieve thermal-noise-limited performance using 15 m class dishes rather than the 25 m dishes of current arrays. Some of the performance risks are mitigated by the deployment of phased array feeds, and more so by the choice of an (alt,az,pol) mount, although a larger dish diameter offers the best prospects for risk mitigation. Many improvements to imaging performance can be anticipated at the expense of greater complexity in calibration algorithms. However, a fundamental limitation is ultimately imposed by an insufficient number of data constraints relative to calibration variables. The upcoming aperture array systems will be operating in a regime that has never previously been addressed, where a wide range of effects are expected to exceed the thermal noise by two to three orders of magnitude. Achieving routine thermal-noise-limited imaging performance with these systems presents an extreme challenge. The magnitude of that challenge is inversely related to the aperture array station diameter.
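
The thermal-noise floor such calibration must reach follows from the standard radiometer equation for an N-element interferometer. A small sketch (the SEFD, bandwidth, and integration time are illustrative values, not taken from the paper):

```python
import math

def image_noise_jy(sefd_jy, n_antennas, bandwidth_hz, t_int_s, n_pol=2):
    """Thermal noise (Jy) in a naturally weighted synthesis image, from
    the standard radiometer equation for an N-element interferometer
    with N(N-1)/2 baselines and n_pol polarizations."""
    n_baselines = n_antennas * (n_antennas - 1) / 2
    return sefd_jy / math.sqrt(2 * n_pol * n_baselines * bandwidth_hz * t_int_s)

# Illustrative numbers: 100 dishes of 400 Jy SEFD, 100 MHz bandwidth, 1 h.
noise = image_noise_jy(sefd_jy=400.0, n_antennas=100,
                       bandwidth_hz=100e6, t_int_s=3600.0)
print(f"rms = {noise * 1e6:.1f} microJy/beam")
```

Against a microjansky-level floor like this, a calibration residual two to three orders of magnitude larger is what separates routine from extreme imaging challenges.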

  11. Luminescence imaging of water during proton-beam irradiation for range estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka [Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, Nagoya 461-8673 (Japan); Toshito, Toshiyuki [Department of Proton Therapy Physics, Nagoya Proton Therapy Center, Nagoya City West Medical Center, Nagoya 462-8508 (Japan)

    2015-11-15

    Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.

  12. Luminescence imaging of water during proton-beam irradiation for range estimation

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Okumura, Satoshi; Komori, Masataka; Toshito, Toshiyuki

    2015-01-01

    Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy

  13. A novel track imaging system as a range counter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z. [National Institute of Radiological Sciences (Japan); Matsufuji, N. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kanayama, S. [Chiba University (Japan); Ishida, A. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kohno, T. [Tokyo Institute of Technology (Japan); Koba, Y.; Sekiguchi, M.; Kitagawa, A.; Murakami, T. [National Institute of Radiological Sciences (Japan)

    2016-05-01

    An image-intensified, camera-based track imaging system has been developed to measure the tracks of ions in a scintillator block. To study the performance of the detector unit in the system, two types of scintillators, a dosimetrically tissue-equivalent plastic scintillator EJ-240 and a CsI(Tl) scintillator, were separately irradiated with carbon ion ({sup 12}C) beams of therapeutic energy from HIMAC at NIRS. The images of individual ion tracks in the scintillators were acquired by the newly developed track imaging system. The ranges reconstructed from the images are reported here. The range resolution of the measurements is 1.8 mm for 290 MeV/u carbon ions, which is considered a significant improvement on the energy resolution of the conventional ΔE/E method. The detector is compact and easy to handle, and it can fit inside treatment rooms for in-situ studies, as well as satisfy clinical quality assurance purposes.

  14. Bubble feature extracting based on image processing of coal flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Wang, F.; Wang, Y.; Lu, M.; Liu, W. [China University of Mining and Technology, Beijing (China). Dept of Chemical Engineering and Environment

    2001-11-01

    Using image processing, the contrast between the bubbles on the surface of flotation froth and the image background was enhanced, and the bubble edges were extracted. Thus a model of the relation between the statistical features of the bubbles in the image and the cleaned coal can be established. It is feasible to extract the bubbles by processing the froth image of coal flotation on the basis of analysing the bubble shape. By processing 51 groups of images sampled from a laboratory column, it was found that histogram equalization of the image gradation and median filtering can obviously improve the dynamic contrast range and the brightness of bubbles. Finally, threshold cutting and bubble edge detection for extracting the bubbles are also discussed, to describe bubble features, such as size and shape, in the froth image and to distinguish froth images of coal flotation. 6 refs., 3 figs.
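
The two preprocessing steps the authors single out, histogram equalization and median filtering, can be sketched in NumPy (the window sizes and the synthetic low-contrast image are illustrative assumptions):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization: map grey levels through the normalized
    cumulative histogram to spread a dull image over the full 0-255
    range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf[img] * 255).astype(np.uint8)

def median3(img):
    """3 x 3 median filter to suppress impulse noise before edge
    detection."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [pad[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

rng = np.random.default_rng(2)
dull = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)  # low contrast
eq = hist_equalize(dull)
```

Equalizing first and median-filtering second keeps the filter from having to work on the compressed grey range.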

  15. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high-performance detector arrays. The aforementioned hardware development effort is aimed at systems that would integrate image acquisition and image processing.

  16. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    Increasing use of computerized methods for diagnostical imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage and documentation and communication as the main points of interest for application of digital image processing. As to the purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation is very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  17. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. To improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. To improve the rotation invariance of the slice image, three new improved feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.

  18. 110 °C range athermalization of wavefront coding infrared imaging systems

    Science.gov (United States)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    110 °C range athermalization is significant but difficult for designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with less manufacturing errors and a decoding method based on shrinkage function. The qualitative experiments prove that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) achieving a decoded image being approximate to its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.
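
The MSSIM figure quoted above averages the standard SSIM index over local windows. A simplified sketch with non-overlapping windows (the window size and constants follow the common SSIM defaults; this is not the authors' exact implementation):

```python
import numpy as np

def mssim(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2, win=8):
    """Mean structural similarity over non-overlapping win x win windows,
    combining luminance, contrast, and structure terms per window."""
    scores = []
    for y in range(0, a.shape[0] - win + 1, win):
        for x in range(0, a.shape[1] - win + 1, win):
            pa = a[y:y + win, x:x + win].astype(float)
            pb = b[y:y + win, x:x + win].astype(float)
            ma, mb = pa.mean(), pb.mean()
            va, vb = pa.var(), pb.var()
            cov = ((pa - ma) * (pb - mb)).mean()
            scores.append(((2 * ma * mb + c1) * (2 * cov + c2))
                          / ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2)))
    return float(np.mean(scores))

ramp = np.tile(np.arange(64, dtype=float) * 4.0, (64, 1))  # test image
```

Identical images score 1.0; a decoded frame with MSSIM above 0.85, as reported, is structurally close to the in-focus reference.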

  19. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
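
The automated trigger described, combining pose, motion, and image quality, can be sketched as a simple gate on three estimates (all thresholds and the Laplacian-variance sharpness metric are illustrative assumptions, not the paper's fusion scheme):

```python
import numpy as np

def sharpness(gray):
    """Blur metric: variance of a 4-neighbour Laplacian (higher = sharper)."""
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] + gray[1:-1, :-2]
           + gray[1:-1, 2:] - 4.0 * gray[1:-1, 1:-1])
    return float(lap.var())

def should_capture(pose_err_deg, motion_px, gray,
                   max_err=2.0, max_motion=1.5, min_sharp=50.0):
    """Fire the shutter only when the pose estimate says the device is
    aligned, the motion estimate says it is still, and the preview frame
    is sharp. All thresholds here are illustrative."""
    return (pose_err_deg <= max_err and motion_px <= max_motion
            and sharpness(gray) >= min_sharp)

rng = np.random.default_rng(3)
preview = rng.integers(0, 255, size=(64, 64)).astype(float)  # detail-rich frame
```

Gating on all three conditions at once is what removes the shutter press from the user and makes capture repeatable in changing environments.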

  20. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Yutao Wu; Daoliang Li; Zhenbo Li; Wenzhu Yang

    2014-01-01

    In the textile industry, it is always the case that cotton products are constitutive of many types of foreign fibers which affect the overall quality of cotton products. As the foundation of the foreign fiber automated inspection, image process exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps, image block, image pre-decision, image background extra...

  1. SINGLE IMAGE CAMERA CALIBRATION IN CLOSE RANGE PHOTOGRAMMETRY FOR SOLDER JOINT ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. Heinemann

    2016-06-01

    Full Text Available Printed Circuit Boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure the correct function of a PCB, a defined amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close range photogrammetry allows the solder volume to be determined and, if necessary, subsequently corrected. Photogrammetry is an image based method for three dimensional reconstruction from two dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two dimensional calibration targets. Therefore a special calibration target was developed and manufactured which allows for single image camera calibration.

  2. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances made in academia. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  3. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will enable an automated system for 'reading' X-ray pictures. (orig.) [de

  4. Image perception and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to original perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will enable an automated system for 'reading' X-ray pictures.

  5. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire inspection.

  6. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization (Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization); Psychophysical Vision Properties (Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model); Photometry and Colorimetry (Photometry; Color Matching; Colorimetry Concepts; Color Spaces). DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction (Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems); Image Quantization (Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization). DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization (Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation); Superposition and Convolution (Finite-Area Superp...

  7. ARMA processing for NDE ultrasonic imaging

    International Nuclear Information System (INIS)

    Pao, Y.H.; El-Sherbini, A.

    1984-01-01

    This chapter describes a new method of acoustic image reconstruction for an active multiple-sensor system operating in the reflection mode in the Fresnel region. The method is based on the use of an ARMA model for the reconstruction process. Algorithms for estimating the model parameters are presented and computer simulation results are shown. The AR coefficients are obtained independently of the MA coefficients. It is shown that when the ARMA reconstruction method is augmented with the multifrequency approach, it can provide a three-dimensional reconstructed image with high lateral and range resolutions, a high signal-to-noise ratio and reduced sidelobe levels. The proposed ARMA reconstruction method results in high quality images and better performance than obtainable with conventional methods. The advantages of the method are very high lateral resolution with a limited number of sensors, reduced sidelobe levels, and a high signal-to-noise ratio.
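    As an illustration of the AR half of such a model, AR coefficients can be estimated from sample autocovariances via the Yule-Walker equations (a one-dimensional sketch only; the chapter's method operates on 2D acoustic data, and the synthetic AR(2) process here is an assumption for demonstration):

```python
import numpy as np

def autocov(x, lag):
    """Biased sample autocovariance at the given lag."""
    x = x - x.mean()
    n = len(x)
    return float(np.dot(x[:n - lag], x[lag:]) / n)

def yule_walker(x, order):
    """Solve the Yule-Walker equations R a = r for the AR coefficients."""
    r = np.array([autocov(x, k) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# synthetic AR(2): x[t] = 0.6 x[t-1] - 0.2 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t]

a = yule_walker(x, 2)   # estimates close to [0.6, -0.2]
```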

  8. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  10. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Image processing boards for personal computers, as well as image processing systems for workstations, are now available and widely applied in medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides a computer system with vision capability, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to various fields such as remote measurement, object recognition and decision-making in adverse environments, and the analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are also analysed with a view to proposing ideas for future applications. (Author)

  11. Range-Gated Laser Stroboscopic Imaging for Night Remote Surveillance

    International Nuclear Information System (INIS)

    Xin-Wei, Wang; Yan, Zhou; Song-Tao, Fan; Jun, He; Yu-Liang, Liu

    2010-01-01

    For night remote surveillance, we present a method, range-gated laser stroboscopic imaging (RGLSI), which uses a new kind of time delay integration mode to integrate target signals so that night remote surveillance can be realized with a low-energy illumination laser. The time delay integration in this method has no influence on the video frame rate. Compared with traditional range-gated laser imaging, RGLSI can reduce scintillation and target speckle effects and significantly improve the image signal-to-noise ratio. Even under low light level and low visibility conditions, the RGLSI system can work effectively. In a preliminary experiment, we detected and recognized a railway bridge one kilometer away under a visibility of six kilometers, with an effective illumination energy of only 29.5 μJ
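    The SNR gain from integrating several gated exposures can be sketched with a toy model: for uncorrelated zero-mean noise, averaging N frames reduces the noise standard deviation roughly by a factor of sqrt(N). The frame data below are synthetic, not from a gated-imaging system:

```python
import numpy as np

def accumulate_gated_frames(frames):
    """Average N gated exposures; uncorrelated noise drops roughly as sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(1)
signal = np.full((64, 64), 10.0)                     # idealized gated return
frames = [signal + rng.normal(0.0, 5.0, signal.shape) for _ in range(16)]

single_noise = np.std(frames[0] - signal)            # ~5
stacked_noise = np.std(accumulate_gated_frames(frames) - signal)  # ~5/4
```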

  12. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, cotton products routinely contain many types of foreign fibers, which affect their overall quality. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images and the gray-scale of the transformed images is inverted; the whole image is then divided into several blocks. Thereafter, image pre-decision judges which image blocks contain the target foreign fiber. The blocks that possibly contain target images are segmented via Otsu's method after background extraction and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this method of segmentation has the advantage of accuracy and speed over other segmentation methods. Moreover, the method connects fractured target images, yielding an intact and clear foreign fiber target image.
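    The block-wise pre-decision and Otsu segmentation steps can be sketched as follows (the contrast-based pre-decision rule, block size, and contrast threshold are assumptions; the paper's actual pre-decision criterion is not given here):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance
    (expects integer gray levels in 0..255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def segment_blocks(gray, block=64, contrast_min=20):
    """Pre-decide per block, then run Otsu only on blocks likely to hold fibers."""
    out = np.zeros_like(gray, dtype=bool)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = gray[y:y + block, x:x + block]
            if int(b.max()) - int(b.min()) < contrast_min:
                continue                       # skip near-uniform background
            out[y:y + block, x:x + block] = b > otsu_threshold(b)
    return out
```

    Near-uniform blocks are skipped entirely, which is where the speed advantage of block-wise processing comes from.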

  14. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    Science.gov (United States)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
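    For context, a standard global tone-mapping operator (the Reinhard operator, not the cone-response model proposed in this record) can be sketched in a few lines; the luminance weights are the Rec. 709 coefficients:

```python
import numpy as np

def reinhard_tonemap(hdr_rgb, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale by log-average luminance, then compress."""
    lum = (0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1]
           + 0.0722 * hdr_rgb[..., 2])
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    mapped = scaled / (1.0 + scaled)          # compress luminance into [0, 1)
    ratio = mapped / (lum + eps)              # preserve per-pixel chromaticity
    return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)
```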

  15. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
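    The material-dependent blurring step can be sketched by giving each material label its own kernel width (a Gaussian stand-in for the Monte Carlo range profiles; the sigma values are illustrative, not PeneloPET output):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def positron_blur(activity, material_map, sigma_by_material):
    """Blur the activity image with a kernel chosen per material label,
    as would be applied inside the forward projection step."""
    out = np.zeros_like(activity, dtype=float)
    for label, sigma in sigma_by_material.items():
        # blur only the activity lying in this material, then sum the pieces
        out += gaussian_filter(np.where(material_map == label, activity, 0.0),
                               sigma)
    return out
```

    Summing the per-material blurs approximately conserves total counts while letting a low-density region (e.g. lung) spread the activity over a wider kernel than soft tissue.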

  16. Study of CT-based positron range correction in high resolution 3D PET imaging

    International Nuclear Information System (INIS)

    Cal-Gonzalez, J.; Herraiz, J.L.; Espana, S.; Vicente, E.; Herranz, E.; Desco, M.; Vaquero, J.J.; Udias, J.M.

    2011-01-01

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.

  17. Influence of long-range Coulomb interaction in velocity map imaging.

    Science.gov (United States)

    Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C

    2017-07-07

    The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.

  18. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or, more often, on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  19. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT

    Directory of Open Access Journals (Sweden)

    Boris Jutzi

    2011-09-01

    Full Text Available The real and imaginary parts are proposed as an alternative to the usual Polar representation of complex-valued images. It is proven that the transformation from Polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: Range Imaging, Laser Scanning, and Structured Light Projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect).

  20. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image. Such interesting structures often correspond to the discontinuities (edges... The proposed algorithm has been evaluated using a variety of standard images and its performance has been compared against several de-noising algorithms known from the prior art. Experimental results show that the proposed algorithm preserves the edges better and in most cases, improves the measured visual quality of the denoised images in comparison to the existing methods known from the literature. The improvement is obtained without excessive computational cost, and the algorithm works well on a wide range of different types of noise.
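    The wavelet-shrinkage half of such an algorithm can be sketched with a one-level Haar transform and soft thresholding (the complex-diffusion stage is omitted; the threshold value and the Haar basis are assumptions, chosen to keep the sketch dependency-free):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0          # rows: average
    d = (img[0::2] - img[1::2]) / 2.0          # rows: detail
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def ihaar2d(ll, hl, lh, hh):
    """Exact inverse of haar2d."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(noisy, threshold):
    """Soft-threshold the detail subbands; keep the approximation band intact."""
    ll, hl, lh, hh = haar2d(noisy)
    return ihaar2d(ll, soft(hl, threshold), soft(lh, threshold),
                   soft(hh, threshold))
```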

  1. Calibration and control for range imaging in mobile robot navigation

    Energy Technology Data Exchange (ETDEWEB)

    Dorum, O.H. [Norges Tekniske Hoegskole, Trondheim (Norway). Div. of Computer Systems and Telematics; Hoover, A. [University of South Florida, Tampa, FL (United States). Dept. of Computer Science and Engineering; Jones, J.P. [Oak Ridge National Lab., TN (United States)

    1994-06-01

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.
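    The sensing constraints described (a limited field of view plus a standoff distance) can be encoded as a simple configuration-space predicate; this is a 2D sketch with illustrative parameter values, not the paper's planner:

```python
import numpy as np

def point_sensable(robot_xy, heading_rad, point_xy,
                   fov_rad=np.deg2rad(60), standoff=0.5, max_range=10.0):
    """Can a surface point be sensed from this configuration?

    The point must lie beyond the standoff distance, within the maximum
    range, and inside the sensor's angular field of view.
    """
    d = np.asarray(point_xy, float) - np.asarray(robot_xy, float)
    r = float(np.hypot(d[0], d[1]))
    if not (standoff <= r <= max_range):
        return False
    bearing = np.arctan2(d[1], d[0]) - heading_rad
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return abs(bearing) <= fov_rad / 2
```

    A planner can query this predicate at candidate configurations, which is one way to enrich configuration space with "what can be sensed from here" information as the abstract suggests.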

  2. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (scl) to obtain down-scaled images, the down-scaled images are subjected to motion-compensated processing (ME UPC) to obtain motion-compensated images, and the motion-compensated images are up-scaled (sc2) to obtain up-scaled images.

  3. Deeply trapped electrons in imaging plates and their utilization for extending the dynamic range

    International Nuclear Information System (INIS)

    Ohuchi, Hiroko; Kondo, Yasuhiro

    2010-01-01

    The absorption spectra of deep centers in an imaging plate (IP) made of BaFBr0.85I0.15:Eu2+ have been studied in the ultraviolet region. Electrons trapped in deep centers are considered to be the cause of unerasable and reappearing latent images in IPs over-irradiated with X-rays. Deep centers showed a dominant peak at around 320 nm, followed by two small peaks at around 345 and 380 nm. By utilizing deeply trapped electrons, we have attempted to extend the dynamic range of an IP. The IP was irradiated by 150-kV X-rays with doses from 8.07 mGy to 80.7 Gy. Reading out the latent image by the stimulation of Eu2+ luminescence with a 633-nm He-Ne laser light from a conventional Fuji reader showed a linear relationship with irradiated dose up to 0.8 Gy, becoming non-linear thereafter. After full erasure with visible light, unerasable latent images were read out using 635-nm semiconductor laser light combined with a photon-counting detection system. The dose-response curve so obtained extended the dynamic range by a further two orders of magnitude, up to 80.7 Gy. The comprehensive results indicate that electrons supplied from the deep centers to the F centers provided the extended dynamic range after the F centers became saturated. Based on these facts, a model of the excitation of deeply trapped electrons and the PSL processes is proposed.

  4. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structure measurement, topographic surveying, architectural and archeological surveying, etc. Non-contact photogrammetry provides methods to determine 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step by step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines will not remain parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained, so multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points, a more general and useful approach, namely bundle adjustment, is used. At the end two real cases, an excavation and a tower, are reconstructed.
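    The triangulation at the heart of the projective-reconstruction step can be sketched with a linear DLT solve (synthetic noiseless cameras; a real pipeline would refine the result with bundle adjustment, as the abstract describes):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two projection matrices.

    Each image point contributes two rows of the homogeneous system A X = 0;
    the solution is the right singular vector with the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# two synthetic cameras: the identity camera and a translated one
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)   # recovers X_true for noiseless data
```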

  5. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  6. High-dynamic-range microscope imaging based on exposure bracketing in full-field optical coherence tomography.

    Science.gov (United States)

    Leong-Hoi, Audrey; Montgomery, Paul C; Serio, Bruno; Twardowski, Patrice; Uhring, Wilfried

    2016-04-01

    By applying the proposed high-dynamic-range (HDR) technique based on exposure bracketing, we demonstrate a meaningful reduction in the spatial noise in image frames acquired with a CCD camera so as to improve the fringe contrast in full-field optical coherence tomography (FF-OCT). This new signal processing method thus allows improved probing within transparent or semitransparent samples. The proposed method is demonstrated on 3 μm thick transparent polymer films of Mylar, which, due to their transparency, produce low contrast fringe patterns in white-light interference microscopy. High-resolution tomographic analysis is performed using the technique. After performing appropriate signal processing, resulting XZ sections are observed. Submicrometer-sized defects can be lost in the noise that is present in the CCD images. With the proposed method, we show that by increasing the signal-to-noise ratio of the images, submicrometer-sized defect structures can thus be detected.
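    The exposure-bracketing merge can be sketched as a weighted average of linearised frames (a generic HDR merge, not the authors' exact FF-OCT pipeline; the validity bounds 0.01/0.99 and the hat-shaped weighting are assumptions):

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Weighted-average radiance estimate from bracketed linear exposures.

    Each frame is assumed linear in [0, 1]; pixels near the under- and
    over-exposure limits are excluded, the rest get hat-shaped weights.
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposures):
        valid = (img > 0.01) & (img < 0.99)
        w = np.where(valid, np.minimum(img, 1.0 - img), 0.0)
        num += w * img / t                      # radiance estimate per frame
        den += w
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)
```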

  7. Target recognition of ladar range images using even-order Zernike moments.

    Science.gov (United States)

    Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi

    2012-11-01

    Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target in a range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target in the range image. It is found that both the rotation invariance and the classification performance of the even-order ZMs are better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
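The rotation invariance that motivates using ZM magnitudes can be checked with a direct implementation of a Zernike moment on a pixel grid. This is an illustrative sketch only; the paper's serial-parallel BPNN classifier is not reproduced.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image mapped onto the unit disk.

    The magnitude |Z_nm| is rotation invariant, which is what makes ZMs
    attractive for azimuth-independent target recognition.
    """
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    # Map pixel centers to [-1, 1] x [-1, 1].
    x = (2 * xs - N + 1) / (N - 1)
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho); requires n - |m| even and non-negative.
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[inside] * V[inside])

rng = np.random.default_rng(0)
img = rng.random((64, 64))
z = zernike_moment(img, 4, 2)
z_rot = zernike_moment(np.rot90(img), 4, 2)  # |Z| unchanged by rotation
```

Because a 90-degree rotation maps the sampling grid onto itself, the magnitudes agree to floating-point precision here; for arbitrary angles the invariance holds approximately, limited by resampling error.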

  8. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Image quality dependence on image processing software in computed radiography. ... Agfa CR readers use MUSICA software, and an upgrade with significantly different image ...

  9. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  10. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner-structure and material distributions of various samples. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam formatting assembly and of a plane position sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two dimensional image. This image contains information about the composition and structure of the sample-interior, as a result of the interaction of neutrons by penetrating matter. Due to rapid developments of detector and computer technology as well as deployments in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten and furthermore, to identify and develop two and three dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of a high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and a good linearity. The detector output should be inherently digitized. The key components of the detector system selected on the basis of these requirements consist of a neutron sensitive scintillator screen, a CCD-camera and a mirror to reflect the light emitted by the scintillator to the CCD-camera. This detector design enables to place the camera out of the direct neutron beam. The whole assembly is placed in a light shielded aluminum box. The camera is controlled by a

  12. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What Is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  13. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial environment. Then an overview of image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are given, including automated visual inspection, process control, part identification and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  14. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence-point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers); this thread also gathers user input for digital zoom and pan and sends it to the processing thread.
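The convergence-point computation described above (vertical-edge keypoints, block matching, disparity-histogram extrema) can be sketched on a synthetic stereo pair. The thresholds and block sizes below are arbitrary illustrative choices, not those of the player, and the GPU implementation is not reproduced.

```python
import numpy as np

def scene_disparity_range(left, right, block=8, max_disp=16):
    """Estimate the scene disparity range from a rectified stereo pair.

    Keypoints are pixels on strong vertical edges in the left image; a
    surrounding block is matched along the scanline in the right image by
    SAD, and the extrema of the disparity histogram give the range.
    """
    h, w = left.shape
    grad = np.abs(np.diff(left, axis=1))            # vertical-edge strength
    disparities = []
    for y in range(block, h - block):
        for x in range(block + max_disp, w - block):
            if grad[y, x] < 0.5:                    # keep reliable keypoints only
                continue
            patch = left[y - block:y + block, x - block:x + block]
            best, best_d = np.inf, 0
            for d in range(max_disp):               # search along the scanline
                cand = right[y - block:y + block, x - d - block:x - d + block]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disparities.append(best_d)
    hist, _ = np.histogram(disparities, bins=np.arange(max_disp + 1))
    nz = np.nonzero(hist)[0]
    return nz.min(), nz.max()                       # extrema of the histogram

# Synthetic pair: the right image is the left shifted by 5 pixels.
rng = np.random.default_rng(1)
left = rng.random((48, 64))
true_disp = 5
right = np.roll(left, -true_disp, axis=1)
lo, hi = scene_disparity_range(left, right)
```

The player would then shift the left and right images by amounts derived from this range to place the desired region at convergence.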

  15. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical image processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques, including volume rendering and virtual endoscopy, are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state of the art of medical image processing methods, practical implications for the radiologist's daily work and future aspects. (orig.) [de

  16. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users can, by investing much less money, make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advancements in the field of computer graphics and, consequently, 'image processing' has emerged as a separate independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudo-color images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colors in pursuit of more effective graphics. Structural engineers use image processing to examine weld x-rays to search for imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)

  17. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  18. Increasing the Dynamic Range of Synthetic Aperture Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2014-01-01

    images. The emissions for the two imaging modes are interleaved at a 1-to-1 ratio, providing a high frame rate equal to the effective pulse repetition frequency of each imaging mode. The direction of the flow is estimated, and the velocity is then determined in that direction. This method works for all angles...... standard deviations are 1.59% and 6.12%, respectively. The presented method can improve the estimates by synthesizing a lower pulse repetition frequency, thereby increasing the dynamic range of the vector velocity imaging.

  19. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  20. Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range

    Science.gov (United States)

    Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery

    2007-06-01

    Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in the megabar pressure range. A graphical interface clearly and simply allows users to input the data for the x-ray image calculation: properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry; it also presents the calculation results and efficiently transmits them to other software for processing. The calculation time is minimized, which makes it possible to perform calculations interactively. The software is written in the MATLAB system.

  1. Imaging using long range dipolar field effects

    International Nuclear Information System (INIS)

    Gutteridge, Sarah

    2002-01-01

    The work in this thesis has been undertaken by the author, except where indicated in reference, within the Magnetic Resonance Centre, at the University of Nottingham during the period from October 1998 to March 2001. This thesis details the different characteristics of the long range dipolar field and its application to magnetic resonance imaging. The long range dipolar field is usually neglected in nuclear magnetic resonance experiments, as molecular tumbling decouples its effect at short distances. However, in highly polarised samples residual long range components have a significant effect on the evolution of the magnetisation, giving rise to multiple spin echoes and unexpected quantum coherences. Three applications utilising these dipolar field effects are documented in this thesis. The first demonstrates the spatial sensitivity of the signal generated via dipolar field effects in structured liquid state samples. The second utilises the signal produced by the dipolar field to create proton spin density maps. These maps directly yield an absolute value for the water content of the sample that is unaffected by relaxation and any RF inhomogeneity or calibration errors in the radio frequency pulses applied. It has also been suggested that the signal generated by dipolar field effects may provide novel contrast in functional magnetic resonance imaging. In the third application, the effects of microscopic susceptibility variation on the signal are studied and the relaxation rate of the signal is compared to that of a conventional spin echo. (author)

  2. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also deal with HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
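The backward-compatible idea, a tone-mapped base layer that a legacy decoder can show plus a residual that an HDR-aware decoder uses for reconstruction, can be sketched as follows. The global operator L/(1+L) is a stand-in for whichever tone mapper is actually used, and no JPEG entropy coding or quantization is modeled, so this toy round trip is lossless where the real scheme is lossy.

```python
import numpy as np

def encode_backward_compatible(hdr):
    """Split an HDR image into a tone-mapped LDR layer plus a ratio residual.

    The LDR layer is what a legacy decoder would display; the log-ratio
    residual lets an HDR-aware decoder restore the original values.
    """
    ldr = hdr / (1.0 + hdr)                  # tone-mapped base layer in [0, 1)
    residual = np.log(hdr / np.maximum(ldr, 1e-12))
    return ldr, residual

def decode_hdr(ldr, residual):
    """HDR-aware decoding: re-expand the base layer with the residual."""
    return ldr * np.exp(residual)

# Hypothetical pixel values spanning four orders of magnitude.
hdr = np.array([0.01, 0.5, 4.0, 120.0])
ldr, residual = encode_backward_compatible(hdr)
restored = decode_hdr(ldr, residual)
```

In the proposed architecture the base layer and the residual would each be compressed as ordinary JPEG payloads, so legacy decoders simply ignore the residual.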

  3. The influence of CT image noise on proton range calculation in radiotherapy planning

    International Nuclear Information System (INIS)

    Chvetsov, Alexei V; Paige, Sandra L

    2010-01-01

    The purpose of this note is to evaluate the relationship between the stochastic errors in CT numbers and the standard deviation of the computed proton beam range in radiotherapy planning. The stochastic voxel-to-voxel variation in CT numbers, called 'noise', may be due to signal registration, processing and the numerical image reconstruction technique. Noise in CT images may cause a deviation of the computed proton range from the physical proton range, even assuming that the error due to the CT number-stopping power calibration is removed. To obtain the probability density function (PDF) of the computed proton range, we have used the continuous slowing down approximation (CSDA) and uncorrelated white Gaussian noise along the proton path. The model of white noise was accepted because, for a slice-based fan-beam CT scanner, the power-spectrum properties apply only to the axial (x, y) domain and the noise is uncorrelated in the z domain. However, the possible influence of the noise power spectrum on the standard deviation of the range should be investigated in the future. A random number generator was utilized for noise simulation, and this procedure was iteratively repeated to obtain convergence of the range PDF, which approached a Gaussian distribution. We showed that the standard deviation of the range, σ, increases linearly with the initial proton energy, the computational grid size and the standard deviation of the voxel values. The 95% confidence interval width of the range PDF, which is defined as 4σ, may reach 0.6 cm for an initial proton energy of 200 MeV, a computational grid of 0.25 cm and a 5% standard deviation of CT voxel values. Our results show that the range uncertainty due to random errors in CT numbers may be significant and comparable to the uncertainties due to the calibration of CT numbers. (note)
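The Monte Carlo procedure of the note can be imitated with a much-simplified water-equivalent model: each trial perturbs the relative stopping power of every voxel with white Gaussian noise and records where the accumulated water-equivalent path reaches a fixed CSDA range. The range value, grid size and noise levels below are placeholders, and linear interpolation replaces the note's exact CSDA integration.

```python
import numpy as np

def range_std(noise_std, n_trials=2000, grid=0.25, r0=26.0, seed=0):
    """Monte Carlo sketch of computed-range spread due to CT noise.

    noise_std : relative standard deviation of the voxel stopping power.
    grid      : voxel size in cm; r0 : nominal CSDA range in cm.
    Returns the standard deviation of the computed geometric range.
    """
    rng = np.random.default_rng(seed)
    n_vox = int(np.ceil(r0 / grid)) * 2
    depths = grid * np.arange(1, n_vox + 1)
    ranges = np.empty(n_trials)
    for i in range(n_trials):
        rsp = 1.0 + noise_std * rng.standard_normal(n_vox)
        wepl = np.cumsum(rsp * grid)      # water-equivalent depth per voxel
        # Depth at which the accumulated WEPL reaches the CSDA range r0.
        ranges[i] = np.interp(r0, wepl, depths)
    return ranges.std()

# The spread grows roughly linearly with the voxel noise level.
s1, s2 = range_std(0.025), range_std(0.05)
```

Doubling the voxel noise roughly doubles the range spread, consistent with the linear dependence reported in the note.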

  4. Passive ranging using a filter-based non-imaging method based on oxygen absorption.

    Science.gov (United States)

    Yu, Hao; Liu, Bingqi; Yan, Zongqun; Zhang, Yu

    2017-10-01

    To solve the problem of poor real-time performance caused by a hyperspectral imaging system and to simplify the design of passive ranging technology based on the oxygen absorption spectrum, a filter-based non-imaging ranging method is proposed. In this method, three bandpass filters are used to obtain the source radiation intensities located in the oxygen absorption band near 762 nm and in the band's left and right non-absorption shoulders, and a photomultiplier tube is used as the non-imaging sensor of the passive ranging system. Range is estimated by comparing the calculated values of the band-average transmission due to oxygen absorption, τ_O2, against the predicted curve of τ_O2 versus range. The method is tested under short-range conditions. Accuracy of 6.5% is achieved with the designed experimental ranging system at a range of 400 m.
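The ranging step can be sketched in two lines of arithmetic: form τ_O2 from the three filter intensities, then invert the predicted τ-versus-range curve. The exponential model curve and the absorption coefficient below are stand-ins for illustration, not the paper's radiative-transfer prediction.

```python
import numpy as np

def band_transmission(i_abs, i_left, i_right):
    """Band-average O2 transmission from the three filter measurements.

    The two non-absorbing shoulder bands estimate the unattenuated source
    level; the 762 nm band carries the absorption signature.
    """
    return i_abs / (0.5 * (i_left + i_right))

def estimate_range(tau, ranges, tau_curve):
    """Invert the (monotonically decreasing) predicted tau-vs-range curve."""
    # np.interp needs increasing x, so reverse the decreasing curve.
    return np.interp(tau, tau_curve[::-1], ranges[::-1])

# Hypothetical Beer-Lambert-style model curve: tau = exp(-k * range).
k = 2.3e-3                               # effective absorption per metre (assumed)
ranges = np.linspace(0.0, 1000.0, 2001)
tau_curve = np.exp(-k * ranges)

# Simulated measurement at 400 m: shoulders unattenuated, centre absorbed.
i_left, i_right = 1.02, 0.98
tau_meas = band_transmission(np.exp(-k * 400.0), i_left, i_right)
r_est = estimate_range(tau_meas, ranges, tau_curve)
```

Averaging the two shoulder bands cancels a smooth spectral slope in the source, which is why both sides of the absorption band are measured.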

  5. TECHNOLOGIES OF BRAIN IMAGES PROCESSING

    Directory of Open Access Journals (Sweden)

    O.M. Klyuchko

    2017-12-01

    Full Text Available The purpose of the present research was to analyze modern methods of processing biological images that are applied before storage in databases for biotechnological purposes; the databases were further incorporated into web-based digital systems. Examples of such information systems are described for two levels of biological material organization: databases for storing data of histological analysis and of the whole brain. Methods of neuroimage processing for an electronic brain atlas are considered. It is shown that certain pathological features can be revealed by histological image processing; several medical diagnostic techniques (for certain brain pathologies, etc.) as well as a few biotechnological methods are based on such effects. Algorithms of image processing are suggested. The electronic brain atlas is described in detail, in a form convenient for professionals in different fields. Approaches to brain atlas elaboration, a "composite" scheme for large deformations, and several methods of mathematical image processing are described as well.

  6. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of temperature distribution of an object’s surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body’s surface can contribute to the diagnosis and follow-up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and of temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI). A number of indices have been developed by researchers to that end. In this study a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing the temperature distribution. Analysis and comparison of these indices result in evaluating the temperature distribution pattern of the back trunk expected in subjects who are healthy with regard to spinal problems.
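Index extraction from symmetric ROIs can be sketched with first-order statistics. The specific indices of the study are not reproduced; the index names and the simulated ROI temperatures below are illustrative assumptions.

```python
import numpy as np

def roi_indices(roi):
    """First-order statistical parameters of a thermogram ROI (deg C)."""
    return {"mean": roi.mean(), "std": roi.std(),
            "range": roi.max() - roi.min()}

def asymmetry(left_roi, right_roi):
    """Absolute differences of the indices between two symmetric ROIs.

    Large differences flag an asymmetric temperature pattern between the
    two sides of the back trunk.
    """
    li, ri = roi_indices(left_roi), roi_indices(right_roi)
    return {k: abs(li[k] - ri[k]) for k in li}

# Simulated symmetric ROIs: the right side is about 1.1 deg C warmer.
rng = np.random.default_rng(2)
left = 33.0 + 0.3 * rng.standard_normal((40, 40))
right = 34.1 + 0.3 * rng.standard_normal((40, 40))
delta = asymmetry(left, right)
```

Here the mean-temperature difference dominates while the spread of both ROIs is similar, the kind of pattern such indices are designed to quantify.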

  7. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: First to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly to use this knowledge to develop image processing...... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed implementation, development and evaluation of SASB using...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D project co-sponsored by BK Medical ApS., with the commercial goal to improve the image quality of BK Medicals scanners. Currently BK Medical employ a simple conventional delay-and-sum beamformer to generate...

  8. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
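The zero-filling interpolation at the head of the processing chain (forward FFT, pad the spectrum to 8192 samples, inverse FFT) can be sketched in NumPy on a synthetic spectral fringe; the GPU/CUDA implementation and the later wavelength-to-wavenumber and Hilbert stages are not reproduced here.

```python
import numpy as np

def zero_fill(signal, size):
    """Zero-filling interpolation: FFT, symmetric zero padding, inverse FFT.

    Band-limited upsampling that preserves the fringe frequencies, used
    before the wavelength-to-wavenumber resampling step.
    """
    n = len(signal)
    spec = np.fft.fft(signal)
    padded = np.zeros(size, dtype=complex)
    half = n // 2
    padded[:half] = spec[:half]           # positive frequencies
    padded[-half:] = spec[-half:]         # negative frequencies
    # Rescale so the interpolated samples keep the original amplitude.
    return np.fft.ifft(padded).real * (size / n)

# A spectral-domain fringe such as an FD-OCT line camera would record.
n, m = 2048, 8192
t = np.arange(n)
fringe = np.cos(2 * np.pi * 60 * t / n)
dense = zero_fill(fringe, m)
```

Every fourth sample of the upsampled fringe reproduces the original 2048-point signal, confirming that the padding only densifies the sampling grid.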

  9. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    NARCIS (Netherlands)

    Cvetkovic, S.D.; Schirris, J.; With, de P.H.N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, results are

  10. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing is, for the first time, independent of the point spread function of the image diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system were overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  11. Quality assessment of the digitalization process of analog x-ray images

    International Nuclear Information System (INIS)

    Georgieva, D.

    2014-01-01

    Computer-assisted diagnosis gives doctors a second point of view on test results. This improves the early detection of diseases and significantly reduces the chance of errors. These methods complement the possibilities of digital medical imaging apparatus very well, but for analog images their applicability and results depend entirely on the quality of the digitalisation of the analog images. Today many standards and good-practice recommendations discuss the image quality of digital apparatus, but the digitalisation process of analog medical images is not part of them. Medical imaging apparatus have become digital, but within an entirely digital medical environment it is necessary for them to blend with the old analog medical image carriers. The lives of patients do not start with the beginning of the digital era, and for the purpose of tracking diseases it is necessary to use the new digital images as well as the older analog ones. For the generation now 40-50 years old, a large archive of images has piled up, which should be taken into account in the diagnostic process. This article is the author's study of the digitalized image quality problem. It offers a new approach to x-ray image digitalisation: acquiring an HDR image with an optical sensor. After HDR-image generation, digital signal processing is applied to improve the quality of the final 16-bit grayscale medical image. A new method for medical image enhancement is proposed: it improves the image contrast, it increases or preserves the dynamic range, and it does not lead to the loss of small low-contrast structures in the image. Key words: Quality of Digital X-Ray Images

  12. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR imaging was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software for image processing was written in Macro-Assembler under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on a flexible diskette. Image processing operations, such as display of the image on a monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection and the color histogram, were performed in 1.6 sec to 67 sec, indicating that a commercial personal computer has sufficient capability for routine clinical MRI processing. (author)

  13. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  14. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However, this book does not focus so much on images per se, but rather on spatial data sets, with one or more measurements taken over

  15. Enhancing swimming pool safety by the use of range-imaging cameras

    Science.gov (United States)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are being integrated, but these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field of view and minimizing occlusions. We must, however, take into account that the water surface of a swimming pool is not flat but mostly rippled, and that water is transparent to visible light but less transparent to infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system, which uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Owing to its timing, our Time-of-Flight camera is theoretically able to minimize the influence of reflections from a partially reflecting surface. Combining a post-acquisition filter that compensates for the perturbations with a shorter-wavelength light source to enlarge the depth range can improve on the current commercial cameras. We conclude that low-cost range imagers can increase swimming pool safety when complemented with a post-processing filter and a different light source.
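
    The pulsed time-of-flight principle used here converts a round-trip pulse delay into distance; for a ceiling-mounted camera looking into water, the slower speed of light in water must be accounted for. A minimal sketch (not from the paper; the split air/water path and refractive-index correction are illustrative assumptions):

```python
# Sketch: converting a pulsed time-of-flight measurement into depth below
# a water surface. Light travels slower in water (refractive index
# n ~ 1.33), so the in-water portion of the round trip must be rescaled.
# The constants and path model here are illustrative assumptions.

C = 299_792_458.0   # speed of light in vacuum/air, m/s
N_WATER = 1.33      # refractive index of water (approximate)

def tof_depth(round_trip_s, air_path_m):
    """Depth of an underwater target seen by a ceiling-mounted ToF camera.

    round_trip_s : measured round-trip time of the light pulse (seconds)
    air_path_m   : one-way distance travelled in air before hitting water
    """
    # Time spent on the in-air portion of the round trip (down and back).
    t_air = 2.0 * air_path_m / C
    # The remaining time was spent in water, where light moves at C / n.
    t_water = round_trip_s - t_air
    if t_water < 0:
        raise ValueError("round-trip time shorter than the air path alone")
    return 0.5 * t_water * (C / N_WATER)
```

    For example, a camera 3 m above the surface viewing an object 1 m deep would measure a round trip of 2*3/C + 2*1*1.33/C seconds, from which `tof_depth` recovers the 1 m depth.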

  16. Volumetric image processing: A new technique for three-dimensional imaging

    International Nuclear Information System (INIS)

    Fishman, E.K.; Drebin, B.; Magid, D.; St Ville, J.A.; Zerhouni, E.A.; Siegelman, S.S.; Ney, D.R.

    1986-01-01

    Volumetric three-dimensional (3D) image processing was performed on CT scans of 25 normal hips, and image quality and potential diagnostic applications were assessed. In contrast to surface-detection 3D techniques, volumetric processing preserves every pixel of transaxial CT data, replacing the gray scale with transparent ''gels'' and shading. Anatomically accurate 3D images can be rotated and manipulated in real time, including simulated tissue-layer ''peeling'' and mock surgery or disarticulation. This pilot study suggests that volumetric rendering is a major advance in the signal processing of medical image data, producing a high-quality, uniquely maneuverable image that is useful for fracture interpretation, soft-tissue analysis, surgical planning, and surgical rehearsal

  17. Hierarchical tone mapping for high dynamic range image visualization

    Science.gov (United States)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

    In this paper, we present a computationally efficient, practically easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operators can be used for the effective enhancement of ordinary images.
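
    The two-step idea can be sketched as follows. The allocation rule used here (display levels proportional to a power of the per-interval pixel count, with the exponent `alpha` playing the role of the single adjustable parameter) is an illustrative stand-in for the paper's actual formula:

```python
import numpy as np

def hnl_tone_map(hdr, n_intervals=64, n_levels=256, alpha=0.5):
    """Two-step histogram-based tone mapping (illustrative HNL-style sketch)."""
    # Step 1: split the HDR intensity range into intervals and allocate
    # LDR display levels according to a power of each interval's pixel count.
    edges = np.linspace(hdr.min(), hdr.max(), n_intervals + 1)
    counts, _ = np.histogram(hdr, bins=edges)
    weights = counts.astype(float) ** alpha
    levels = weights / weights.sum() * (n_levels - 1)
    upper = np.cumsum(levels)                      # upper LDR bound per interval
    lower = np.concatenate(([0.0], upper[:-1]))    # lower LDR bound per interval
    # Step 2: map each pixel linearly within its interval's allocated LDR range.
    idx = np.clip(np.digitize(hdr, edges) - 1, 0, n_intervals - 1)
    span = np.maximum(edges[idx + 1] - edges[idx], 1e-12)
    frac = (hdr - edges[idx]) / span
    return np.clip(lower[idx] + frac * levels[idx], 0, n_levels - 1)
```

    Because each interval's output range starts where the previous one ends, the mapping is continuous and monotone, so pixel ordering is preserved while densely populated intensity intervals receive more display levels.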

  18. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase, to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, an intermediate-level task in image processing. Marker-controlled watershed and region-growing approaches are used to segment CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
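
    Region growing, one of the segmentation approaches mentioned, can be sketched in a few lines. The 4-connectivity and fixed intensity tolerance used here are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity lies within `tol` of the seed's intensity."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        # Visit the four direct neighbours of the current pixel.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

    Seeding inside a candidate nodule and growing with a tolerance tuned to its intensity range yields a binary mask from which features can then be extracted.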

  19. Scilab and SIP for Image Processing

    OpenAIRE

    Fabbri, Ricardo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of Image Processing and Analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic ...

  20. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  1. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of the experimental results and analysis carried out on medical images of diabetic-related causes. The experimental investigations were carried out on images using techniques ranging from very basic image processing, such as image enhancement, to sophisticated image segmentation methods. The book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. It is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  2. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,
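
    A classic example of the fuzzy approach is contrast enhancement with the intensification (INT) operator: gray levels are fuzzified into a membership function, the membership is intensified, and the result is defuzzified back to gray levels. A minimal sketch (the min-max fuzzification used here is one common choice, not necessarily the book's):

```python
import numpy as np

def fuzzy_int_enhance(img, iterations=1):
    """Fuzzy contrast enhancement via the intensification (INT) operator."""
    g = img.astype(float)
    lo, hi = g.min(), g.max()
    mu = (g - lo) / max(hi - lo, 1e-12)          # fuzzification to [0, 1]
    for _ in range(iterations):
        # INT operator: push memberships below 0.5 down, above 0.5 up.
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return mu * (hi - lo) + lo                   # defuzzification
```

    Each iteration steepens the membership curve around 0.5, darkening dark regions and brightening bright ones while leaving the extremes fixed.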

  3. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in the technology for processing medical images, to the development of better and cheaper computers, and to advances in systems for the communication of medical images, the acquisition, storage and handling of digital images have acquired great importance in all branches of medicine. This article introduces some fundamental ideas of digital image processing, including such aspects as representation, storage, enhancement, visualization and understanding

  4. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
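
    The final post-processing step, finding peaks in the energy levels of an image frame, can be sketched as a simple 8-neighbour local-maximum test above a threshold. This detector is an illustrative stand-in; the patent does not specify this particular implementation:

```python
import numpy as np

def detect_peaks(energy, threshold):
    """Return (row, col) positions of pixels that exceed `threshold`
    and are strictly greater than all 8 neighbours."""
    # Pad with -inf so border pixels can still be compared uniformly.
    e = np.pad(energy, 1, mode="constant", constant_values=-np.inf)
    c = e[1:-1, 1:-1]
    is_max = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Compare the centre against the neighbour shifted by (dy, dx).
            is_max &= c > e[1 + dy:e.shape[0] - 1 + dy,
                            1 + dx:e.shape[1] - 1 + dx]
    return np.argwhere(is_max & (c > threshold))
```

    The threshold suppresses clutter; the strict local-maximum test keeps one detection per energy blob.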

  5. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    Science.gov (United States)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation for Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points for envelope alignment and Phase Difference (PD) estimation are probably not the same point, making it difficult to uncouple the coupling term when correcting Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach which chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a scattering point can be chosen. Envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the common reference point. The keystone transform is then applied to further improve imaging quality. Both simulation experiments and real-data processing are provided to demonstrate the performance of the proposed method compared with the classical method.

  6. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is widely used in the medical field and has already become very important, especially for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are explained first through several examples, and the results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Owing to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give a good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS and C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
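
    The simulated-annealing idea can be illustrated on a toy problem far simpler than the paper's 3D vessel reconstruction: restoring a binary 1-D signal from noisy data by minimizing a data-mismatch term plus a smoothness penalty. The energy function, cooling schedule and parameters below are illustrative assumptions:

```python
import math
import random

def energy(x, y, lam=0.5):
    """Data mismatch plus lam * (number of label changes)."""
    data = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    smooth = sum(a != b for a, b in zip(x, x[1:]))
    return data + lam * smooth

def anneal(noisy, n_iter=20000, t0=2.0, lam=0.5, seed=0):
    """Toy simulated-annealing restoration of a binary 1-D signal."""
    rng = random.Random(seed)
    x = [1 if v > 0.5 else 0 for v in noisy]       # crude initial estimate
    e = energy(x, noisy, lam)
    for k in range(n_iter):
        t = max(t0 * (1 - k / n_iter), 1e-9)       # linear cooling schedule
        i = rng.randrange(len(x))
        x[i] ^= 1                                   # propose flipping one label
        e_new = energy(x, noisy, lam)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            x[i] ^= 1                               # reject: undo the flip
    return x
```

    Early on, the high temperature lets the search escape poor configurations (such as isolated mislabeled pixels); as the temperature falls, the chain settles into a low-energy labeling.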

  7. Time-of-flight range imaging for underwater applications

    Science.gov (United States)

    Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias

    2018-02-01

    Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.
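
    The wavelength dependence described above is dominated by water absorption, which follows the Beer-Lambert law over the round-trip path. The absorption coefficients below are rough, literature-style values for pure water used purely for illustration; they are not figures from the paper:

```python
import math

# Illustrative absorption coefficients of pure water (1/m). These are
# rough assumed values for this sketch, not data from the paper.
ALPHA = {"blue_470nm": 0.01, "red_660nm": 0.4, "nir_850nm": 4.0}

def round_trip_fraction(alpha_per_m, distance_m):
    """Fraction of optical power surviving a there-and-back path in water
    (Beer-Lambert law; absorption only, scattering ignored)."""
    return math.exp(-alpha_per_m * 2.0 * distance_m)
```

    At a 2 m object distance the round-trip survival is roughly 96% for blue light but on the order of 1e-7 for 850 nm near-IR, which is why visible LEDs outperform the near-IR sources of standard ToF cameras underwater.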

  8. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay explains the present state of CT image processing technology, covering recognition, acquisition and visualization for computer-assisted diagnosis (CAD) and computer-assisted surgery (CAS), and offers a future view. Medical image processing has a history running from the discovery of X-rays and their application to diagnostic radiography, through their combination with the computer for CT and multi-detector-row CT, to the 3D/4D images used for CAD and CAS. CAD is performed by recognizing the normal anatomical structure of the human body, detecting possible abnormal lesions and visualizing numerical figures as images. Actual instances of CAD images are presented here for the chest (lung cancer), the abdomen (colorectal cancer) and a future body atlas (models of organs and diseases for imaging), a recent national project: computational anatomy. CAS involves surgical planning technology based on 3D images and navigation of the actual procedure and of endoscopy. As guidance for those beginning in image processing technology, the essay describes the national and international community, such as related academic societies, regularly held congresses, textbooks and workshops, and topics in the field such as the computational anatomy of an individual patient for CAD and CAS, data security and standardization. In the authors' view, future preventive medicine will be based on imaging technology, e.g., ultimately a daily-life CAD for individuals, analogous to today's body thermometer and home sphygmomanometer, to monitor one's routine physical condition. (T.T.)

  9. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
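
    One of the discrete distance transforms implemented by min-sum difference equations, as described above, is the classic two-pass city-block transform: a forward raster sweep followed by a backward sweep, each taking a minimum over already-updated neighbours plus a step cost. A minimal sketch:

```python
import numpy as np

def chamfer_distance(binary):
    """City-block distance to the nearest foreground pixel via the
    classic two-pass min-sum recursion (a discrete distance transform)."""
    h, w = binary.shape
    INF = 10 ** 9
    d = np.where(binary, 0, INF).astype(np.int64)
    # Forward pass: propagate distances from the top-left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    # Backward pass: propagate distances from the bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d
```

    With 4-connected steps of unit cost, the two sweeps compute the exact L1 distance map; weighted step costs yield the weighted distance transforms used to solve the eikonal equation in the paper's framework.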

  10. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally, analysis of an image has to be made visually and measurements are taken manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used to analyse periodic image texture and structure by application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper shows the application of digital image processing to the characterization and analysis of microscopic images.
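
    Periodicity analysis via the Fourier transform can be sketched as locating the dominant peak of the 2-D magnitude spectrum and converting its frequency bin back to a spatial period. This is a simplified illustration, not the paper's full procedure:

```python
import numpy as np

def dominant_period(img):
    """Estimate the dominant spatial period (in pixels) along each axis
    from the peak of the 2-D Fourier magnitude spectrum (DC excluded)."""
    f = np.abs(np.fft.fft2(img))
    f[0, 0] = 0.0                          # remove the DC component
    ky, kx = np.unravel_index(np.argmax(f), f.shape)
    h, w = img.shape
    # Fold frequencies above Nyquist back to their mirrored equivalents.
    ky = ky if ky <= h // 2 else h - ky
    kx = kx if kx <= w // 2 else w - kx
    py = h / ky if ky else float("inf")    # infinite period = no variation
    px = w / kx if kx else float("inf")
    return py, px
```

    For a micrograph with a lattice-like texture, the peak location also encodes the lattice orientation (the angle of the (ky, kx) vector), which is the basis of the crystallographic-orientation measurement mentioned above.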

  11. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    The requirements of image processing applications can best be met by using a distributed environment. This report describes a system that utilizes existing LAN resources for extensive processing under a distributed computing environment, using Java and web technology to make it truly system independent. Although the environment has been tested using image processing applications, its design and architecture are general and modular, so that it can be used for other applications that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the task among the workers, who carry out the required operations and send back the results. The application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language; RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed systems concepts and their use for resource-hungry jobs. The image processing application was developed under this environment

  12. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained-and heavily illustrated-this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  13. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Yuki Mawatari,1 Mikiko Fukushima2 1Igo Ophthalmic Clinic, Kagoshima; 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan. Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  14. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop ® ). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  15. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  16. Processor for Real-Time Atmospheric Compensation in Long-Range Imaging, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Long-range imaging is a critical component to many NASA applications including range surveillance, launch tracking, and astronomical observation. However,...

  17. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written in C. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer by 8-inch flexible diskette. Many fundamental image-processing operations were performed, such as displaying the image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images. It appears that the 8-inch flexible diskette is still a useful medium for transferring image data. (author)

  18. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation involves several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention; the lack of an easily recognizable context in the test image may have contributed to it. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  19. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, together with the software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency-domain and the spatial-domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial-domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
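
    The speed advantage of recursive (IIR) filtering comes from its constant cost per sample, independent of the effective kernel length. A minimal first-order example, far simpler than the matched filters in the report but showing the recursion pattern:

```python
def recursive_smooth(signal, a=0.5):
    """First-order recursive low-pass: y[n] = a*x[n] + (1-a)*y[n-1].

    One multiply-add per sample, regardless of how long the equivalent
    (exponentially decaying) convolution kernel is, versus O(N log N)
    for an FFT-based convolution with a long kernel.
    """
    out = []
    y = signal[0]              # initialize the state with the first sample
    for x in signal:
        y = a * x + (1 - a) * y
        out.append(y)
    return out
```

    An impulse fed through the filter decays geometrically, showing the infinite impulse response achieved with a single stored state.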

  20. Three-Dimensional Microwave Imaging for Concealed Weapon Detection Using Range Stacking Technique

    Directory of Open Access Journals (Sweden)

    Weixian Tan

    2017-01-01

    Full Text Available Three-dimensional (3D) microwave imaging has been proven to be well suited for concealed weapon detection. For 3D image reconstruction under a two-dimensional (2D) planar aperture, most current imaging algorithms focus on decomposing the 3D free-space Green function by exploiting the stationary phase, and consequently the accuracy of the final imagery is obtained at the cost of computational complexity due to the need for interpolation. In this paper, from an alternative viewpoint, we propose a novel interpolation-free imaging algorithm based on wavefront reconstruction theory. The algorithm is an extension of the 2D range stacking algorithm (RSA, with the advantages of low computational cost and high precision. The algorithm uses different reference signal spectra at different range bins and then forms the target function at each desired range bin by a concise coherent summation. Several practical issues such as propagation loss compensation, wavefront reconstruction, and aliasing mitigation are also considered. The sampling criterion and the achievable resolutions for the proposed algorithm are also derived. Finally, the proposed method is validated through extensive computer simulations and real-field experiments. The results show that accurate 3D images can be generated at very high speed by utilizing the proposed algorithm.

  1. Image processing. Volumetric analysis with a digital image processing system

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections, describing various applications of volumetric analysis using image processing systems, and various methods of diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation, and echocardiography. A dynamic ventricular phantom is described that was developed for checking and calibration, permitting safe examination of patients; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, as well as a standardized nuclear medicine diagnostic system, and a computer system especially suited to image processing.

  2. A contribution to laser range imaging technology

    Science.gov (United States)

    Defigueiredo, Rui J. P.; Denney, Bradley S.

    1991-01-01

    The goal of the project was to develop a methodology for fusion of Laser Range Imaging Device (LRID) and camera data. Our initial work led to the conclusion that none of the available LRIDs was adequate for this purpose, so we spent the time and effort on the development of a new LRID with several novel features that support the desired fusion objectives. In what follows, we describe the device developed and built under contract. The Laser Range Imaging Device (LRID) is an instrument which scans a scene with a laser and returns range and reflection intensity data. Such a system would be extremely useful for scene analysis in industry and space applications, and the LRID will eventually be implemented on board a mobile robot. The current system has several advantages over some commercially available systems. One improvement is the use of X-Y galvanometer scanning mirrors instead of the polygonal mirrors present in some systems; the X-Y scanning mirrors can be programmed to provide adjustable scanning regions. For each mirror there are two controls accessible by the computer: the first is the mirror position, and the second is a zoom factor which modifies the amplitude of the position parameter. Another advantage of the LRID is the use of a visible low-power laser. Some commercial systems use a higher-intensity invisible laser, which raises safety concerns. With a low-power visible laser, not only can one see the beam and avoid direct eye contact, but the lower intensity also reduces the risk of damage to the eye, and no protective eyewear is required.

  3. Efficient processing of fluorescence images using directional multiscale representations.

    Science.gov (United States)

    Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M

    2014-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation to confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for representing objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to the problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.

  4. Organization of bubble chamber image processing

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1985-01-01

    A program for bubble chamber image processing is described. The program is written in FORTRAN, developed for the DEC-10 computer, and designed for operation with the semi-automatic processing-measurement systems PUOS-2 and PUOS-4. Formalization of the image processing permits its use in different physics experiments

  5. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
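
    The low-rank PSF-subtraction idea behind VIP's PCA-based algorithms can be sketched in a toy form; this is not VIP's API, just a rank-r SVD approximation standing in for the PCA family, on synthetic data.

```python
import numpy as np

# Sketch of PSF subtraction by low-rank approximation: a cube of frames
# dominated by a quasi-static stellar PSF is modeled by its rank-r SVD
# and subtracted, leaving residuals where faint companions could emerge.
# Toy data and rank choice; not VIP's implementation.
rng = np.random.default_rng(0)
n_frames, n_pix = 20, 100
psf = np.outer(np.ones(n_frames), rng.standard_normal(n_pix))  # static PSF
noise = 0.01 * rng.standard_normal((n_frames, n_pix))
cube = psf + noise

U, s, Vt = np.linalg.svd(cube, full_matrices=False)
r = 1
low_rank = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r PSF model
residual = cube - low_rank               # PSF-subtracted frames
print(residual.std() < cube.std())  # True
```

    Real pipelines choose the number of components per annulus and de-rotate the residual frames before combining them; the rank-1 model here only illustrates why the subtraction removes the dominant, repeated structure.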

  6. An Applied Image Processing for Radiographic Testing

    International Nuclear Information System (INIS)

    Ratchason, Surasak; Tuammee, Sopida; Srisroal Anusara

    2005-10-01

    Applied image processing for radiographic testing (RT) is desirable because it reduces inspection time, lowers the cost of an inspection process that requires experienced workers, and improves inspection quality. This paper presents a preliminary study of image processing for RT films, specifically weld films, and proposes an approach to determine the defects in weld images. The BMP image files are opened and processed by a computer program written in Borland C++. The software has five main methods: histogram, contrast enhancement, edge detection, image segmentation, and image restoration. Each main method has several sub-methods available as selectable options. The results showed that the software can effectively detect defects and that different methods suit different radiographic images. Furthermore, image improvement is better when two methods are combined

  7. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  8. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Directory of Open Access Journals (Sweden)

    Joe-Air Jiang

    2008-03-01

    Full Text Available With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
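
    The triangulation relation underlying such range finders can be sketched with a simple pinhole model; the baseline, focal length, and disparity values below are hypothetical, not the paper's calibration.

```python
# Minimal sketch of laser triangulation ranging under a pinhole model:
# range z = f * b / d, where b is the baseline between laser and sensor,
# f the focal length in pixels, and d the measured disparity of the
# laser spot on the sensor. All numbers are illustrative.

def triangulation_range(baseline_m, focal_px, spot_disparity_px):
    """Range from the disparity of a projected laser spot on the sensor."""
    if spot_disparity_px <= 0:
        raise ValueError("spot not detected or target at infinity")
    return focal_px * baseline_m / spot_disparity_px

# Example: 10 cm baseline, 1400 px focal length, 35 px measured disparity.
z = triangulation_range(0.10, 1400.0, 35.0)
print(round(z, 2))  # 4.0 (meters)
```

    The model also shows why resolution degrades with distance: for fixed baseline and focal length, a one-pixel disparity error corresponds to a larger range error at small disparities (far targets).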

  9. [Optimization of digital chest radiography image post-processing in diagnosis of pneumoconiosis].

    Science.gov (United States)

    Sheng, Bing-yong; Mao, Ling; Zhou, Shao-wei; Shi, Jin

    2013-11-01

    To establish the optimal image post-processing parameters for digital chest radiography as preliminary research for introducing digital radiography (DR) to pneumoconiosis diagnosis in China. A total of 204 pneumoconiosis patients and 31 dust-exposed workers were enrolled as subjects. Film-screen radiography (FSR) and DR images were taken for all subjects. DR films were printed after raw images were processed and parameters were altered using the DR workstation (GE Healthcare, U.S.A.). Image gradations, lung textures, and the imaging of the thoracic vertebrae were evaluated by pneumoconiosis experts, and the optimal post-processing parameters were selected. Optical density was measured for both DR films and FSR films. For the DR machine used in this research, the contrast adjustment (CA) and brightness adjustment (BA) were the main parameters determining the brightness and gray levels of images. The optimal ranges for CA and BA were 115%∼120% and 160%∼165%, respectively. The quality of DR chest films was optimized when tissue contrast was adjusted to a maximum of 0.15, edge to a minimum of 1, and both noise reduction and tissue equalization to 0. The failure rate of chest DR (0.4%) was significantly lower than that of chest FSR (17%). With optimized image post-processing on the DR machine purchased from GE Healthcare, DR chest films can meet all requirements for the quality of chest X-ray films in the Chinese diagnostic criteria for pneumoconiosis.

  10. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    ...-year graduate students in almost any technical discipline. The leading textbook in its field for more than twenty years, it continues its cutting-edge focus on contemporary developments in all mainstream areas of image processing-e.g...

  11. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  12. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  13. On some applications of diffusion processes for image processing

    International Nuclear Information System (INIS)

    Morfu, S.

    2009-01-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that a purely nonlinear diffusion process governed by the Fisher equation allows contrast enhancement and noise filtering, but produces a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering while preserving the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool that enables noise filtering, contrast enhancement, and edge preservation.
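
    The edge-preserving behavior of anisotropic diffusion can be sketched with one explicit Perona-Malik update step; the conductance function is the classic exponential form, and the K, dt, and noise values are illustrative, not the paper's.

```python
import numpy as np

# One Perona-Malik anisotropic-diffusion step on a 2D image, using the
# classic exponential conductance g(d) = exp(-(d/K)^2): diffusion is
# strong across small (noise) gradients and nearly zero across large
# (edge) gradients. Parameter values are illustrative.

def perona_malik_step(img, K=0.1, dt=0.2):
    # Differences toward each of the four neighbours (periodic borders).
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / K) ** 2)   # edge-stopping conductance
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(0)
step = np.zeros((32, 32))
step[:, 16:] = 1.0                                  # a sharp vertical edge
noisy = step + 0.05 * rng.standard_normal(step.shape)
out = noisy
for _ in range(20):
    out = perona_malik_step(out)

# Noise in the flat left half is smoothed, while the edge survives.
print(out[:, :8].std() < noisy[:, :8].std())  # True
```

    Replacing g with a constant would recover linear (isotropic) diffusion, which smooths the edge as aggressively as the noise; that contrast is exactly the point the abstract makes.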

  14. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  15. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

    The function of medical image processing is analysed, mentioning the developments, the physical agents, and the main categories, such as correction of distortion in image formation, increased detectability, parameter quantification, etc. (C.G.C.) [pt

  16. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing software, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, dynamic-range compression, neon, diffuse, emboss, etc., have been studied. Segmentation techniques including point detection, line detection, and edge detection have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
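
    Two of the enhancement techniques listed, negative imaging and contrast stretching, are simple pointwise operations; a sketch assuming 8-bit grayscale data (the values are illustrative, and the original software was in Visual Basic, not Python):

```python
import numpy as np

# Pointwise enhancement sketches for 8-bit grayscale images.

def negative(img):
    """Negative imaging: invert each intensity."""
    return 255 - img

def contrast_stretch(img):
    """Linearly map the occupied range [min, max] onto the full [0, 255]."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros_like(img)
    return ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
print(negative(img).tolist())          # [[205, 155], [105, 55]]
print(contrast_stretch(img).tolist())  # [[0, 85], [170, 255]]
```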

  17. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images plays an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D, and JAI. To demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology to the processing of radiation images of large containers are given

  18. Spin-image surface matching based target recognition in laser radar range imagery

    International Nuclear Information System (INIS)

    Li, Wang; Jian-Feng, Sun; Qi, Wang

    2010-01-01

    We explore the problem of in-plane rotation invariance in the vertical detection of laser radar (Ladar) using the spin-image surface matching algorithm. The method previously used to recognize targets in Ladar range imagery is time-consuming owing to its complicated procedure, which violates the requirement of real-time target recognition in practical applications. To simplify these troublesome procedures, we improve the spin-image algorithm by introducing a statistical correlation coefficient into target recognition in Ladar range imagery. The system performance is demonstrated on sixteen simulated noisy range images with targets rotated through arbitrary in-plane angles. The high efficiency and acceptable recognition rate obtained herein testify to the validity of the improved algorithm for practical applications. The proposed algorithm not only solves the problem of in-plane rotation invariance rationally, but also meets the real-time requirement. This paper ends with a comparison of the proposed method and the previous one. (classical areas of phenomenology)
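
    A correlation-based spin-image comparison can be sketched as follows; Pearson correlation is used here as a plausible stand-in for the paper's statistical coefficient, and the 2-D histograms are toy data, not real spin-images.

```python
import numpy as np

# Sketch of comparing two spin-images (2-D point histograms that are
# invariant to in-plane rotation) with a linear correlation coefficient.
# Toy 3x3 histograms stand in for real spin-images.

def spin_image_similarity(p, q):
    """Pearson correlation between two flattened spin-images."""
    return float(np.corrcoef(p.ravel(), q.ravel())[0, 1])

model = np.array([[0, 1, 2], [1, 3, 1], [0, 1, 0]], dtype=float)
scene_same = model.copy()          # matching surface patch
scene_other = model.T              # mismatched (transposed) histogram

print(round(spin_image_similarity(model, scene_same), 6))   # 1.0
print(spin_image_similarity(model, scene_other) < 1.0)      # True
```

    A match is declared when the coefficient exceeds a threshold; because the spin-image itself is built in cylindrical coordinates around the surface normal, the comparison is insensitive to in-plane rotation of the target.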

  19. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing is re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
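
    For reference, halftoning by screening in the ordinary spatial domain is just a per-pixel threshold against a periodic mask; the paper's contribution is moving this threshold into the DCT domain. A spatial-domain sketch with a standard 4x4 Bayer mask (mask choice illustrative):

```python
import numpy as np

# Halftoning by screening: threshold a grayscale image against a tiled
# periodic mask. A 4x4 Bayer ordered-dither matrix is used here; the
# paper performs the equivalent threshold in the DCT domain instead.

bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def screen(gray):
    """Binarize a [0,1] grayscale image against the tiled halftone mask."""
    h, w = gray.shape
    mask = np.tile(bayer4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > mask).astype(np.uint8)

flat = np.full((8, 8), 0.5)   # a uniform 50% gray patch
dots = screen(flat)
# Half the thresholds lie below 0.5, so half the dots turn on.
print(dots.mean())  # 0.5
```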

  20. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth image was produced by the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image, generated from an unprocessed input image, close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
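
    The histogram-equalization idea behind the paper's CLAHE ground truth can be sketched in its simplest, global form; CLAHE adds tiling and a clip limit on top of this, so the code below is a simplified stand-in, not the paper's method, and the test image is synthetic.

```python
import numpy as np

# Global histogram equalization on an 8-bit image: map intensities
# through the normalized cumulative histogram so that a low-contrast
# range spreads across the full [0, 255] scale. CLAHE applies the same
# mapping per tile, with a clip limit to bound local contrast gain.

def hist_equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]     # cdf of the darkest occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp confined to [100, 150] spreads to the full range.
img = np.tile(np.linspace(100, 150, 64, dtype=np.uint8), (64, 1))
eq = hist_equalize(img)
print(int(eq.min()), int(eq.max()))  # 0 255
```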

  1. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images and the design of several special processing methods for container images, together with some standard processing methods for two-dimensional digital images, including gray-level enhancement, pseudo-enhancement, spatial filtering, edge enhancement, geometric processing, etc. The paper shows how to carry out the above processing under Windows 95 or Windows NT, and discusses ways to improve image processing speed on a microcomputer; good results were obtained

  2. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a personal computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one can not achieve...... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  3. Enhancement tuning and control for high dynamic range images in multi-scale locally adaptive contrast enhancement algorithms

    Science.gov (United States)

    Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.

    2009-01-01

    For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes, whose contrast spans four or more orders of magnitude, on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low-Dynamic-Range (LDR) devices with at most two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
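
    The core idea, a detail gain that is a nonlinear function of detail energy, can be sketched for a single band on a 1-D scanline; the gain law and all constants below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

# One-band sketch of nonlinear detail-gain enhancement: high-frequency
# detail (signal minus a local mean) is amplified by a gain that decays
# as detail energy grows, so weak textures are boosted more than strong
# edges, limiting "halo" overshoot. All constants are illustrative.

def local_mean(x, r=2):
    """Sliding-window mean with edge padding (window length 2r+1)."""
    pad = np.pad(x, r, mode="edge")
    csum = np.cumsum(np.concatenate(([0.0], pad)))
    return (csum[2 * r + 1:] - csum[:-(2 * r + 1)]) / (2 * r + 1)

x = np.concatenate([np.zeros(16), np.ones(16)])  # flat areas + one hard edge
detail = x - local_mean(x)

# Gain equals max_gain where detail is weak and decays toward 1 at edges.
max_gain, e0 = 3.0, 0.05
gain = 1.0 + (max_gain - 1.0) * e0 / (e0 + np.abs(detail))
enhanced = x + gain * detail

print(float(gain[0]), round(float(gain[15]), 3))  # 3.0 1.222
```

    A full implementation repeats this per frequency band of a multi-band decomposition, with band-dependent gain curves and an added noise-level term in the denominator so that sensor noise is not amplified.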

  4. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis, and digital television.

  5. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    Directory of Open Access Journals (Sweden)

    M. Hess

    2014-06-01

    Full Text Available An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, a market largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through a quantitative assessment of 3D imaging sensors. It will enable users to specify precisely which spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, and a possible winner will emerge.

  6. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in real time in an adaptive manner, exploiting information from the FPA compressive measurements. Extensive simulations show the attained improvement in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is made with respect to noise tolerance.

  7. A Compton Imaging Prototype for Range Verification in Particle Therapy

    International Nuclear Information System (INIS)

    Golnik, C.; Hueso Gonzalez, F.; Kormoll, T.; Pausch, G.; Rohling, H.; Fiedler, F.; Heidel, K.; Schoene, S.; Sobiella, M.; Wagner, A.; Enghardt, W.

    2013-06-01

    During the 2012 AAPM Annual Meeting, 33 percent of the delegates considered range uncertainty to be the main obstacle to proton therapy becoming a mainstream treatment modality. Utilizing prompt gamma emission, a side product of particle-tissue interaction, opens the possibility of in-beam dose verification, due to the direct correlation between prompt gamma emission and particle dose deposition. Compton imaging has proven to be a technique for measuring three-dimensional gamma emission profiles and opens the possibility of adaptive dose monitoring and treatment correction. We successfully built a Compton imaging prototype, characterized the detectors, and demonstrated the imaging capability of the complete device. The major advantages of CZT detectors are their high energy resolution and high spatial resolution, which are key parameters for Compton imaging. However, our measurements at the proton beam accelerator facility KVI in Groningen (Netherlands) disclosed a spectrum of prompt gamma rays under proton irradiation up to 4.4 MeV. As CZT detectors of 5 mm thickness do not efficiently absorb photons in such energy ranges, another absorption stage, based on a Siemens LSO block detector, is added behind CZT1. This setup provides a higher absorption probability for high-energy photons. With a size of 5.2 cm x 5.2 cm x 2.0 cm, this scintillation detector further increases the angular acceptance of Compton-scattered photons due to its geometric size. (authors)

  8. How Digital Image Processing Became Really Easy

    Science.gov (United States)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of, or analyzing the contents of, images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid increase in commercial companies marketing digital image processing software and hardware.

  9. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis at the Centre. In image processing, one resorts either to alteration of grey-level values, so as to enhance features in the image, or to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections, etc. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs

  10. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    Science.gov (United States)

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
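
    The variance test described above can be sketched in scalar form: each reconstructed 3-D point is backed by samples from several elemental images, and low variance across those samples suggests an in-focus surface point. The sample values and threshold below are toy assumptions, not the paper's data.

```python
import numpy as np

# Focus / off-focus classification by statistical variance across the
# elemental-image samples backing one reconstructed 3-D point: samples
# agree (low variance) for a true surface point at the reconstruction
# depth, and disagree (high variance) for a free-space point. The
# threshold and the nine-view sample values are illustrative.

def is_focus(samples, var_threshold=0.01):
    return samples.var() < var_threshold

focus_samples = np.array(            # consistent across nine views
    [0.60, 0.61, 0.59, 0.60, 0.60, 0.61, 0.59, 0.60, 0.60])
offfocus_samples = np.array(         # inconsistent across nine views
    [0.10, 0.90, 0.30, 0.70, 0.50, 0.20, 0.80, 0.40, 0.60])

print(is_focus(focus_samples), is_focus(offfocus_samples))  # True False
```

    In the paper this per-pixel test, like the depth reconstruction itself, is evaluated for all pixels and depths in parallel on the GPU; the sketch only shows the classification rule.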

  11. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the performance of non-radiologists in the perceptual tasks, while only marginal improvements are shown in the perceptual and cognitive tasks of the group of expert radiologists.
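
    The ROC analysis mentioned can be sketched with the rank (Mann-Whitney) formulation of the area under the curve; the scores and labels below are toy data, not the study's reader results.

```python
# AUC of an ROC curve via the rank formulation: the probability that a
# randomly chosen positive case receives a higher score than a randomly
# chosen negative case (ties counted as half). Toy data only.

def roc_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # reader confidence ratings
labels = [1,   1,   0,   1,   0,   0]     # ground truth (1 = malignant)
print(roc_auc(scores, labels))  # 0.8888888888888888
```

    Comparing such AUC values between the original-only and original-plus-processed reading conditions is what supports conclusions like the one in the abstract.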

  13. Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing.

    Science.gov (United States)

    Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong

    2017-10-23

    Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low-frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A proof-of-concept radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radars in operating bandwidth and processing speed, and that it can be used in future radars for real-time, high-resolution target detection and imaging.
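
    The de-chirping step can be sketched numerically: mixing the echo with the transmitted LFM turns a round-trip delay into a single low-frequency beat tone, whose frequency gives the range. The 8 GHz bandwidth and 500 MSa/s rate come from the abstract; the sweep duration and target range are assumptions for the simulation.

```python
import numpy as np

c = 3e8                          # speed of light, m/s
B = 8e9                          # LFM bandwidth from the paper, Hz
T = 100e-6                       # sweep duration (assumed, not given in the abstract)
K = B / T                        # chirp rate, Hz/s
fs = 500e6                       # post-de-chirp sampling rate from the paper, Sa/s

t = np.arange(round(T * fs)) / fs
R_true = 3.0                     # simulated target range, m
tau = 2 * R_true / c             # round-trip delay

phi = lambda x: np.pi * K * x**2              # baseband LFM phase
dechirped = np.exp(1j * (phi(t) - phi(t - tau)))  # mixer output: one beat tone

spectrum = np.abs(np.fft.rfft(dechirped))
f_beat = np.argmax(spectrum) / T              # FFT bin -> beat frequency, Hz
R_est = c * f_beat / (2 * K)                  # range from the beat frequency
# theoretical range resolution c / (2B) = 1.875 cm, matching the abstract
```

    The key point of the architecture is visible here: although the chirp spans 8 GHz, the beat tone is only a few MHz, so a slow ADC and a small FFT suffice for real-time processing.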

  14. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  15. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system, the inspection algorithm focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions and the height at which images are taken, must be considered to ensure an image quality adequate for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of it. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, inspections are usually done after the etching process, when any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial portion of the cost of PCB fabrication, it is uneconomical to simply discard defective PCBs; defects should therefore be identified before etching so that the PCB can be reprocessed. In this paper, a method to identify defects in natural PCB images is presented and the associated practical issues are addressed using software tools. Some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short and nick. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
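
    A minimal sketch of reference-comparison defect detection, a common baseline for this task: XOR a binarized test image against a defect-free reference, so any mismatched copper pixel is flagged. This is a generic illustration, not the paper's algorithm, and it assumes the images are already registered and evenly lit (the tilt and lighting issues discussed above must be handled first).

```python
import numpy as np

def find_defects(reference, test, threshold=128):
    """XOR of binarized reference and test images flags mismatched copper."""
    return (reference > threshold) ^ (test > threshold)

ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:6, 2:6] = 255                # a copper pad in the defect-free reference
bad = ref.copy()
bad[3, 3] = 0                      # simulated pin-hole defect
defect_map = find_defects(ref, bad)
```

    Classifying the flagged regions by shape and location (hole inside copper = pin hole, break across a trace = pattern cut, bridge between traces = pattern short) then yields the defect types listed above.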

  16. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time-consuming and error-prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes for the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational meshes were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline that automates the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis and in surgery planning with haptics.

  17. Characterization of the range effect in synthetic aperture radar images of concrete specimens for width estimation

    Science.gov (United States)

    Alzeyadi, Ahmed; Yu, Tzuyang

    2018-03-01

    Nondestructive evaluation (NDE) is an indispensable approach for the sustainability of critical civil infrastructure systems such as bridges and buildings. Recently, microwave/radar sensors have been widely used for assessing the condition of concrete structures. Among the imaging techniques available to microwave/radar sensors, synthetic aperture radar (SAR) imaging enables researchers to conduct surface and subsurface inspection of concrete structures in the range-cross-range representation of SAR images. The objective of this paper is to investigate the range effect for concrete specimens in SAR images at various ranges (15 cm, 50 cm, 75 cm, 100 cm, and 200 cm). One concrete panel specimen (water-to-cement ratio = 0.45) of 30 cm by 30 cm by 5 cm was manufactured and scanned by a 10 GHz SAR imaging radar sensor inside an anechoic chamber. Scatterers in the SAR images representing two corners of the concrete panel were used to estimate the width of the panel. It was found that the range-dependent pattern of the corner scatterers can be used to predict the width of concrete panels. Also, the maximum SAR amplitude decreases as the range increases. An empirical model was also proposed for width estimation of concrete panels.
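
    The width-estimation idea can be illustrated with synthetic data: locate the two corner-scatterer peaks in a cross-range amplitude profile and take their separation as the panel width. The profile below is simulated (two narrow Gaussian responses), not data from the paper.

```python
import numpy as np

x = np.linspace(-0.5, 0.5, 201)              # cross-range axis, m
corner = lambda c: np.exp(-((x - c) / 0.01) ** 2)
profile = corner(-0.15) + corner(0.15)       # two corner scatterers, 0.30 m apart

# local maxima of the amplitude profile
peaks = np.where((profile[1:-1] > profile[:-2]) &
                 (profile[1:-1] > profile[2:]))[0] + 1
width = x[peaks[-1]] - x[peaks[0]]           # estimated panel width, m
```

    In the actual measurements the peak positions and amplitudes drift with range, which is why the paper fits an empirical range-dependent model rather than reading the separation off a single profile.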

  18. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  19. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common types of cancer worldwide: among all malignant neoplasms diagnosed, lymphoma accounts for three to four percent. Our work presents a study of filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We have analysed the noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a similarity of at least 75%, which rises to 99% if one considers only HSV. We conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image obtains the best enhancement.
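
    The Structural Similarity Index used for the quantitative analysis can be sketched in its single-window (global) form; the standard SSIM of Wang et al. computes the same luminance/contrast/structure statistics over local patches and averages them. The test image below is synthetic, and a 3x3 mean filter stands in for the paper's denoising filters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ssim_global(x, y, L=255):
    """Single-window SSIM: luminance, contrast and structure comparison."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))    # smooth synthetic image
noisy = clean + rng.normal(0, 25, clean.shape)

# 3x3 mean filtering as a simple stand-in for the denoising filters in the paper
smooth = sliding_window_view(noisy, (3, 3)).mean(axis=(-1, -2))
```

    Denoising raises the SSIM against the clean reference, which is exactly how the paper scores filter/colour-model combinations.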

  20. Prototype system for proton beam range measurement based on gamma electron vertex imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Han Rim [Neutron Utilization Technology Division, Korea Atomic Energy Research Institute, 111, Daedeok-daero 989beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Kim, Sung Hun; Park, Jong Hoon [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of); Jung, Won Gyun [Heavy-ion Clinical Research Division, Korean Institute of Radiological & Medical Sciences, Seoul 01812 (Korea, Republic of); Lim, Hansang [Department of Electronics Convergence Engineering, Kwangwoon University, Seoul 01897 (Korea, Republic of); Kim, Chan Hyeong, E-mail: chkim@hanyang.ac.kr [Department of Nuclear Engineering, Hanyang University, Seongdong-gu, Seoul 04763 (Korea, Republic of)

    2017-06-11

    In proton therapy, for both therapeutic effectiveness and patient safety, it is very important to accurately measure the proton dose distribution, especially the range of the proton beam. For this purpose, we recently proposed a new imaging method named gamma electron vertex imaging (GEVI), in which the prompt gammas emitted in the nuclear reactions of the proton beam in the patient are converted to electrons, and the converted electrons are then tracked to determine the vertices of the prompt gammas, thereby producing a 2D image of the vertices. In the present study, we developed a prototype GEVI system, including dedicated signal processing and data acquisition systems, which consists of a beryllium plate (= electron converter) to convert the prompt gammas to electrons, two double-sided silicon strip detectors (= hodoscopes) to determine the trajectories of those converted electrons, and a plastic scintillation detector (= calorimeter) to measure their kinetic energies. The system uses triple coincidence logic and multiple energy windows to select only the events from prompt gammas. The detectors of the prototype GEVI system were evaluated for electronic noise level, energy resolution, and time resolution. Finally, the imaging capability of the GEVI system was tested by imaging a {sup 90}Sr beta source, a {sup 60}Co gamma source, and a 45-MeV proton beam in a PMMA phantom. The overall results of the present study show that the prototype GEVI system can image the vertices of the prompt gammas produced by proton nuclear interactions.
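
    The triple-coincidence and energy-window event selection can be sketched as follows. The 5 ns coincidence window and 2-7 MeV energy window are illustrative assumptions, not values from the paper.

```python
def accept_event(t_hodo1, t_hodo2, t_cal, energy_mev,
                 coinc_ns=5.0, e_window=(2.0, 7.0)):
    """Keep an event only if all three detectors fire together (triple
    coincidence) and the deposited energy lies in the prompt-gamma window."""
    times = (t_hodo1, t_hodo2, t_cal)
    coincident = max(times) - min(times) <= coinc_ns
    in_window = e_window[0] <= energy_mev <= e_window[1]
    return coincident and in_window

events = [
    (0.0, 1.2, 2.0, 4.1),    # true triple coincidence, in-window energy
    (0.0, 1.0, 40.0, 4.5),   # calorimeter hit too late -> random coincidence
    (0.0, 0.5, 1.0, 0.3),    # low-energy background (e.g. scattered gamma)
]
selected = [e for e in events if accept_event(*e)]
```

    This kind of offline filter mirrors what the dedicated signal processing hardware does online: rejecting random coincidences and out-of-window energies is what isolates the prompt-gamma vertices from background.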

  1. Long-Range Reduced Predictive Information Transfers of Autistic Youths in EEG Sensor-Space During Face Processing.

    Science.gov (United States)

    Khadem, Ali; Hossein-Zadeh, Gholam-Ali; Khorrami, Anahita

    2016-03-01

    The majority of previous functional/effective connectivity studies of autistic patients converge on the underconnectivity theory of ASD: "long-range underconnectivity and sometimes short-range overconnectivity". However, to the best of our knowledge, the total (linear and nonlinear) predictive information transfers (PITs) of autistic patients have not been investigated yet. Also, EEG data have rarely been used for exploring the information processing deficits of autistic subjects. This study compares the total (linear and nonlinear) PITs of autistic and typically developing healthy youths during human face processing, using EEG data. The ERPs of 12 autistic youths and 19 age-matched healthy control (HC) subjects were recorded while they watched upright and inverted human face images. The PITs among EEG channels were quantified using two measures separately: transfer entropy with self-prediction optimality (TESPO), and modified transfer entropy with self-prediction optimality (MTESPO). Afterwards, directed differential connectivity graphs (dDCGs) were constructed to characterize the significant changes in the estimated PITs of autistic subjects compared with HC subjects. Both TESPO and MTESPO revealed a long-range reduction of PITs in the ASD group during face processing (particularly from frontal channels to right temporal channels). The orientation of the face images (upright or inverted) did not appear to significantly modulate the binary pattern of the PIT-based dDCGs. Moreover, compared with TESPO, the results of MTESPO were more compatible with the underconnectivity theory of ASD, in the sense that MTESPO showed no long-range increase in PIT. To the best of our knowledge, this is the first time a version of MTE has been applied to patient data (here, ASD) and its first use for EEG data analysis.
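
    Transfer entropy, the quantity underlying TESPO, can be estimated with a simple plug-in estimator on binarized signals. This sketch uses history length 1 and synthetic data; it is far simpler than the self-prediction-optimality variants used in the paper, but shows the asymmetry that makes TE a directed measure.

```python
import numpy as np
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of TE(x -> y): how much x's past improves prediction
    of y beyond y's own past (history length 1, binary symbols)."""
    xb = (x > x.mean()).astype(int)
    yb = (y > y.mean()).astype(int)
    n = len(xb) - 1
    c3 = Counter(zip(yb[1:], yb[:-1], xb[:-1]))     # (y_next, y_past, x_past)
    c_src = Counter(zip(yb[:-1], xb[:-1]))
    c_pair = Counter(zip(yb[1:], yb[:-1]))
    c_hist = Counter(yb[:-1])
    te = 0.0
    for (yn, yp, xp), c in c3.items():
        p_full = c / c_src[(yp, xp)]                # p(y_next | y_past, x_past)
        p_self = c_pair[(yn, yp)] / c_hist[yp]      # p(y_next | y_past)
        te += (c / n) * log2(p_full / p_self)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 2000).astype(float)
y = np.concatenate(([0.0], x[:-1]))                 # y is driven by x's past
```

    TE(x -> y) is close to 1 bit for this fully coupled pair, while TE(y -> x) is near zero; the paper's dDCGs are built from exactly this kind of directed comparison between channels.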

  2. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  3. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit (100, 200, 300) for computing a sequence of output images on the basis of a sequence of input images comprises: a motion estimation unit (102) for computing a motion vector field on the basis of the input images; a quality measurement unit (104) for computing a value of a

  4. Macro-SICM: A Scanning Ion Conductance Microscope for Large-Range Imaging.

    Science.gov (United States)

    Schierbaum, Nicolas; Hack, Martin; Betz, Oliver; Schäffer, Tilman E

    2018-04-17

    The scanning ion conductance microscope (SICM) is a versatile, high-resolution imaging technique that uses an electrolyte-filled nanopipet as a probe. Its noncontact imaging principle makes the SICM uniquely suited for the investigation of soft and delicate surface structures in a liquid environment. The SICM has found an ever-increasing number of applications in chemistry, physics, and biology. However, a drawback of conventional SICMs is their relatively small scan range (typically 100 μm × 100 μm in the lateral and 10 μm in the vertical direction). We have developed a Macro-SICM with an exceedingly large scan range of 25 mm × 25 mm in the lateral and 0.25 mm in the vertical direction. We demonstrate the high versatility of the Macro-SICM by imaging at different length scales: from centimeters (fingerprint, coin) to millimeters (bovine tongue tissue, insect wing) to micrometers (cellular extensions). We applied the Macro-SICM to the study of collective cell migration in epithelial wound healing.

  5. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    Science.gov (United States)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

    There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details. Empirical operators can maximize the local detail information of an HDR image, but their realism is not strong. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of an objective evaluation prove the effectiveness of the proposed solution.
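
    As an example of a simple perceptual operator, the global Reinhard (photographic) mapping compresses scene luminance into display range. This is a generic illustration of the operator class discussed above, not the framework proposed in the paper.

```python
import numpy as np

def reinhard_global(luminance, key=0.18, eps=1e-6):
    """Global photographic operator: scale by the log-average luminance,
    then compress with L/(1+L) into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)

hdr = np.array([0.01, 1.0, 100.0, 10_000.0])   # synthetic six-decade scene
ldr = reinhard_global(hdr)
```

    The mapping is monotonic and global, which preserves the overall impression of the scene (the "realism" of perceptual operators) but cannot boost local detail; empirical operators trade the other way, which is the tension the paper's framework addresses.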

  6. Observation of plasma-facing-wall via high dynamic range imaging

    International Nuclear Information System (INIS)

    Villamayor, Michelle Marie S.; Rosario, Leo Mendel D.; Viloan, Rommel Paulo B.

    2013-01-01

    Pictures of plasmas and deposits in a discharge chamber, taken at varying shutter speeds, have been integrated into high dynamic range (HDR) images. The HDR images of a graphite target surface of a compact planar magnetron (CPM) discharge device clearly indicate the erosion pattern of the target, which is correlated with the light intensity distribution of the plasma during operation. Based upon the HDR image technique coupled with colorimetry, a formation history of dust-like deposits inside the CPM chamber has been recorded. The obtained HDR images show how the patterns of deposits changed with discharge duration. Results show that deposition takes place near the evacuation ports during the early stage of the plasma discharge. Discoloration of the plasma-facing walls, indicating erosion and redeposition, eventually spreads to the periphery after several hours of operation. (author)
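
    Integrating bracketed shutter-speed exposures into an HDR radiance map can be sketched as a weighted average of per-exposure radiance estimates. This is a generic linear-sensor sketch, not the authors' pipeline; the hat weighting simply suppresses under- and over-exposed pixels.

```python
import numpy as np

def merge_exposures(images, times):
    """Per-pixel radiance = weighted mean of image/exposure_time, with
    weights going to zero near 0 and 255 (noisy or saturated pixels)."""
    imgs = np.asarray(images, float)
    t = np.asarray(times, float).reshape(-1, 1, 1)
    w = 1.0 - 2.0 * np.abs(imgs / 255.0 - 0.5)     # hat weighting
    return (w * imgs / t).sum(0) / np.maximum(w.sum(0), 1e-9)

radiance = np.array([[10.0, 200.0]])               # synthetic scene radiance
times = [1.0, 4.0, 16.0]                           # exposure bracket
shots = [np.clip(radiance * t, 0, 255) for t in times]
recovered = merge_exposures(shots, times)
```

    The bright pixel saturates in the long exposures and the dim pixel is noisy in the short one, yet the merge recovers both, which is what lets the HDR images resolve the faint plasma glow and the bright target region simultaneously.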

  7. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  8. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  9. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real-time system called the Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of: a highly sensitive, high-resolution TV camera driven by a custom-made scanner; a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object; a real-time corrector to perform real-time correction of dark current, shading distortion and field intensity fluctuation; a real-time filter for increasing the image signal-to-noise ratio; a video recording unit for recording on commercially available equipment; and a pseudocolor monitor for viewing on CRTs with standard TV scanning. The TV camera and the TV camera deflection controller utilized for producing still images can also be applied in this case. The block diagram of the real-time corrector is shown and its performance is explained. Linear filters and ranked-order filters were developed. (K.I.)
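
    The dark-current and shading correction performed by the real-time corrector is conventionally the flat-field formula; this sketch is a generic illustration of that step with made-up values, not MDIPS code.

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """Remove dark current and shading: (raw - dark) / (flat - dark),
    renormalized so a uniform scene keeps its mean level."""
    gain = flat - dark
    corrected = (raw - dark) / np.maximum(gain, eps)
    return corrected * gain.mean()

truth = np.array([[100.0, 50.0]])      # true scene intensities
shading = np.array([[1.0, 0.5]])       # vignetting / field nonuniformity
dark = np.array([[5.0, 7.0]])          # dark-current offset
flat = 200.0 * shading + dark          # recorded image of a uniform field
raw = truth * shading + dark           # what the camera records of the scene
out = flat_field_correct(raw, dark, flat)
```

    After correction the pixel ratio matches the true scene (2:1 here) even though shading and dark current distorted the raw values; the MDIPS hardware applies the same arithmetic per frame in real time.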

  10. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    Science.gov (United States)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulated intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m have been achieved. In this paper, we describe the new sub-millimetre-class range imaging system in detail and review the important aspects that have been instrumental in achieving high-precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
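
    In indirect time-of-flight, range follows from the phase of the low-frequency beat signal produced by the heterodyne mixing. The 30 MHz modulation frequency below is an assumption, chosen so that the unambiguous interval comes out at the 5 m figure mentioned in the abstract.

```python
import numpy as np

c = 3e8
f_mod = 30e6                               # modulation frequency (assumed)
ambiguity = c / (2 * f_mod)                # 5 m unambiguous range interval

def range_from_samples(samples):
    """Recover the beat-signal phase from N evenly spaced samples per
    period, then convert phase to distance."""
    n = len(samples)
    k = np.arange(n)
    phase = np.angle(np.sum(samples * np.exp(-2j * np.pi * k / n)))
    return (phase % (2 * np.pi)) / (2 * np.pi) * ambiguity

d_true = 2.0
phi = 2 * np.pi * d_true / ambiguity       # phase delay of a 2 m target
k = np.arange(4)
beat = np.cos(2 * np.pi * k / 4 + phi)     # four samples per beat period
d_est = range_from_samples(beat)
```

    Because the phase wraps every half-wavelength of the modulation, targets beyond the ambiguity interval alias back, which is the range ambiguity problem the paper's method resolves.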

  11. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey on processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process digital holograms for Internet transmission, together with results.
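
    The phase-shift interferometry behind such digital holograms can be sketched with the classic four-step algorithm: four interferograms with reference phase shifts of quarter periods determine the object phase in closed form. The fringe data here are synthetic.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Phase from interferograms with reference shifts 0, pi/2, pi, 3pi/2:
    I_k = a + b*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

phi_true = np.linspace(-1.0, 1.0, 5)       # synthetic object phase
a, b = 2.0, 1.0                            # background level, fringe contrast
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*frames)
```

    The background a and contrast b cancel in the differences, so the recovered phase is exact per pixel; storing this wrapped phase map (plus amplitude) is what makes the hologram a compact digital object suitable for network transmission.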

  12. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
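
    The core ANOVA idea for feature detection can be shown in miniature: compare between-group to within-group variance, so a large F statistic across, say, the two sides of a candidate edge signals a real feature rather than noise. A minimal one-way sketch with illustrative pixel values:

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F: between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

edge = f_statistic([[10, 11, 9], [50, 49, 51]])   # pixels straddling an edge
flat = f_statistic([[10, 11, 9], [10, 9, 11]])    # pixels in a flat region
```

    Thresholding F against the appropriate F-distribution quantile turns this into the hypothesis tests the book applies to line, edge, and object detection.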

  13. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    Science.gov (United States)

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).

  14. A High-Dynamic-Range Optical Remote Sensing Imaging Method for Digital TDI CMOS

    Directory of Open Access Journals (Sweden)

    Taiji Lan

    2017-10-01

    Full Text Available The digital time delay integration (digital TDI) technology of the complementary metal-oxide-semiconductor (CMOS) image sensor has been widely adopted and developed in the optical remote sensing field. However, the details of targets with low illumination or low contrast in high-contrast scenes are often drowned out, because the superposition of multi-stage images in the digital domain multiplies the read noise and the dark noise, thus limiting the imaging dynamic range. Through an in-depth analysis of the information transfer model of digital TDI, this paper explores effective ways to overcome this issue. Based on the evaluation and analysis of multi-stage images, the entropy-maximized adaptive histogram equalization (EMAHE) algorithm is proposed to improve the ability of images to express the details of dark or low-contrast targets. Furthermore, an image fusion method is utilized based on gradient pyramid decomposition and entropy weighting of different TDI-stage images, which can improve the detection ability of the digital TDI CMOS for complex scenes with high contrast and yield images that are suitable for recognition by the human eye. The experimental results show that the proposed methods can effectively improve the high-dynamic-range imaging (HDRI) capability of the digital TDI CMOS. The obtained images have greater entropy and larger average gradients.
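
    Plain histogram equalization, the starting point that EMAHE adapts, can be sketched as follows; the entropy-maximizing adaptation itself is not reproduced here.

```python
import numpy as np

def equalize(img, levels=256):
    """Map grey levels through the normalized cumulative histogram so the
    output levels spread over the full dynamic range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

low_contrast = np.tile(np.arange(100, 121, dtype=np.uint8), (8, 1))
stretched = equalize(low_contrast)
```

    On this synthetic low-contrast strip the 21-level input is stretched across nearly the full 0-255 range, which is the mechanism EMAHE tunes (via an entropy criterion) to bring out dark or low-contrast targets.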

  15. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general-purpose technique is the maximum entropy method, the basis and use of which will be explained. (orig.)
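
In standard notation (the symbols are not from the abstract), the maximum entropy method selects the image f that maximizes entropy relative to a prior model m, subject to consistency with the data d observed through a response R:

```latex
\max_{f}\; S(f) = -\sum_i f_i \ln\frac{f_i}{m_i}
\quad\text{subject to}\quad
\chi^2(f) = \sum_k \frac{\bigl(d_k - \sum_i R_{ki} f_i\bigr)^2}{\sigma_k^2} \le N .
```

Stationarity of the Lagrangian $S - \tfrac{\lambda}{2}\chi^2$ gives the implicit solution

```latex
f_i = m_i \exp\!\Bigl(-1 + \lambda \sum_k \frac{R_{ki}\,\bigl(d_k - (Rf)_k\bigr)}{\sigma_k^2}\Bigr),
```

which is strictly positive, one reason the method avoids introducing spurious (e.g. negative) features into the restored image.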

  16. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  17. Finite Range Decomposition of Gaussian Processes

    CERN Document Server

    Brydges, C D; Mitter, P K

    2003-01-01

    Let $D$ be the finite difference Laplacian associated to the lattice $\mathbb{Z}^{d}$. For dimension $d \ge 3$, $a \ge 0$ and $L$ a sufficiently large positive dyadic integer, we prove that the integral kernel of the resolvent $G^{a}:=(a-D)^{-1}$ can be decomposed as an infinite sum of positive semi-definite functions $V_{n}$ of finite range, $V_{n}(x-y) = 0$ for $|x-y| \ge O(L)^{n}$. Equivalently, the Gaussian process on the lattice with covariance $G^{a}$ admits a decomposition into independent Gaussian processes with finite range covariances. For $a=0$, $V_{n}$ has a limiting scaling form $L^{-n(d-2)}\Gamma_{c,\ast}\bigl(\frac{x-y}{L^{n}}\bigr)$ as $n \rightarrow \infty$. As a corollary, such decompositions also exist for fractional powers $(-D)^{-\alpha/2}$, $0

  18. In flight image processing on multi-rotor aircraft for autonomous landing

    Science.gov (United States)

    Henry, Richard, Jr.

    An estimated $6.4 billion was spent worldwide during 2013 on developing drone technology, a figure expected to double in the next decade. However, drone applications typically require strong pilot skills, safety awareness, responsibility, and adherence to regulations during flight. If the flight control process could be made safer and more reliable in terms of landing, it would be possible to develop a wider range of applications. The objective of this research effort is to describe the design and evaluation of a fully autonomous Unmanned Aerial System (UAS), specifically a four-rotor aircraft, commonly known as a quadcopter, for precise landing applications. Full landing autonomy is achieved by image processing during flight for target recognition, employing the open source library OpenCV. In addition, all imaging data is processed by a single embedded computer that estimates a relative position with respect to the target landing pad. Results show a 67.88% reduction in the average offset error in comparison to the current return to launch (RTL) method, which relies only on GPS positioning. The present work validates the need for relying on image processing for precise landing applications instead of depending on inexact, low-cost commercial GPS.
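
As a simplified stand-in for the on-board OpenCV target-recognition step, the relative position of the landing pad can be taken as the centroid offset of a segmented target from the image center (the function name and the simulated blob below are hypothetical, not from the thesis):

```python
import numpy as np

def target_offset(binary_mask):
    """Pixel offset of the target centroid from the image center
    (a numpy stand-in for an OpenCV contour/moment pipeline)."""
    ys, xs = np.nonzero(binary_mask)
    if xs.size == 0:
        return None  # target not in view
    cy, cx = ys.mean(), xs.mean()
    h, w = binary_mask.shape
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0  # (dx, dy) in pixels

frame = np.zeros((120, 160), dtype=bool)
frame[50:60, 90:100] = True   # simulated segmented landing pad
dx, dy = target_offset(frame)
```

A flight controller would feed (dx, dy), scaled by altitude and camera intrinsics, into the lateral position loop to null the offset before descending.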

  19. TU-FG-BRB-05: A 3 Dimensional Prompt Gamma Imaging System for Range Verification in Proton Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, E; Chen, H; Polf, J [University of Maryland School of Medicine, Baltimore, MD (United States); Mackin, D; Beddar, S [MD Anderson Cancer Center, Houston, TX (United States); Avery, S [University of Cape Town, Rondebosch (South Africa); Peterson, S

    2016-06-15

    Purpose: To report on the initial development of a clinical 3-dimensional (3D) prompt gamma (PG) imaging system for proton radiotherapy range verification. Methods: The new imaging system under development consists of a prototype Compton camera (CC) to measure PG emission during proton beam irradiation and software to reconstruct, display, and analyze 3D images of the PG emission. For initial tests of the system, PGs were measured with the prototype CC during a 200 cGy dose delivery with clinical proton pencil beams (ranging from 100 to 200 MeV) to a water phantom. Measurements were also carried out with the CC placed 15 cm from the phantom for a full-range 150 MeV pencil beam and with its range shifted by 2 mm. Reconstructed images of the PG emission were displayed by the clinical PG imaging software and compared to the dose distributions of the proton beams calculated by a commercial treatment planning system. Results: Measurements made with the new PG imaging system showed that a 3D image could be reconstructed from PGs measured during the delivery of 200 cGy of dose, and that shifts in the Bragg peak range of as little as 2 mm could be detected. Conclusion: Initial tests of the new PG imaging system show its potential to provide 3D imaging and range verification for proton radiotherapy. Based on these results, we have begun work to improve the system with the goal that images can be produced from the delivery of as little as 20 cGy, so that the system could be used for in-vivo proton beam range verification on a daily basis.

  20. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, to detect and fill cracks, and their applications in areas such as nanotechnology and biotechnology. In the third subject, we attempt to construct 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++. The fourth subject covers image warping methods for finding the depth of 2-D images using affine transformation, bilinear transformation, projective mapping, mosaic warping, and similarity transformation; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including methods for creating Bezier curves and surfaces with unknown distribution using only control points. At the end of the discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), a mathematical representation of 3-D geometry defined by degrees of freedom, control points, knots, and an evaluation rule, which can accurately describe any shape from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid. This depends on finding the Bezier curve, creating a family of curves (a surface), and then filling in between to obtain the solid form. Another part of this subject concerns building 3-D geometric models from physical objects using image-based techniques; the advantage of image-based techniques is that they require no expensive equipment. We use NURBS, subdivision surfaces, and meshes to find the depth of any image from one still view or a 2-D image. The quality of filtering depends on the way the data is incorporated into the model.
The data should be treated with

  1. High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors †

    Science.gov (United States)

    Mattioli Della Rocca, Francescopaolo

    2018-01-01

    This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with in-pixel temporal oversampling. We present a silicon demonstration IC with a 96 × 40 array of 8.25 µm pitch, 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame, providing 3.75× data compression; hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on-chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
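
A common way to combine short/mid/long back-to-back exposures is to keep, per pixel, the longest exposure that did not saturate and normalize by exposure time. The sketch below shows that generic multi-exposure merge with assumed saturation levels and exposure times; the chip's in-pixel summation scheme differs:

```python
import numpy as np

MAX_COUNT = 255  # assumed counter saturation level

def merge_exposures(counts, times):
    """Merge photon-count frames with different exposure times: per pixel,
    take the longest unsaturated exposure and convert to counts per unit
    time (a generic multi-exposure HDR scheme)."""
    counts = np.stack(counts)                 # (n_exp, H, W), short -> long
    times = np.asarray(times, float).reshape(-1, 1, 1)
    valid = counts < MAX_COUNT
    # index of the last (longest) unsaturated exposure per pixel
    idx = valid.shape[0] - 1 - np.argmax(valid[::-1], axis=0)
    rates = counts / times
    return np.take_along_axis(rates, idx[None], axis=0)[0]

rng = np.random.default_rng(1)
flux = 10.0 ** rng.uniform(-1, 3, (32, 32))   # true photon rate, 4 decades
times = [0.1, 1.0, 10.0]
frames = [np.minimum(flux * t, MAX_COUNT) for t in times]  # noiseless capture
hdr = merge_exposures(frames, times)
```

In this noiseless simulation the merged rate map reproduces the true flux across the full four-decade range, even though no single exposure covers it.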

  2. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, and physics, or to other areas such as employee training and charged software consumption.

  3. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    The anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for the automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.

  4. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared in the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes even less than 50 kilograms, making them slightly larger or smaller than common miniature refrigerators. However, the optical system design can hardly be perfect due to satellite volume and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs; spatial resolution is the key problem. For remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super resolution techniques to deal with the spatial resolution shortcomings of microsatellites. We test the remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
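
One standard pan-sharpening baseline (the Brovey transform, named here as a generic technique, not the paper's framework) rescales the upsampled multispectral bands by the ratio of the panchromatic band to the multispectral intensity:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan-sharpening.
    ms: (bands, H, W), already upsampled to the pan grid; pan: (H, W)."""
    intensity = ms.mean(axis=0)
    return ms * pan / (intensity + 1e-12)   # inject pan detail, keep band ratios

rng = np.random.default_rng(2)
H, W = 8, 8
ms_low = rng.random((3, H // 2, W // 2))                     # coarse MS bands
ms_up = np.repeat(np.repeat(ms_low, 2, axis=1), 2, axis=2)   # nearest-neighbor upsample
pan = rng.random((H, W))                                     # fine panchromatic band
sharp = brovey_pansharpen(ms_up, pan)
```

By construction the band-averaged intensity of the result equals the pan image, so spatial detail comes from pan while the spectral ratios of the multispectral bands are preserved.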

  5. Advanced Secure Optical Image Processing for Communications

    Science.gov (United States)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and longer processing times become necessary, while high image quality and transmission speeds are requested for an increasing number of applications. To satisfy these two demands, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the purpose of combining the advantages of both techniques. Additionally, purely numerical or optical solutions are also considered, since they emphasize the advantages of each of the two approaches separately.

  6. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for image processing departments and for scientists and engineers doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  7. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques using fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experiments are carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. The power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
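
The PSD used for vibration analysis can be estimated from a time series of frame-averaged flame intensity with a simple periodogram; the frame rate and oscillation frequency below are assumed for illustration:

```python
import numpy as np

def periodogram_psd(signal, fs):
    """Single-sided periodogram estimate of the power spectral density."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

fs = 1000.0                                  # assumed camera frame rate, Hz
t = np.arange(2048) / fs
flicker = np.sin(2 * np.pi * 120.0 * t)      # simulated 120 Hz flame oscillation
noisy = flicker + 0.1 * np.random.default_rng(3).standard_normal(t.size)
freqs, psd = periodogram_psd(noisy, fs)
peak_hz = freqs[np.argmax(psd)]              # dominant oscillation frequency
```

The location and sharpness of the PSD peak are the kind of features a fuzzy inference system can map to a stability score.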

  8. An integral design strategy combining optical system and image processing to obtain high resolution images

    Science.gov (United States)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. The Wiener filter algorithm is then adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high resolution images, which gives it a promising perspective for industrial application.
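
The Wiener filter step can be sketched in the frequency domain: with blur transfer function H and degraded spectrum G, the restored spectrum is conj(H)·G/(|H|² + k), where k absorbs the noise-to-signal ratio. A minimal simulation with an assumed box-blur PSF and MSE as the criterion:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter: F = conj(H) / (|H|^2 + k) * G."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0      # assumed 3x3 box blur as the optical degradation
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-6)

mse = lambda a, b: np.mean((a - b) ** 2)
```

In the joint-design setting, the optical merit function can tolerate a blur like H as long as it is well conditioned enough for this inversion, which is exactly the cooperation the paper exploits.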

  9. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery, and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
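
The core of entropy-based block processing is to score non-overlapping blocks by Shannon entropy and register using only the most informative ones. A sketch under assumed block and histogram sizes:

```python
import numpy as np

def block_entropies(img, bs):
    """Shannon entropy of each non-overlapping bs x bs block."""
    h, w = img.shape
    ents = np.zeros((h // bs, w // bs))
    for i in range(h // bs):
        for j in range(w // bs):
            block = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            hist, _ = np.histogram(block, bins=64, range=(0.0, 1.0))
            p = hist[hist > 0] / block.size
            ents[i, j] = -np.sum(p * np.log2(p))
    return ents

rng = np.random.default_rng(5)
img = np.zeros((128, 128))
img[:64, :64] = rng.random((64, 64))   # textured quadrant; the rest is flat
ents = block_entropies(img, 32)
# register using only the highest-entropy blocks (e.g. top 25%)
selected = ents >= np.quantile(ents, 0.75)
```

Flat blocks carry no registration cues and score zero, so restricting the matching to high-entropy blocks is what cuts the processing time on large satellite scenes.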

  10. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    Science.gov (United States)

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
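
The Gaussian-modeling idea for saturation-clipped bands can be illustrated with a classic trick: the log of a Gaussian is a parabola, so a parabola fitted to the unclipped samples recovers the band's center and peak amplitude. This shows the principle only and is not KODAK 1D's actual algorithm:

```python
import numpy as np

# Synthetic band profile: a Gaussian clipped by detector saturation.
x = np.linspace(-5, 5, 101)
A, mu, sigma = 1000.0, 0.3, 1.2
profile = A * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
clipped = np.minimum(profile, 700.0)        # saturation at 700 counts

# Fit a parabola to log(counts) using only the unclipped samples:
# log y = c2*x^2 + c1*x + c0 with c2 = -1/(2*sigma^2).
ok = clipped < 700.0
c2, c1, c0 = np.polyfit(x[ok], np.log(clipped[ok]), 2)
mu_hat = -c1 / (2 * c2)                     # recovered band center
A_hat = np.exp(c0 - c1 ** 2 / (4 * c2))     # recovered (unclipped) peak height
```

The recovered amplitude exceeds the saturation level, which is how Gaussian modeling restores quantitation lost to clipping at low bit depths.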

  11. Target acquisition performance : Effects of target aspect angle, dynamic imaging and signal processing

    NARCIS (Netherlands)

    Beintema, J.A.; Bijl, P.; Hogervorst, M.A.; Dijk, J.

    2008-01-01

    In an extensive Target Acquisition (TA) performance study, we recorded static and dynamic imagery of a set of military and civilian two-handheld objects at a range of distances and aspect angles with an under-sampled uncooled thermal imager. Next, we applied signal processing techniques including

  12. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction. Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  13. Fusing range and intensity images for generating dense models of three-dimensional environments

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Miró, Jaime Valls; Dissanayake., Gamini

    This paper presents a novel strategy for the construction of dense three-dimensional environment models by combining images from a conventional camera and a range imager. Robust data association is first accomplished by exploiting the Scale Invariant Feature Transformation (SIFT) technique...

  14. A Low-Power High-Dynamic-Range Receiver System for In-Probe 3-D Ultrasonic Imaging.

    Science.gov (United States)

    Attarzadeh, Hourieh; Xu, Ye; Ytterdal, Trond

    2017-10-01

    In this paper, a dual-mode low-power, high dynamic-range receiver circuit is designed for the interface with a capacitive micromachined ultrasonic transducer. The proposed ultrasound receiver chip enables the development of an in-probe digital beamforming imaging system. The flexibility of having two operation modes offers a high dynamic range with minimum power sacrifice. A prototype of the chip containing one receive channel, with one variable transimpedance amplifier (TIA) and one analog to digital converter (ADC) circuit is implemented. Combining variable gain TIA functionality with ADC gain settings achieves an enhanced overall high dynamic range, while low power dissipation is maintained. The chip is designed and fabricated in a 65 nm standard CMOS process technology. The test chip occupies an area of 76[Formula: see text] 170 [Formula: see text]. A total average power range of 60-240 [Formula: see text] for a sampling frequency of 30 MHz, and a center frequency of 5 MHz is measured. An instantaneous dynamic range of 50.5 dB with an overall dynamic range of 72 dB is obtained from the receiver circuit.

  15. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  16. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields, and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been a tedious process; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with the details of the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  17. Positron range in PET imaging: an alternative approach for assessing and correcting the blurring

    DEFF Research Database (Denmark)

    Jødal, Lars; Le Loirec, Cindy; Champion, Christophe

    2012-01-01

    Background: Positron range impairs resolution in PET imaging, especially for high-energy emitters and for small-animal PET. De-blurring in image reconstruction is possible if the blurring distribution is known. Further, the percentage of annihilation events within a given distance from the point...... on allowed-decay isotopes. Methods: It is argued that blurring at the detection level should not be described by positron range r, but instead the 2D-projected distance δ (equal to the closest distance between decay and line-of-response). To determine these 2D distributions, results from a dedicated positron...... is important for improved resolution in PET imaging. Relevant distributions for positron range have been derived for seven isotopes. Distributions for other allowed-decay isotopes may be estimated with the above formulas....
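
The 2D-projected distance δ (the closest distance between the decay point and the line of response) relates to the 3D range r by simple geometry: for an isotropically oriented LOR, δ = r·sin θ with cos θ uniform, so E[δ] = (π/4)·E[r]. A Monte Carlo sketch with an assumed, purely illustrative exponential 3D range distribution:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
r = rng.exponential(scale=0.6, size=n)   # illustrative 3D positron range, mm
u = rng.uniform(-1.0, 1.0, size=n)       # cos(angle between range vector and LOR)
delta = r * np.sqrt(1.0 - u ** 2)        # perpendicular distance to the LOR

mean_ratio = delta.mean() / r.mean()     # expected to approach pi/4
```

Because δ never exceeds r, the projected blurring kernel is systematically tighter than the raw 3D range distribution, which is why δ, not r, is the right quantity for de-blurring at the detection level.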

  18. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structure description for quantitative visualization of treatment light distribution in the human body when incorporated with 3D light transport simulation methods. Here we used image enhancement, operators, and morphometry methods to extract the accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. Then we utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way. We also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning based automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  19. Application of off-line image processing for optimization in chest computed radiography using a low cost system.

    Science.gov (United States)

    Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato

    2015-03-08

     The objective of this study was to improve the visibility of anatomical details by applying off-line image post-processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were applied to sample images, and their visual appearance was confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but it should be implemented in consultation with the radiologists.
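
The two technique families the study reports as effective, intensity value adjustment and spatial linear filtering, can be sketched as a window/level remap followed by a 3×3 mean filter. This is a hypothetical example of the kind of MATLAB-style routines involved, not the study's actual code:

```python
import numpy as np

def window_level(img, lo, hi):
    """Linear intensity-value adjustment: map [lo, hi] to [0, 1] and clip."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def smooth3(img):
    """3x3 mean filter (spatial linear filtering) with edge replication."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(7)
chest = rng.random((32, 32)) * 0.5 + 0.25    # synthetic mid-range intensities
enhanced = smooth3(window_level(chest, 0.25, 0.75))
```

Windowing stretches the clinically relevant intensity band across the display range, and the linear filter suppresses noise; both are the kind of combinations the study found to improve detail visibility at 60-70 kVp.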

  20. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  1. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... and spatial co-ordinates into discrete components. The mathematical concepts involved are sampling and transform theory. Two-dimensional transforms are used for image enhancement, restoration, encoding and description too. The main objective of the image...

  2. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.
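
    A minimal illustration of the kind of constraint such topology information enables: a seed fill whose pixel connectivity is an explicit parameter. This is a generic Python sketch, not the paper's ITK code:

```python
from collections import deque
import numpy as np

# Explicit digital-topology neighbourhoods (4- and 8-connectivity).
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def seed_fill(img, seed, new_label, connectivity=N4):
    """Flood fill that respects the chosen neighbourhood structure."""
    out = img.copy()
    target = out[seed]
    if target == new_label:
        return out
    q = deque([seed])
    out[seed] = new_label
    while q:
        r, c = q.popleft()
        for dr, dc in connectivity:
            rr, cc = r + dr, c + dc
            if 0 <= rr < out.shape[0] and 0 <= cc < out.shape[1] \
                    and out[rr, cc] == target:
                out[rr, cc] = new_label
                q.append((rr, cc))
    return out
```

    On a 2 x 2 image with two diagonal foreground pixels, filling from one corner reaches the other corner under 8-connectivity but not under 4-connectivity, which is exactly the distinction the topological constraint encodes.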

  3. Segmentation of laser range radar images using hidden Markov field models

    International Nuclear Information System (INIS)

    Pucar, P.

    1993-01-01

    Segmentation of images in the context of model-based stochastic techniques is associated with high, very often impractical, computational complexity. The objective of this thesis is to take the models used in model-based image processing, simplify them, and use them in suboptimal but computationally undemanding algorithms. Algorithms that are essentially one-dimensional, and their extensions to two dimensions, are given. The model used in this thesis is the well known hidden Markov model. Estimation of the number of hidden states from observed data is a problem that is addressed. The state order estimation problem is of general interest and is not specifically connected to image processing. An investigation of three state order estimation techniques for hidden Markov models is given. 76 refs
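
    The core one-dimensional tool behind such suboptimal algorithms is Viterbi decoding of a hidden Markov model along a scan line. A generic numpy sketch (not the thesis code); all probabilities are illustrative:

```python
import numpy as np

def viterbi(log_em, log_trans, log_init):
    """Most likely hidden-state sequence for a 1-D observation row.

    log_em:    (T, K) log-emission probabilities per step and state.
    log_trans: (K, K) log-transition matrix, log_trans[i, j] = log P(j|i).
    log_init:  (K,)   log initial state probabilities.
    """
    T, K = log_em.shape
    delta = log_init + log_em[0]          # best score ending in each state
    back = np.zeros((T, K), dtype=int)    # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # (prev state, next state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_em[t]
    states = np.empty(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):             # trace back
        states[t - 1] = back[t, states[t]]
    return states
```

    Run row by row, this gives exactly the kind of essentially one-dimensional segmentation the thesis extends to two dimensions.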

  4. New segmentation-based tone mapping algorithm for high dynamic range image

    Science.gov (United States)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for the display of high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to get the final result. The experimental results show that the proposed algorithm achieves better performance, in both visual quality and an objective contrast criterion, than other algorithms.
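
    The paper's LBP- and histogram-based region selection is not reproduced here; a simplified sketch using plain luminance thresholds (thresholds and gamma values are illustrative assumptions) shows the region-wise mapping idea:

```python
import numpy as np

def regionwise_tonemap(hdr, t_low=0.05, t_high=0.8,
                       g_under=0.5, g_normal=0.8, g_over=1.5):
    """Map each exposure region with its own gamma curve.

    hdr: luminance image (float) normalised to [0, 1].
    """
    out = np.empty_like(hdr)
    under = hdr < t_low
    over = hdr > t_high
    normal = ~(under | over)
    out[under] = hdr[under] ** g_under     # gamma < 1 brightens shadows
    out[normal] = hdr[normal] ** g_normal
    out[over] = hdr[over] ** g_over        # gamma > 1 compresses highlights
    return out
```

    Differentiating the curves per region preserves local contrast that a single global curve would flatten, which is the motivation the abstract gives.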

  5. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images

    NARCIS (Netherlands)

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with a high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute

  6. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  7. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  8. High-dynamic-range coherent diffractive imaging: ptychography using the mixed-mode pixel array detector

    Energy Technology Data Exchange (ETDEWEB)

    Giewekemeyer, Klaus, E-mail: klaus.giewekemeyer@xfel.eu [European XFEL GmbH, Hamburg (Germany); Philipp, Hugh T. [Cornell University, Ithaca, NY (United States); Wilke, Robin N. [Georg-August-Universität Göttingen, Göttingen (Germany); Aquila, Andrew [European XFEL GmbH, Hamburg (Germany); Osterhoff, Markus [Georg-August-Universität Göttingen, Göttingen (Germany); Tate, Mark W.; Shanks, Katherine S. [Cornell University, Ithaca, NY (United States); Zozulya, Alexey V. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Salditt, Tim [Georg-August-Universität Göttingen, Göttingen (Germany); Gruner, Sol M. [Cornell University, Ithaca, NY (United States); Cornell University, Ithaca, NY (United States); Kavli Institute of Cornell for Nanoscience, Ithaca, NY (United States); Mancuso, Adrian P. [European XFEL GmbH, Hamburg (Germany)

    2014-08-07

    The advantages of a novel wide-dynamic-range hard X-ray detector are demonstrated for (ptychographic) coherent X-ray diffractive imaging. Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single-photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with very modest attenuation, while 'still' images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstructing non-sensitive detector regions, which also partially extends the active detector area, are described.

  9. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 90's due to rapid advances in DSP performance and VLSI technology that have made it possible to employ more sophisticated algorithms. This paper describes 'main stream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.
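
    Two of the 'main stream' B-mode functions, envelope detection and log compression, can be sketched for a single RF scan line. The sampling rate, pulse frequency and dynamic range below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def bmode_line(rf, dynamic_range_db=60.0):
    """One B-mode scan line: envelope detection, then log compression."""
    env = np.abs(hilbert(rf))                  # analytic-signal envelope
    env /= env.max()                           # normalise to peak
    db = 20.0 * np.log10(np.maximum(env, 1e-12))
    # Map [-dynamic_range_db, 0] dB onto display range [0, 1].
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)

# Simulated RF line: 5 MHz pulse sampled at 40 MHz with decaying echoes.
t = np.arange(2048) / 40e6
rf = np.exp(-t / 20e-6) * np.sin(2 * np.pi * 5e6 * t)
line = bmode_line(rf)
```

    Repeating this per beamformed line and scan-converting the result yields the familiar B-mode image; the remaining topics in the abstract (beamforming, focusing) sit upstream of this step.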

  10. Low-power low-noise mixed-mode VLSI ASIC for infinite dynamic range imaging applications

    Science.gov (United States)

    Turchetta, Renato; Hu, Y.; Zinzius, Y.; Colledani, C.; Loge, A.

    1998-11-01

    Solid state solutions for imaging are mainly represented by CCDs and, more recently, by CMOS imagers. Both devices are based on the integration of the total charge generated by the impinging radiation, with no processing of the single-photon information. The dynamic range of these devices is intrinsically limited by the finite value of the noise. Here we present the design of an architecture which allows efficient, in-pixel noise reduction to a practically zero level, thus allowing infinite dynamic range imaging. A detailed calculation of the dynamic range is worked out, showing that noise is efficiently suppressed. This architecture is based on the concept of single-photon counting. In each pixel, we integrate both the front-end, low-noise, low-power analog part and the digital part. The former consists of a charge preamplifier, an active filter for optimal noise bandwidth reduction, a buffer and a threshold comparator; the latter is simply a counter, which can be programmed to act as a normal shift register for the readout of the counters' contents. Two different ASICs based on this concept have been designed for different applications. The first one has been optimized for silicon edge-on microstrip detectors, used in a digital mammography R and D project. It is a 32-channel circuit with a 16-bit binary static counter, optimized for a relatively large detector capacitance of 5 pF. Noise has been measured to be equal to 100 + 7*Cd (pF) electrons rms with the digital part, showing no degradation of the noise performance with respect to the design values. The power consumption is 3.8 mW/channel for a peaking time of about 1 µs. The second circuit is a prototype for pixel imaging. The total active area is about (250 µm)². The main differences of the electronic architecture with respect to the first prototype are: i) a different optimization of the analog front-end part for low-capacitance detectors, ii) an in-pixel 4-bit comparator
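
    The quoted noise model is easy to check numerically; a one-line sketch of the equivalent noise charge as a function of detector capacitance:

```python
def enc_electrons_rms(cd_pf):
    """Measured noise model quoted in the abstract:
    ENC = 100 + 7 * Cd, with Cd in pF and the result in electrons rms."""
    return 100 + 7 * cd_pf

# For the 5 pF microstrip detector the chip was optimized for,
# the model predicts an ENC of 135 electrons rms.
enc_at_design = enc_electrons_rms(5)
```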

  11. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  12. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. 
Line profiles of
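
    The complete-set (CSR) combination can be sketched as interleaving an N x N grid of shifted low-resolution images onto a fine grid; the ISR variants would use only the corresponding subset (two sides or the diagonal). The shift-to-offset convention below is an assumption for illustration:

```python
import numpy as np

def combine_csr(lowres):
    """Interleave an N x N grid of shifted low-resolution images.

    lowres[i][j] is assumed to be the reconstruction whose grid was
    shifted by (i/N, j/N) of a pixel; each feeds one high-res sub-grid.
    """
    n = len(lowres)
    h, w = lowres[0][0].shape
    hi = np.zeros((h * n, w * n))
    for i in range(n):
        for j in range(n):
            hi[i::n, j::n] = lowres[i][j]
    return hi
```

    Using only a subset, as in ISR-1 and ISR-2, leaves some sub-grids unfilled; the paper's point is that interpolating those costs little contrast or SNR while cutting processing time and storage.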

  13. Bio-inspired approach to multistage image processing

    Science.gov (United States)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which compactly encapsulates structure at different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  14. Real-time imaging as an emerging process analytical technology tool for monitoring of fluid bed coating process.

    Science.gov (United States)

    Naidu, Venkata Ramana; Deshpande, Rucha S; Syed, Moinuddin R; Wakte, Pravin S

    2018-07-01

    A direct imaging system (Eyecon™) was used as a Process Analytical Technology (PAT) tool to monitor a fluid bed coating process. Eyecon™ generated real-time on-screen images and particle size and shape information for two identically manufactured laboratory-scale batches. Eyecon™ has an accuracy of ±1 μm when measuring the particle size increase of particles in the size range of 50-3000 μm, and captured data every 2 s during the entire process. The moving average of the D90 particle size values recorded by Eyecon™ was calculated every 30 min to derive the radial coating thickness of the coated particles. After the completion of the coating process, the radial coating thickness was found to be 11.3 and 9.11 μm, with standard deviations of ±0.68 and ±1.8 μm for Batch 1 and Batch 2, respectively. The coating thickness was also correlated with percent weight build-up by gel permeation chromatography (GPC) and dissolution. GPC indicated weight build-ups of 10.6% and 9.27% for Batch 1 and Batch 2, respectively. In conclusion, a weight build-up of 10% can be correlated with a 10 ± 2 μm increase in the coating thickness of pellets, indicating the potential applicability of real-time imaging as an endpoint determination tool for the fluid bed coating process.
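
    The geometry behind the thickness estimate is simple: coating grows the particle on both sides of its diameter, so the radial thickness is half the D90 increase. A sketch of the calculation, with a moving average as described (the window length is illustrative):

```python
import numpy as np

def moving_average(x, window):
    """Simple moving average over `window` consecutive samples."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='valid')

def radial_coating_thickness(d90_start_um, d90_end_um):
    """Coating adds to the diameter on both sides, so the radial
    thickness is half the observed D90 increase (all values in µm)."""
    return (d90_end_um - d90_start_um) / 2.0
```

    For example, a D90 increase from 100 µm to 122.6 µm over the run corresponds to a radial coating thickness of 11.3 µm, matching the order of the Batch 1 result.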

  15. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed on a Personal Computer equipped with image processing hardware and performs automated measurement on plane metal specimens used in fatigue testing. Normally one can not achieve...... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  16. Automated processing of X-ray images in medicine

    International Nuclear Information System (INIS)

    Babij, Ya.S.; B'yalyuk, Ya.O.; Yanovich, I.A.; Lysenko, A.V.

    1991-01-01

    Theoretical and practical achievements in the application of computing technology to the processing of X-ray images in medicine are generalized. A scheme of the main directions and tasks of X-ray image processing is given and analyzed. The principal problems that appear in automated processing of X-ray images are distinguished. It is shown that for the interpretation of X-ray images it is expedient to introduce the notion of a relative operating characteristic (ROC) of a roentgenologist. Every point on the ROC curve determines the individual criterion of a roentgenologist for making a positive diagnosis in a definite situation.

  17. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, from the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, microarray datasets for Breast cancer, Myeloid Leukemia and Lymphomas from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective denoising method for reducing speckle, but it also blurs the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion. The restoration of degraded objects in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
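
    The first fold can be sketched with a one-level Haar transform and hard/soft thresholding applied per block. The paper uses adaptive thresholds and several block sizes; the fixed threshold and single Haar level here are illustrative assumptions:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform of an even-sized block."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(cA, cH, cV, cD):
    """Exact inverse of haar2."""
    out = np.empty((cA.shape[0] * 2, cA.shape[1] * 2))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2
    out[0::2, 1::2] = (cA - cH + cV - cD) / 2
    out[1::2, 0::2] = (cA + cH - cV - cD) / 2
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return out

def threshold(x, thr, mode='soft'):
    """Hard or soft thresholding of wavelet detail coefficients."""
    if mode == 'hard':
        return np.where(np.abs(x) > thr, x, 0.0)
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def block_denoise(img, block=8, thr=10.0, mode='soft'):
    """First fold: block-wise wavelet thresholding (BHT/BST sketch).
    `block` must be even and divide both image dimensions."""
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            cA, cH, cV, cD = haar2(img[r:r+block, c:c+block].astype(float))
            out[r:r+block, c:c+block] = ihaar2(
                cA, threshold(cH, thr, mode),
                threshold(cV, thr, mode), threshold(cD, thr, mode))
    return out
```

    The approximation band is left untouched so only speckle-dominated detail coefficients are suppressed; the paper's second fold then fuses coefficients from the original image back in to undo the blurring this step introduces.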

  19. An improved method to estimate reflectance parameters for high dynamic range imaging

    Science.gov (United States)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture the input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for explicit separation of diffuse and specular reflection components. In the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate the diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and the specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out with both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and with the second method on spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
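
    The second method's two sequential least-squares fits can be sketched using the log-linearity of a simplified Torrance-Sparrow lobe in α². Here cos_i and cos_r are the illumination and viewing cosines and alpha is the half-angle; the model form and all symbols are an illustrative simplification of the paper's equations:

```python
import numpy as np

def fit_reflectance(cos_i, cos_r, alpha, I):
    """Estimate diffuse kd, gloss ks and roughness sigma by two
    sequential least-squares fits (sketch of the paper's second method)."""
    # Step 1: assume all reflection is diffuse (Lambertian); fit kd.
    kd = np.linalg.lstsq(cos_i[:, None], I, rcond=None)[0][0]
    # Step 2: the residual is treated as specular. A simplified
    # Torrance-Sparrow lobe gives, after log transformation,
    #   ln(I_s * cos_r) = ln(ks) - alpha^2 / (2 sigma^2),
    # which is linear in alpha^2.
    I_s = np.maximum(I - kd * cos_i, 1e-9)
    y = np.log(I_s * cos_r)
    A = np.column_stack([np.ones_like(alpha), alpha ** 2])
    (ln_ks, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
    return kd, np.exp(ln_ks), np.sqrt(-1.0 / (2.0 * slope))
```

    The intercept of the second fit gives the gloss intensity and the slope gives the surface roughness, which is why no explicit separation of the two components is needed up front.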

  20. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  1. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observation with an ordinary microscope requires precision and visual acuity from the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including an image processing utility that allows users to capture, store and process digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and the image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with the user's needs. The digital microscope eliminates the need for direct observation by the human eye as with the traditional microscope.
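
    Histogram equalization, one of the utility's listed operations, can be sketched for an 8-bit grayscale image (a generic textbook method, not the authors' code):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for a non-constant 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF at the darkest occupied bin
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]                    # apply the look-up table per pixel
```

    The look-up table maps the darkest occupied gray level to 0 and the brightest to 255, stretching a low-contrast micrograph across the full display range.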

  2. Image processing on the image with pixel noise bits removed

    Science.gov (United States)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
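
    The bit split itself is a mask operation; a sketch of removing k low-order noise bits and computing the Sobel magnitude used for the comparison (generic operations, not the authors' code):

```python
import numpy as np
from scipy.ndimage import sobel

def remove_noise_bits(img, noise_bits):
    """Zero the low-order (noise) bits, keeping only the signal bits."""
    mask = ~np.uint16((1 << noise_bits) - 1)
    return img & mask

def sobel_magnitude(img):
    """Sobel edge magnitude, for comparing original vs. bit-masked images."""
    f = img.astype(float)
    return np.hypot(sobel(f, axis=0), sobel(f, axis=1))
```

    Because the masked bits carry mostly noise, edge maps of the original and the noise-bits-removed image should be visually indistinguishable, which is the paper's observation.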

  3. Soil transference patterns on bras: Image processing and laboratory dragging experiments.

    Science.gov (United States)

    Murray, Kathleen R; Fitzpatrick, Robert W; Bottrill, Ralph S; Berry, Ron; Kobus, Hilton

    2016-01-01

    In a recent Australian homicide, trace soil on the victim's clothing suggested she was initially attacked in her front yard and not the park where her body was buried. However, the important issue that emerged during the trial was how the soil was transferred to her clothing. This became the catalyst for designing a range of soil transference experiments (STEs) to study, recognise and classify soil patterns transferred onto fabric when a body is dragged across a soil surface. The soil deposits of interest in this murder were on the victim's bra, and this paper reports the results of anthropogenic soil transfer to bra cups and straps caused by dragging. Transfer patterns were recorded by digital photography and photomicroscopy. Eight soil transfer patterns on fabric, specific to dragging as the transfer method, appeared consistently throughout the STEs. The distinctive soil patterns were largely dependent on a wide range of soil features that were measured and identified for each soil tested using X-ray Diffraction and Non-Dispersive Infra-Red analysis. Digital photographs of the soil transfer patterns on fabric were analysed using image processing software to provide an object-oriented classification of all transferred soil objects with a diameter of 2 pixels and above. Although the soil transfer patterns were easily identifiable by the naked eye alone, the image processing software provided objective numerical data to support this traditional (but subjective) interpretation. Image software soil colour analysis assigned a range of Munsell colours to identify and compare trace soil on fabric to other trace soil evidence from the same location, without requiring a spectrophotometer. Trace soil from the same location was identified by linking soils with similar dominant and sub-dominant Munsell colour peaks.
Image processing numerical data on the quantity of soil transferred to fabric, enabled a relationship to be discovered between soil type, clay mineralogy (smectite), particle size and

  4. Post-processing of digital images.

    Science.gov (United States)

    Perrone, Luca; Politi, Marco; Foschi, Raffaella; Masini, Valentina; Reale, Francesca; Costantini, Alessandro Maria; Marano, Pasquale

    2003-01-01

    Post-processing of two- and three-dimensional images plays a major role for clinicians and surgeons in both diagnosis and therapy. The new spiral (single- and multislice) CT and MRI machines have allowed better image quality. With the associated development of hardware and software, post-processing has become indispensable in many radiologic applications in order to address precise clinical questions. In particular, in CT the acquisition technique is fundamental and should be targeted and optimized to obtain good image reconstruction. Multiplanar reconstructions ensure simple, immediate display of sections along different planes. Three-dimensional reconstructions include numerous procedures: projection techniques such as the maximum intensity projection (MIP); surface rendering techniques such as shaded surface display (SSD); volume techniques such as volume rendering; and techniques of virtual endoscopy. In surgery, computer-aided techniques such as the neuronavigator, which uses information provided by neuroimaging to help the neurosurgeon simulate and perform the operation, are extremely interesting.

  5. Experimental design and instability analysis of coaxial electrospray process for microencapsulation of drugs and imaging agents.

    Science.gov (United States)

    Si, Ting; Zhang, Leilei; Li, Guangbin; Roberts, Cynthia J; Yin, Xiezhen; Xu, Ronald

    2013-07-01

    Recent developments in multimodal imaging and image-guided therapy require multilayered microparticles that encapsulate several imaging and therapeutic agents in the same carrier. However, commonly used microencapsulation processes have multiple limitations, such as low encapsulation efficiency and loss of bioactivity of the encapsulated biological cargos. To overcome these limitations, we have carried out both experimental and theoretical studies on coaxial electrospray of multilayered microparticles. On the experimental side, an improved coaxial electrospray setup has been developed. A customized coaxial needle assembly combined with two ring electrodes has been used to enhance the stability of the cone and widen the process parameter range of the stable cone-jet mode. With this assembly, we have obtained poly(lactide-co-glycolide) microparticles with fine morphology and uniform size distribution. On the theoretical side, an instability analysis of the coaxial electrified jet has been performed based on the experimental parameters. The effects of process parameters on the formation of different unstable modes have been studied. The reported experimental and theoretical research represents a significant step toward quantitative control and optimization of the coaxial electrospray process for microencapsulation of multiple drugs and imaging agents in multimodal imaging and image-guided therapy.

  6. Radiography and digital image processing for detection of internal breakdown in fruits of mango tree (Mangifera indica L.)

    International Nuclear Information System (INIS)

    Ferreira, Rubemar de Souza

    2004-01-01

    This work proposes a methodology intended as an advisory system for the detection of internal breakdown in mangoes during the post-harvest phase at packinghouses. A set-up was arranged to produce digital images from an X-ray spectrum in the range of 18 to 20 keV, and the primary images acquired were tested with a digital image processing (DIP) routine for differentiation of seed, pulp, peel and injured zones. ROC analysis applied at a single cut-off to a sample of 114 primary images showed that the DIP routine was able to identify 88% of injuries as true positives with 7% false negatives. When tested against the absence of injuries, the DIP routine identified 22% false positives and 88% true negatives. In addition, a cognitive analysis was applied to a sample of 76 digital images of mangoes. The results showed that the images offer enough information for a dichotomic interpretation of the main injuries in the fruit, including those that are difficult to diagnose even under destructive assay. Measurements of observer agreement, performed on the same group of readers, ranged from fair to substantial strength of agreement. (author)
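The true-positive and false-negative rates reported above come from standard confusion-matrix arithmetic; a minimal sketch (the counts below are illustrative, not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 88 of 100 injured fruit flagged, 78 of 100 sound fruit cleared
sens, spec = sensitivity_specificity(tp=88, fn=12, tn=78, fp=22)
print(sens, spec)  # 0.88 0.78
```

Each cut-off on the DIP routine's output yields one such (sensitivity, specificity) pair; sweeping the cut-off traces the full ROC curve, of which the study reports a single point.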

  7. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  8. Automatic detection of blurred images in UAV image sets

    Science.gov (United States)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    additional images. However, the calculated blur value named SIEDS (saturation image edge difference standard-deviation) on its own does not provide an absolute number to judge if an image is blurred or not. To achieve a reliable judgement of image sharpness the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method was tested using a range of different UAV datasets. Two datasets will be presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making the algorithm applicable for UAV datasets. Additionally, a close range dataset was processed to determine whether the method is also useful for close range applications. The results show that the method is also reliable for close range images, which significantly extends the field of application for the algorithm.
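As a rough illustration of such relative blur judgement, the sketch below scores each image by the spread of its gradient magnitudes and flags images whose score falls well below the dataset median. The gradient-based score and the 0.5 factor are assumptions for illustration only; the actual SIEDS metric is computed from edge differences in the saturation channel as described in the paper.

```python
import numpy as np

def blur_score(img):
    """Spread of gradient magnitudes: lower means fewer sharp edges."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).std())

def flag_blurred(images, factor=0.5):
    """Flag images whose score falls well below the dataset median."""
    scores = [blur_score(im) for im in images]
    ref = float(np.median(scores))
    return [bool(s < factor * ref) for s in scores]
```

As with SIEDS, the score is only meaningful relative to other images from the same dataset, since scene content also affects gradient statistics.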

  9. Radiology image orientation processing for workstation display

    Science.gov (United States)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in Computed Radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and abdomen images has been devised. In addition, chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of the quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.

  10. Image processing by use of the digital cross-correlator

    International Nuclear Information System (INIS)

    Katou, Yoshinori

    1982-01-01

    We manufactured a trial instrument that performs image processing using digital correlators. A digital correlator performs 64-bit parallel correlation at 20 MHz, and its output is a 7-bit word. An A-D converter is used to quantize the input to a precision of six bits, and the resulting 6-bit word is fed to six correlators wired in parallel. The image processing is achieved in 12 bits, and the digital outputs are converted to an analog signal by a D-A converter. This instrument is named the digital cross-correlator. The image processing system computes convolutions with the digital correlator, which realizes various digital filters. In the experiments, video signals from a TV camera were used for the image processing. The digital image processing time was approximately 5 μs. Contrast was enhanced and images were smoothed. The digital cross-correlator offers 16 kinds of image processing and was produced inexpensively. (author)
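The core idea, that a correlator evaluated at successive lags implements convolution, so that loading different coefficient words realizes different digital filters, can be sketched in software (the kernels below are illustrative, not the instrument's):

```python
def correlate(signal, kernel):
    """Sliding dot product (valid mode): the operation the correlator hardware performs."""
    n = len(kernel)
    return [sum(s * k for s, k in zip(signal[i:i + n], kernel))
            for i in range(len(signal) - n + 1)]

video_line = [0, 0, 3, 0, 0, 3, 3, 3, 0]
smoothed = correlate(video_line, [1 / 3, 1 / 3, 1 / 3])  # smoothing filter
edges = correlate(video_line, [-1, 0, 1])                # contrast/edge-enhancing filter
```

The same sliding-dot-product hardware thus smooths when given averaging coefficients and enhances edges when given a difference kernel, which is what made the instrument's 16 processing modes possible.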

  11. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    Science.gov (United States)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSMs). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software; (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software; this step is optional); and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  12. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today... images were processed using ImageJ, an open-source, Java-based image processing software package available from the National Institutes of Health (NIH). The

  13. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    Science.gov (United States)

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with the low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
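A minimal numerical sketch of the dual-gain merge described above: below a switch level the high-gain sample is used directly; above it, the low-gain sample is rescaled by the gain ratio onto the same linear axis. The function name, switch level and gain ratio are illustrative assumptions, not the sensor's actual parameters.

```python
def merge_sehdr(high_gain, low_gain, gain_ratio=16.0, switch_level=3000.0):
    """Merge dual-gain samples (arbitrary DN units) into one linear signal."""
    merged = []
    for h, l in zip(high_gain, low_gain):
        if h < switch_level:               # high-gain path still linear
            merged.append(h)
        else:                              # high-gain path near saturation:
            merged.append(l * gain_ratio)  # rescale low gain to the same axis
    return merged

print(merge_sehdr([1000.0, 3500.0], [62.5, 218.75]))  # [1000.0, 3500.0]
```

With a 16x gain ratio the merged signal spans roughly four more bits of scene range than either readout alone, illustrating how a single exposure can reach the reported >90 dB intra-scene range without multi-exposure motion artifacts.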

  14. Digital Image Processing Overview For Helmet Mounted Displays

    Science.gov (United States)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  15. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
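The stage structure described above (a fixed chain whose stages can also run independently) can be sketched as follows. The stage names follow the abstract, but the bodies are placeholders, not ARTIP's actual CASA calls.

```python
# Each stage is an independent callable over a shared context, so the full
# chain can run end to end or any single stage can be invoked on its own.

def flagging(ctx):
    ctx["log"].append("flagging")
    return ctx

def flux_calibration(ctx):
    ctx["log"].append("flux calibration")
    return ctx

def bandpass_calibration(ctx):
    ctx["log"].append("bandpass calibration")
    return ctx

def phase_calibration(ctx):
    ctx["log"].append("phase calibration")
    return ctx

def imaging(ctx):
    ctx["log"].append("imaging")
    return ctx

PIPELINE = [flagging, flux_calibration, bandpass_calibration,
            phase_calibration, imaging]

def run(stages, measurement_set):
    """Run the given stages in order over a fresh context."""
    ctx = {"ms": measurement_set, "log": []}
    for stage in stages:
        ctx = stage(ctx)
    return ctx
```

Running `run(PIPELINE, "obs.ms")` executes the whole chain, while `run([bandpass_calibration], "obs.ms")` reruns a single stage, which mirrors the independent-stage design the abstract describes.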

  16. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  17. Increasing Linear Dynamic Range of a CMOS Image Sensor

    Science.gov (United States)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators also at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. 
Depending upon

  18. Kilovoltage energy imaging with a radiotherapy linac with a continuously variable energy range.

    Science.gov (United States)

    Roberts, D A; Hansen, V N; Thompson, M G; Poludniowski, G; Niven, A; Seco, J; Evans, P M

    2012-03-01

    In this paper, the effect on image quality of significantly reducing the primary electron energy of a radiotherapy accelerator is investigated using a novel waveguide test piece. The waveguide contains a novel variable coupling device (rotovane), allowing for a wide continuously variable energy range of between 1.4 and 9 MeV suitable for both imaging and therapy. Imaging at linac accelerating potentials close to 1 MV was investigated experimentally and via Monte Carlo simulations. An imaging beam line was designed, and planar and cone beam computed tomography images were obtained to enable qualitative and quantitative comparisons with kilovoltage and megavoltage imaging systems. The imaging beam had an electron energy of 1.4 MeV, which was incident on a water cooled electron window consisting of stainless steel, a 5 mm carbon electron absorber and 2.5 mm aluminium filtration. Images were acquired with an amorphous silicon detector sensitive to diagnostic x-ray energies. The x-ray beam had an average energy of 220 keV and half value layer of 5.9 mm of copper. Cone beam CT images with the same contrast to noise ratio as a gantry mounted kilovoltage imaging system were obtained with doses as low as 2 cGy. This dose is equivalent to a single 6 MV portal image. While 12 times higher than a 100 kVp CBCT system (Elekta XVI), this dose is 140 times lower than a 6 MV cone beam imaging system and 6 times lower than previously published LowZ imaging beams operating at higher (4-5 MeV) energies. The novel coupling device provides for a wide range of electron energies that are suitable for kilovoltage quality imaging and therapy. The imaging system provides high contrast images from the therapy portal at low dose, approaching that of gantry mounted kilovoltage x-ray systems. Additionally, the system provides low dose imaging directly from the therapy portal, potentially allowing for target tracking during radiotherapy treatment. There is the scope with such a tuneable system

  19. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm⁻¹. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  20. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

    Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs) perform a highly accurate, but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information in image form with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain full scene coverage, and therefore multiple observations are typically collected from different locations. This can be achieved by either placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated using information extracted from the captured data, and a limited field of view may then lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple coupled sensor devices offers the advantages of an extended field of view, which results in a stabilized pose estimation, an improved registration of the recorded point clouds and an improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potential of such multi-view range imaging systems is presented, which consists of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring at low altitudes and is suitable for obtaining dynamic observations which might arise from moving cars or moving pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of captured point cloud data is presented, which is essential for a high quality of all subsequent tasks. 
The approach involves using

  1. Processing Of Binary Images

    Science.gov (United States)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
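One of the steps listed above, adaptive thresholding, can be sketched as a local-mean rule: a pixel becomes foreground only if it exceeds the mean of its neighbourhood by a small offset, which keeps binarization robust to uneven illumination across a scanned page. The window size and offset below are illustrative.

```python
import numpy as np

def adaptive_threshold(img, win=3, offset=0.02):
    """Binarize by comparing each pixel with its local neighbourhood mean."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, win // 2, mode="edge")   # replicate borders
    out = np.zeros(img.shape, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_mean = padded[i:i + win, j:j + win].mean()
            out[i, j] = 1 if img[i, j] > local_mean + offset else 0
    return out
```

A global threshold would fail when the background brightness drifts across the document; the local mean tracks that drift, which is why adaptive schemes are standard in document scanning.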

  2. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    Science.gov (United States)

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that were developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise ratio of the original images, to remove distortions that arise from either the instrumentation or the specimen itself, and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si46 clathrates developed for hydrogen storage.
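The signal-to-noise enhancement rests on averaging the many repeated unit-cell sub-images: independent noise cancels while the periodic motif is reinforced. A synthetic sketch (tile size, noise level and repeat count are arbitrary, not taken from the paper):

```python
import numpy as np

# Synthetic "lattice": one true unit-cell motif observed many times with
# independent additive noise, as in a repetitive HAADF-STEM image.
rng = np.random.default_rng(42)
motif = rng.random((8, 8))
cells = [motif + rng.normal(0.0, 0.5, motif.shape) for _ in range(200)]

lattice_average = np.mean(cells, axis=0)  # noise averages out ~ 1/sqrt(N)

err_single = np.abs(cells[0] - motif).mean()
err_average = np.abs(lattice_average - motif).mean()
print(err_average < err_single)  # True
```

With 200 repeats the residual noise drops by roughly a factor of sqrt(200) ≈ 14, which is why unit-cell averaging recovers contrast that is invisible in any single cell.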

  3. Alterations in affective processing of attack images following September 11, 2001.

    Science.gov (United States)

    Tso, Ivy F; Chiu, Pearl H; King-Casas, Brooks R; Deldin, Patricia J

    2011-10-01

    The events of September 11, 2001 created unprecedented uncertainty about safety in the United States and created an aftermath with significant psychological impact across the world. This study examined emotional information encoding in 31 healthy individuals whose stress response symptoms ranged from none to a moderate level shortly after the attacks as assessed by the Impact of Event Scale-Revised. Participants viewed attack-related, negative (but attack-irrelevant), and neutral images while their event-related brain potentials (ERPs) were recorded. Attack images elicited enhanced P300 relative to negative and neutral images, and emotional images prompted larger slow waves than neutral images did. Total symptoms were correlated with altered N2, P300, and slow wave responses during valence processing. Specifically, hyperarousal and intrusion symptoms were associated with diminished stimulus discrimination between neutral and unpleasant images; avoidance symptoms were associated with hypervigilance, as suggested by reduced P300 difference between attack and other images and reduced appraisal of attack images as indicated by attenuated slow wave. The findings in this minimally symptomatic sample are compatible with the alterations in cognition in the posttraumatic stress disorder (PTSD) literature and are consistent with a dimensional model of PTSD. Copyright © 2011 International Society for Traumatic Stress Studies.

  4. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained: scroll behaviors and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded using a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  5. Image processing of early gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Umeda, Tokuo; Inamura, Kiyonari

    1992-01-01

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins and, consequently, suppressed the rest. High-pass filtering with unsharp masking was superior for visualizing the texture pattern on the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions. (author)
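The three operations used in that study are classical and can be sketched with plain NumPy: grey scale (window/level) stretching, Sobel edge enhancement, and unsharp masking. The kernel sizes and weights below are standard textbook choices, not the paper's exact settings.

```python
import numpy as np

def conv2(img, k):
    """Sliding dot product over the image (valid mode)."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * k).sum()
    return out

def grey_stretch(img, lo, hi):
    """Grey scale transformation: linear window/level stretch to [0, 1]."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def sobel_magnitude(img):
    """Edge enhancement: gradient magnitude from horizontal/vertical Sobel."""
    gx = conv2(img, SOBEL_X)
    gy = conv2(img, SOBEL_X.T)
    return np.hypot(gx, gy)

def unsharp_mask(img, amount=1.0):
    """High-pass filtering: add back the difference from a box-blurred copy."""
    blurred = conv2(img, np.ones((3, 3)) / 9.0)
    core = img[1:-1, 1:-1]  # crop to align with the valid-mode blur
    return core + amount * (core - blurred)
```

The Sobel magnitude responds strongly at lesion margins and is flat elsewhere, while unsharp masking boosts fine mucosal texture without shifting the overall grey level of uniform regions.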

  6. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICA

    Directory of Open Access Journals (Sweden)

    O. E. Prokopchenko

    2015-09-01

    The article analyzes the basic methods and algorithms of mathematical processing of medical images as objects of computer mathematics. The presented methods and algorithms are relevant and may find application in the field of medical imaging: automated processing of images, measurement and determination of optical parameters, and identification and formation of medical image databases. The methods and algorithms presented in the article, based on Wolfram Mathematica, are also relevant to modern medical education, where appropriate Wolfram Mathematica demonstrations, such as recognition of special radiographs and morphological imaging, may be considered. These methods are used to improve the diagnostic significance and value of medical (clinical) research and can serve as interactive educational demonstrations. Overall, implementation of the presented methods and algorithms in Wolfram Mathematica contributes to optimizing the practical processing and presentation of medical images.

  7. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, the fuzzy regions containing image edges are detected, and the edges are then identified with a Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  8. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin-colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure
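The colour-space step described above can be sketched as follows. The BT.601 full-range conversion coefficients used here are an assumption for illustration; the study does not state its exact constants.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (BT.601, full range).

    Separating luma (Y) from chroma (Cb, Cr) is what makes a skin-colour
    model less sensitive to lighting: illumination changes land mostly in Y.
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb.astype(np.float64) @ m.T
    ycc[..., 1:] += 128.0   # centre the chroma channels
    return ycc

# A pure red pixel: luma is low (~76) while Cr, the red-difference chroma,
# saturates high -- exactly the separation the segmentation relies on.
px = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(rgb_to_ycbcr(px)[0, 0])
```

A skin-probability map can then be built from the (Cb, Cr) plane alone, largely independent of brightness.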

  9. Application range of micro focus radiographic devices associated to image processors

    International Nuclear Information System (INIS)

    Cappabianca, C.; Ferriani, S.; Verre, F.

    1987-01-01

    X-ray devices having a focal area of less than 100 μm are called micro focus X-ray equipment. The range of application and the characteristics of these devices, including the possibility of coupling them with real-time image enhancement computers, are defined here

  10. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of the hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and flat-field (white) images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
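The IEC relation this kind of study relies on, DQE(u) = S²·MTF²(u) / (q·NPS(u)), can be sketched numerically. All of the values below are hypothetical, chosen only to illustrate the computation, not taken from the study.

```python
import numpy as np

# Hypothetical measurements at a few spatial frequencies (cycles/mm).
f = np.array([0.5, 1.0, 2.0, 3.0])
mtf = np.array([0.95, 0.80, 0.50, 0.30])          # presampled MTF
nps = np.array([1.2e-5, 1.0e-5, 0.8e-5, 0.7e-5])  # normalized NPS, mm^2
q = 2.5e5    # photon fluence of the RQA5 beam, photons/mm^2 (assumed)
S = 1.0      # large-area signal of the linearized image

# IEC 62220-1: DQE(u) = S^2 * MTF(u)^2 / (q * NPS(u))
dqe = S ** 2 * mtf ** 2 / (q * nps)
print(np.round(dqe, 3))
```

As is typical for CR systems, the sketched DQE falls off with spatial frequency as the MTF decays faster than the NPS.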

  11. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  12. Image analysis and mathematical modelling for the supervision of the dough fermentation process

    Science.gov (United States)

    Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd

    2016-10-01

    The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Besides the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is influenced very much during this production stage. However, until now no on-line measurement system has been available that can supervise this important process step. In this investigation the potential of an image analysis system is evaluated that enables the determination of the volume of fermented dough pieces. The camera moves around the fermenting pieces and collects images of the objects from different angles (over a 360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, which is based on the Bernoulli equation, the carbon dioxide production rate of yeast cells and the diffusion processes of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.

  13. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  14. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases with abnormal synovial processes. The abnormalities include Baker cysts, semimembranous bursitis, chronic shoulder bursitis, peroneal tendon ganglion cyst, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining the extent and, to a limited degree, characterizing its makeup

  15. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  16. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by combining two information sources: a statistical model adopted to mine underlying information, and an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of image visualization and quantitative measures.
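A minimal sketch of the Gaussian-process-regression half of such an interpolator is shown below. The RBF kernel, its hand-picked hyperparameters, and the toy ramp image are assumptions for illustration; the paper's energy-computation term is omitted.

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential kernel between coordinate sets a (N,2) and b (M,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_interpolate(coords, values, query, ell=1.0, noise=1e-6):
    """Predict pixel values at `query` from known pixels via GP regression."""
    mu = values.mean()                        # centre on the mean grey level
    K = rbf(coords, coords, ell) + noise * np.eye(len(coords))
    k_star = rbf(query, coords, ell)
    return mu + k_star @ np.linalg.solve(K, values - mu)

# Known pixels: corners of a 2x2 patch of the smooth ramp image I(x, y) = x + y.
coords = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
values = coords.sum(axis=1)
# Interpolate the missing centre pixel at (1, 1); the ramp's true value is 2.
pred = gp_interpolate(coords, values, np.array([[1.0, 1.0]]), ell=2.0)
print(pred)  # → [2.]
```

The same posterior-mean formula extends to a full high-resolution grid by passing all missing pixel coordinates as `query` at once.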

  17. High resolution axicon-based endoscopic FD OCT imaging with a large depth range

    Science.gov (United States)

    Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.

    2010-02-01

    Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate the varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images in biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and a >4 mm depth range using a custom-designed probe.

  18. A Novel Multi-View-Angle Range Images Generation Method for Measurement of Complicated Polyhedron in 3D Space

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2017-01-01

    Full Text Available A new generation method is proposed in this paper to acquire range images of a complicated polyhedron in 3D space from a series of view angles. In the proposed method, the concept of three-view drawing from mechanical cartography is introduced into the range image generation procedure. The negative and positive directions of the x-, y-, and z-axes are selected as the view angles from which to generate the range images of the polyhedron. Furthermore, a novel iterative operation of mathematical morphology is proposed to ensure that satisfactory range images can be generated from all the selected view angles. Compared with the existing method based on a single view angle and an interpolation operation, structural features of the surface of a complicated polyhedron are represented more consistently with reality by the proposed multi-view-angle generation method. The proposed method is validated by experiment.
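The per-view-angle projection can be sketched as an orthographic depth buffer over a grid. The grid resolution and the nearest-depth convention below are assumptions; the paper's morphological iteration is not reproduced.

```python
import numpy as np

def range_image(points, axis=2, res=1.0):
    """Orthographic range image of a point cloud viewed along one axis.

    Projects points onto the plane normal to `axis` and keeps the nearest
    depth per grid cell -- one of the six axis-aligned views described
    above. The grid resolution `res` is an assumption for illustration.
    """
    depth = points[:, axis]
    uv = np.delete(points, axis, axis=1)
    ij = np.floor(uv / res).astype(int)
    ij -= ij.min(axis=0)                  # shift indices to start at zero
    h, w = ij.max(axis=0) + 1
    img = np.full((h, w), np.inf)
    for (i, j), d in zip(ij, depth):
        img[i, j] = min(img[i, j], d)     # nearest surface wins
    return img

# Corners of a unit cube viewed along -z: a 2x2 image of the near face (z=0).
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
print(range_image(cube))
```

Repeating this for `axis` 0, 1, and 2 (and the flipped depth sign for the opposite directions) yields the six axis-aligned views.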

  19. Developments in the recovery of colour in fine art prints using spatial image processing

    International Nuclear Information System (INIS)

    Rizzi, A; Parraman, C

    2010-01-01

    Printmakers have at their disposal a wide range of colour printing processes. The majority of artists will utilise high-quality materials in the expectation that the best materials and pigments will ensure image permanence. However, as many artists have experienced, this is not always the case: inks, papers and materials can deteriorate over time. Artists and conservators who need to restore colour or tone to a print could benefit from the assistance of spatial colour enhancement tools. This paper studies two collections from the same edition of fine art prints that were made in 1991. The first edition has been kept in an archive and not exposed to light; the second has been framed and exposed to light for about 18 years. Previous experiments using colour enhancement methods [9,10] have involved photographs taken under poor or extreme lighting conditions, fine art works, and scanned works. A range of colour enhancement methods is described in this paper: Retinex, RSR, ACE, Histogram Equalisation and Auto Levels. We concentrate on the ACE algorithm and use a range of parameters to process the printed images, describing the results.

  20. Developments in the recovery of colour in fine art prints using spatial image processing

    Science.gov (United States)

    Rizzi, A.; Parraman, C.

    2010-06-01

    Printmakers have at their disposal a wide range of colour printing processes. The majority of artists will utilise high-quality materials in the expectation that the best materials and pigments will ensure image permanence. However, as many artists have experienced, this is not always the case: inks, papers and materials can deteriorate over time. Artists and conservators who need to restore colour or tone to a print could benefit from the assistance of spatial colour enhancement tools. This paper studies two collections from the same edition of fine art prints that were made in 1991. The first edition has been kept in an archive and not exposed to light; the second has been framed and exposed to light for about 18 years. Previous experiments using colour enhancement methods [9,10] have involved photographs taken under poor or extreme lighting conditions, fine art works, and scanned works. A range of colour enhancement methods is described in this paper: Retinex, RSR, ACE, Histogram Equalisation and Auto Levels. We concentrate on the ACE algorithm and use a range of parameters to process the printed images, describing the results.
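Of the enhancement methods listed, histogram equalisation is the simplest to sketch; it is a global, non-spatial relative of ACE and Retinex, shown here only to illustrate the family, not the paper's ACE implementation.

```python
import numpy as np

def equalize(img):
    """Global histogram equalisation of an 8-bit greyscale image.

    Remaps tones so the cumulative histogram becomes roughly linear, which
    can partially recover contrast lost to fading.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A faded print occupying only grey levels 100-150 is stretched to 0-255.
faded = np.tile(np.arange(100, 151, dtype=np.uint8), (10, 1))
out = equalize(faded)
print(out.min(), out.max())  # → 0 255
```

Spatial methods such as ACE differ precisely in that each output pixel also depends on its neighbourhood, not just on the global histogram.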

  1. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICS

    Directory of Open Access Journals (Sweden)

    O. Ye. Prokopchenko

    2015-10-01

    Full Text Available The article analyzes the basic methods and algorithms of mathematical processing of medical images as objects of computer mathematics. The presented methods and computer algorithms are relevant and may find application in the field of medical imaging: automated processing of images; as a tool for measuring and determining optical parameters; and for the identification and formation of medical image databases. The methods and computer algorithms presented in the article and based on Wolfram Mathematica are also relevant to modern medical education. Appropriate Wolfram Mathematica demonstrations include, for example, the recognition of special radiographs and morphological imaging. These methods are used to improve the diagnostic significance and value of medical (clinical) research and can serve as educational interactive demonstrations. Implementation of the presented methods and algorithms in Wolfram Mathematica contributes, overall, to optimizing the practical processing and presentation of medical images.

  2. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  3. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  4. Full Waveform Analysis for Long-Range 3D Imaging Laser Radar

    Directory of Open Access Journals (Sweden)

    Wallace, Andrew M.

    2010-01-01

    Full Text Available The new generation of 3D imaging systems based on laser radar (ladar) offers significant advantages in defense and security applications. In particular, it is possible to retrieve 3D shape information directly from the scene and separate a target from background or foreground clutter by extracting a narrow depth range from the field of view by range gating, either in the sensor or by post-processing. We discuss and demonstrate the applicability of full-waveform ladar to produce multilayer 3D imagery, in which each pixel produces a complex temporal response that describes the scene structure. Such complexity, caused by multiple and distributed reflections, arises in many relevant scenarios, for example when viewing partially occluded targets, through semitransparent materials (e.g., windows), and through distributed reflective media such as foliage. We demonstrate our methodology on 3D image data acquired by a scanning time-of-flight system, developed in our own laboratories, which uses the time-correlated single-photon counting technique.
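The multilayer idea can be sketched by extracting the peaks of a per-pixel photon-count waveform and converting each time of flight to a range via r = c·t/2. The bare-peak detection and the example waveform below are assumptions; the paper's own analysis fits instrument response functions rather than raw maxima.

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light, m/s

def waveform_ranges(counts, bin_s, threshold):
    """Turn a per-pixel full-waveform photon-count histogram into ranges.

    Every local maximum above `threshold` is treated as one reflecting
    layer; time of flight maps to range via r = c * t / 2.
    """
    c = counts
    interior = (c[1:-1] > c[:-2]) & (c[1:-1] >= c[2:]) & (c[1:-1] > threshold)
    peaks = np.flatnonzero(interior) + 1
    return peaks * bin_s * C_LIGHT / 2.0

# Two returns in a 1 ns-binned waveform, e.g. foliage in front of a target.
wf = np.zeros(400)
wf[100] = 50.0   # first layer at t = 100 ns -> ~15 m
wf[240] = 80.0   # second layer at t = 240 ns -> ~36 m
print(waveform_ranges(wf, 1e-9, threshold=10))
```

Range gating then amounts to keeping only the returns whose computed range falls inside the gate.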

  5. An Efficient Secret Key Homomorphic Encryption Used in Image Processing Service

    Directory of Open Access Journals (Sweden)

    Pan Yang

    2017-01-01

    Full Text Available Homomorphic encryption can protect a user's privacy when operating on the user's data in cloud computing, but it is not yet practical for wide use, as the data and service types in cloud computing are diverse. Among these data types, digital images are important personal data for users, and there are many image processing services in cloud computing. To protect users' privacy in these services, this paper proposes a scheme using homomorphic encryption in image processing. Firstly, a secret-key homomorphic encryption (IGHE) was constructed for encrypting images. IGHE can operate on encrypted floating-point numbers efficiently, to suit the image processing service. Then, by translating traditional image processing methods into operations on encrypted pixels, the encrypted image can be processed homomorphically. That is, the service can process the encrypted image directly, and the result after decryption is the same as that from processing the plain image. To illustrate our scheme, three common image processing instances are given in this paper. The experiments show that our scheme is secure, correct, and efficient enough to be used in practical image processing applications.
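The additive homomorphic property the scheme relies on can be illustrated with a deliberately simplified toy cipher. This is not IGHE (which the paper constructs to handle floating-point pixels securely); it is a one-time-pad-style integer scheme in which keys must never be reused, shown only to make the homomorphism concrete.

```python
import secrets

N = 2 ** 32  # modulus for 32-bit pixel arithmetic (an arbitrary choice)

def enc(m, key):
    """Toy additive encryption: c = m + k mod N, with a fresh key per value."""
    return (m + key) % N

def dec(c, key):
    return (c - key) % N

# Client encrypts two pixel values with independent secret keys.
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = enc(200, k1), enc(55, k2)

# The server sums ciphertexts without ever seeing the pixels; decrypting
# with the summed key yields the summed plaintexts -- the homomorphism.
c_sum = (c1 + c2) % N
print(dec(c_sum, (k1 + k2) % N))  # → 255

# The same trick lets the server brighten an encrypted pixel by 30.
print(dec((c1 + 30) % N, k1))  # → 230
```

Linear filters (brightness shifts, weighted sums of neighbouring pixels) map directly onto such ciphertext arithmetic, which is why they are natural first candidates for encrypted image processing services.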

  6. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process

    Directory of Open Access Journals (Sweden)

    Isao Takayanagi

    2018-01-01

    Full Text Available To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel provides three conversion gains. By introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
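The signal merge and the reported dynamic range can be sketched numerically. The full-well and read-noise figures come from the abstract; the gain ratio and knee point are assumptions for illustration.

```python
import numpy as np

FULL_WELL = 40_000    # e-, linear full well from the abstract
READ_NOISE = 1.0      # e-, at the highest pixel gain
GAIN_RATIO = 16.0     # hypothetical high/low conversion-gain ratio

def merge_sehdr(high, low, knee=0.9):
    """Merge the two same-exposure readouts (both already scaled to electrons):
    use the low-noise high-gain sample until it approaches saturation, then
    switch to the low-gain sample. Knee and gain ratio are assumptions.
    """
    high_sat = FULL_WELL / GAIN_RATIO      # where the high-gain path clips
    return np.where(high < knee * high_sat, high, low)

# Intra-scene dynamic range = brightest resolvable signal / read-noise floor.
dr_db = 20 * np.log10(FULL_WELL / READ_NOISE)
print(f"{dr_db:.1f} dB")  # → 92.0 dB, consistent with the >90 dB reported
```

Because both samples come from the same exposure, the merge cannot produce the motion seams that plague multi-exposure HDR.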

  7. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  8. Bayesian image processing in two and three dimensions

    International Nuclear Information System (INIS)

    Hart, H.; Liang, Z.

    1986-01-01

    Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different "focal" depths. Bayesian image processing (BIP) was carried out for two- and three-dimensional spatially uncorrelated discrete-amplitude a priori source distributions

  9. Topographic laser ranging and scanning principles and processing

    CERN Document Server

    Shan, Jie

    2008-01-01

    A systematic, in-depth introduction to theories and principles of Light Detection and Ranging (LiDAR) technology is long overdue, as it is the most important geospatial data acquisition technology to be introduced in recent years. An advanced discussion, this text fills the void.Professionals in fields ranging from geology, geography and geoinformatics to physics, transportation, and law enforcement will benefit from this comprehensive discussion of topographic LiDAR principles, systems, data acquisition, and data processing techniques. The book covers ranging and scanning fundamentals, and broad, contemporary analysis of airborne LiDAR systems, as well as those situated on land and in space. The authors present data collection at the signal level in terms of waveforms and their properties; at the system level with regard to calibration and georeferencing; and at the data level to discuss error budget, quality control, and data organization. They devote the bulk of the book to LiDAR data processing and inform...

  10. Morphology and probability in image processing

    International Nuclear Information System (INIS)

    Fabbri, A.G.

    1985-01-01

    The author presents an analysis of some concepts which relate morphological attributes of digital objects to statistically meaningful measures. Some elementary transformations of binary images are described, and examples of applications are drawn from the geological and image analysis domains. Some of the morphological models applicable in astronomy are discussed. It is shown that the development of new spatially oriented computers leads to more extensive applications of image processing in the geosciences

  11. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. One primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on an analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. A two-step refined watershed algorithm is then applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
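The initial thresholding step can be sketched with Otsu's method. The pipeline applies it locally per window; a global version on a synthetic frame is shown here for brevity, and the image values are invented for illustration.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the grey level that maximizes the between-class
    variance of background vs. foreground pixels.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                         # background probability
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return int(np.nanargmax(sigma_b))

# Synthetic frame: dark background (~30) with a bright square of "nuclei" (~200).
img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
print(t)  # lands between the two modes, so `img > t` isolates the bright pixels
```

The clustered-nuclei case is exactly where this thresholding alone fails, which is why the pipeline follows it with the Bayesian classifier and the refined watershed.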

  12. Quantitative analysis of velopharyngeal movement using a stereoendoscope: accuracy and reliability of range images.

    Science.gov (United States)

    Nakano, Asuka; Mishima, Katsuaki; Shiraishi, Ruriko; Ueyama, Yoshiya

    2015-01-01

    We developed a novel method of producing accurate range images of the velopharynx using a three-dimensional (3D) endoscope, to obtain detailed measurements of velopharyngeal movements. The purpose of the present study was to determine the appropriate distance from the endoscope to an object, elucidate the measurement accuracy along the temporal axes, and determine the degree of blurring when using a jig to fix the endoscope. An endoscopic measuring system was developed in which a pattern projection system was incorporated into a commercially available 3D endoscope. After correcting the distortion of the camera images, range images were produced using pattern projection to achieve stereo matching. Graph paper was used to measure the appropriate distance from the camera to an object, the mesial buccal cusp of the right maxillary first molar was measured to clarify the range image stability, and an electric actuator was used to evaluate the measurement accuracy along the temporal axes. The measurement error was substantial when the distance from the camera to the subject was >6.5 cm. The standard error of the 3D coordinate value produced from 30 frames was within 0.1 mm (range, 0.01-0.08 mm). The measurement error along the temporal axes was 9.16% in the horizontal direction and 9.27% in the vertical direction. The optimal distance from the camera to an object for measuring velopharyngeal movements therefore lies within this limit.

  13. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  14. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment.

  15. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered in multimodality metabolic and physiological images, we developed a processing-pipeline framework. The pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover the major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of the correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a multimodality image dataset in glioblastoma (GBM) consisting of 10 image parameters. Three major image "signatures" were identified, and the three major "habitats" plus their overlaps were created. To test the generalizability of the pipeline, a second GBM image dataset, acquired on scanners different from the first, was processed. To demonstrate the clinical relevance of the image-defined "signatures" and "habitats," the patients' patterns of recurrence were analyzed together with image parameters acquired before chemoradiation therapy; an association of the recurrence patterns with the image-defined "signatures" and "habitats" was revealed. These "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation-status analysis and to predict treatment outcomes, e.g., patterns of failure.
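
    Steps (2)-(5) above can be sketched numerically. The following NumPy sketch is illustrative only: the data are synthetic, a greedy correlation-threshold grouping stands in for the paper's clustering of the correlation matrix, and the habitat-assignment rule is an arbitrary simplification, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data matrix D: rows are superpixels, columns are image
# parameters. Two correlated parameter groups ("signatures") are
# synthesized, plus one independent noise column.
n_superpixels = 500
base1 = rng.normal(size=n_superpixels)          # latent signature 1
base2 = rng.normal(size=n_superpixels)          # latent signature 2
D = np.column_stack([
    base1 + 0.1 * rng.normal(size=n_superpixels),
    base1 + 0.1 * rng.normal(size=n_superpixels),
    base2 + 0.1 * rng.normal(size=n_superpixels),
    base2 + 0.1 * rng.normal(size=n_superpixels),
    rng.normal(size=n_superpixels),
])

# Step 3: correlation matrix of the image parameters across superpixels.
C = np.corrcoef(D, rowvar=False)

# Greedy grouping: parameters whose |correlation| exceeds a threshold
# join the same "signature" (a stand-in for the clustering step).
threshold = 0.7
n_params = C.shape[0]
signature = -np.ones(n_params, dtype=int)
next_label = 0
for i in range(n_params):
    if signature[i] == -1:
        signature[i] = next_label
        next_label += 1
    for j in range(i + 1, n_params):
        if signature[j] == -1 and abs(C[i, j]) > threshold:
            signature[j] = signature[i]

# Step 5 (simplified): map each superpixel to the "habitat" of the
# signature with the largest mean absolute z-score at that superpixel.
z = (D - D.mean(axis=0)) / D.std(axis=0)
habitat = np.array([
    np.argmax([np.mean(np.abs(z[k, signature == s])) for s in range(next_label)])
    for k in range(n_superpixels)
])
```

    With this synthetic data the grouping recovers the two planted signatures plus one singleton for the noise column.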

  16. Opportunities and applications of medical imaging and image processing techniques for nondestructive testing

    International Nuclear Information System (INIS)

    Song, Samuel Moon Ho; Cho, Jung Ho; Son, Sang Rock; Sung, Je Jonng; Ahn, Hyung Keun; Lee, Jeong Soon

    2002-01-01

    Nondestructive testing (NDT) of structures strives to extract all relevant data regarding the state of the structure without altering its form or properties. The success enjoyed by imaging and image processing technologies in the field of modern medicine forecasts similar success of image-processing-related techniques in both the research and practice of NDT. In this paper, we focus on two particular instances of such applications: a modern vision technique for 3-D profile and shape measurement, and ultrasonic imaging with rendering for 3-D visualization. Ultrasonic imaging of 3-D structures for nondestructive evaluation purposes must provide readily recognizable 3-D images with enough detail to clearly show various faults that may or may not be present. As a step towards improving conspicuity and thus detection of faults, we propose a pulse-echo ultrasonic imaging technique to generate a 3-D image of the object under evaluation through strategic scanning and processing of the pulse-echo data. This three-dimensional processing and display improves the conspicuity of faults and, in addition, provides manipulation capabilities such as pan and rotation of the 3-D structure. As a second application, we consider an image-based three-dimensional shape determination system. The shape, and thus the three-dimensional coordinate information of the 3-D object, is determined solely from captured images of the object from a prescribed set of viewpoints. The approach is based on the shape-from-silhouette (SFS) technique, and the efficacy of the SFS method is tested using a sample data set. This system may be used to visualize the 3-D object efficiently, or to quickly generate initial CAD data for reverse engineering purposes. The proposed system may potentially be used in three-dimensional design applications such as 3-D animation and 3-D games.
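
    The shape-from-silhouette idea can be illustrated with a toy voxel-carving sketch. Everything here is an assumption for illustration (orthographic views along two axes, a spherical test object); a real SFS system uses calibrated perspective cameras for each viewpoint.

```python
import numpy as np

# Toy shape-from-silhouette (SFS) carving under two orthographic views.
n = 32
ax = np.arange(n) - n / 2 + 0.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sphere = X**2 + Y**2 + Z**2 <= (n / 4) ** 2   # ground-truth object

# Silhouettes: binary masks obtained by projecting along the x and y axes.
sil_x = sphere.any(axis=0)   # image in the (y, z) plane
sil_y = sphere.any(axis=1)   # image in the (x, z) plane

# Carving: a voxel survives only if it falls inside every silhouette.
occupied = np.ones((n, n, n), dtype=bool)
occupied &= sil_x[np.newaxis, :, :]   # consistency with the view along x
occupied &= sil_y[:, np.newaxis, :]  # consistency with the view along y
```

    The carved "visual hull" always contains the true object; more viewpoints tighten it toward the actual shape.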

  17. Apparatus and method X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal, from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that render X-ray images or X-ray subtraction images with strongly reduced stray radiation. (Auth.)
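
    The role of the logarithm can be made concrete: under Beer-Lambert attenuation, the log-transformed signal is linear in the attenuation line integral, so subtracting two log images cancels structure common to both. A minimal sketch with made-up numbers (not the patented apparatus):

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-mu_t), so log I is linear in the
# attenuation line integral. Illustrative values only.
I0 = 1000.0
background = np.array([[0.5, 1.0], [1.5, 2.0]])   # anatomy attenuation
vessel = np.array([[0.0, 0.3], [0.0, 0.0]])       # contrast agent only

mask_image = I0 * np.exp(-background)              # before contrast
fill_image = I0 * np.exp(-(background + vessel))   # after contrast

# Log subtraction isolates the contrast agent, independently of I0
# and of the common background anatomy.
subtraction = np.log(mask_image) - np.log(fill_image)
```
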

  18. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a spatial resolution limited by the pixel size, which is set by the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system's pixel size, and the pixel size cannot be made smaller in hardware, alternative means of resolution enhancement need to be considered. Oversampling with multiple displaced images, where the pixels of all images are mapped onto a final resolution-enhanced image, has proven to be a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines with the number of images taken: the sub-pixel resolution increases, but relative to a real reduction of the imaging pixel size yielding a full-resolution image, the perceived resolution of the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is to sharpen the images in post-processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of suitable post-processing filters, specifically for digital X-ray images. The results show that spatial-domain filters and frequency-domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (the oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increases when combined with sharpening filters.
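
    The oversampling-plus-sharpening idea can be sketched as a toy model. All parameters here are assumptions (a 2 x 2 displacement pattern, big pixels that block-average the scene, a simple 5-point unsharp mask), not the paper's measurement procedure:

```python
import numpy as np

# A detector with large pixels is shifted in sub-pixel steps; the
# shifted captures are interleaved onto a finer grid, then sharpened
# (unsharp masking) to counter the blur the mapping introduces.
rng = np.random.default_rng(1)
hi = rng.random((8, 8))          # "true" fine-resolution scene
k = 2                            # 2x2 displacements -> 2x oversampling

fine = np.zeros((8, 8))
for dy in range(k):
    for dx in range(k):
        # Each displaced capture: big pixels integrate k x k blocks of
        # the (shifted) scene.
        shifted = np.roll(hi, (-dy, -dx), axis=(0, 1))
        capture = shifted.reshape(4, k, 4, k).mean(axis=(1, 3))
        fine[dy::k, dx::k] = capture  # interleave onto the fine grid

# Unsharp masking: add back the difference from a blurred copy.
blur = (fine + np.roll(fine, 1, 0) + np.roll(fine, -1, 0)
        + np.roll(fine, 1, 1) + np.roll(fine, -1, 1)) / 5.0
sharpened = fine + 0.7 * (fine - blur)
```
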

  19. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers, or inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user, and basic tools are available for gathering data from an image for simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest common denominator of image files, the FITS format.
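
    A few of the SIP-style operations described above, sketched in NumPy (SIP itself is Java; the synthetic array below merely stands in for a loaded FITS frame, and the dark/flat values are made up):

```python
import numpy as np

# Stand-in for a loaded image and calibration frames.
image = np.arange(100, dtype=float).reshape(10, 10)
dark = np.full((10, 10), 5.0)
flat = np.full((10, 10), 2.0)

calibrated = (image - dark) / flat        # combine by subtraction/division
scaled = calibrated * 1.5                 # multiply by a constant
flipped = calibrated[::-1, :]             # flip vertically
cropped = calibrated[2:8, 2:8]            # crop to a user-drawn box

# Statistics for pixels inside the box, as used for simple photometry.
box_mean = cropped.mean()
box_std = cropped.std()
```
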

  20. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  1. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards, simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture tailored specifically for astronomy, providing a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed, to allow other astronomers with similar needs to benefit from the present experience. 9 references

  2. Software architecture for intelligent image processing using Prolog

    Science.gov (United States)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  3. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    Energy Technology Data Exchange (ETDEWEB)

    Devès, G.; Daudin, L. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V. [Univ. Bordeaux, F-33170 Gradignan (France); Michelet, C.; Seznec, H.; Barberet, P. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France)

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present at a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time consuming, repetitive, and as such can be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile image processing program that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X, and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps, and database management using OMERO.

  4. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
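
    A 1-D sketch of the idea, with an assumed squared-exponential kernel and illustrative settings: the Gaussian-process posterior variance is near zero at base grid points and grows between them, which is exactly the interpolation uncertainty the paper builds into its similarity measure.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

grid = np.arange(0.0, 5.0)            # base grid (observed intensities)
values = np.sin(grid)                 # observed image values
noise = 1e-6                          # tiny observation noise

K = rbf(grid, grid) + noise * np.eye(grid.size)
K_inv = np.linalg.inv(K)

query = np.array([2.0, 2.5])          # an on-grid and a between-grid point
Ks = rbf(query, grid)
mean = Ks @ K_inv @ values                              # posterior mean
var = np.diag(rbf(query, query) - Ks @ K_inv @ Ks.T)    # posterior variance
```

    At the grid point the posterior mean reproduces the observed value and the variance is essentially the noise floor; midway between grid points the variance is orders of magnitude larger.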

  6. MO-FG-CAMPUS-JeP1-03: Luminescence Imaging of Water During Proton Beam Irradiation for Range Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, S; Komori, M [Nagoya University, Nagoya, Aichi (Japan); Toshito, T [Nagoya Proton Therapy Center, Nagoya, Aichi (Japan); Watabe, H [Tohoku University, Sendai, Miyagi (Japan)

    2016-06-15

    Purpose: Since proton therapy has the ability to selectively deliver a dose to a target tumor, the dose distribution should be accurately measured, and a precise and efficient method to evaluate it is desired. We found that luminescence is emitted from water during proton irradiation and reasoned that this phenomenon could be used for estimating the dose distribution. Methods: We placed water phantoms on a table in a spot-scanning proton-therapy system, and luminescence images of these phantoms were measured with a high-sensitivity cooled charge-coupled device (CCD) camera during proton-beam irradiation. We also imaged phantoms of pure water, fluorescein solution, and an acrylic block, and reconstructed three-dimensional images from the projection data. Results: The luminescence images of the water phantoms during proton-beam irradiation showed clear Bragg peaks, and the proton ranges measured from the images were almost the same as those obtained with an ionization chamber. The image of the pure-water phantom also showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom agreed reasonably well with the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Three-dimensional images, which carry more quantitative information, were also successfully obtained. Conclusion: Luminescence imaging during proton-beam irradiation has the potential to be a new method for range estimation in proton therapy.
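
    Extracting a range estimate from a measured depth-light profile can be sketched as below. The synthetic profile, the peak position, and the distal-50%-of-peak convention are all illustrative assumptions, not the authors' protocol:

```python
import numpy as np

# Synthetic depth profile: a Bragg-peak-like bump on a small background.
depth = np.linspace(0.0, 200.0, 401)            # mm
profile = np.exp(-0.5 * ((depth - 150.0) / 4.0) ** 2) + 0.02 * depth / 200.0

peak_idx = int(np.argmax(profile))
peak_val = profile[peak_idx]

# Walk distally from the peak to the first sample below 50% of the peak,
# then interpolate linearly for a sub-sample range estimate.
distal = profile[peak_idx:]
below = np.nonzero(distal < 0.5 * peak_val)[0][0]
d0, d1 = depth[peak_idx + below - 1], depth[peak_idx + below]
p0, p1 = profile[peak_idx + below - 1], profile[peak_idx + below]
range_mm = d0 + (0.5 * peak_val - p0) * (d1 - d0) / (p1 - p0)
```
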

  7. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    An architecture for a high-speed image processing system implementing a new algorithm for shape understanding is proposed, and a hardware system based on this architecture was developed. The main design considerations were that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, each processing step could be performed at a speed of 80 nanoseconds per pixel.

  8. Three-dimensional short-range MR angiography and multiplanar reconstruction images in the evaluation of neurovascular compression in hemifacial spasm

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Woo Suk; Kim, Eui Jong; Lee, Jae Gue; Rhee, Bong Arm [Kyunghee Univ. Hospital, Seoul (Korea, Republic of)

    1998-08-01

    To evaluate the diagnostic efficacy of three-dimensional (3D) short-range MR angiography (MRA) and multiplanar reconstruction (MPR) imaging in hemifacial spasm (HS). Materials and Methods: Two hundred patients with HS were studied using a 1.5T MRI system with a 3D time-of-flight (TOF) MRA sequence. To reconstruct short-range MRA, 6-10 source images near the 7-8th cranial nerve complex were processed using a maximum-intensity projection technique. In addition, an MPR technique was used to investigate neurovascular compression. We observed the relationship between the root-exit zone (REZ) of the 7th cranial nerve and the compressive vessel, and identified the compressive vessels on symptomatic sides. To investigate neurovascular contact, asymptomatic contralateral sides were also evaluated. Results: MRI showed that in 197 of 200 patients there was vascular compression of or contact with the facial nerve REZ on symptomatic sides. One of the three remaining patients was suffering from an acoustic neurinoma on the symptomatic side, while in two patients there were no definite abnormal findings. Compressive vessels were demonstrated in all 197 patients: 80 cases involved the anterior inferior cerebellar artery (AICA), 74 the posterior inferior cerebellar artery (PICA), 13 the vertebral artery (VA), 16 the VA and AICA, eight the VA and PICA, and six the AICA and PICA. In all 197 patients, compressive vessels were reconstructed on one 3D short-range MRA image without discontinuity from the vertebral or basilar arteries. 3D MPR studies provided additional information such as the direction of compression and the course of the compressive vessel. In 31 patients there was neurovascular contact on the contralateral side at the 7-8th cranial nerve complex. Conclusion: In patients with HS, 3D short-range MRA and MPR images are excellent and very helpful for the investigation of neurovascular compression and the identification of compressive vessels.

  10. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove shadow disturbances and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines shadow-removal algorithms based on integration, illumination surfaces, and texture, introduces their working principles and implementation methods, and shows through experiments that shadows can be processed effectively.

  11. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  12. Nonlinear dynamic range transformation in visual communication channels.

    Science.gov (United States)

    Alter-Gartenberg, R

    1996-01-01

    The article evaluates nonlinear dynamic range transformation in the context of the end-to-end continuous-input/discrete-processing/continuous-display imaging process. Dynamic range transformation is required when (i) the wide dynamic range encountered in nature must be compressed into the relatively narrow dynamic range of the display, particularly for spatially varying irradiance (e.g., shadow); (ii) coarse quantization is expanded to the wider dynamic range of the display; and (iii) nonlinear tone-scale transformation compensates for the correction in the camera amplifier.
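
    Case (i) can be sketched with a simple logarithmic tone mapping; the scene range, the clipping limits, and the 8-bit display code range are illustrative assumptions, not the article's transformation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Scene irradiance spanning roughly six decades of dynamic range.
scene = 10.0 ** rng.uniform(-2, 4, size=1000)

def log_compress(x, lo=1e-2, hi=1e4):
    """Map [lo, hi] logarithmically onto display codes [0, 255]."""
    y = np.log10(np.clip(x, lo, hi)) - np.log10(lo)
    y /= np.log10(hi) - np.log10(lo)
    return np.round(255.0 * y)

display = log_compress(scene)
```
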

  13. Stochastic processes and long range dependence

    CERN Document Server

    Samorodnitsky, Gennady

    2016-01-01

    This monograph is a gateway for researchers and graduate students to explore the profound, yet subtle, world of long-range dependence (also known as long memory). The text is organized around the probabilistic properties of stationary processes that are important for determining the presence or absence of long memory. The first few chapters serve as an overview of the general theory of stochastic processes, giving the reader sufficient background, language, and models for the subsequent discussion of long memory. The later chapters devoted to long memory begin with an introduction to the subject along with a brief history of its development, followed by a presentation of what is currently the best known approach, applicable to stationary processes with a finite second moment. The book concludes with a chapter devoted to the author's own, less standard, point of view of long memory as a phase transition, and even includes some novel results. Most of the material in the book has not previously been published.

  14. Graphical user interface for image acquisition and processing

    Science.gov (United States)

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  15. Image simulation and a model of noise power spectra across a range of mammographic beam qualities

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford GU2 7XH (United Kingdom); Diaz, Oliver [Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom and Computer Vision and Robotics Research Institute, University of Girona, Girona 17071 (Spain)

    2014-12-15

    Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor which was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise
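
    The quadratic noise decomposition described above can be sketched as follows, with synthetic coefficients standing in for measured NPS values at one spatial frequency (the true fit would be repeated at every frequency of the measured NPS):

```python
import numpy as np

# At each spatial frequency f, the model fits a quadratic in the
# absorbed energy per unit area E:
#   NPS(f, E) = a(f) + b(f)*E + c(f)*E**2
# with a ~ electronic noise, b*E ~ quantum noise, c*E**2 ~ structure noise.
rng = np.random.default_rng(3)
a_true, b_true, c_true = 2.0, 0.5, 0.01      # illustrative coefficients
E = np.linspace(1.0, 50.0, 20)               # exposure levels
nps = a_true + b_true * E + c_true * E**2
nps = nps * (1.0 + 0.001 * rng.normal(size=E.size))  # small measurement noise

# np.polyfit returns coefficients from the highest degree down.
c_fit, b_fit, a_fit = np.polyfit(E, nps, deg=2)

# The three fitted terms then separate the noise sources at this frequency.
electronic = a_fit
quantum = b_fit * E
structure = c_fit * E**2
```
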

  16. IDAPS (Image Data Automated Processing System) System Description

    Science.gov (United States)

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  17. Defects quantization in industrial radiographs by image processing

    International Nuclear Information System (INIS)

    Briand, F.Y.; Brillault, B.; Philipp, S.

    1988-01-01

This paper concerns the industrial application of image processing to non-destructive testing by radiography. The various problems involved in designing such a numerical tool are described. The tool is intended to help radiographic experts quantify defects and follow their evolution using numerical techniques. The sequence of processing steps that achieves defect segmentation and quantization is detailed; it is based on thorough knowledge of radiograph formation techniques. The process uses various methods of image analysis, including textural analysis and mathematical morphology. The interface between the final product and its users is expressed in an explicit language, using the terms of radiographic expertise without exposing any processing details. The problem is described end to end: image formation, digitization, processing fitted to flaw morphology and, finally, the structure of the product in progress. 12 refs [fr

  18. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

Full Text Available In production processes, the use of image processing systems is widespread, and hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process.
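The SVM classification step described in this record can be sketched as follows. This is a minimal illustration using scikit-learn, which is an assumption rather than the authors' software; the combined feature vectors and class labels are synthetic:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical combined feature vectors (e.g. texture + geometry descriptors)
# for surface patches, labeled 0 = acceptable, 1 = defect.
good = rng.normal(loc=0.0, scale=0.5, size=(40, 4))
bad = rng.normal(loc=2.0, scale=0.5, size=(40, 4))
X = np.vstack([good, bad])
y = np.array([0] * 40 + [1] * 40)

# Train an RBF-kernel SVM on the combined features and classify two
# previously unseen patches.
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.1, 0.0, -0.2, 0.1], [2.1, 1.9, 2.0, 2.2]])
```

Because the two synthetic clusters are well separated, the first test patch is classified as acceptable and the second as defective.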

  19. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    Science.gov (United States)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and of problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  20. Polarization information processing and software system design for simultaneously imaging polarimetry

    Science.gov (United States)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly describes the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then focuses on the polarization information processing methods and software system design for that polarimeter. The polarization information processing consists of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing (dilation) was used for image segmentation; the accuracy of image registration reaches 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopts a four-point calibration method. The software system was implemented under Windows in C++, and realizes synchronous polarization image acquisition and storage, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and software system effectively perform real-time measurement of the four Stokes parameters of a scene and improve the polarization detection accuracy.
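The extraction of polarization information from four analyzer-orientation images can be sketched with the standard linear Stokes and degree-of-linear-polarization formulas. This is a generic illustration, not the authors' software:

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135):
    """Linear Stokes parameters from images taken behind analyzers at
    0, 45, 90 and 135 degrees."""
    # S0 averages the two independent total-intensity estimates I0+I90 and I45+I135.
    S0 = 0.5 * (I0 + I45 + I90 + I135)
    S1 = I0 - I90
    S2 = I45 - I135
    return S0, S1, S2

def degree_of_linear_polarization(S0, S1, S2):
    """DoLP = sqrt(S1**2 + S2**2) / S0, in [0, 1] for physical states."""
    return np.sqrt(S1**2 + S2**2) / S0

# Fully horizontally polarized light: all intensity passes the 0-degree analyzer.
S0, S1, S2 = stokes_from_intensities(1.0, 0.5, 0.0, 0.5)
dolp = degree_of_linear_polarization(S0, S1, S2)
```

The same functions work element-wise on whole image arrays, producing per-pixel Stokes and DoLP maps.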

  1. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1) defined RQA5 radiographic technique. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications introduced by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be taken into account when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
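The DQE itself combines the MTF and NPS; a minimal sketch of the IEC 62220-1-style formula DQE(u) = S² · MTF²(u) / (q · NPS(u)), with linearized large-area signal S and photon fluence q, is given below. The function and argument names are illustrative:

```python
import numpy as np

def dqe(mtf, nps, mean_signal, fluence):
    """DQE(u) = mean_signal**2 * MTF(u)**2 / (fluence * NPS(u)).

    mtf, nps    : arrays sampled on the same spatial-frequency axis
    mean_signal : linearized large-area mean pixel value S
    fluence     : incident photon fluence q (photons per unit area)
    """
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return mean_signal**2 * mtf**2 / (fluence * nps)
```

As a sanity check, an ideal detector with MTF = 1 and a purely Poisson-limited NPS of S²/q yields DQE = 1 at all frequencies.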

  2. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1) defined RQA5 radiographic technique. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications introduced by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be taken into account when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  3. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  4. Image processing for medical diagnosis of human organs

    International Nuclear Information System (INIS)

    Tamura, Shin-ichi

    1989-01-01

The report first describes expectations and needs for diagnostic imaging in the field of clinical medicine, radiation medicine in particular, viewed by the author as an image processing expert working at a medical institute. Then, medical image processing techniques are discussed in relation to advanced information processing techniques that are currently drawing much attention in the field of engineering. Finally, practical applications of image processing techniques to diagnosis are discussed. In the field of clinical diagnosis, advanced equipment such as PACS (picture archiving and communication system) has come into wider use, and efforts have been made to shift from visual examination to more quantitative and objective diagnosis by means of such advanced systems. In clinical medicine, practical, robust systems are more useful than sophisticated ones. It is difficult, though important, to develop completely automated diagnostic systems. The urgent, realistic goal, therefore, is to develop effective diagnosis support systems. In particular, operation support systems equipped with three-dimensional displays will be very useful. (N.K.)

  5. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC.
Both statistical analysis methods revealed that the

  6. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

This paper describes the development and application of an image processing system for measurements of flow patterns occurring in natural circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tape. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behavior of various eddies, with the low velocities and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)
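Once particle centers have been matched between consecutive frames, the velocity-vector step described in this record reduces to a finite difference. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def velocity_vectors(centers_prev, centers_next, dt):
    """Velocity vectors from matched particle center points in two
    consecutive video frames separated by dt seconds.

    centers_prev, centers_next : (n, 2) arrays of (x, y) positions
    Returns an (n, 2) array of velocity components.
    """
    centers_prev = np.asarray(centers_prev, dtype=float)
    centers_next = np.asarray(centers_next, dtype=float)
    return (centers_next - centers_prev) / dt

# One particle moving 1 unit in x and 2 units in y over half a second.
v = velocity_vectors([[0.0, 0.0]], [[1.0, 2.0]], dt=0.5)
```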

  7. Image processing for HTS SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-01-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques
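Two of the corrections described in this record, line-drift removal and elimination of mains-frequency noise peaks from the FFT of a line scan, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the 50 Hz mains frequency, sample rate and demo data are assumptions:

```python
import numpy as np

def remove_line_drift(img):
    """Equalize each scan line's mean field value to remove line-to-line drift."""
    return img - img.mean(axis=1, keepdims=True) + img.mean()

def notch_powerline(line, sample_rate_hz, mains_hz=50.0, width_hz=1.0):
    """Zero the FFT bins of a single scan line near the mains frequency."""
    spec = np.fft.rfft(line)
    freqs = np.fft.rfftfreq(line.size, d=1.0 / sample_rate_hz)
    spec[np.abs(freqs - mains_hz) < width_hz] = 0.0
    return np.fft.irfft(spec, n=line.size)

# Demo: a flat field with a different additive offset on every scan line...
drifted = np.tile(np.array([[0.0], [1.0], [2.0], [3.0]]), (1, 8))
flattened = remove_line_drift(drifted)

# ...and a scan line dominated by 50 Hz mains interference, sampled at 1 kHz.
t = np.arange(1000) / 1000.0
cleaned = notch_powerline(np.sin(2 * np.pi * 50.0 * t), sample_rate_hz=1000.0)
```

After drift removal the demo image is constant at its global mean, and after the notch the pure 50 Hz interference is reduced to numerical noise.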

  8. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  9. High-speed Imaging of Global Surface Temperature Distributions on Hypersonic Ballistic-Range Projectiles

    Science.gov (United States)

    Wilder, Michael C.; Reda, Daniel C.

    2004-01-01

The NASA-Ames ballistic range provides a unique capability for aerothermodynamic testing of configurations in hypersonic, real-gas, free-flight environments. The facility can closely simulate conditions at any point along practically any trajectory of interest experienced by a spacecraft entering an atmosphere. Sub-scale models of blunt atmospheric entry vehicles are accelerated by a two-stage light-gas gun to speeds as high as 20 times the speed of sound to fly ballistic trajectories through a 24 m long vacuum-rated test section. The test-section pressure (effective altitude), the launch velocity of the model (flight Mach number), and the test-section working gas (planetary atmosphere) are independently variable. The model travels at hypersonic speeds through a quiescent test gas, creating a strong bow-shock wave and real-gas effects that closely match conditions achieved during actual atmospheric entry. The challenge with ballistic range experiments is to obtain quantitative surface measurements from a model traveling at hypersonic speeds. The models are relatively small (less than 3.8 cm in diameter), which limits the spatial resolution possible with surface-mounted sensors. Furthermore, since the model is in flight, surface-mounted sensors require some form of on-board telemetry, which must survive the massive acceleration loads experienced during launch (up to 500,000 gravities). Finally, the model and any on-board instrumentation will be destroyed at the terminal wall of the range. For these reasons, optical measurement techniques are the most practical means of acquiring data. High-speed thermal imaging has been employed in the Ames ballistic range to measure global surface temperature distributions and to visualize the onset of transition to turbulent flow on the forward regions of hypersonic blunt bodies. Both visible-wavelength and infrared high-speed cameras are in use. The visible-wavelength cameras are intensified CCD imagers capable of integration

  10. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

Information on medical images falls into two patterns: 1) off-line images on film, such as x-ray films, cell images, and chromosome images; and 2) on-line images detected through sensors, such as RI images, ultrasonic images, and thermograms. These images are divided into three characteristic types: two-dimensional, three-dimensional, and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many image fields have been studied, including RI, thermograms, x-ray film, x-ray TV images, cancer cells, blood cells, bacteria, chromosomes, ultrasonics, and vascular images. Processing of RI images is comparatively easy because of their digital form; software covers smoothing, restoration (iterative approximation), Fourier transformation, differentiation and subtraction. Images of stomach and chest x-ray films have been processed automatically utilizing computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells and, recently, for blood cell classification have also been developed. Acoustical holography imaging and moire topography have also been studied in Japan. (author)

  11. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to the anomalies present in the parking spaces, such as uneven illumination, distorted slot lines and overlapping of cars, present-day conventional algorithms have difficulty processing the image for accurate results. The algorithm proposed uses a combination of image pre-processing and false contour detection ...

  12. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

In this project, a Sparc VME-based MaxSparc system running the Solaris operating environment was selected as the dedicated image processing hardware for robot vision applications. This report systematizes the operation of the Datacube MaxSparc system, a high-performance real-time image processing platform. ImageFlow example programs for running the MaxSparc system are studied and analyzed, and the state of the art in Datacube system utilization is surveyed. In the next phase, an advanced real-time image processing platform for robot vision applications will be developed. (author). 19 refs., 71 figs., 11 tabs.

  13. Application of lidar techniques to time-of-flight range imaging.

    Science.gov (United States)

    Whyte, Refael; Streeter, Lee; Cree, Michael J; Dorrington, Adrian A

    2015-11-20

Amplitude-modulated continuous wave (AMCW) time-of-flight (ToF) range imaging cameras measure distance by illuminating the scene with amplitude-modulated light and measuring the phase difference between the transmitted and reflected modulation envelope. This method of optical range measurement suffers from errors caused by multiple propagation paths, motion, phase wrapping, and nonideal amplitude modulation. In this paper a ToF camera is modified to operate in modes analogous to continuous wave (CW) and stepped frequency continuous wave (SFCW) lidar. In CW operation the velocity of objects can be measured. CW measurement of velocity was linear with true velocity (R² = 0.9969). Qualitative analysis of a complex scene confirms that range measured by SFCW is resilient to errors caused by multiple propagation paths, phase wrapping, and nonideal amplitude modulation which plague AMCW operation. In viewing a complicated scene through a translucent sheet, quantitative comparison of AMCW with SFCW demonstrated a reduction in the median error from -1.3 m to -0.06 m, with the interquartile range of the error reduced from 4.0 m to 0.18 m.
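The basic AMCW phase-to-distance relation underlying these measurements, d = c · Δφ / (4π · f_mod), and the phase-wrapping ambiguity range c / (2 · f_mod) can be sketched as follows (the helper names are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def amcw_range(phase_rad, f_mod_hz):
    """Distance from the measured phase shift of the modulation envelope.

    The round trip covers twice the distance, hence the factor 4*pi rather
    than 2*pi in the denominator.
    """
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

def ambiguity_range(f_mod_hz):
    """Maximum unambiguous distance before the phase wraps past 2*pi."""
    return C / (2.0 * f_mod_hz)
```

At a typical 30 MHz modulation frequency the ambiguity range is about 5 m, and a phase shift of π corresponds to exactly half of it.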

  14. Open source software in a practical approach for post processing of radiologic images.

    Science.gov (United States)

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, a graphical user interface, ease of installation and advanced features beyond simple image display. The capabilities of data import, data export, metadata handling, 2D viewing, 3D viewing, supported platforms and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  15. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software reveals a more or less visible mismatch in image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The combined effect of these degradation factors can be characterized by the system point spread function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
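The degradation step, filtering the rendered image through a Gaussian approximation of the real system's PSF, can be sketched as follows. This is a minimal SciPy illustration; the sigma value is hypothetical, standing in for one derived from the slanted-edge MTF measurement:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_rendered(rendered, sigma_px):
    """Blur a rendered image with a Gaussian PSF approximation.

    sigma_px would in practice be fitted to the real camera's measured PSF;
    the value used in the demo below is purely illustrative.
    """
    return gaussian_filter(rendered, sigma=sigma_px)

# Demo on an impulse image: the blurred result is the discrete PSF itself.
impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0
blurred = degrade_rendered(impulse, sigma_px=2.0)
```

The filter conserves total intensity while spreading the impulse, so the blurred image sums to one and peaks (much lower) at the original impulse location.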

  16. Processing of space images and geologic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Yudin, V S

    1981-01-01

Using data for standard sections, a correlation was established between natural formations, in their geologic/geophysical dimensions, and the form they take in the imagery. With computer processing, important data can be derived from the image. Use of the above correlations has made it possible to produce a number of preliminary classifications of tectonic structures and to determine certain ongoing processes in a given section. The derived data may be used in the search for useful minerals.

  17. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang , Chengjun; Hua , Chunjian

    2013-01-01

International audience; From the perspective of actual production, the paper presents advances in the application of image processing to fruit grading from several aspects, such as the processing precision and processing speed of image processing technology. Furthermore, the different algorithms for detecting size, shape, color and defects are combined effectively to reduce the complexity of each algorithm, and achieving a balance between processing precision and processing speed is key to au...

  18. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

Full Text Available This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array is designed in the 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4 bit improvement over the SNDR and ENOB without calibration.

  19. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

The present study was undertaken to compare images geometrically corrected with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with images standardized by an individually custom-fabricated alignment instrument. Two non-standardized periapical films with an XCP film holder only were taken at the lower molar region of 19 volunteers. Two standardized periapical films with a customized XCP film holder with impression material on the bite-block were taken for each person. Geometric correction was performed with Adobe Photoshop and NIH Image. Specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of the grey values of the subtracted images were used to measure image similarity. The average standard deviation of the grey values of the subtracted images in the standardized group was slightly lower than that of the corrected group; however, the difference was found to be statistically insignificant (p>0.05). We conclude that NIH Image and Adobe Photoshop can be used for correction of non-standardized films taken with an XCP film holder at the lower molar region.
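The similarity measure used here, the standard deviation of the grey values of the subtracted images, can be sketched as follows. This is a minimal NumPy illustration with synthetic data, not the study's software:

```python
import numpy as np

def subtraction_similarity(img_a, img_b):
    """Standard deviation of the grey values of the difference image.

    Lower values indicate better geometric agreement between the two
    radiographs; a perfectly aligned identical pair gives zero.
    """
    diff = np.asarray(img_a, dtype=float) - np.asarray(img_b, dtype=float)
    return diff.std()

# Demo: a synthetic "film" compared against itself and a shifted copy.
rng = np.random.default_rng(1)
film = rng.integers(0, 256, size=(64, 64))
aligned = subtraction_similarity(film, film)
misaligned = subtraction_similarity(film, np.roll(film, 3, axis=1))
```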

  20. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for the mapping of a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is using dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  1. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

Digital image processing techniques are applied to image enhancement and to discontinuity detection and characterization in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained using more specialized equipment: mainframe computers and high-precision mechanical scanning digitizers. The procedures are intended as a preliminary stage toward automatic defect detection.
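Image histogram modification of the kind mentioned in this record can be sketched with a simple percentile-based contrast stretch. This is an illustrative example, not the authors' implementation:

```python
import numpy as np

def stretch_contrast(img, lo_pct=1.0, hi_pct=99.0):
    """Linear histogram stretch between two percentiles.

    Grey values at or below the low percentile map to 0, values at or above
    the high percentile map to 1; a typical enhancement step before
    discontinuity detection in radiographs.
    """
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((np.asarray(img, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
```

Stretching over the full range maps the minimum to 0, the maximum to 1, and interpolates linearly in between.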

  2. Method for automatic localization of MR-visible markers using morphological image processing and conventional pulse sequences: feasibility for image-guided procedures.

    Science.gov (United States)

    Busse, Harald; Trampel, Robert; Gründer, Wilfried; Moche, Michael; Kahn, Thomas

    2007-10-01

To evaluate the feasibility and accuracy of an automated method to determine the 3D position of MR-visible markers. Inductively coupled RF coils were imaged in a whole-body 1.5T scanner using the body coil and two conventional gradient echo sequences (FLASH and TrueFISP) with large imaging volumes up to (300 mm)³. To minimize background signals, a flip angle of approximately 1 degree was used. Morphological 2D image processing in orthogonal scan planes was used to determine the 3D positions of a configuration of three fiducial markers (FMC). The accuracies of the marker positions and of the orientation of the plane defined by the FMC were evaluated at various distances r(M) from the isocenter. Fiducial marker detection with conventional equipment (pulse sequences, imaging coils) was very reliable and highly reproducible over a wide range of experimental conditions. For the tested r(M), marker localization by image processing is feasible, simple, and very accurate. In combination with safe wireless markers, the method is found to be useful for image-guided procedures. (c) 2007 Wiley-Liss, Inc.
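The morphological 2D marker-detection step can be sketched as a threshold-label-centroid pipeline. This is a minimal SciPy illustration with synthetic marker blobs, not the authors' method:

```python
import numpy as np
from scipy import ndimage

def marker_centroids(image, threshold):
    """Locate bright fiducial markers in one scan plane.

    Thresholds the image, labels connected bright regions, and returns the
    intensity-weighted center of mass (row, col) of each region.
    """
    mask = image > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(image, labels, range(1, n + 1))

# Demo: two square "markers" on an empty background.
scan = np.zeros((20, 20))
scan[2:5, 2:5] = 1.0      # marker centered at (3, 3)
scan[10:13, 14:17] = 1.0  # marker centered at (11, 15)
centroids = marker_centroids(scan, threshold=0.5)
```

Running the same detection on two orthogonal scan planes yields the full 3D marker coordinates.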

  3. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI that are focused on the color information of the images. In addition, extensions and applications of FRQI representation, such as multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers and a blueprint for quantum video encryption and decryption have also been suggested. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.

  4. An invertebrate embryologist's guide to routine processing of confocal images.

    Science.gov (United States)

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
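Among the routine tasks the record lists, false coloring and channel merging are the simplest to show concretely. The sketch below stacks single-channel confocal frames into one RGB image; the channel names are invented for illustration:

```python
import numpy as np

def merge_channels(red=None, green=None, blue=None):
    """Stack single-channel confocal frames into one false-colour RGB image."""
    chans = [c for c in (red, green, blue) if c is not None]
    zeros = np.zeros(chans[0].shape, dtype=np.uint8)  # empty channel placeholder
    return np.dstack([c if c is not None else zeros
                      for c in (red, green, blue)])

actin = np.full((4, 4), 200, dtype=np.uint8)  # e.g. a phalloidin channel
dna   = np.full((4, 4),  90, dtype=np.uint8)  # e.g. a DAPI channel
rgb = merge_channels(red=actin, blue=dna)     # green left empty
```

Because the merge is a pure rearrangement of the raw data, it is exactly the kind of manipulation the guidelines would consider credible and repeatable.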

  5. Development of X-ray radiography examination technology by image processing method

    Energy Technology Data Exchange (ETDEWEB)

    Min, Duck Kee; Koo, Dae Seo; Kim, Eun Ka [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

Because the dimensions of nuclear fuel rods can be measured rapidly and accurately by X-ray radiography examination, an image processing system composed of a 979 CCD-L camera, an image processing card and fluorescent lighting was set up, enabling image processing to be performed. The X-ray radiography examination technology, which enables dimension measurement of nuclear fuel rods, was developed by the image processing method. Dimension measurement of a standard fuel rod by the image processing method gave a 2% reduction in relative measuring error compared with X-ray radiography film, and was better by 100-200 µm in measuring accuracy. (author). 9 refs., 22 figs., 3 tabs.
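A rod-dimension measurement of the kind described usually reduces to locating two edges in an intensity profile across the rod. The sketch below measures width at half maximum with linear sub-pixel interpolation; it is an assumed illustration, not the system's actual algorithm:

```python
import numpy as np

def rod_width_px(profile, level=0.5):
    """Width of a bright rod in a 1-D intensity profile, with linear
    sub-pixel interpolation at the two half-maximum crossings."""
    p = profile.astype(float)
    t = p.min() + level * (p.max() - p.min())   # crossing threshold
    above = np.nonzero(p >= t)[0]
    left, right = above[0], above[-1]
    # interpolate each edge between the last sample below and first above
    lf = left - (p[left] - t) / (p[left] - p[left - 1])
    rf = right + (p[right] - t) / (p[right] - p[right + 1])
    return rf - lf

# trapezoidal profile of a rod imaged against a dark background
profile = np.array([0, 0, 40, 100, 100, 100, 40, 0, 0])
width = rod_width_px(profile)
```

Multiplying the pixel width by a calibrated pixel pitch would give the physical dimension, which is where the 100-200 µm accuracy figure in the record comes in.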

  6. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

Image processing techniques, including pattern recognition techniques, play important roles in high precision diagnosis and radiation therapy. The author reviews a symposium on medical image information, which was held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics from September 23rd to 25th. In this symposium, we had three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field, and the talks of the three invited speakers. (author)

  7. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  9. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of the image processing performance measure, presenting the use of this measure as an actual value in a feedback structure. The idea is that a control loop built in this way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented on the example of optical character recognition in an industrial application. Metrics for quantification of performance at different image processing levels are discussed. The issues that those metrics should address from both the image processing and the control point of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.
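The feedback idea can be made concrete with a toy loop: treat the foreground-pixel fraction of a binarized image as the performance measure and drive it to a set point by adjusting the threshold with an integral controller. All values here (gain, set point, image) are invented for illustration:

```python
import numpy as np

def tune_threshold(img, target_fraction, gain=200.0, steps=50):
    """Integral control loop: drive the foreground-pixel fraction
    (the feedback variable) toward a given set point."""
    t = img.mean()                      # initial threshold guess
    for _ in range(steps):
        actual = (img > t).mean()       # measured performance value
        error = target_fraction - actual
        t -= gain * error               # more foreground wanted -> lower threshold
    return t

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(128, 128))
t = tune_threshold(img, target_fraction=0.25)
fraction = (img > t).mean()
```

In a real OCR pipeline the feedback variable would be a recognition-quality metric rather than a pixel fraction, but the loop structure is the same.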

  10. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  11. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory, combined with message-based communication built on IPC techniques, is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
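The shared-memory exchange at the heart of this design can be sketched with Python's standard `multiprocessing.shared_memory` module: one side publishes pixel data in a named block, the other attaches by name and processes it without copying. This is a generic IPC illustration, not the OsiriX/MeVisLab protocol itself:

```python
import numpy as np
from multiprocessing import shared_memory

# "PACS workstation side": place an image slice in a named shared-memory block
img = np.arange(12, dtype=np.int16).reshape(3, 4)
shm = shared_memory.SharedMemory(create=True, size=img.nbytes)
np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)[:] = img

# "image-processing server side": attach by name, no copy of pixel data
peer = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(img.shape, dtype=img.dtype, buffer=peer.buf)
result = view * 2            # stand-in for a real processing step

peer.close()                 # each side releases its mapping
shm.close()
shm.unlink()                 # creator removes the block when done
```

In the real system the block name and the processing parameters would travel over the predefined message protocol; only the bulk pixel data goes through shared memory.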

  12. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
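The four-sample arctangent phase recovery that the abstract describes can be written out directly for the ideal sinusoidal case. The target distance and modulation frequency below are invented for the example, and the 90-degree sampling convention is an assumption:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_samples(s0, s1, s2, s3, f_mod):
    """Phase, and hence range, from four samples of the correlation
    waveform taken 90 degrees apart (ideal sinusoidal case)."""
    phase = np.arctan2(s3 - s1, s0 - s2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)   # round-trip phase -> one-way range

# simulate a target 3.2 m away under 20 MHz modulation
f_mod = 20e6
true_phase = 4 * np.pi * f_mod * 3.2 / C
s0, s1, s2, s3 = (np.cos(true_phase + k * np.pi / 2) for k in range(4))
dist = range_from_samples(s0, s1, s2, s3, f_mod)
```

With square or otherwise non-sinusoidal waveforms, odd harmonics bias this arctangent estimate, which is exactly the nonlinearity the paper's harmonic-rejection sampling cancels.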

  13. Diagnosis of skin cancer using image processing

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains more information and therefore this channel is always chosen. This information is multiplied point by point with a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value is in the spectral index range, three types of cancer can be detected. Values found outside this range are benign lesions.
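The spectral-index computation described above can be sketched step by step. The k-law exponent value and the synthetic mask/image are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

def spectral_index(green, mask, k=0.3):
    """Spectral index as described: k-law (nonlinear) Fourier spectrum,
    binarized on the sign of its real part, normalized by the mask area."""
    spectrum = np.fft.fft2(green * mask)       # point-by-point masked channel
    # k-law nonlinearity: compress the magnitude, keep the phase
    nonlinear = np.abs(spectrum) ** k * np.exp(1j * np.angle(spectrum))
    density = (nonlinear.real > 0).astype(int)  # unit values where Re > 0
    return density.sum() / mask.sum()

# synthetic lesion: random texture inside a circular binary mask
rng = np.random.default_rng(7)
green = rng.uniform(0.0, 1.0, (32, 32))
y, x = np.mgrid[0:32, 0:32]
mask = (((y - 16) ** 2 + (x - 16) ** 2) <= 100).astype(float)
index = spectral_index(green, mask)
```

Classification then amounts to checking whether the resulting index falls inside the empirically determined cancer range.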

  14. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

The decreasing cost of computing power and the introduction of low cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is however a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: hierarchical, open, object oriented, with object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been used for more than one and a half years by users with different applications. It proved to be an excellent tool for helping people adapt to the system, and for standardizing and exchanging software, while preserving the flexibility to allow for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  15. A gamma camera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

A microcomputer based gamma camera image processing system has been introduced. Compared with other systems, the feature of this system is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller, etc. Thus image processing has been sped up and the performance-to-cost ratio of the system increased.

  16. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
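The lateral-inhibition model the abstract contrasts against can be shown in one dimension: convolve a signal with a narrow positive Gaussian minus a broader negative one. The kernel widths below are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def dog_filter(signal, sigma_c=1.0, sigma_s=3.0, radius=9):
    """Difference-of-Gaussians ('Mexican hat'): positive centre minus
    broader negative surround, as in lateral-inhibition models."""
    centre = np.convolve(signal, gaussian_kernel(sigma_c, radius), mode='same')
    surround = np.convolve(signal, gaussian_kernel(sigma_s, radius), mode='same')
    return centre - surround

# a step edge: the DoG response peaks at the discontinuity (edge enhancement)
step = np.r_[np.zeros(50), np.ones(50)]
resp = dog_filter(step)
```

The positive overshoot and negative undershoot flanking the step are the Mach-band-like effects usually credited to lateral inhibition; the paper's point is that an intensity-dependent spread process reproduces them without any subtractive term.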

  17. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities

    Science.gov (United States)

    Grinstead, Jay H.; Wilder, Michael C.; Reda, Daniel C.; Cruden, Brett A.; Bogdanoff, David W.

    2010-01-01

    The Electric Arc Shock Tube (EAST) facility and Hypervelocity Free Flight Aerodynamic Facility (HFFAF, an aeroballistic range) at NASA Ames support basic research in aerothermodynamic phenomena of atmospheric entry, specifically shock layer radiation spectroscopy, convective and radiative heat transfer, and transition to turbulence. Innovative optical instrumentation has been developed and implemented to meet the challenges posed from obtaining such data in these impulse facilities. Spatially and spectrally resolved measurements of absolute radiance of a travelling shock wave in EAST are acquired using multiplexed, time-gated imaging spectrographs. Nearly complete spectral coverage from the vacuum ultraviolet to the near infrared is possible in a single experiment. Time-gated thermal imaging of ballistic range models in flight enables quantitative, global measurements of surface temperature. These images can be interpreted to determine convective heat transfer rates and reveal transition to turbulence due to isolated and distributed surface roughness at hypersonic velocities. The focus of this paper is a detailed description of the optical instrumentation currently in use in the EAST and HFFAF.

  18. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  19. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-01-01

AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development is described and the image processing approach developed for the proof-of-concept study is demonstrated. A development update and plans for future enhancements are also presented.
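The manual pixel-counting step the record describes maps naturally onto a thresholding sketch: classify dark pixels as void and report the void fraction. The gray levels and void geometry below are invented for the example:

```python
import numpy as np

def void_fraction(gray, void_level=60):
    """Fraction of sample pixels darker than a void threshold -- an
    automated stand-in for counting 'pixels which fall within a void'."""
    return (gray < void_level).mean()

# synthetic core scan: bright cement paste (200) with one dark 10x10 void (20)
core = np.full((100, 100), 200, dtype=np.uint8)
core[40:50, 40:50] = 20
fv = void_fraction(core)
```

A production system would add calibration and void-size statistics, but the per-pixel classification above is the core of the automated evaluation.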

  20. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read

  1. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Recently, two new concepts for real-time hyperspectral image processing have been introduced: Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI). Both of these can be used to design algorithms and also form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and on their real-time and causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but generally not encountered in multispectral imaging. This book is written particularly to address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer 2016) can be considered its companion. Includes preliminary background which is essential to those who work in hyperspectral ima...

  2. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. 
This work presents an automated genotyping tool from DNA
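The first workflow step, lane segmentation, can be sketched by projecting the gel onto its columns and grouping consecutive bright columns into lane intervals. This is a simplified illustration; GELect itself must additionally handle lane distortion and the other artifacts the abstract lists:

```python
import numpy as np

def segment_lanes(gel):
    """Group consecutive bright columns (mean intensity above the global
    column-profile mean) into lane intervals [start, stop)."""
    profile = gel.mean(axis=0)             # collapse rows -> column profile
    active = profile > profile.mean()
    lanes, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i                      # lane begins
        elif not on and start is not None:
            lanes.append((start, i))       # lane ends
            start = None
    if start is not None:                  # lane touching the right edge
        lanes.append((start, len(active)))
    return lanes

# synthetic gel: three bright lanes on a dark background
gel = np.zeros((20, 30))
gel[:, 2:6] = 255.0
gel[:, 12:16] = 255.0
gel[:, 22:26] = 255.0
lanes = segment_lanes(gel)
```

Band extraction then repeats the same projection idea within each lane, this time along the rows.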

  3. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique

    Science.gov (United States)

    2015-01-01

    Background DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. Results We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. Conclusions This work presents an

  4. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose. The images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two and two simulated nodules were positioned over the lungs and the mediastinum of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs. The time spent viewing the full-size radiographs and the digital images was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant difference in the detectability of the simulated nodules was seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization or image processing was available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung. No difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab

  5. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps have been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  6. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    Science.gov (United States)

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  7. Incorporation of a laser range scanner into image-guided liver surgery: Surface acquisition, registration, and tracking

    International Nuclear Information System (INIS)

    Cash, David M.; Sinha, Tuhin K.; Chapman, William C.; Terawaki, Hiromi; Dawant, Benoit M.; Galloway, Robert L.; Miga, Michael I.

    2003-01-01

    As image guided surgical procedures become increasingly diverse, there will be more scenarios where point-based fiducials cannot be accurately localized for registration and rigid body assumptions no longer hold. As a result, procedures will rely more frequently on anatomical surfaces for the basis of image alignment and will require intraoperative geometric data to measure and compensate for tissue deformation in the organ. In this paper we outline methods for which a laser range scanner may be used to accomplish these tasks intraoperatively. A laser range scanner based on the optical principle of triangulation acquires a dense set of three-dimensional point data in a very rapid, noncontact fashion. Phantom studies were performed to test the ability to link range scan data with traditional modes of image-guided surgery data through localization, registration, and tracking in physical space. The experiments demonstrate that the scanner is capable of localizing point-based fiducials to within 0.2 mm and capable of achieving point and surface based registrations with target registration error of less than 2.0 mm. Tracking points in physical space with the range scanning system yields an error of 1.4±0.8 mm. Surface deformation studies were performed with the range scanner in order to determine if this device was capable of acquiring enough information for compensation algorithms. In the surface deformation studies, the range scanner was able to detect changes in surface shape due to deformation comparable to those detected by tomographic image studies. Use of the range scanner has been approved for clinical trials, and an initial intraoperative range scan experiment is presented. In all of these studies, the primary source of error in range scan data is deterministically related to the position and orientation of the surface within the scanner's field of view. However, this systematic error can be corrected, allowing the range scanner to provide a rapid, robust

  8. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available. The article shows the implementation of the grayscale morphology vHGW algorithm for selecting borders in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images with different resolutions and different sizes of the structuring element.
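    The vHGW (van Herk/Gil-Werman) scheme referenced above computes grayscale dilation or erosion with a fixed number of comparisons per pixel regardless of the structuring-element size. A minimal NumPy sketch of the 1D dilation case follows; it is our own illustration and does not reproduce the article's OpenMP/CUDA implementations.

```python
import numpy as np

def vhgw_dilate_1d(f, k):
    # 1D grayscale dilation (running maximum) over a flat structuring
    # element of odd length k, in the van Herk/Gil-Werman style:
    # per-segment prefix and suffix maxima make the cost per pixel
    # independent of k.
    assert k % 2 == 1
    f = np.asarray(f, dtype=float)
    n = len(f)
    r = k // 2
    pad = (1 - n) % k                     # make padded length a multiple of k
    g = np.pad(f, (r, r + pad), mode="edge")
    pre = g.copy()                        # prefix maxima within each k-segment
    suf = g.copy()                        # suffix maxima within each k-segment
    for start in range(0, len(g), k):
        seg = g[start:start + k]
        pre[start:start + k] = np.maximum.accumulate(seg)
        suf[start:start + k] = np.maximum.accumulate(seg[::-1])[::-1]
    # the window for output i spans padded indices [i, i + k - 1], which
    # touches at most two segments: combine one suffix and one prefix max
    return np.array([max(suf[i], pre[i + k - 1]) for i in range(n)])
```

    The same routine applied to each image row, then each column, gives 2D dilation with a square structuring element.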

  9. Spatially assisted down-track median filter for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N Reginald

    2014-10-07

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
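    As a rough illustration of down-track median filtering (not the patented, spatially assisted method), a common GPR post-processing step subtracts a running median taken along the down-track direction to suppress banded background clutter; the function and parameter names below are hypothetical.

```python
import numpy as np

def downtrack_median_suppress(img, w):
    # Suppress slowly varying background clutter in a GPR image by
    # subtracting, at each depth bin, a running median taken along the
    # down-track axis (columns = scan positions). w is an odd window width.
    # Illustrative sketch only, not the patented algorithm.
    depth, track = img.shape
    r = w // 2
    padded = np.pad(img, ((0, 0), (r, r)), mode="edge")
    out = np.empty((depth, track), dtype=float)
    for j in range(track):
        window = padded[:, j:j + w]               # (depth, w) slice
        out[:, j] = img[:, j] - np.median(window, axis=1)
    return out
```

    A localized target survives the subtraction (its short extent barely moves the median), while down-track-invariant clutter is cancelled, which is what makes peaks in the residual image easier to detect.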

  10. Enhancement of dental x-ray images by two channel image processing

    International Nuclear Information System (INIS)

    Mitra, S.; Yu, T.H.

    1991-01-01

    In this paper, the authors develop a new algorithm for the enhancement of low-contrast details of dental X-ray images using a two-channel structure. The algorithm first decomposes an input image in the frequency domain into two parts by filtering: one containing the low-frequency components and the other containing the high-frequency components. Then these parts are enhanced separately using a transform magnitude modifier. Finally, a contrast-enhanced image is formed by combining these two processed parts. The performance of the proposed algorithm is illustrated through enhancement of dental X-ray images. The algorithm can be easily implemented on a personal computer.
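    The two-channel split described above can be sketched in a few lines. Here a simple spatial box filter stands in for the paper's frequency-domain filtering, and the gains `alpha` and `beta` are assumed stand-ins for the transform magnitude modifier.

```python
import numpy as np

def two_channel_enhance(img, alpha=1.0, beta=2.0, size=5):
    # Two-channel contrast enhancement sketch: split the image into a
    # low-frequency part (local mean) and a high-frequency residual,
    # apply separate gains, and recombine. A box filter is an assumed
    # simplification of the paper's frequency-domain decomposition.
    r = size // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    low = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + size, j:j + size].mean()
    high = img - low                     # high-frequency residual
    return alpha * low + beta * high     # boost detail, keep base level
```

    With `beta > alpha`, fine low-contrast detail is amplified while the overall tonal level of the image is preserved.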

  11. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
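    A subband split of the kind SUBTRANS-style routines produce can be illustrated with a one-level 2D Haar decomposition. This NumPy sketch (our illustration, not the SUBTRANS code) yields four spatial-frequency subbands and inverts them exactly; cascading it on the LL band gives deeper decompositions.

```python
import numpy as np

def haar_subbands(img):
    # One-level 2D Haar decomposition into four spatial-frequency
    # subbands: LL (coarse), LH/HL (horizontal/vertical detail), HH
    # (diagonal detail). Image dimensions are assumed even.
    a = img.astype(float)
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2   # pairwise averages along rows
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2   # pairwise differences along rows
    LL = (lo_r[0::2] + lo_r[1::2]) / 2     # then along columns
    HL = (lo_r[0::2] - lo_r[1::2]) / 2
    LH = (hi_r[0::2] + hi_r[1::2]) / 2
    HH = (hi_r[0::2] - hi_r[1::2]) / 2
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    # Invert the split exactly: averages/differences recover each pair.
    lo_r = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi_r = np.empty_like(lo_r)
    lo_r[0::2], lo_r[1::2] = LL + HL, LL - HL
    hi_r[0::2], hi_r[1::2] = LH + HH, LH - HH
    out = np.empty((lo_r.shape[0], lo_r.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo_r + hi_r, lo_r - hi_r
    return out
```

    Quantizing or discarding the detail subbands before reconstruction is the basic move behind the lossy-compression use mentioned above.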

  12. Image processing tensor transform and discrete tomography with Matlab

    CERN Document Server

    Grigoryan, Artyom M

    2012-01-01

    Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere. New co

  13. New real-time image processing system for IRFPA

    Institute of Scientific and Technical Information of China (English)

    WANG Bing-jian; LIU Shang-qian; CHENG Yu-bao

    2006-01-01

    Influenced by detector material, manufacturing technology, etc., every detector in an infrared focal plane array (IRFPA) will output a different voltage even if the input radiation flux is the same. This is called the non-uniformity of the IRFPA. At the same time, high background temperature, low temperature difference between targets and background, and the low responsivity of the IRFPA result in low contrast of infrared images. Non-uniformity correction and image enhancement are therefore important techniques for an IRFPA imaging system. This paper proposes a new real-time infrared image processing system based on a Field Programmable Gate Array (FPGA). The system implements non-uniformity correction, image enhancement, video synthesization, etc. By using a parallel architecture and pipeline techniques, the system processing speed is as high as 50 M x 12 bits per second. It is well suited to large IRFPA and high-frame-frequency IRFPA imaging systems. The system fits in a single FPGA.
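    The standard two-point non-uniformity correction conveys the idea behind the correction stage: per-pixel gain and offset are calibrated from two frames taken while the array views uniform blackbody sources, then applied to live frames. This is a generic textbook scheme, not necessarily the FPGA pipeline of the paper.

```python
import numpy as np

def two_point_nuc(frame, low_ref, high_ref, target_low, target_high):
    # Two-point non-uniformity correction for an IRFPA. low_ref/high_ref
    # are frames captured while viewing uniform sources whose desired
    # (uniform) output levels are target_low/target_high; the resulting
    # per-pixel gain and offset flatten every live frame.
    gain = (target_high - target_low) / (high_ref - low_ref)
    offset = target_low - gain * low_ref
    return gain * frame + offset
```

    If each pixel responds linearly as `g * flux + o`, the calibrated correction recovers the true flux exactly, which is what the test below checks on synthetic data.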

  14. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available. Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image-processing-based method is more objective, fast and accurate, and represents a key development trend in yarn appearance evaluation.

  15. Community Tools for Cartographic and Photogrammetric Processing of Mars Express HRSC Images

    Science.gov (United States)

    Kirk, R. L.; Howington-Kraus, E.; Edmundson, K.; Redding, B.; Galuszka, D.; Hare, T.; Gwinner, K.

    2017-07-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was

  16. COMMUNITY TOOLS FOR CARTOGRAPHIC AND PHOTOGRAMMETRIC PROCESSING OF MARS EXPRESS HRSC IMAGES

    Directory of Open Access Journals (Sweden)

    R. L. Kirk

    2017-07-01

    Full Text Available. The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result

  17. Lyophilized skeletal imaging composition

    International Nuclear Information System (INIS)

    Vanduzee, B.F.

    1983-01-01

    This invention encompasses a process for producing a dry-powder skeletal imaging kit. An aqueous solution of a diphosphonate, a stannous reductant, and, optionally, a stabilizer is prepared. The solution is adjusted to a pH within the range 4.2 to 4.8 and the pH-adjusted solution is then lyophilized. The adjustment of pH, within a particular range, during the process of manufacturing lyophilized diphosphonate containing skeletal imaging kits yields a kit which produces a technetium skeletal imaging agent with superior imaging properties. This improved performance is manifested through faster blood clearance and higher skeletal uptake of the technetium imaging agent

  18. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

    Full Text Available. Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps such as data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conform format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conform services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data of various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate.
It is designed to support time critical applications for security

  19. Processing, Cataloguing and Distribution of Uas Images in Near Real Time

    Science.gov (United States)

    Runkel, I.

    2013-08-01

    Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps such as data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conform format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conform services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data of various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate.
It is designed to support time critical applications for security demands - the images

  20. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  1. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2014-11-01

    Full Text Available. In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project in which UAIC ("Alexandru Ioan Cuza" University of Iasi) is one of the partners, together with the Technical University of Vienna, Austria, the CEA-LIST Institute from Paris, France, and Bilkent University from Ankara, Turkey. Our discussion in this work will focus mainly on components that are part of our image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (such as English, French, German and Romanian), and UAIC is responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is related to search results diversification. In order to solve this problem, we first process the user queries in both languages and, second, we create clusters of similar images.

  2. Parallel Processing of Images in Mobile Devices using BOINC

    Science.gov (United States)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.

  3. Parallel Processing of Images in Mobile Devices using BOINC

    Directory of Open Access Journals (Sweden)

    Curiel Mariela

    2018-04-01

    Full Text Available. Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: (a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and (b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.

  4. Development of an image processing system at the Technology Applications Center, UNM: Landsat image processing in mineral exploration and related activities. Final report

    International Nuclear Information System (INIS)

    Budge, T.K.

    1980-09-01

    This project was a demonstration of the capabilities of Landsat satellite image processing applied to the monitoring of mining activity in New Mexico. Study areas included the Navajo coal surface mine, the Jackpile uranium surface mine, and the potash mining district near Carlsbad, New Mexico. Computer classifications of a number of land use categories in these mines were presented and discussed. A literature review of a number of case studies concerning the use of Landsat image processing in mineral exploration and related activities was prepared. Included in this review is a discussion of the Landsat satellite system and the basics of computer image processing. Topics such as destriping, contrast stretches, atmospheric corrections, ratioing, and classification techniques are addressed. Summaries of the STANSORT II and ELAS software packages and the Technology Application Center's Digital Image Processing System (TDIPS) are presented

  5. Reconstruction, Processing and Display of 3D-Images

    International Nuclear Information System (INIS)

    Lenz, R.

    1986-01-01

    In the last few years a number of methods have been developed which can produce true 3D images, volumes of density values. We review two of these techniques (confocal microscopy and X-ray tomography) which were used in the reconstruction of some of our images. The other images came from transmission electron microscopes, gamma cameras and magnetic resonance scanners. A new algorithm is suggested which uses projection onto convex sets to improve the depth resolution in the microscopy case. Since we use a TV monitor as display device we have to project 3D volumes to 2D images. We use the following types of projections: reprojections, range images, color-coded depth and shaded surface displays. Shaded surface displays use the surface gradient to compute the gray value in the projection. We describe how this gradient can be computed from the range image and from the original density volume. Normally we compute a whole series of projections where the volume is rotated some degrees between two projections. In a separate display session we can display these images in stereo and motion. We describe how noise reduction filters, gray value transformations, geometric manipulations, gradient filters, texture filters and binary techniques can be used to remove uninteresting points from the volume. Finally, a filter design strategy is developed which is based on the optimal basis function approach by Hummel. We show that for a large class of patterns, in images of arbitrary dimensions, the optimal basis functions are rotation-invariant operators as introduced by Danielsson in the 2D case. We also describe how the orientation of a pattern can be computed from its feature vector. (With 107 refs.) (author)

  6. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
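    The erosion-based idea behind such pixel-level pattern maps can be sketched as follows. This toy version distinguishes only 'core' and 'edge' pixels; the published method also maps classes such as 'perforated' and 'patch', which are omitted here.

```python
import numpy as np

def classify_pattern(mask):
    # Minimal sketch of pixel-level pattern mapping on a binary
    # land-cover mask: 'core' = foreground whose full 3x3 neighborhood
    # is foreground (a morphological erosion), 'edge' = remaining
    # foreground, '' = background. Illustration only.
    m = mask.astype(bool)
    padded = np.pad(m, 1, mode="constant", constant_values=False)
    core = np.ones_like(m)
    # 3x3 erosion: logical AND over all nine neighborhood shifts
    for di in range(3):
        for dj in range(3):
            core &= padded[di:di + m.shape[0], dj:dj + m.shape[1]]
    labels = np.zeros(m.shape, dtype="<U4")
    labels[m] = "edge"
    labels[core] = "core"     # core pixels are a subset of foreground
    return labels
```

    Because classification reduces to erosions with structuring elements of chosen sizes, the "edge width" of the map can be tuned by the element size, unlike the fixed-window convolution approach the abstract compares against.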

  7. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14 inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format) viewed on a 625 line monitor were processed in 3 different ways: 1. standard display; 2. digital edge enhancement for the standard display; 3. inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities of the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases with mediastinal disease, 17 with pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases respectively. Sensitivity of conventional radiography when averaged over all findings was better than that of digital techniques (P<0.001). Differences in diagnostic accuracy measured by sensitivity and specificity between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that when using 512x512 image format, the routine use of digital edge enhancement and tone reversal at digital chest radiographs is not warranted. (author). 12 refs.; 4 figs.; 2 tabs

  8. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development as well as the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research has been devoted to image processing techniques, driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques which could be used at the transmitter end in order to ease the transmission and reconstruction of the images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and complexity in implementation. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.

  9. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step to realize automatic surveying and recognition. Traditional matching methods encounter some problems in digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometry and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with the traditional matching methods, the new algorithm has three improvements on image matching. First, a shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetic matching measure. Second, the topological connecting relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Finally, the theory of parameter adjustment with constraints is introduced into least squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental result shows that the algorithm has a higher matching speed and matching accuracy than the pyramid image matching algorithm based on gray-scale correlation.
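    The epipolar-line constraint reduces the conjugate-point search to a single row of a rectified image pair. A minimal normalized cross-correlation search along that row is sketched below; it is an illustration only and omits the paper's Delaunay-based ordering, synthetic measure and least-squares refinement. All names are our own.

```python
import numpy as np

def ncc_match_along_row(left, right, pt, half=2, search=10):
    # Find the conjugate of left-image point pt = (row, col) by sliding
    # a (2*half+1)^2 window along the same row of the rectified right
    # image (the epipolar line) and maximizing normalized
    # cross-correlation within +/- search columns.
    r, c = pt
    tpl = left[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    tpl = tpl - tpl.mean()
    best_col, best_score = None, -np.inf
    lo = max(half, c - search)
    hi = min(right.shape[1] - half, c + search + 1)
    for cc in range(lo, hi):
        win = right[r - half:r + half + 1, cc - half:cc + half + 1].astype(float)
        win = win - win.mean()
        denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
        if denom == 0:
            continue                      # flat window: NCC undefined
        score = (tpl * win).sum() / denom
        if score > best_score:
            best_col, best_score = cc, score
    return best_col, best_score
```

    The constraint turns a 2D search into a 1D one, which is exactly what makes the narrowed search scope described above pay off in poorly textured regions.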

  10. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    Science.gov (United States)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes under non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.

  11. Image processing can cause some malignant soft-tissue lesions to be missed in digital mammography images.

    Science.gov (United States)

    Warren, L M; Halling-Brown, M D; Looney, P T; Dance, D R; Wallis, M G; Given-Wilson, R M; Wilkinson, L; McAvinchey, R; Young, K C

    2017-09-01

    To investigate the effect of image processing on cancer detection in mammography, an observer study was performed using 349 digital mammography images of women with normal breasts, calcification clusters, or soft-tissue lesions, including 191 subtle cancers. Images underwent two types of processing: FlavourA (standard) and FlavourB (added enhancement). Six observers located features in the breast they suspected to be cancerous (4,188 observations). Data were analysed using jackknife alternative free-response receiver operating characteristic (JAFROC) analysis. Characteristics of the cancers detected with each image processing type were investigated. For calcifications, the JAFROC figure of merit (FOM) was equal to 0.86 for both types of image processing. For soft-tissue lesions, the JAFROC FOM was better for FlavourA (0.81) than FlavourB (0.78); this difference was significant (p=0.001). Using FlavourA, a greater number of cancers of all grades and sizes were detected than with FlavourB. FlavourA improved soft-tissue lesion detection in denser breasts (p=0.04 when volumetric density was over 7.5%). CONCLUSIONS: The detection of malignant soft-tissue lesions (which were primarily invasive) was significantly better with FlavourA than with FlavourB image processing. This is despite FlavourB having a higher-contrast appearance often preferred by radiologists. It is important that the clinical choice of image processing be based on objective measures. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    Science.gov (United States)

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

    We present an image processing approach to automatically analyze dual-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in the two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.
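Skeletonization of the kind mentioned above can be sketched with the classic Zhang-Suen thinning algorithm on a binary mask; this is a generic illustration, not the authors' implementation, and the toy bar image and the pixel-count length estimate are assumptions.

```python
def neighbours(img, r, c):
    """p2..p9, clockwise starting from the pixel above (r-1, c)."""
    return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
            img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

def zhang_suen(img):
    """Thin a binary image (list of 0/1 rows with a zero border)
    to a one-pixel-wide skeleton, in place."""
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, len(img) - 1):
                for c in range(1, len(img[0]) - 1):
                    if img[r][c] != 1:
                        continue
                    p = neighbours(img, r, c)
                    b = sum(p)
                    # a = number of 0->1 transitions around the pixel.
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r][c] = 0
                changed = True
    return img

# A 3-pixel-wide bar thins to a one-pixel line along its middle row;
# the skeleton pixel count then approximates the structure's length.
img = [[0] * 12 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 11):
        img[r][c] = 1
skeleton = zhang_suen(img)
print(sum(sum(row) for row in skeleton))
```

In the paper's setting, the same idea applied to a segmented cytoplasm mask yields a centerline whose pixel count serves as a length measure.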

  13. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. Image inversion can also be obtained without reconfiguring the network, whereas gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.

  14. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results on the computation time and usefulness of median filtering in radiographic imaging are given.
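The window-sort view of the median filter described above can be sketched as follows; this is a plain sequential illustration of the operator itself, not the paper's parallel-hardware implementation.

```python
import statistics

def median_filter(img, k=3):
    """k x k median filter on a grayscale image given as a list of
    lists; border pixels are left unchanged. Sorting the k*k window
    generalizes directly to other rank-order operators (min, max,
    percentile) by picking a different order statistic."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + dy][x + dx]
                      for dy in range(-r, r + 1)
                      for dx in range(-r, r + 1)]
            out[y][x] = statistics.median(window)
    return out

# Impulse noise in a flat region is removed by the median.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
print(median_filter(img)[2][2])  # 10
```

On parallel hardware, the per-pixel window sort is exactly the part that can be distributed, which is why the paper treats the filter as a parallel sort of the k window values.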

  15. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques

    International Nuclear Information System (INIS)

    Karimian, A.; Yazdani, S.; Askari, M. A.

    2011-01-01

    Radiographic inspection is one of the most widely employed medical testing methods. Because of the poor contrast and high unsharpness of radiographic film images, converting radiographs to a digital format and applying digital image processing is the best way to enhance image quality and assist the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. The chest images were digitised and processed with two classes of algorithms: (i) spatial-domain and (ii) frequency-domain techniques. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions are detectable. Therefore, these suggested techniques may help medical specialists to diagnose defects at the primary stages and help to prevent repeat X-ray examinations of paediatric patients. (authors)
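The study does not specify which spatial-domain algorithm was used, so as an assumed example, one common spatial-domain enhancement of low-contrast film digitizations is global histogram equalization, sketched below on a toy 8-bit image.

```python
def equalize(img, levels=256):
    """Global histogram equalization for an 8-bit grayscale image
    given as a list of lists; a basic spatial-domain enhancement."""
    h, w = len(img), len(img[0])
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    # Cumulative distribution, remapped onto the full gray range.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = h * w
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [[lut[v] for v in row] for row in img]

# A low-contrast image (gray values 100..103) is stretched to 0..255.
print(equalize([[100, 101], [102, 103]]))  # [[0, 85], [170, 255]]
```

Frequency-domain counterparts would instead modify the Fourier spectrum of the digitized film, e.g. with high-frequency-emphasis filters.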

  16. The development of application technology for image processing in nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woog Ki; Sohn, Surg Won; Kim, Seung Ho; Hwang, Suk Yeoung; Kim, Byung Soo

    1991-01-01

    The object of this project is to develop application technology for image processing in nuclear facilities, where image signals are used for enhancing the reliability and safety of operation, reducing operator radiation exposure, and automating operational processes. We have studied such applications of image processing in nuclear facilities as non-tactile measurement, remote and automatic inspection, remote control, and enhanced analysis of visual information. On this basis, an automation system and a real-time image processing system were developed. Nuclear power nowadays accounts for over 50% of our country's electric power supply, so technological support is required for state-of-the-art technology in the nuclear industry and its related fields. In particular, image processing technology is indispensable for enhancing the reliability and safety of operation and for automating processes in places like nuclear power plants and radioactive environments. It is important that image processing technology be linked to nuclear engineering to enhance the reliability and safety of nuclear operation, as well as to decrease the dose rate. (Author)

  17. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  18. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    Science.gov (United States)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing images of neural cells to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1- environmental noise reduction, 2- neural cell segmentation, 3- neural cell classification based on their dendrites' growth conditions, and 4- extraction and measurement of neuron features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  19. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  20. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  1. An electronic image processing device featuring continuously selectable two-dimensional bipolar filter functions and real-time operation

    International Nuclear Information System (INIS)

    Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.

    1981-01-01

    A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images, with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering, which selectively controls the contrast of an image according to the spatial frequency content of relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper midrange spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, either prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real time by adjusting a set of front panel controls or by a programmed microprocessor control. Two TV monitors are used, one for processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the

  2. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output, which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  3. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    č. 53 (2003), s. 44-45 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z1075907 Keywords : art conservation * digital image processing * change detection Subject RIV: JD - Computer Applications, Robotics

  4. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  5. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
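Building an extended-dynamic-range spot map from several CCD integration times might be sketched as below; the selection rule (keep the longest unsaturated exposure, rescaled to a common unit) and the synthetic spot are assumptions, not the authors' exact procedure.

```python
import numpy as np

def extended_range_map(frames, exposures, saturation=255):
    """Combine CCD frames taken at different integration times into one
    extended-dynamic-range map: each pixel keeps the longest unsaturated
    exposure, scaled by its integration time to a common radiance unit."""
    out = np.zeros(frames[0].shape)
    # Visit from shortest to longest exposure so longer (less noisy)
    # unsaturated measurements overwrite shorter ones.
    for i in np.argsort(exposures):
        ok = frames[i] < saturation
        out[ok] = frames[i][ok] / exposures[i]
    return out

def centroid(spot_map):
    """Intensity-weighted centroid of a normalized spot map."""
    total = spot_map.sum()
    ys, xs = np.indices(spot_map.shape)
    return (ys * spot_map).sum() / total, (xs * spot_map).sum() / total

# Synthetic spot of radiance 100 at (2, 3): the long exposure saturates
# the peak, but the short exposure recovers it.
radiance = np.zeros((6, 6))
radiance[2, 3] = 100.0
short = np.clip(radiance * 1.0, 0, 255)    # 1 ms: peak = 100, unsaturated
long_ = np.clip(radiance * 10.0, 0, 255)   # 10 ms: peak clipped at 255
out = extended_range_map([short, long_], [1.0, 10.0])
cy, cx = centroid(out)
print(float(cy), float(cx))  # 2.0 3.0
```

The normalization step in the paper plays the same role: once spots are expressed on a common scale, the intensity-weighted centroid is insensitive to which exposure contributed each pixel.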

  6. Computational analysis of Pelton bucket tip erosion using digital image processing

    Science.gov (United States)

    Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna

    2008-03-01

    Erosion of hydro turbine components by sand-laden rivers is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in different hydropower generation plants, undergo erosion in the continuous presence of sand particles in water. The subsequent erosion causes an increase in splitter thickness, which is supposed to be theoretically zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured and the bucket is allowed to run for 72 hours; the sand concentration in the water hitting the bucket is closely controlled and monitored. Afterwards, the image of the test bucket is taken under the same conditions. The process is repeated 10 times. The digital image processing employed encompasses processes that perform image enhancement in both the spatial and frequency domains, as well as processes that extract attributes from images, up to and including measurement of the splitter's tip. Processing of the images was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be accurately detected and quantified, and that the erosion profile can be generated using image processing techniques.

  7. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  8. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub [UNETWARE, Seoul (Korea, Republic of); Kim, Won Tae [Kongju National University, Gongju (Korea, Republic of)

    2010-04-15

    In this paper, an image processing study based on CCD images and thermography images was performed in order to handle thermographic data easily, without risk to the personnel who conduct condition monitoring for abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To confirm broad-area monitoring, a methodology producing a single image by the panorama technique was developed, no matter how many cameras are employed, including a fusion method for discrete configurations of the target. As a result, image fusion with quick real-time processing was obtained, and it was possible to save time in tracking the monitored location when matching the images between CCTV and thermography.

  9. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    International Nuclear Information System (INIS)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub; Kim, Won Tae

    2010-01-01

    In this paper, an image processing study based on CCD images and thermography images was performed in order to handle thermographic data easily, without risk to the personnel who conduct condition monitoring for abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To confirm broad-area monitoring, a methodology producing a single image by the panorama technique was developed, no matter how many cameras are employed, including a fusion method for discrete configurations of the target. As a result, image fusion with quick real-time processing was obtained, and it was possible to save time in tracking the monitored location when matching the images between CCTV and thermography.

  10. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: The Chinese potato staple food strategy clearly pointed out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced raw and processed potato detection methods. Method: Based on a survey of the research literature in the field of image-recognition-based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., the development and direction of this field are summarized. Result: In order to obtain whole-potato surface information, hardware was built that synchronizes an image sensor with a conveyor belt to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracy above 83%. Weight is an important indicator for potato grading, and image classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with damage shape and damage time as the main affecting factors. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black-heart image recognition must be operated in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices, fries, etc.
Conclusion: image recognition as a rapid food detection tool has been widely researched in the area of raw and processed potato quality analyses; its techniques and equipment have the potential for commercialization in the short term, to meet the strategy demand of developing potato as

  11. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  12. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
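The pipeline model described above (processes with input/output ports, options and parameters, chained so each stage's output feeds the next) can be sketched generically; the stage names and toy operations below are hypothetical stand-ins, not the actual MRI processes.

```python
class Process:
    """One pipeline stage: a named function plus fixed parameters,
    mirroring the input/output-port model described above."""
    def __init__(self, name, func, **params):
        self.name, self.func, self.params = name, func, params

    def run(self, data):
        return self.func(data, **self.params)

class Pipeline:
    """Chain processes so each stage's output becomes the next input."""
    def __init__(self, processes):
        self.processes = processes

    def run(self, data):
        for p in self.processes:
            data = p.run(data)
        return data

# Hypothetical stages standing in for real operations such as
# intensity standardization or registration.
scale = Process("scale",
                lambda img, factor: [[v * factor for v in row] for row in img],
                factor=2)
clip = Process("clip",
               lambda img, top: [[min(v, top) for v in row] for row in img],
               top=100)

print(Pipeline([scale, clip]).run([[30, 60]]))  # [[60, 100]]
```

Because each instance is just a list of independent stage invocations, the same description can be serialized and shipped to a remote cluster, which is essentially what the web application's communication tunnel does.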

  13. Study of TeV range cosmic ray detection with Cherenkov imaging techniques

    International Nuclear Information System (INIS)

    Ansari, R.; Gaillard, J.M.; Parrour, G.

    1992-03-01

    This Monte Carlo study of cosmic ray detection in the TeV energy range was triggered by the authors' interest in the ARTEMIS (Antimatter Research Through the Earth Moon Ion Spectrometer) proposal. The properties of cosmic ray showers detected by Cherenkov imaging in the visible domain are studied. The detection sensitivity and the accuracy of reconstructing the parent particle direction using Cherenkov imaging are discussed. The backbone of the study is the atmospheric shower Monte Carlo generator developed by A.M. Hillas. A comparison of Cherenkov detection between nucleon- and photon-induced showers is also included. (R.P.) 14 refs., 48 figs., 3 tabs

  14. Method of the Aquatic Environment Image Processing for Determining the Mineral Suspension Parameters

    Directory of Open Access Journals (Sweden)

    D.A. Antonenkov

    2016-10-01

    Full Text Available The present article features a method developed to determine mineral suspension characteristics by acquiring and subsequently processing images of the aquatic environment. The method is capable of maintaining its performance under conditions of considerable dynamic activity of the water masses. Its distinctive features are the developed computing algorithm, the simultaneous use of morphological filters and histogram methods for image processing, and a special calibration technique; together these make it possible to calculate the size and concentration of the particles in the images obtained. The developed technical means permitting acquisition of environment images of the required quality are briefly described, and the operation algorithm of the developed software is presented. Examples of numerical and weight distributions of the particles according to their sizes, together with the results of comparing the outcomes obtained by the standard and the developed methods, are presented. The developed method makes it possible to obtain particle size data in the range of 50–1000 μm and to determine the suspension concentration with ~12% error. The method can be technically implemented in instruments intended for in situ measurements using gauges that allow short exposure times, such as an electron-optical converter acting as an image intensifier together with a high-speed electronic shutter. Laboratory testing of the method yields results similar in accuracy to those of in situ measurements.
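Extracting particle sizes and counts from a binarized frame can be illustrated with simple 4-connected component labeling; this is a generic sketch, not the authors' morphological-filter and histogram algorithm, and the toy image is an assumption.

```python
from collections import deque

def label_particles(binary):
    """4-connected component labeling via BFS; returns the pixel area
    of each particle, from which size distributions and (given the
    imaged volume) concentration can be derived."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

# Two particles: a 2x2 blob and an isolated pixel.
img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0]]
print(sorted(label_particles(img)))  # [1, 4]
```

With a calibration factor (microns per pixel), pixel areas convert to equivalent diameters, which is where the article's special calibration technique enters.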

  15. The model of illumination-transillumination for image enhancement of X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Lyu, Kwang Yeul [Shingu College, Sungnam (Korea, Republic of); Rhee, Sang Min [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2001-06-01

    In digital image processing, the homomorphic filtering approach is derived from an illumination-reflectance model of the image. It can also be used with an illumination-transillumination model of X-ray film. Several X-ray images were enhanced with histogram equalization and with a homomorphic filter based on the illumination-transillumination model. The homomorphic filter confirmed the theoretical claims of image density range compression and balanced contrast enhancement, and was also found to be a valuable tool for processing analog X-ray images into digital images.
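A minimal homomorphic filter along the lines described (log transform to separate the multiplicative components, high-frequency-emphasis filtering in the Fourier domain, then exponentiation) might look like this; the Gaussian-shaped transfer function and all parameter values are assumptions, not the paper's settings.

```python
import numpy as np

def homomorphic(img, cutoff=0.2, gamma_l=0.5, gamma_h=2.0):
    """Homomorphic filter: the log separates illumination (low
    frequencies, attenuated by gamma_l < 1) from transillumination
    detail (high frequencies, boosted by gamma_h > 1); exp inverts
    the log, yielding density range compression."""
    log_img = np.log1p(img.astype(float))
    spec = np.fft.fft2(log_img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    d2 = fy ** 2 + fx ** 2
    # Gaussian-shaped high-frequency-emphasis transfer function.
    filt = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    return np.expm1(np.real(np.fft.ifft2(spec * filt)))

# A smooth illumination wave across the image: after filtering, the
# overall density range is compressed relative to the input.
y = np.arange(64)
img = (150 + 100 * np.cos(2 * np.pi * y / 64))[:, None] * np.ones((1, 64))
out = homomorphic(img)
print(bool(out.max() / out.min() < img.max() / img.min()))  # True
```

Attenuating only the low-frequency (illumination) term in the log domain is what produces the claimed combination of range compression with preserved, even enhanced, local contrast.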

  16. Image processing using pulse-coupled neural networks applications in Python

    CERN Document Server

    Lindblad, Thomas

    2013-01-01

    Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information from and manipulating images. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing, with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that assist readers in replicating results presented in the text and in further developing their own applications.

  17. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Science.gov (United States)

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized as one of the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.

  18. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly. The effect is especially remarkable for image pairs with a low S/N ratio.

  19. Digital-image processing improves man-machine communication at a nuclear reactor

    International Nuclear Information System (INIS)

    Cook, S.A.; Harrington, T.P.; Toffer, H.

    1982-01-01

    The application of digital image processing to improve man-machine communication in a nuclear reactor control room is illustrated. At the Hanford N Reactor, operated by UNC Nuclear Industries for the United States Department of Energy in Richland, Washington, digital image processing is applied to flow, temperature, and tube power data. Color displays are used to present the data in a clear and concise fashion. Specific examples demonstrate the capabilities and benefits of digital image processing of reactor data: N Reactor flow and power maps are displayed for routine reactor operations and for perturbed reactor conditions, and the advantages of difference mapping are demonstrated. Image processing techniques have also been applied to the results of analytical reactor models; two examples are shown. The potential of combining experimental and analytical information with digital image processing to produce predictive and adaptive reactor core models is discussed. The applications demonstrate that digital image processing can provide new, more effective ways for control room personnel to assess reactor status, locate problems and explore corrective actions. 10 figures

  20. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  1. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). First, preprocessing consisting of stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of the proposed method compared with related work.

  2. Positron range in PET imaging: an alternative approach for assessing and correcting the blurring

    Science.gov (United States)

    Jødal, L.; Le Loirec, C.; Champion, C.

    2012-06-01

    Positron range impairs resolution in PET imaging, especially for high-energy emitters and for small-animal PET. De-blurring in image reconstruction is possible if the blurring distribution is known. Furthermore, the percentage of annihilation events within a given distance from the point of positron emission is relevant for assessing statistical noise. This paper aims to determine the positron range distribution relevant for blurring for seven medically relevant PET isotopes, 18F, 11C, 13N, 15O, 68Ga, 62Cu and 82Rb, and derive empirical formulas for the distributions. This paper focuses on allowed-decay isotopes. It is argued that blurring at the detection level should not be described by the positron range r, but instead the 2D projected distance δ (equal to the closest distance between decay and line of response). To determine these 2D distributions, results from a dedicated positron track-structure Monte Carlo code, Electron and POsitron TRANsport (EPOTRAN), were used. Materials other than water were studied with PENELOPE. The radial cumulative probability distribution G2D(δ) and the radial probability density distribution g2D(δ) were determined. G2D(δ) could be approximated by the empirical function 1 - exp(-Aδ² - Bδ), where A = 0.0266 (Emean)^-1.716 and B = 0.1119 (Emean)^-1.934, with Emean being the mean positron energy in MeV and δ in mm. The radial density distribution g2D(δ) could be approximated by differentiation of G2D(δ). Distributions in other media were very similar to those in water. The positron range is important for improved resolution in PET imaging. Relevant distributions for the positron range have been derived for seven isotopes. Distributions for other allowed-decay isotopes may be estimated with the above formulas.
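The empirical formulas quoted in the abstract translate directly into code: the cumulative distribution is G2D(δ) = 1 - exp(-Aδ² - Bδ), and the density g2D(δ) is its derivative, (2Aδ + B)·exp(-Aδ² - Bδ). The coefficients below are those stated above; the energy value used in any call is up to the reader (it is the isotope's mean positron energy in MeV).

```python
import math

def g_params(e_mean):
    """Empirical coefficients from the abstract (e_mean in MeV)."""
    A = 0.0266 * e_mean ** -1.716
    B = 0.1119 * e_mean ** -1.934
    return A, B

def G2D(delta, e_mean):
    """Cumulative probability that annihilation occurs within a 2D
    projected distance delta (mm) of the point of emission."""
    A, B = g_params(e_mean)
    return 1.0 - math.exp(-A * delta**2 - B * delta)

def g2D(delta, e_mean):
    """Radial probability density: the derivative of G2D in delta."""
    A, B = g_params(e_mean)
    return (2.0 * A * delta + B) * math.exp(-A * delta**2 - B * delta)
```

G2D starts at 0, rises monotonically toward 1, and its analytic derivative matches a numerical differentiation of G2D, as the abstract's construction implies.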

  3. Automatic grading of appearance retention of carpets using intensity and range images

    Science.gov (United States)

    Orjuela Vargas, Sergio Alejandro; Ortiz-Jaramillo, Benhur; Vansteenkiste, Ewout; Rooms, Filip; De Meulemeester, Simon; de Keyser, Robain; Van Langenhove, Lieva; Philips, Wilfried

    2012-04-01

    Textiles are mainly used for decoration and protection. In both cases, their original appearance and its retention are important factors for customers. Therefore, evaluation of appearance parameters is critical for quality assurance purposes, during and after manufacturing, to determine the lifetime and/or beauty of textile products. In particular, appearance retention of textile products is commonly certified with grades, which are currently assigned by human experts. However, manufacturers would prefer a more objective system. We present an objective system for grading appearance retention, particularly for textile floor coverings. Changes in appearance are quantified by using linear regression models on texture features extracted from intensity and range images. Range images are obtained by our own laser scanner, reconstructing the carpet surface using two methods that have been previously presented. We extract texture features using a variant of the local binary pattern technique based on detecting those patterns whose frequencies are related to the appearance retention grades. We test models for eight types of carpets. Results show that the proposed approach describes the degree of wear with a precision within the range allowed for human inspectors by international standards. The methodology followed in this experiment is designed to be general, for evaluating global deviation of texture in other types of textiles as well as in other surface materials.
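The texture features above are a tailored variant of the local binary pattern (LBP) technique. As a point of reference, the textbook 8-neighbour LBP histogram can be sketched as follows; this is the standard formulation, not the authors' frequency-selected variant.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code,
    one bit per neighbour that is >= the centre; the normalized
    256-bin histogram of codes serves as a texture descriptor."""
    img = np.asarray(img, dtype=np.float64)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

On a perfectly flat patch every neighbour equals the centre, so all mass lands in code 255; worn versus unworn carpet surfaces shift mass between pattern bins, which is what the regression models then exploit.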

  4. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically locates tumors in MR images and extracts their features with the new shearlet transform.

  5. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing object information. The ensemble averages, the autocorrelation functions, and the Wiener spectral densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
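The scalability analysis mentioned above rests on Amdahl's law: if a fraction p of the work parallelizes perfectly over n cores, the best achievable speed-up is 1/((1-p) + p/n). A 12-fold improvement on 12 cores therefore implies a parallel fraction close to 1. A minimal sketch of the formula:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: upper bound on speed-up when a fraction p of the
    work parallelizes perfectly over n_cores, the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)
```

Even a 5% serial fraction caps the 12-core speed-up below 8x, and caps the speed-up at 20x no matter how many cores are added, which is why the reported 12-fold result indicates very effective parallelization.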

  7. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then introduced. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting filter. Since the guidance images sampled by the AGV have different optimal threshold segmentation points, a two-dimensional maximum entropy image segmentation method is used to solve this problem. We extract the foreground image in the target band by a contour area calculation and obtain the centre line with a least-squares fitting algorithm. With the help of image and physical coordinates, the guidance information is then obtained.
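The centre-line step can be illustrated with a least-squares line fit through the per-row centroids of the segmented guidance band. This sketch assumes segmentation has already produced a binary mask; the function name and interface are our own, not the paper's.

```python
import numpy as np

def fit_centre_line(mask):
    """Least-squares line through the per-row centroids of a binary
    guidance mask. Returns (slope, intercept) of x = slope*y + intercept,
    with y the row index and x the column. Illustrative sketch only;
    thresholding/segmentation is assumed done upstream."""
    ys, xs = np.nonzero(mask)
    rows = np.unique(ys)
    # Centroid (mean column) of the foreground pixels in each occupied row
    centroids = np.array([xs[ys == r].mean() for r in rows])
    slope, intercept = np.polyfit(rows, centroids, 1)
    return slope, intercept
```

The slope gives the deflection angle of the guidance line relative to the image axis, and the intercept (evaluated at the AGV's reference row) gives the lateral deviation.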

  8. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    Science.gov (United States)

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times improve on the fastest sequential algorithm, and speed-up increases with thread count up to 64 threads.

  9. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  10. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide image analysis is recognized among the most interesting opportunities, the typical size of such images-up to Gpixels- can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, Whole Slide images size makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective to seamlessly extend the application of image analysis algorithms implemented in ImageJ for single microscopic field images to a whole digital slide analysis. The plugin has been complemented by examples of macro in the ImageJ scripting language to demonstrate its use in concrete situations.

  11. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    Science.gov (United States)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. A huge amount of remote sensing data, most of it in image form, has been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images to solve real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory

  12. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL - Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library span a broad spectrum, such as image processing to improve visualization, segmentation for nuclei detection, decision support systems for second opinion consultations, statistical analysis for investigating potential correlations between clinical annotations and imaging findings and, generally, fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology that is publicly and freely available to the scientific community.

  13. Digital image processing of mandibular trabeculae on radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Ogino, Toshi

    1987-06-01

    The present study aimed to reveal the texture patterns of radiographs of the mandibular trabeculae by digital image processing. Intra-oral radiographs of the right premolar regions of 32 normal subjects and 13 patients with mandibular diseases (ameloblastoma, primordial cysts, squamous cell carcinoma and odontoma) were analyzed. The radiograms were digitized using a drum scanner densitometry method, and the input radiographic images were processed by a histogram equalization method. The results are as follows. First, the histogram equalization method enhances the image contrast of the textures. Second, the output images of the textures for normal mandibular trabeculae radiograms show a network pattern. Third, the output images for the patients are characterized by non-network patterns, replaced by fabric textures, intertwined-plant (karakusa) patterns, scattered small masses and amorphous textures. These results indicate that the present digital image system can be useful for revealing the texture patterns of radiographs and, in the future, for texture analysis of clinical radiographs to obtain quantitative diagnostic findings.
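The histogram equalization step used here is standard: each gray level is remapped through the normalized cumulative histogram, so that the output levels are spread more uniformly and texture contrast is enhanced. A minimal NumPy sketch for 8-bit images (our illustration, not the study's implementation):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization for an 8-bit grayscale radiograph:
    build a lookup table from the normalized cumulative histogram
    and remap every pixel through it."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first occupied gray level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min)
                   * (levels - 1)).astype(np.uint8)
    return lut[img]
```

A two-level image is stretched to the full output range, which is the contrast-enhancing behaviour the study relies on (a constant image, having zero dynamic range, would need a guard against division by zero).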

  14. Application of digital image processing to industrial radiography

    International Nuclear Information System (INIS)

    Bodson; Varcin; Crescenzo; Theulot

    1985-01-01

    Radiography is widely used for quality control in the fabrication of large reactor components. Image processing methods are applied to industrial radiographs to support decision making as well as to reduce the costs and delays of examination. Films exposed under representative operating conditions are used to test results obtained with algorithms for image restoration and for the detection and characterisation of indications, in order to determine the feasibility of automatic radiograph processing [fr

  15. Digital image processing for real-time neutron radiography and its applications

    International Nuclear Information System (INIS)

    Fujine, Shigenori

    1989-01-01

    The present paper describes several digital image processing approaches for real-time neutron radiography (neutron television, NTV), such as image integration, adaptive smoothing and image enhancement, which have beneficial effects on image quality, and also describes how to use these techniques in applications. Details invisible in direct NTV images can be revealed by digital image processing, such as reversed images, gray level correction, gray scale transformation, contoured images, subtraction techniques, pseudo-color display and so on. For real-time applications, a contouring operation and an averaging approach can also be utilized effectively. (author)
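Of the techniques listed, image integration is the simplest to sketch: averaging N noisy frames suppresses zero-mean additive noise by roughly 1/sqrt(N), which is what makes faint detail visible in real-time NTV. A minimal illustration, including the subtraction technique also mentioned above (function names are ours, not the paper's):

```python
import numpy as np

def integrate_frames(frames):
    """Image integration: average a stack of noisy frames.
    For zero-mean additive noise the per-pixel standard deviation
    drops by about 1/sqrt(N) for N frames."""
    stack = np.asarray(frames, dtype=np.float64)
    return stack.mean(axis=0)

def subtract(img, reference):
    """Subtraction technique: reveal changes relative to a reference frame."""
    return np.asarray(img, dtype=np.float64) - np.asarray(reference, dtype=np.float64)
```

Averaging 100 simulated frames with noise sigma 0.5 leaves residual noise near 0.05, an order-of-magnitude improvement consistent with the 1/sqrt(N) rule.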

  16. Color Processing using Max-trees : A Comparison on Image Compression

    NARCIS (Netherlands)

    Tushabe, Florence; Wilkinson, M.H.F.

    2012-01-01

    This paper proposes a new method of processing color images using mathematical morphology techniques. It adapts the Max-tree image representation to accommodate color and other vectorial images. The proposed method introduces three new ways of transforming the color image into a gray scale image.

  17. Mathematical problems in image processing

    International Nuclear Information System (INIS)

    Chidume, C.E.

    2000-01-01

    This is the second volume of a new series of lecture notes of the Abdus Salam International Centre for Theoretical Physics. This volume contains the lecture notes given by A. Chambolle during the School on Mathematical Problems in Image Processing. The school consisted of two weeks of lecture courses and one week of conference

  18. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal or image processing techniques, such as adaptive filtering or detection or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing, by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  19. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven suitable for preparing the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness value was found: the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
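The binarization criterion described above, choosing the threshold that maximizes the number of resulting objects, can be sketched with a plain 4-connected component count. The implementation below is a generic illustration, not the authors' code:

```python
import numpy as np

def count_objects(binary):
    """Count 4-connected foreground components via iterative flood fill."""
    visited = np.zeros_like(binary, dtype=bool)
    rows, cols = binary.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if binary[i, j] and not visited[i, j]:
                count += 1
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count

def best_threshold(img, candidates):
    """Objective criterion from the abstract: pick the threshold whose
    binarization yields the maximum number of objects."""
    return max(candidates, key=lambda t: count_objects(img >= t))
```

At a low threshold the fibres merge into one blob; raising it separates them, and past the optimum objects vanish, so the object count peaks at the threshold the criterion selects.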

  20. Functional imaging of the pancreas. Image processing techniques and clinical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Fumiko

    1984-02-01

    An image processing technique for functional imaging of the pancreas was developed and is reported here. The clinical efficacy of the technique for detecting pancreatic abnormality is evaluated in comparison with conventional pancreatic scintigraphy and CT. For quantitative evaluation, the functional rate, i.e. the rate of normally functioning pancreatic area, was calculated from the functional image and the subtraction image. Two hundred and ninety-five cases were studied using this technique. Conventional imaging had a sensitivity of 65% and a specificity of 78%, while functional imaging improved sensitivity to 88% and specificity to 88%. The mean functional rate in patients with pancreatic disease was significantly lower (33.3 ± 24.5 in chronic pancreatitis, 28.1 ± 26.9 in acute pancreatitis, 43.4 ± 22.3 in diabetes mellitus, 20.4 ± 23.4 in pancreatic cancer) than in cases without pancreatic disease (86.4 ± 14.2). These findings suggest that the functional image of the pancreas reflects pancreatic exocrine function and that the functional rate is a useful indicator of it.